41 Commits (2f9c334e62ece7bf9c05c82c5b4ad3874059594d)
 

Author        | SHA1       | Message | Date
Shad Amethyst | 2f9c334e62 | 🔥 WIP implementation of arbitrary neural network DAG | 2 years ago
Shad Amethyst | 38bd61fed5 | 🎨 Clean up, move error types to src/err.rs | 2 years ago
Shad Amethyst | 972b177767 | Add NeuraIsolateLayer, for more versatility in resnets | 2 years ago
Shad Amethyst | 872cb3a6ce | Allow using NeuraResidual with NeuraBackprop | 2 years ago
Shad Amethyst | 520fbcf317 | 🎨 Clean up NeuraResidual a bit | 2 years ago
Shad Amethyst | b34b1e630b | Implement NeuraNetwork for NeuraResidual and NeuraResidualNode | 2 years ago
Shad Amethyst | 741cf1ced6 | 🔥 🎨 Remove deprecated traits (NeuraOldTrainableNetwork and NeuraGradientSolver*) | 2 years ago
Shad Amethyst | 4da4be22b4 | Working implementation of NeuraNetwork for NeuraSequential | 2 years ago
Shad Amethyst | ee4b57b00c | 🔥 WIP implementation of NeuraNetwork for NeuraSequential | 2 years ago
Shad Amethyst | 1f007bc986 | ♻️ Split NeuraTrainableLayerBase into ~ and NeuraTrainableLayerEval | 2 years ago
Shad Amethyst | d82cab788b | Create NeuraNetwork traits, WIP | 2 years ago
Shad Amethyst | 060b801ad6 | tmp | 2 years ago
Shad Amethyst | dd278e7b90 | WIP implementation for backprop in residual networks | 2 years ago
Shad Amethyst | 83dc763746 | Initial implementation of residual neural networks | 2 years ago
Shad Amethyst | fa0bc0be9f | Lock layers | 2 years ago
Shad Amethyst | 2ea5502575 | 🎨 Small cleanup | 2 years ago
Shad Amethyst | d40098d2ef | 🔥 Refactor of NeuraTrainableLayer, split it into multiple traits | 2 years ago
Shad Amethyst | f3752bd411 | Add thorough backpropagation test | 2 years ago
Shad Amethyst | c1473a6d5c | Add integration test for training | 2 years ago
Shad Amethyst | b3b97f76bd | Return training and validation losses in train(), plot them out | 2 years ago
Shad Amethyst | a5237a8ef1 | Inefficient but working forward-forward implementation | 2 years ago
Shad Amethyst | 6d45eafbe7 | 🚚 rename optimize to gradient_solver | 2 years ago
Shad Amethyst | 81de6ddbcd | 🎨 Generic way of computing backpropagation and other gradient solvers | 2 years ago
Shad Amethyst | cb862f12cc | Softmax layer | 2 years ago
Shad Amethyst | 0c97a65013 | 🎨 Remove From<f64> requirement in dropout, working bivariate layer, add builder pattern | 2 years ago
Shad Amethyst | 969fa3197a | 🎨 Clean up types for NeuraLoss and NeuraDensePartialLayer | 2 years ago
Shad Amethyst | 9b821b92b0 | 🎨 Clean up NeuraSequential | 2 years ago
Shad Amethyst | 2edbff860c | 🔥 🚚 ♻️ Refactoring the previous layer system | 2 years ago
Shad Amethyst | cc7686569a | Block convolution, max pooling | 2 years ago
Shad Amethyst | a6da11b125 | Pooling layers | 2 years ago
Shad Amethyst | d7eb6de34e | 1D convolution layer | 2 years ago
Shad Amethyst | 6c1d6874d7 | ♻️ Implement and transition to NeuraMatrix and NeuraVector, to prevent stack overflows | 2 years ago
Shad Amethyst | 920bca4a48 | 🚚 Move files and traits around, extract stuff out of train.rs | 2 years ago
Shad Amethyst | b4a97694a6 | 🎨 Clean things up, add unit tests, add one hot layer | 2 years ago
Shad Amethyst | bca56a5557 | Re-order arguments of neura_layer, implement softmax and normalization | 2 years ago
Shad Amethyst | 220c61ff6b | Dropout layers | 2 years ago
Shad Amethyst | 8ac82e20e2 | Working backpropagation :3 | 2 years ago
Shad Amethyst | 7a6921a1c1 | 🔥 Semi-working training, although it seems to only want to converge to zero | 2 years ago
Shad Amethyst | d3d5f57a2b | 🔥 Attempt at backpropagation | 2 years ago
Shad Amethyst | 5a20acf595 | Add NeuraNetwork | 2 years ago
Shad Amethyst | 7759a6615d | 🎉 First commit | 2 years ago