Shad Amethyst
060b801ad6
tmp
2 years ago
Shad Amethyst
dd278e7b90
✨ WIP implementation for backprop in residual networks
2 years ago
Shad Amethyst
83dc763746
✨ Initial implementation of residual neural networks
2 years ago
Shad Amethyst
fa0bc0be9f
✨ Lock layers
2 years ago
Shad Amethyst
2ea5502575
🎨 Small cleanup
2 years ago
Shad Amethyst
d40098d2ef
🔥 Refactor of NeuraTrainableLayer, split it into multiple traits
2 years ago
Shad Amethyst
f3752bd411
✅ Add thorough backpropagation test
2 years ago
Shad Amethyst
c1473a6d5c
✅ Add integration test for training
2 years ago
Shad Amethyst
b3b97f76bd
✨ Return training and validation losses in train(), plot them out
2 years ago
Shad Amethyst
a5237a8ef1
✨ Inefficient but working forward-forward implementation
2 years ago
Shad Amethyst
6d45eafbe7
🚚 Rename optimize to gradient_solver
2 years ago
Shad Amethyst
81de6ddbcd
✨ 🎨 Generic way of computing backpropagation and other gradient solvers
2 years ago
Shad Amethyst
cb862f12cc
✨ Softmax layer
2 years ago
Shad Amethyst
0c97a65013
🎨 Remove From<f64> requirement in dropout, get the bivariate layer working, add builder pattern
2 years ago
Shad Amethyst
969fa3197a
🎨 Clean up types for NeuraLoss and NeuraDensePartialLayer
2 years ago
Shad Amethyst
9b821b92b0
🎨 Clean up NeuraSequential
2 years ago
Shad Amethyst
2edbff860c
🔥 🚚 ♻️ Refactoring the previous layer system
It was becoming almost impossible to manage the dimensions of the layers,
especially with convolution layers. Const generics are nice, but they are still too immature
to rely on for this use case. We'll probably expand the implementations to accept either
const-sized or dynamically-sized layers at some point, for performance-critical applications.
2 years ago
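The commit body above weighs const-generic layer dimensions against dynamically-sized ones. As a rough illustration only (a generic Rust sketch, not this repository's actual API; the names DenseConst and DenseDyn are hypothetical), the trade-off looks like this:

```rust
// Hypothetical sketch of the two layer-sizing strategies discussed above.
// Not this crate's API; names and layouts are illustrative assumptions.

// Const-generic variant: dimensions are checked at compile time, but every
// (IN, OUT) pair is a distinct type, which becomes hard to manage once
// convolution and pooling layers with derived shapes enter the picture.
struct DenseConst<const IN: usize, const OUT: usize> {
    weights: [[f64; IN]; OUT],
    bias: [f64; OUT],
}

// Dynamically-sized variant: dimensions are plain runtime values, so layers
// compose freely, at the cost of a runtime shape check.
struct DenseDyn {
    input_size: usize,
    output_size: usize,
    weights: Vec<f64>, // row-major, output_size * input_size
    bias: Vec<f64>,
}

impl DenseDyn {
    fn forward(&self, input: &[f64]) -> Vec<f64> {
        assert_eq!(input.len(), self.input_size, "shape mismatch");
        (0..self.output_size)
            .map(|o| {
                let row = &self.weights[o * self.input_size..(o + 1) * self.input_size];
                row.iter().zip(input).map(|(w, x)| w * x).sum::<f64>() + self.bias[o]
            })
            .collect()
    }
}
```

The dynamically-sized form is what lets heterogeneous stacks (dense, convolution, pooling) compose without a combinatorial explosion of types, while the const-generic form keeps shape errors at compile time, which is presumably why the commit leaves the door open to supporting both.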
Shad Amethyst
cc7686569a
✨ Block convolution, max pooling
2 years ago
Shad Amethyst
a6da11b125
✨ Pooling layers
2 years ago
Shad Amethyst
d7eb6de34e
✨ 1D convolution layer
Nailed it on the first try :3c
(or not, and I'll regret writing this in a few years)
2 years ago
Shad Amethyst
6c1d6874d7
♻️ Implement and transition to NeuraMatrix and NeuraVector, to prevent stack overflows
2 years ago
Shad Amethyst
920bca4a48
🚚 Move files and traits around, extract stuff out of train.rs
2 years ago
Shad Amethyst
b4a97694a6
🎨 Clean things up, add unit tests, add one-hot layer
2 years ago
Shad Amethyst
bca56a5557
✨ Re-order arguments of neura_layer, implement softmax and normalization
2 years ago
Shad Amethyst
220c61ff6b
✨ Dropout layers
2 years ago
Shad Amethyst
8ac82e20e2
✨ Working backpropagation :3
2 years ago
Shad Amethyst
7a6921a1c1
✨ 🔥 Semi-working training, although it only seems to want to converge to zero
2 years ago
Shad Amethyst
d3d5f57a2b
✨ 🔥 Attempt at backpropagation
My head is tired :/
2 years ago
Shad Amethyst
5a20acf595
✨ Add NeuraNetwork
2 years ago
Shad Amethyst
7759a6615d
🎉 First commit
2 years ago