| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Shad Amethyst | 38bd61fed5 | 🎨 Clean up, move error types to src/err.rs | 2 years ago |
| Shad Amethyst | 520fbcf317 | 🎨 Clean up NeuraResidual a bit | 2 years ago |
| Shad Amethyst | b3b97f76bd | ✨ Return training and validation losses in train(), plot them out | 2 years ago |
| Shad Amethyst | a5237a8ef1 | ✨ Inefficient but working forward-forward implementation | 2 years ago |
| Shad Amethyst | cb862f12cc | ✨ Softmax layer | 2 years ago |
| Shad Amethyst | 0c97a65013 | 🎨 Remove From<f64> requirement in dropout, working bivariate layer, add builder pattern | 2 years ago |
| Shad Amethyst | 6c1d6874d7 | ♻️ Implement and transition to NeuraMatrix and NeuraVector, to prevent stack overflows | 2 years ago |
| Shad Amethyst | 920bca4a48 | 🚚 Move files and traits around, extract stuff out of train.rs | 2 years ago |
| Shad Amethyst | b4a97694a6 | 🎨 Clean things up, add unit tests, add one hot layer | 2 years ago |
| Shad Amethyst | bca56a5557 | ✨ Re-order arguments of neura_layer, implement softmax and normalization | 2 years ago |
| Shad Amethyst | 220c61ff6b | ✨ Dropout layers | 2 years ago |
| Shad Amethyst | 8ac82e20e2 | ✨ Working backpropagation :3 | 2 years ago |