-
b8c654ebb1
🎨 Run clippy on the project :)
main
Shad Amethyst
2023-05-10 15:52:31 +0200
-
6fbfd4e38c
🎨 Introduce builder pattern for NeuraBatchedTrainer
Shad Amethyst
2023-05-09 18:33:55 +0200
-
93fa7e238a
✨ Implement NeuraGraphBackprop
Shad Amethyst
2023-05-09 14:41:29 +0200
-
99d0cb4408
🎨 Clean up axis operators, move them to crate root
Shad Amethyst
2023-05-09 12:17:40 +0200
-
fdc906c220
🐛 Fix broken tests, add NeuraSequentialLast
Shad Amethyst
2023-05-09 11:52:41 +0200
-
72ffce457a
🔥 ♻️ Combine all NeuraTrainable* traits into NeuraLayer
Shad Amethyst
2023-05-08 22:11:22 +0200
-
41711d4668
✨ 🔥 Implement backpropagation for NeuraGraph (untested)
Shad Amethyst
2023-05-07 18:28:19 +0200
-
251e4d02d2
✨ Implement From<NeuraSequential> and NeuraLayer for NeuraGraph
Shad Amethyst
2023-05-07 11:28:27 +0200
-
efaed91f83
✨ Working NeuraGraph construction
Shad Amethyst
2023-05-06 14:01:42 +0200
-
2f9c334e62
🔥 WIP implementation of arbitrary neural network DAG
Shad Amethyst
2023-05-04 00:31:56 +0200
-
38bd61fed5
🎨 Clean up, move error types to src/err.rs
Shad Amethyst
2023-05-03 12:54:20 +0200
-
972b177767
✨ Add NeuraIsolateLayer, for more versatility in resnets
Shad Amethyst
2023-05-01 19:42:35 +0200
-
872cb3a6ce
✨ Allow using NeuraResidual with NeuraBackprop
Shad Amethyst
2023-04-29 16:27:43 +0200
-
520fbcf317
🎨 Clean up NeuraResidual a bit
Shad Amethyst
2023-04-29 14:50:47 +0200
-
b34b1e630b
✨ Implement NeuraNetwork for NeuraResidual and NeuraResidualNode
Shad Amethyst
2023-04-28 23:20:34 +0200
-
741cf1ced6
🔥 🎨 Remove deprecated traits (NeuraOldTrainableNetwork and NeuraGradientSolver*)
Shad Amethyst
2023-04-28 12:56:10 +0200
-
4da4be22b4
✨ Working implementation of NeuraNetwork for NeuraSequential
Shad Amethyst
2023-04-28 12:45:40 +0200
-
ee4b57b00c
🔥 WIP implementation of NeuraNetwork for NeuraSequential
Shad Amethyst
2023-04-28 01:05:39 +0200
-
1f007bc986
♻️ Split NeuraTrainableLayerBase into NeuraTrainableLayerBase and NeuraTrainableLayerEval
Shad Amethyst
2023-04-27 17:39:47 +0200
-
d82cab788b
✨ Create NeuraNetwork traits, WIP
Shad Amethyst
2023-04-27 17:18:12 +0200
-
060b801ad6
tmp
Shad Amethyst
2023-04-27 10:17:29 +0200
-
dd278e7b90
✨ WIP implementation for backprop in residual networks
Shad Amethyst
2023-04-25 00:14:10 +0200
-
83dc763746
✨ Initial implementation of residual neural networks
Shad Amethyst
2023-04-24 15:58:36 +0200
-
fa0bc0be9f
✨ Lock layers
Shad Amethyst
2023-04-22 23:34:16 +0200
-
2ea5502575
🎨 Small cleanup
Shad Amethyst
2023-04-22 14:13:58 +0200
-
d40098d2ef
🔥 Refactor of NeuraTrainableLayer, split it into multiple traits
Shad Amethyst
2023-04-22 10:31:46 +0200
-
f3752bd411
✅ Add thorough backpropagation test
Shad Amethyst
2023-04-22 12:12:05 +0200
-
c1473a6d5c
✅ Add integration test for training
Shad Amethyst
2023-04-22 11:16:20 +0200
-
b3b97f76bd
✨ Return training and validation losses in train(), plot them out
Shad Amethyst
2023-04-21 20:56:06 +0200
-
a5237a8ef1
✨ Inefficient but working forward-forward implementation
Shad Amethyst
2023-04-21 16:08:36 +0200
-
6d45eafbe7
🚚 Rename optimize to gradient_solver
Shad Amethyst
2023-04-20 18:37:22 +0200
-
81de6ddbcd
✨ 🎨 Generic way of computing backpropagation and other gradient solvers
Shad Amethyst
2023-04-20 13:50:48 +0200
-
cb862f12cc
✨ Softmax layer
Shad Amethyst
2023-04-20 10:53:07 +0200
-
0c97a65013
🎨 Remove From<f64> requirement in dropout, get bivariate layer working, add builder pattern
Shad Amethyst
2023-04-19 19:32:35 +0200
-
969fa3197a
🎨 Clean up types for NeuraLoss and NeuraDensePartialLayer
Shad Amethyst
2023-04-19 17:34:59 +0200
-
9b821b92b0
🎨 Clean up NeuraSequential
Shad Amethyst
2023-04-19 14:11:53 +0200
-
2edbff860c
🔥 🚚 ♻️ Refactoring the previous layer system
Shad Amethyst
2023-04-19 00:54:30 +0200
-
cc7686569a
✨ Block convolution, max pooling
Shad Amethyst
2023-04-18 01:21:55 +0200
-
a6da11b125
✨ Pooling layers
Shad Amethyst
2023-04-17 18:46:16 +0200
-
d7eb6de34e
✨ 1D convolution layer
Shad Amethyst
2023-04-17 00:48:11 +0200
-
6c1d6874d7
♻️ Implement and transition to NeuraMatrix and NeuraVector, to prevent stack overflows
Shad Amethyst
2023-04-16 12:38:18 +0200
-
920bca4a48
🚚 Move files and traits around, extract stuff out of train.rs
Shad Amethyst
2023-04-16 00:57:01 +0200
-
b4a97694a6
🎨 Clean things up, add unit tests, add one-hot layer
Shad Amethyst
2023-04-15 21:29:55 +0200
-
bca56a5557
✨ Re-order arguments of neura_layer, implement softmax and normalization
Shad Amethyst
2023-04-15 13:05:33 +0200
-
220c61ff6b
✨ Dropout layers
Shad Amethyst
2023-04-13 01:19:52 +0200
-
8ac82e20e2
✨ Working backpropagation :3
Shad Amethyst
2023-04-12 17:16:04 +0200
-
7a6921a1c1
✨ 🔥 Semi-working training, although it seems to only want to converge to zero
Shad Amethyst
2023-04-12 11:46:55 +0200
-
d3d5f57a2b
✨ 🔥 Attempt at backpropagation
Shad Amethyst
2023-04-11 23:37:24 +0200
-
5a20acf595
✨ Add NeuraNetwork
Shad Amethyst
2023-04-11 22:17:49 +0200
-
7759a6615d
🎉 First commit
Shad Amethyst
2023-04-11 16:15:34 +0200