16 Commits (969fa3197a5fee96d3f293b9aea131445610ad29)
 

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Shad Amethyst | 969fa3197a | 🎨 Clean up types for NeuraLoss and NeuraDensePartialLayer | 2 years ago |
| Shad Amethyst | 9b821b92b0 | 🎨 Clean up NeuraSequential | 2 years ago |
| Shad Amethyst | 2edbff860c | 🔥 🚚 ♻️ Refactoring the previous layer system | 2 years ago |
| Shad Amethyst | cc7686569a | Block convolution, max pooling | 2 years ago |
| Shad Amethyst | a6da11b125 | Pooling layers | 2 years ago |
| Shad Amethyst | d7eb6de34e | 1D convolution layer | 2 years ago |
| Shad Amethyst | 6c1d6874d7 | ♻️ Implement and transition to NeuraMatrix and NeuraVector, to prevent stack overflows | 2 years ago |
| Shad Amethyst | 920bca4a48 | 🚚 Move files and traits around, extract stuff out of train.rs | 2 years ago |
| Shad Amethyst | b4a97694a6 | 🎨 Clean things up, add unit tests, add one hot layer | 2 years ago |
| Shad Amethyst | bca56a5557 | Re-order arguments of neura_layer, implement softmax and normalization | 2 years ago |
| Shad Amethyst | 220c61ff6b | Dropout layers | 2 years ago |
| Shad Amethyst | 8ac82e20e2 | Working backpropagation :3 | 2 years ago |
| Shad Amethyst | 7a6921a1c1 | 🔥 Semi-working training, although it seems to only want to converge to zero | 2 years ago |
| Shad Amethyst | d3d5f57a2b | 🔥 Attempt at backpropagation | 2 years ago |
| Shad Amethyst | 5a20acf595 | Add NeuraNetwork | 2 years ago |
| Shad Amethyst | 7759a6615d | 🎉 First commit | 2 years ago |