# tutor-grad-mlp

An explicit backpropagation example in NumPy for an MLP on MNIST. The network is trained with gradient descent with momentum and reaches acceptable accuracy on the flattened (vectorized) MNIST images. Check out the notebook for an example of how to create the network, train it, and evaluate it.
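
The core update rule (gradient descent with momentum) can be sketched in NumPy roughly as follows; the function and variable names here (`momentum_step`, `velocity`, `lr`, `momentum`) are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

# Illustrative momentum update for one weight matrix; names are assumptions,
# not taken from this repository's code.
def momentum_step(W, grad_W, velocity, lr=0.01, momentum=0.9):
    """One gradient-descent-with-momentum update for a single parameter."""
    velocity = momentum * velocity - lr * grad_W   # decaying accumulation of past gradients
    W = W + velocity                               # move along the accumulated direction
    return W, velocity

# Usage: keep one velocity buffer per parameter, initialized to zeros.
W = np.random.randn(784, 128) * 0.01
v = np.zeros_like(W)
grad = np.random.randn(784, 128)   # stand-in for a gradient produced by backprop
W, v = momentum_step(W, grad, v)
```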



Currently supported activation functions (a NumPy sketch of these follows the list):

  • softmax (last layer only)
  • sigmoid
  • relu
  • tanh
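
A minimal NumPy sketch of the listed activations and the derivatives that backprop needs; the function names are illustrative assumptions, not the repository's API.

```python
import numpy as np

# Illustrative forward definitions and derivatives; names are not this repo's API.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0).astype(x.dtype)

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

def softmax(x):
    # Row-wise softmax, shifted for numerical stability. Used on the last layer,
    # where its gradient is typically folded into the cross-entropy loss.
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```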

Currently supported layer functions (see the sketch after the list):

  • fully connected with bias
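
A rough sketch of the forward and backward pass of a fully connected layer with bias; shapes assume rows are samples, and the names are illustrative rather than the repository's.

```python
import numpy as np

# Illustrative fully connected layer with bias.
# X is (batch, in), W is (in, out), b is (out,).
def fc_forward(X, W, b):
    return X @ W + b

def fc_backward(X, W, grad_out):
    # grad_out is dLoss/dOutput with shape (batch, out).
    grad_W = X.T @ grad_out          # (in, out)
    grad_b = grad_out.sum(axis=0)    # (out,)
    grad_X = grad_out @ W.T          # (batch, in), passed to the previous layer
    return grad_X, grad_W, grad_b
```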