These are my notes on learning Micrograd and implementing it from scratch using only basic Python libraries.
The micrograd library was originally created by Andrej Karpathy.
Written in a mix of Indonesian and English.
- Forgot to add `__radd__` (reverse add) to the Value class, so summing values in the forward pass with `sum()` raised a TypeError: `sum()` starts from the int `0`, and `0 + Value` falls back to `Value.__radd__`; see the sketch after this list.
- Forgot to reset gradients before each backward pass. Because backpropagation accumulates gradients with `+=`, stale gradients from previous passes leak into the next iteration unless they are zeroed before every pass; see the sketch after this list.
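A minimal sketch of both pitfalls, assuming a micrograd-style `Value` class (this trimmed class is illustrative, not micrograd's full implementation):

```python
class Value:
    """Minimal micrograd-style scalar; only the parts relevant to the two bugs."""
    def __init__(self, data):
        self.data = data
        self.grad = 0.0  # accumulated with += during the backward pass

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data)

    def __radd__(self, other):
        # Called for `other + self` when `other` (e.g. the int 0 that
        # sum() starts with) does not know how to add a Value.
        return self + other

# Without __radd__, this raises TypeError because sum() begins with 0 + Value:
total = sum([Value(1.0), Value(2.0), Value(3.0)])
print(total.data)  # 6.0

# Gradient reset: zero every parameter's grad before each backward pass,
# otherwise the += accumulation mixes gradients across iterations.
params = [Value(0.5), Value(-0.3)]
for p in params:
    p.grad = 0.0
```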
- Neural Networks: Mathematical expressions that take input data and the weights/parameters as inputs and produce predictions as output.
- Forward Pass: The process where input data is passed through the network to generate predictions.
- Loss Function: Measures how far the predictions are from the targets; lower loss indicates better performance.
- Backpropagation: Computes the gradient of the loss function with respect to every parameter so the parameters can be adjusted.
- Gradient Descent: An iterative process that minimizes the loss by stepping each parameter against its gradient.
- Training: Repeated forward pass, backpropagation, and parameter update; different tasks use different loss functions, e.g. cross-entropy loss for predicting sequences (see the training-loop sketch after this list).
- Micrograd: An automatic gradient (autograd) engine that implements backpropagation to evaluate gradients of a loss function with respect to the neural network weights, enabling the weights to be tuned to minimize the loss.
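A minimal training-loop sketch tying these concepts together, assuming Karpathy's micrograd package is installed (`pip install micrograd`); `MLP`, `parameters()`, and `zero_grad()` are micrograd's own API, while the toy dataset and learning rate are made up for illustration:

```python
from micrograd.nn import MLP

# Toy dataset (made up for illustration): 4 samples of 3 features, targets +-1.
xs = [
    [2.0, 3.0, -1.0],
    [3.0, -1.0, 0.5],
    [0.5, 1.0, 1.0],
    [1.0, 1.0, -1.0],
]
ys = [1.0, -1.0, -1.0, 1.0]

model = MLP(3, [4, 4, 1])  # 3 inputs -> two hidden layers of 4 -> 1 output

for step in range(20):
    # forward pass: predictions for every sample
    ypred = [model(x) for x in xs]
    # loss: sum of squared errors between predictions and targets
    loss = sum((yp - yt) ** 2 for yp, yt in zip(ypred, ys))

    # reset gradients, then backpropagate
    model.zero_grad()
    loss.backward()

    # gradient descent: step each parameter against its gradient
    for p in model.parameters():
        p.data -= 0.05 * p.grad

    print(step, loss.data)
```

The loss should decrease over the 20 steps; the learning rate 0.05 and the step count are arbitrary choices for this sketch.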
- Special methods in Python (`__add__`, `__mul__`, `__radd__`, ...)
- Derivatives of mathematical expressions
- Chain Rule (see the sketch after this list)
- Getting to know PyTorch, just the basics
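A condensed sketch of how these topics fit together: Python's special methods build the expression graph, each operation records its local derivative, and `backward()` applies the chain rule in reverse. This follows the shape of micrograd's `Value` but is trimmed to `+` and `*` only:

```python
class Value:
    """Trimmed micrograd-style scalar: just +, *, and backward()."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # local chain-rule step, set by each op
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # local derivative of + is 1 for both inputs; chain rule scales by out.grad
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    __radd__ = __add__  # the lesson from the bug noted above

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # local derivative of * is the other operand; chain rule scales by out.grad
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological sort so every node's chain-rule step runs after its output's
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)  # 4.0 2.0 -- d(loss)/da = b + 1, d(loss)/db = a
```

PyTorch's autograd works the same way at scale: `torch.tensor(2.0, requires_grad=True)` plus `.backward()` populates `.grad`, which is a handy way to cross-check micrograd's gradients.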
Noted and Created by Han, Summer 2024
Part of The 20th Summer Project