
Commit

Fix typo in LaTeX equation of loss function.
Karan Desai committed Dec 27, 2016
1 parent b7e729f commit 194570b
Showing 1 changed file with 1 addition and 1 deletion.
cifar10_classical_vs_maxmin_baseline.ipynb: 2 changes (1 addition, 1 deletion)
@@ -494,7 +494,7 @@
"If you're unfamiliar with Keras, one needs to \"compile\" the model after its declaration. Here's where we set the target **Loss Function** and the **Optimizer** to use for Gradient Descent. The paper mentions usage of Stochastic Gradient Descent, but for the sake of faster convergence, we'll use the **Adam Optimizer**. Also for categorical targets, we'll use **Categorical Cross Entropy**. Both of these are pre implemented and provided by Keras.\n",
"\n",
"* **Categorical Cross Entropy: **\n",
"$$C=−\\frac{1}{n} \\sum [y ln(\\hat{y}) + (1 y) ln(1 \\hat{y})]$$ \n",
"$$C = - \\frac{1}{n} \\sum [y ln(\\hat{y}) + (1 - y) ln(1 - \\hat{y})]$$ \n",
"where $y$ and $\\hat{y}$ would be one hot encoded vectors. \n",
"\n",
"\n",
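For context, the cell touched by this diff describes compiling the Keras model with the Adam optimizer and categorical cross entropy. Below is a minimal sketch of that compile step, assuming the standard Keras `Sequential` API; the layer definitions and the `'accuracy'` metric are placeholders for illustration, not the notebook's actual architecture.

```python
from keras.models import Sequential
from keras.layers import Dense, Flatten

# Placeholder model for illustration only; the notebook defines its own
# architecture in earlier cells.
model = Sequential()
model.add(Flatten(input_shape=(32, 32, 3)))   # CIFAR-10 images
model.add(Dense(10, activation='softmax'))    # 10 output classes

# Set the loss function and optimizer as described in the cell above:
# Adam instead of plain SGD for faster convergence, and categorical
# cross entropy for the one-hot encoded targets.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```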
