In this tutorial, we'll create our first neural network classifier in TensorFlow. The network architecture we implement is shown in the figure below. You can read our step-by-step tutorial on writing the code for this network, or skip it and jump straight to the implementation code.
Fig. 1- Sample Neural Network architecture with two layers implemented for classifying MNIST digits
In the previous tutorial on the Linear Classifier, we saw that the decision boundaries it generates to separate classes are linear. This is very limiting and performs poorly on data points that are not linearly separable. This limitation gives rise to the idea of neural networks, which stack linear classifiers on top of each other and link them with a non-linearity (the ReLU in Fig. 1). To learn more about this, read this article.
Fig. 2- Sample decision boundaries generated by: (a) a linear classifier, (b) a neural network.
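To make the idea concrete, here is a minimal sketch of the two-layer network in Fig. 1 written with the tf.keras API. The tutorial's own code (linked above) may be written with lower-level TensorFlow ops; the hidden-layer size of 200 units and the training hyperparameters below are chosen only for illustration.

```python
import tensorflow as tf

# Load MNIST: 28x28 grayscale digit images with labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Two-layer classifier: a hidden layer with a ReLU non-linearity,
# followed by a 10-way softmax output layer (one unit per digit class).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # flatten to a 784-dim vector
    tf.keras.layers.Dense(200, activation='relu'),    # hidden layer (size assumed)
    tf.keras.layers.Dense(10, activation='softmax'),  # class probabilities
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=100)
model.evaluate(x_test, y_test)
```

Stacking the two Dense layers with the ReLU in between is what lets the model carve out the non-linear decision boundaries shown in Fig. 2(b); without the non-linearity, the two layers would collapse into a single linear classifier.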
To learn more about logistic regression, read this, and for a detailed tutorial on the basics of neural networks, see the following links:
Neural Networks Part 1: Setting up the Architecture
Neural Networks Part 2: Setting up the Data and the Loss
Neural Networks Part 3: Learning and Evaluation
- 1. Neural Network: a two-layer neural network classifier for classifying MNIST data, implemented in TensorFlow
- 2. Neural_Network+TensorBoard: visualizing the classifier structure, learning process, and performance using TensorBoard (see the sketch after this list)
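As a preview of the second notebook, the sketch below shows one way to log training curves and weight histograms to TensorBoard using the tf.keras TensorBoard callback. The notebook itself may use tf.summary writers directly; the model, hyperparameters, and the `./logs` directory here are illustrative choices, not the notebook's exact code.

```python
import tensorflow as tf

# Same two-layer MNIST classifier as in the sketch above.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(200, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The TensorBoard callback writes loss/accuracy curves and weight
# histograms to the log directory after each epoch.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1)

model.fit(x_train, y_train, epochs=5, batch_size=100,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_cb])
```

After training, launch TensorBoard from a terminal with `tensorboard --logdir ./logs` and open the printed URL in a browser to inspect the learning curves and the graph of the model.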