You can learn and practice a concept in two ways:
Option 1: You can learn the entire theory on a particular subject and then look for ways to apply those concepts. So, you read up on how an entire algorithm works, the math behind it, its assumptions and limitations, and then you apply it. A robust but time-consuming approach.
Option 2: Start with the simple basics and develop an intuition for the subject. Next, pick a problem and start solving it. Learn the concepts while you are solving the problem, and keep tweaking and improving your understanding. So, you read up on how to apply an algorithm, then go out and apply it. Once you know how to apply it, play around with different parameters, values, and limits, and develop an understanding of the algorithm.
I prefer Option 2 and take that approach to learning any new topic. I might not be able to tell you the entire math behind an algorithm, but I can tell you the intuition. I can tell you the best scenarios to apply an algorithm based on my experiments and understanding.
In my interactions with people, I find that they don't take the time to develop this intuition, and hence struggle to apply techniques in the right manner.
In this article, I will discuss the building blocks of a neural network from scratch and focus on developing the intuition to apply neural networks. We will code in both Python and R. By the end of this article, you will understand how neural networks work, how we initialize weights, and how we update them using back-propagation.
Table of Contents:
Simple intuition behind Neural networks
Multi Layer Perceptron and its basics
Steps involved in Neural Network methodology
Visualizing steps for Neural Network working methodology
Implementing NN using Numpy (Python)
Implementing NN using R
[Optional] Mathematical Perspective of Back Propagation Algorithm
Simple intuition behind neural networks
If you have been a developer, or seen one work, you know how it is to search for bugs in code. You fire various test cases, varying the inputs or circumstances, and look at the output. The change in output provides a hint about where to look for the bug: which module to check, which lines to read. Once you find it, you make the changes, and the exercise continues until you have the right code / application.
Neural networks work in a very similar manner. A network takes several inputs, processes them through multiple neurons in multiple hidden layers, and returns the result using an output layer. This result-estimation process is technically known as "Forward Propagation".
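As a preview of what we will build later in this article, a single forward pass through one hidden layer can be sketched in NumPy. The layer sizes, random weights, and sigmoid activation here are illustrative assumptions, not the final architecture:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 3 inputs, 4 hidden neurons, 1 output neuron
rng = np.random.default_rng(0)
X = rng.random((1, 3))           # one input sample
W_hidden = rng.random((3, 4))    # input -> hidden weights
W_output = rng.random((4, 1))    # hidden -> output weights

hidden = sigmoid(X @ W_hidden)         # hidden layer activations
output = sigmoid(hidden @ W_output)    # the network's estimate
print(output.shape)  # (1, 1)
```

With random weights the estimate is of course meaningless; training (covered next) is what adjusts the weights so the output approaches the desired one.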
Next, we compare the result with the actual output. The task is to make the output of the neural network as close to the actual (desired) output as possible. Each of these neurons contributes some error to the final output. How do you reduce the error?
We reduce the values/weights of the neurons that contribute more to the error, and this happens while traveling back through the neurons of the neural network and finding where the error lies. This process is known as "Backward Propagation".
To reduce the number of iterations needed to minimize the error, neural networks use a common algorithm known as "Gradient Descent", which helps optimize the task quickly and efficiently.
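As a minimal illustration of the idea behind gradient descent (on a toy one-parameter problem, not the full network), we repeatedly step a weight against the gradient of a squared error; the learning rate and data values below are arbitrary assumptions:

```python
# Toy gradient descent: fit w so that w * x approximates y
x, y = 2.0, 8.0    # single training example; the ideal w is 4
w = 0.0            # initial guess for the weight
lr = 0.1           # learning rate (assumed value)

for _ in range(100):
    error = w * x - y        # prediction error
    grad = 2 * error * x     # derivative of error**2 with respect to w
    w -= lr * grad           # step against the gradient

print(round(w, 3))  # converges near 4.0
```

The same loop, generalized to many weights across layers, is exactly what the back-propagation step of a neural network performs.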
That's it - this is how a neural network works! I know this is a very simple representation, but it should help you understand things in a simple manner.
Multi-Layer Perceptron and its basics
Just like atoms form the basis of any material on earth, the basic building block of a neural network is a perceptron. So, what is a perceptron?
A perceptron can be understood as anything that takes multiple inputs and produces one output. For example, look at the image below.
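This idea can be sketched in a few lines of Python: a weighted sum of the inputs passed through a step function. The weights and threshold below are illustrative choices (here, ones that make the perceptron behave like a logical AND gate):

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum of the inputs; fire (output 1) if it crosses the threshold
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: with these (assumed) parameters, the perceptron acts as an AND gate
weights = [1.0, 1.0]
threshold = 1.5
print(perceptron([1, 1], weights, threshold))  # 1
print(perceptron([1, 0], weights, threshold))  # 0
```

Changing the weights and threshold changes what function the perceptron computes; learning those values automatically is the subject of the sections that follow.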