In this post, we will see how to implement a feedforward neural network from scratch in Python. This is a follow-up to my previous post on feedforward neural networks.
Feedforward neural networks are also known as Multi-layered Networks of Neurons (MLN). These models are called feedforward because information travels only forward through the network: through the input nodes, then through the hidden layers (one or many), and finally through the output nodes.
Generic Network with Connections
Traditional models such as McCulloch-Pitts, Perceptron and Sigmoid neurons are limited in capacity to linear functions. To handle a complex non-linear decision boundary between input and output, we use a Multi-layered Network of Neurons. To understand the feedforward neural network learning algorithm and the computations present in the network, kindly refer to my previous post on feedforward neural networks.
Deep Learning: Feedforward Neural Networks Explained
In the coding section, we will be covering the following topics.
To generate data randomly, we will use make_blobs to generate blobs of points with a Gaussian distribution. I have generated 1000 data points in 2D space with four blobs (centers=4), as a multi-class classification problem. Each data point has two input features and a class label of 0, 1, 2 or 3. The code in Lines 9-10 helps to visualize the data using a scatter plot. We can see that there are 4 centers present and that the data is (almost) linearly separable.
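Here is a minimal sketch of how that data generation and scatter plot might look, assuming make_blobs from scikit-learn; the random_state and the custom colormap are my assumptions, not the author's original gist.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
from sklearn.datasets import make_blobs

# 1000 points in 2D with 4 centers (seed chosen only for reproducibility)
data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)  # (1000, 2) (1000,)

# custom colormap so each class gets its own color in the scatter plot
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red", "yellow", "green", "blue"])
plt.scatter(data[:, 0], data[:, 1], c=labels, cmap=my_cmap)
plt.show()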
Multi-Class Data
In the above plot, I was able to represent 3 dimensions (two input features, with the class label shown as color) using a simple scatter plot. Note that the make_blobs() function generates linearly separable data, but we need non-linearly separable data for our binary classification problem.
labels_orig = labels
labels = np.mod(labels_orig, 2)
One way to convert the 4 classes into a binary classification problem is to take the remainder of the 4 class labels when divided by 2, so that the new labels are 0 and 1.
Binary Class Data
From the plot, we can see that the centers of the blobs have merged, so we now have a binary classification problem where the decision boundary is not linear. Once the data is ready, I have used the train_test_split function to split it into training and validation sets in a 90:10 ratio.
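A minimal sketch of that split, assuming scikit-learn's train_test_split; the stratify and random_state arguments are my assumptions.

from sklearn.model_selection import train_test_split

# 90:10 train/validation split; stratify keeps the class balance (an assumption)
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, test_size=0.1,
                                                  stratify=labels, random_state=0)
print(X_train.shape, X_val.shape)  # (900, 2) (100, 2)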
Before we start training the sigmoid neuron on the data, we will build our model inside a class called SigmoidNeuron.
<a href="//medium.com/media/8c03e8051322c3f383933b6de3311fbc/href">//medium.com/media/8c03e8051322c3f383933b6de3311fbc/href</a> In the class SigmoidNeuron we have 9 functions, I will walk you through these functions one by one and explain what they are doing.def __init__(self):
self.w = None
self.b = None
The __init__ function (constructor) initializes the parameters of the sigmoid neuron, the weights w and the bias b, to None.
#forward pass
def perceptron(self, x):
return np.dot(x, self.w.T) + self.b
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
Next, we will define two functions, perceptron and sigmoid, which characterize the forward pass. In the case of a sigmoid neuron, the forward pass involves two steps:
perceptron — Computes the dot product between the input x & weights w and adds the bias b
sigmoid — Applies the logistic (sigmoid) function to the perceptron output to squash it between 0 and 1
The next four functions characterize the gradient computation: one pair of functions for the gradients of the weights w and the bias b under mean squared error loss, and another pair under cross-entropy loss.
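As a rough sketch of what those four gradient functions might look like, derived from the standard per-sample gradients of MSE and binary cross-entropy for a sigmoid neuron (the method names are illustrative assumptions, not necessarily the author's):

def grad_w_mse(self, x, y):
    # gradient of 1/2*(y_pred - y)^2 w.r.t. w
    y_pred = self.sigmoid(self.perceptron(x))
    return (y_pred - y) * y_pred * (1 - y_pred) * x

def grad_b_mse(self, x, y):
    y_pred = self.sigmoid(self.perceptron(x))
    return (y_pred - y) * y_pred * (1 - y_pred)

def grad_w_ce(self, x, y):
    # for cross-entropy the sigmoid derivative terms cancel
    y_pred = self.sigmoid(self.perceptron(x))
    return (y_pred - y) * x

def grad_b_ce(self, x, y):
    y_pred = self.sigmoid(self.perceptron(x))
    return y_pred - y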
def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, loss_fn="mse", display_loss=False):
.....
return
Next, we define the fit method, which accepts a few parameters:
X — Inputs
Y — Labels
epochs — Number of epochs we will allow our algorithm to iterate over the data, default value set to 1
learning_rate — The magnitude of change for our weights during each step through our training data, default value set to 1
initialise — Whether to randomly initialize the parameters of the model. If set to True, the weights will be initialized; you can set it to False if you want to retrain an already trained model.
loss_fn — To select the loss function for the algorithm to update the parameters. It can be “mse” or “ce”
display_loss — Boolean Variable indicating whether to show the decrease of loss for each epoch
In the fit method, we iterate through the data passed in through the parameters X and Y and compute the update values for the parameters using either mean squared error loss or cross-entropy loss. Once we have the update values, we update the weights and bias terms (Line 49-62).
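A hedged sketch of how such a fit loop could accumulate and apply those updates, reusing the gradient-method names from the sketch above (the exact gist may differ):

def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, loss_fn="mse", display_loss=False):
    if initialise:
        self.w = np.random.randn(1, X.shape[1])
        self.b = 0
    for epoch in range(epochs):
        dw, db = 0, 0
        for x, y in zip(X, Y):
            if loss_fn == "mse":
                dw += self.grad_w_mse(x, y)
                db += self.grad_b_mse(x, y)
            elif loss_fn == "ce":
                dw += self.grad_w_ce(x, y)
                db += self.grad_b_ce(x, y)
        # one gradient-descent step per epoch on the accumulated gradients
        self.w -= learning_rate * dw
        self.b -= learning_rate * db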
def predict(self, X):
Now we define our predict function, which takes inputs X as an argument and expects it to be a numpy array. In the predict function, we compute the forward pass for each input using the trained model and return a numpy array containing the predicted value for each input data point.
<a href="//medium.com/media/199189875b31f27959206942ce7c2477/href">//medium.com/media/199189875b31f27959206942ce7c2477/href</a>
Now we will train the sigmoid neuron we created on our data. First, we instantiate the SigmoidNeuron class and then call the fit method on the training data with 1000 epochs and the learning rate set to 1 (these values are arbitrary, not the optimal values for this data; you can play around with them to find the best number of epochs and learning rate). By default, the loss function is set to mean squared error loss, but you can change it to cross-entropy loss as well.
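For reference, the training call might look something like this (a sketch; display_loss is assumed to plot the loss per epoch):

sn = SigmoidNeuron()
sn.fit(X_train, Y_train, epochs=1000, learning_rate=1, loss_fn="mse", display_loss=True)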
Sigmoid Neuron Loss Variation
As you can see, the loss of the sigmoid neuron is decreasing, but there are a lot of oscillations, possibly because of the large learning rate. You can decrease the learning rate and check the loss variation. Once the model is trained, we can make predictions on the validation data and binarise those predictions by taking 0.5 as the threshold. We can then compute the training and validation accuracy of the model to evaluate its performance and check for any scope of improvement by changing the number of epochs or the learning rate.
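One possible way to binarise the predictions and compute the accuracies, assuming scikit-learn's accuracy_score; the 0.5 threshold comes from the text, the variable names are assumptions.

from sklearn.metrics import accuracy_score

Y_pred_train = sn.predict(X_train)
Y_pred_binarised_train = (Y_pred_train >= 0.5).astype("int").ravel()
Y_pred_val = sn.predict(X_val)
Y_pred_binarised_val = (Y_pred_val >= 0.5).astype("int").ravel()

print("Training accuracy:", accuracy_score(Y_pred_binarised_train, Y_train))
print("Validation accuracy:", accuracy_score(Y_pred_binarised_val, Y_val))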
#visualizing the results
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_binarised_train, cmap=my_cmap, s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2))
plt.show()
To see which of the points in the training set the model is predicting correctly, we will use the scatter plot function from matplotlib.pyplot. The function takes the first and second features as inputs; for the color I have used Y_pred_binarised_train and defined a custom cmap for visualization. As you can see, the size of each point is different in the below plot.
4D Scatter Plot
The size of each point in the plot is given by the formula,
s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2)
The formula takes the absolute difference between the predicted value and the actual value.
4D Scatter Plot
In this plot, we are able to represent 4 dimensions: two input features, color to indicate the class label, and the size of each point to indicate whether it was predicted correctly or not. The important takeaway from the plot is that the sigmoid neuron is not able to handle non-linearly separable data. If you want to learn the sigmoid neuron learning algorithm in detail with the math, check out my previous post.
Simple Feedforward Network
Similar to the Sigmoid Neuron implementation, we will write our neural network in a class called FirstFFNetwork.
<a href="//medium.com/media/a2c48bf1d7df37d4aa7fa29f16b35dc4/href">//medium.com/media/a2c48bf1d7df37d4aa7fa29f16b35dc4/href</a> In the class FirstFFNetworkwe have 6 functions, we will go over these functions one by one.def __init__(self):
.....
The __init__ function initializes all the parameters of the network, including the weights and biases. Unlike the sigmoid neuron, where we had only two parameters, this network has 9 parameters to be initialized. All 6 weights are initialized randomly and the 3 biases are set to zero.
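A sketch of that initialization, assuming scalar weights and biases as used in the forward pass shown below:

def __init__(self):
    # 6 weights drawn from a standard normal, 3 biases set to zero
    self.w1 = np.random.randn()
    self.w2 = np.random.randn()
    self.w3 = np.random.randn()
    self.w4 = np.random.randn()
    self.w5 = np.random.randn()
    self.w6 = np.random.randn()
    self.b1 = 0
    self.b2 = 0
    self.b3 = 0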
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
Next, we define the sigmoid function used for post-activation for each of the neurons in the network.
def forward_pass(self, x):
#forward pass - preactivation and activation
self.x1, self.x2 = x
self.a1 = self.w1*self.x1 + self.w2*self.x2 + self.b1
self.h1 = self.sigmoid(self.a1)
self.a2 = self.w3*self.x1 + self.w4*self.x2 + self.b2
self.h2 = self.sigmoid(self.a2)
self.a3 = self.w5*self.h1 + self.w6*self.h2 + self.b3
self.h3 = self.sigmoid(self.a3)
return self.h3
Now we have the forward pass function, which takes an input x and computes the output. First, I have initialized two local variables, x1 and x2, and assigned them the two features of the input x.
For each of these 3 neurons, two things will happen:
Pre-activation, represented by 'a': a weighted sum of the inputs plus the bias.
Activation, represented by 'h': the sigmoid function applied to the pre-activation.
The pre-activation for the first neuron is given by,
a₁ = w₁ * x₁ + w₂ * x₂ + b₁
To get the post-activation value for the first neuron we simply apply the logistic function to the output of pre-activation a₁.
h₁ = sigmoid(a₁)
Repeat the same process for the second neuron to get a₂ and h₂.
The outputs of the two neurons present in the first hidden layer will act as the input to the third neuron. The pre-activation for the third neuron is given by,
a₃ = w₅ * h₁ + w₆ * h₂ + b₃
and applying the sigmoid on a₃ will give the final predicted output.
def grad(self, x, y):
#back propagation
......
Next, we have the grad function, which takes inputs x and y as arguments and computes the forward pass. Based on the forward pass, it computes the partial derivatives of the loss function (mean squared error loss in this case) with respect to the weights and biases.
Note: In this post, I am not explaining how we arrive at these partial derivatives. Just consider this function as a black box for now; in my next article I will explain how we compute these partial derivatives in backpropagation.
def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, display_loss=False):
......
Then, we have the fit function, similar to the sigmoid neuron. In this function, we iterate through each data point, compute the partial derivatives by calling the grad function, and accumulate those values in a new variable for each parameter (Line 63-75). Then, we go ahead and update the values of all the parameters (Line 77-87). We also have the display_loss flag; if set to True, it will display a plot of how the network loss varies across the epochs.
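A rough sketch of that accumulate-then-update step; the names of the stored gradients and the averaging over the number of samples are my assumptions, not the exact gist.

# inside fit, for each epoch (sketch)
dw1 = dw2 = dw3 = dw4 = dw5 = dw6 = db1 = db2 = db3 = 0
for x, y in zip(X, Y):
    self.grad(x, y)  # assumed to store per-parameter gradients such as self.dw1, self.db1, ...
    dw1 += self.dw1; dw2 += self.dw2; dw3 += self.dw3
    dw4 += self.dw4; dw5 += self.dw5; dw6 += self.dw6
    db1 += self.db1; db2 += self.db2; db3 += self.db3

m = X.shape[0]  # averaging the accumulated gradients is an assumption
self.w1 -= learning_rate * dw1 / m
self.w2 -= learning_rate * dw2 / m
self.w3 -= learning_rate * dw3 / m
self.w4 -= learning_rate * dw4 / m
self.w5 -= learning_rate * dw5 / m
self.w6 -= learning_rate * dw6 / m
self.b1 -= learning_rate * db1 / m
self.b2 -= learning_rate * db2 / m
self.b3 -= learning_rate * db3 / m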
def predict(self, X):
#predicting the results on unseen data
.....
Finally, we have the predict function, which takes a set of inputs and computes the predicted value for each one by calling the forward_pass function on each input.
FF Network Loss
#visualize the predictions
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_binarised_train, cmap=my_cmap, s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2))
plt.show()
To get a better idea about the performance of the neural network, we will use the same 4D visualization plot that we used for the sigmoid neuron and compare the two models.
Single Sigmoid Neuron (Left) & Neural Network (Right)
As you can see, most of the points are classified correctly by the neural network. The key takeaway is that just by combining three sigmoid neurons we are able to solve the problem of non-linearly separable data.
Note: In this case, I am considering the network for binary classification only.
Generic Feedforward Network
Before we start to write code for the generic neural network, let us understand the format of indices used to represent the weights and biases associated with a particular neuron.
W₁₁₁ — Weight associated with the first neuron in the first hidden layer, connected to the first input.
W₁₁₂ — Weight associated with the first neuron in the first hidden layer, connected to the second input.
b₁₁ — Bias associated with the first neuron in the first hidden layer.
b₁₂ — Bias associated with the second neuron in the first hidden layer.
In general, the notation is:
W(layer number)(neuron number in the layer)(input number)
b(layer number)(neuron number in the layer)
a(layer number)(neuron number in the layer)
def __init__(self, n_inputs, hidden_sizes=[2]):
#initialize the inputs
self.nx = n_inputs
self.ny = 1 #one final neuron for binary classification.
self.nh = len(hidden_sizes)
self.sizes = [self.nx] + hidden_sizes + [self.ny]
.....
The __init__ function takes a few arguments,
n_inputs — Number of inputs going into the network.
hidden_sizes — Expects a list of integers representing the number of neurons in each hidden layer.
In this function, we initialize two dictionaries, W and B, to store the randomly initialized weights and biases for each layer in the network.
def forward_pass(self, x):
self.A = {}
self.H = {}
self.H[0] = x.reshape(1, -1)
....
In the forward_pass function, we initialize two dictionaries, A and H. Instead of representing the input as X, I represent it as H₀ so that it can be stored in the post-activation dictionary H. Then, we loop through all the layers, compute the pre-activation and post-activation values, and store them in their respective dictionaries. The post-activation output of the final layer is the predicted value of our network, and the function returns it so that we can use it to calculate the loss of the network.
Remember that in the previous class, FirstFFNetwork, we hard-coded the computation of the pre-activation and post-activation for each neuron separately, but this is not the case in our generic class.
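For the binary-classification version with a sigmoid output, the rest of the loop might look like this (a sketch consistent with the dictionaries initialized above):

# continuation of forward_pass (sketch): every layer, including the output, uses sigmoid here
for i in range(self.nh + 1):
    self.A[i+1] = np.matmul(self.H[i], self.W[i+1]) + self.B[i+1]
    self.H[i+1] = self.sigmoid(self.A[i+1])
return self.H[self.nh + 1]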
def grad_sigmoid(self, x):
return x*(1-x)
def grad(self, x, y):
self.forward_pass(x)
.....
Next, we define two functions which help compute the partial derivatives of the loss function with respect to the parameters.
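The grad function is treated as a black box in this article; purely as a hedged illustration, backpropagation for the sigmoid-plus-MSE version could look roughly like this (the dictionary names dW, dB, dH, dA are assumptions):

def grad(self, x, y):
    self.forward_pass(x)
    self.dW, self.dB, self.dH, self.dA = {}, {}, {}, {}
    L = self.nh + 1
    # derivative of the squared error w.r.t. the final pre-activation (sigmoid output layer)
    self.dA[L] = (self.H[L] - y) * self.grad_sigmoid(self.H[L])
    for k in range(L, 0, -1):
        self.dW[k] = np.matmul(self.H[k-1].T, self.dA[k])
        self.dB[k] = self.dA[k]
        if k > 1:
            self.dH[k-1] = np.matmul(self.dA[k], self.W[k].T)
            self.dA[k-1] = self.dH[k-1] * self.grad_sigmoid(self.H[k-1])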
def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, display_loss=False):
# initialise w, b
if initialise:
for i in range(self.nh+1):
self.W[i+1] = np.random.randn(self.sizes[i], self.sizes[i+1])
self.B[i+1] = np.zeros((1, self.sizes[i+1]))
Then, we define our fit function, which is essentially the same as before, but here we loop through each input and update the weights and biases in a generalized fashion rather than updating each parameter individually.
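A sketch of that generalized loop, assuming grad stores its results in dictionaries named dW and dB as in the sketch above; averaging over the number of samples is also an assumption.

for epoch in range(epochs):
    dW = {i+1: np.zeros((self.sizes[i], self.sizes[i+1])) for i in range(self.nh+1)}
    dB = {i+1: np.zeros((1, self.sizes[i+1])) for i in range(self.nh+1)}
    for x, y in zip(X, Y):
        self.grad(x, y)
        for i in range(self.nh+1):
            dW[i+1] += self.dW[i+1]
            dB[i+1] += self.dB[i+1]
    m = X.shape[0]
    for i in range(self.nh+1):
        self.W[i+1] -= learning_rate * dW[i+1] / m
        self.B[i+1] -= learning_rate * dB[i+1] / m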
def predict(self, X):
#predicting the results on unseen data
.....
Finally, we have the predict function, which takes a set of inputs and computes the predicted value for each one by calling the forward_pass function on each input.
From the plot, we can see that the loss falls a bit more slowly than for the previous network because in this case we have two hidden layers, with 2 and 3 neurons respectively. Because it is a larger network with more parameters, the learning algorithm takes more time to learn all the parameters and propagate the loss through the network.
#visualize the predictions
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_binarised_train, cmap=my_cmap, s=15*(np.abs(Y_pred_binarised_train-Y_train)+.2))
plt.show()
Again, we will use the same 4D plot to visualize the predictions of our generic network. Remember that small points indicate observations that are correctly classified and large points indicate observations that are misclassified.
You can play with the number of epochs and the learning rate and see if you can push the error lower than the current value. Also, you can create a much deeper network with many neurons in each layer and see how that network performs.
def forward_pass(self, x):
self.A = {}
self.H = {}
self.H[0] = x.reshape(1, -1)
for i in range(self.nh):
self.A[i+1] = np.matmul(self.H[i], self.W[i+1]) + self.B[i+1]
self.H[i+1] = self.sigmoid(self.A[i+1])
self.A[self.nh+1] = np.matmul(self.H[self.nh], self.W[self.nh+1]) + self.B[self.nh+1]
self.H[self.nh+1] = self.softmax(self.A[self.nh+1])
return self.H[self.nh+1]
Since we have a multi-class output from the network, we use softmax activation instead of sigmoid activation at the output layer. In Lines 29-30 we use the softmax layer to compute the forward pass at the output layer.
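A plausible softmax helper is sketched below; the numerical-stability shift by the row-wise maximum is my addition and not necessarily in the original gist.

def softmax(self, x):
    # subtract the row-wise max for numerical stability, then normalize to probabilities
    exps = np.exp(x - np.max(x, axis=1, keepdims=True))
    return exps / np.sum(exps, axis=1, keepdims=True)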
def cross_entropy(self,label,pred):
yl=np.multiply(pred,label)
yl=yl[yl!=0]
yl=-np.log(yl)
yl=np.mean(yl)
return yl
Next, we have our loss function. In this case, instead of mean squared error, we use the cross-entropy loss function. Cross-entropy measures the difference between the predicted probability distribution and the actual probability distribution, and we use it as the loss of the network.
Again, we will use the same 4D plot to visualize the predictions of our generic network. To plot the graph, we need a single final predicted label from the network; to get it, I apply the argmax function to pick the label with the highest probability. Using that label, we can plot our 4D graph and compare it with the scatter plot of the actual input data.
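For example, assuming the multi-class network instance is called ffsn_multi (an assumed name), the label extraction might look like:

Y_pred_train = ffsn_multi.predict(X_train)        # softmax probabilities, one row per input
Y_pred_train = np.argmax(Y_pred_train, axis=1)    # pick the class with the highest probability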
Original Labels (Left) & Predicted Labels (Right)
There you have it: we have successfully built our generic neural network for multi-class classification from scratch.
LEARN BY CODING
In this article, we used the make_blobs function to generate toy data, and we saw that make_blobs generates linearly separable data. If you want to generate some complex non-linearly separable data to train your feedforward neural network, you can use the make_moons function from the sklearn package.
Make Moons Function Data
The make_moons function generates two interleaving half-circles of data, which essentially gives you non-linearly separable data. You can also add some Gaussian noise to make it harder for the neural network to arrive at a non-linear decision boundary. Using our generic neural network class, you can create a much deeper network with more neurons in each layer (and a different number of neurons per layer) and play with the learning rate and the number of epochs to check under which parameters the network arrives at the best decision boundary possible. The entire code discussed in the article is present in this GitHub repository. Feel free to fork it or download it.
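A quick sketch of generating such data with make_moons; the noise level and seed are assumptions.

from sklearn.datasets import make_moons

# two interleaving half-circles with some Gaussian noise added
data, labels = make_moons(n_samples=1000, noise=0.1, random_state=0)
plt.scatter(data[:, 0], data[:, 1], c=labels)
plt.show()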
In this post, we built a simple neural network from scratch and saw that it performs well on the non-linearly separable data that our sigmoid neuron could not handle. Then we saw how to write a generic class which can take 'n' inputs and 'L' hidden layers (with many neurons per layer) for binary classification using mean squared error as the loss function. After that, we extended our generic class to handle multi-class classification using softmax and cross-entropy as the loss function, and saw that it performs reasonably well.
Recommended Reading
In my next post, I will explain backpropagation in detail along with some of the math, so make sure you follow me on Medium to get notified as soon as it drops. Until then, peace :)
NK is an intern in the HSBC Analytics division. He is passionate about deep learning and AI. Connect with me or follow me for updates about upcoming articles on deep learning and artificial intelligence.