ML-LAB-VI-SEM

Exercise 09

Building an Artificial Neural Network (ANN) using Backpropagation

Aim

To build a basic Artificial Neural Network (ANN) in Python and train it using the Backpropagation algorithm.

Procedure/Program

import numpy as np
import matplotlib.pyplot as plt

# sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# derivative of the sigmoid, written in terms of the sigmoid output:
# if s = sigmoid(z) then ds/dz = s * (1 - s), so x here must already be
# an activation produced by sigmoid()
def sigmoid_derivative(x):
    return x * (1 - x)

# training dataset: Logical XOR Problem
X = np.array([
  [0, 0],
  [0, 1],
  [1, 0],
  [1, 1]])

y = np.array([[0], [1], [1], [0]])

# initialize neural network parameters
input_layer_neurons = X.shape[1]  # 2 features (X1 and X2)
hidden_layer_neurons = 4          # Number of neurons in the hidden layer
output_layer_neurons = 1          # Output layer (binary classification)

# initialize weights and biases
np.random.seed(42)
hidden_weights = np.random.rand(input_layer_neurons, hidden_layer_neurons)
hidden_bias = np.random.rand(1, hidden_layer_neurons)
output_weights = np.random.rand(hidden_layer_neurons, output_layer_neurons)
output_bias = np.random.rand(1, output_layer_neurons)

# training hyperparameters: learning rate and number of epochs
learning_rate = 0.1
epochs = 10000

# list to store error history for plotting
error_history = []

# training the ANN using Backpropagation
for epoch in range(epochs):
    # forward propagation
    hidden_layer_input = np.dot(X, hidden_weights) + hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_input)

    output_layer_input = np.dot(hidden_layer_output, output_weights) + output_bias
    predicted_output = sigmoid(output_layer_input)

    # backpropagation: error and delta at the output layer
    # delta = (target - prediction) * sigmoid'(net), where sigmoid' is computed
    # from the activation itself via sigmoid_derivative()
    error = y - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)

    # propagate the delta back to the hidden layer through the output weights
    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

    # gradient descent update: weight += learning_rate * activation^T . delta
    # (the "+=" is correct because error = y - prediction, so the delta already
    #  carries the negative-gradient sign)
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * learning_rate
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * learning_rate
    hidden_weights += X.T.dot(d_hidden_layer) * learning_rate
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * learning_rate

    # record and print the mean absolute error every 1000 epochs
    if epoch % 1000 == 0:
        error_history.append(np.mean(np.abs(error)))
        print(f"Epoch {epoch} | Error: {np.mean(np.abs(error))}")

# final predictions after training
print("\nFinal Predicted Output:")
print(predicted_output)

# plotting the error curve
plt.plot(range(0, epochs, 1000), error_history)  # only plot every 1000th epoch
plt.xlabel("Epochs")
plt.ylabel("Mean Absolute Error")
plt.title("Error Curve during Backpropagation Training")
plt.show()

Output/Explanation

This program implements a simple feed-forward neural network (2 input features, 4 hidden neurons, 1 output) and trains it with backpropagation on the XOR problem, a binary classification task that a single-layer network cannot solve because the classes are not linearly separable. Each epoch performs a forward pass, computes the output error, propagates it back to obtain the deltas for every layer, and updates the weights and biases by gradient descent. The printed mean absolute error should decrease as training progresses, and the final predicted outputs should approach the XOR targets [0, 1, 1, 0].
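
The continuous sigmoid outputs can be turned into hard class labels by thresholding at 0.5. A minimal sketch, assuming it is appended after the program above so that X, y and predicted_output are still in scope:

# threshold the sigmoid outputs at 0.5 to obtain hard 0/1 class labels
predicted_labels = (predicted_output > 0.5).astype(int)

print("Input -> predicted label (target)")
for features, label, target in zip(X, predicted_labels, y):
    print(f"{features} -> {label[0]} ({target[0]})")

If the predicted labels do not match the targets, training for more epochs or raising the learning rate usually helps, since XOR can converge slowly with a small learning rate.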