Build a Simple Neural Network with Keras: Step-by-Step Guide
In the previous lesson, we explored backpropagation and optimization techniques like Gradient Descent and Adam. These methods help neural networks learn by adjusting weights to minimize errors. Now, we will apply these concepts practically by building a simple neural network using Keras, a powerful deep learning library. This lesson will guide you through defining, compiling, and training a neural network model step by step.
Use Case: Predicting House Prices
Let me share a use case from a project I worked on: I needed to predict house prices from features such as size, location, and number of rooms. To solve it, I built a simple neural network in Keras that took these features as input and predicted the price. This example will help you understand how to implement a neural network for similar tasks.
Step 1: Defining the Neural Network Model
The first step is to define the model architecture. In Keras, we use the Sequential class to create a linear stack of layers. For our house price prediction model, we start with an input layer, followed by hidden layers, and end with an output layer. Here’s how you can define a simple model:
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(10, input_dim=8, activation='relu')) # First hidden layer; expects 8 input features
model.add(Dense(8, activation='relu')) # Second hidden layer
model.add(Dense(1, activation='linear')) # Output layer: one continuous value for regression
In this example, Dense creates fully connected layers. The input_dim parameter tells the first layer how many input features to expect, and activation sets the activation function: relu introduces non-linearity in the hidden layers, while linear lets the output take any continuous value, as a regression target requires.
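Before moving on, it is worth sanity-checking the architecture. A quick way is model.summary(), which prints each layer's output shape and parameter count; the counts in the comments below are worked out by hand from the layer sizes above.
model.summary()
# Expected trainable parameters for this architecture:
#   Dense(10): 8 inputs * 10 units + 10 biases = 90
#   Dense(8): 10 inputs * 8 units + 8 biases = 88
#   Dense(1): 8 inputs * 1 unit + 1 bias = 9
#   Total: 187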
Step 2: Compiling the Model
Once the model is defined, the next step is to compile it. Compiling involves setting the optimizer, loss function, and evaluation metric. For our house price prediction model, we use the Adam optimizer, mean squared error (MSE) as the loss function, and mean absolute error (MAE) as the evaluation metric.
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_absolute_error'])
The Adam optimizer is a popular choice because it maintains a separate adaptive learning rate for each weight, based on running estimates of the gradient's first and second moments. MSE is the standard loss for regression, and MAE reports the average prediction error in the same units as the target (here, the price), which makes it easy to interpret.
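If the defaults underperform, you can pass an optimizer object instead of the string and tune its hyperparameters. A minimal sketch; the learning rate shown is simply Adam's default, included to make the knob explicit rather than to recommend a value:
from keras.optimizers import Adam
model.compile(
    optimizer=Adam(learning_rate=0.001),  # lower this if training is unstable
    loss='mean_squared_error',
    metrics=['mean_absolute_error']
)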
Step 3: Training the Model
After compiling, we train the model using the fit method. Training repeatedly feeds the data through the network, letting the optimizer adjust the weights to reduce the loss. For our example, we use a dataset of house features and prices. Here’s how you can train the model:
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)
In this code, X_train and y_train are the training features and labels. The epochs parameter sets how many times the model sees the entire dataset, batch_size sets how many samples are processed before the weights are updated, and validation_split reserves a portion of the training data (here 20%) for validation.
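The call above assumes X_train and y_train already exist. As a sketch of one way to produce them (the file name houses.csv and the price column are hypothetical placeholders for your own data), you could load a CSV, split off a test set, and standardize the features, since neural networks train more reliably on scaled inputs:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Hypothetical dataset: 8 feature columns plus a 'price' target column
data = pd.read_csv('houses.csv')
X = data.drop(columns=['price']).values
y = data['price'].values
# Hold out a test set for the final evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit the scaler on training data only to avoid leaking test-set statistics
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)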
Step 4: Interpreting Results
Once the model is trained, we can evaluate its performance. The history object stores the training and validation loss and metrics. We can plot these values to understand how the model improves over epochs.
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
This plot helps us spot overfitting and underfitting. If the training loss keeps falling while the validation loss plateaus or starts rising, the model is likely overfitting, and we may need to adjust it, for example by shrinking the network, adding regularization, or stopping training earlier.
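One common adjustment is early stopping, which halts training once the validation loss stops improving and restores the best weights seen. A minimal sketch using Keras’s built-in callback (the final evaluation assumes the X_test and y_test from the earlier data-preparation sketch):
from keras.callbacks import EarlyStopping
# Stop if validation loss fails to improve for 5 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=200, batch_size=32,
                    validation_split=0.2, callbacks=[early_stop])  # high epoch ceiling; the callback picks the real endpoint
# Final check on held-out data
loss, mae = model.evaluate(X_test, y_test)
print(f'Test MAE: {mae:.2f}')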
Conclusion
In this lesson, we built a simple neural network using Keras to predict house prices. We defined the model architecture, compiled it with an optimizer and loss function, and trained it on a dataset. By interpreting the results, we can improve the model’s performance. In the next lesson, we will dive deeper into training, validation, and testing data to ensure our model generalizes well to unseen data.