Stochastic Gradient Descent in TensorFlow


Machine learning is all about recognizing patterns by reducing the gap between predicted outcomes and actual results. At the heart of this process is an optimization method called “stochastic gradient descent” (SGD), an algorithm whose roots stretch back to the early 1950s and the work of mathematicians Herbert Robbins and Sutton Monro.

So, what is stochastic gradient descent? It is a practical twist on the traditional gradient descent method. The key difference between stochastic gradient descent and standard (batch) gradient descent lies in how the gradient is computed and how often the parameters are updated: batch gradient descent uses the entire dataset for every update, while SGD uses a single randomly chosen example (or a small mini-batch). This difference plays a vital role in how quickly a model can be trained.
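To make the difference concrete, here is a minimal NumPy sketch (illustrative only, with made-up toy data and hypothetical function names, not taken from the implementation later in this article). Both functions compute a gradient of the mean squared error for a linear model y ≈ a·x + b; one averages over the whole dataset, the other looks at a single random sample.

import numpy as np

rng = np.random.default_rng(0)
X = 2 * rng.random(100)                    # toy inputs
y = 4 + 3 * X + rng.standard_normal(100)   # noisy targets

def batch_gradient(a, b):
    # Classic (batch) gradient descent: average the gradient over the ENTIRE dataset.
    err = a * X + b - y
    return (2 * err * X).mean(), (2 * err).mean()

def stochastic_gradient(a, b):
    # SGD: gradient of the squared error of ONE randomly chosen sample.
    i = rng.integers(len(X))
    err = a * X[i] + b - y[i]
    return 2 * err * X[i], 2 * err

print(batch_gradient(0.0, 0.0))       # one exact gradient; costs a full pass over the data
print(stochastic_gradient(0.0, 0.0))  # one noisy estimate; costs a single sample

On average the two point in the same direction; the stochastic estimate is simply far cheaper to compute, at the cost of extra noise.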

What is Stochastic Gradient Descent?

At a high level, Stochastic Gradient Descent (SGD) is an optimization algorithm used to minimize a function (typically a loss function), commonly employed in machine learning and deep learning for training models. The primary goal is to adjust the model’s parameters iteratively to reduce the error between the predicted outcomes and the actual results. Here’s a high-level description:

  1. Randomness: Unlike traditional gradient descent, which calculates the gradient using the entire dataset, SGD randomly selects one data point (or, in practice, a small mini-batch) from the dataset at each iteration to compute the gradient. This “stochastic” nature leads to faster but noisier updates.
  2. Iterative Updates: For every randomly selected data point, the model’s parameters (like weights in a neural network) are updated in the direction that reduces the error for that particular data point.
  3. Convergence: Due to its stochastic nature, the path taken by SGD towards the optimal solution can be somewhat erratic, leading to oscillations. However, on average, it moves in the right direction and often converges faster than the traditional method, especially for large datasets.
  4. Learning Rate: The size of the step taken at each update is controlled by a parameter called the learning rate. Tuning it properly is essential; too large a learning rate can cause the algorithm to overshoot the optimal solution, while too small a learning rate makes convergence slow. (A minimal sketch of the update rule follows this list.)
  5. Advantages: SGD’s primary advantage is its speed, especially for large datasets. Since it updates the parameters using only one data point at a time, it can start improving the model right away and doesn’t need to wait to see the entire dataset.
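To make points 2 and 4 above concrete, here is a minimal pure-NumPy sketch of the SGD update rule (illustrative only, with toy data in the same spirit as the TensorFlow example further down): each iteration picks one random sample and moves the parameters a small step against that sample's gradient, with the step size set by the learning rate.

import numpy as np

rng = np.random.default_rng(0)
X = 2 * rng.random(100)                      # toy inputs
y = 4 + 3 * X + rng.standard_normal(100)     # noisy targets; true slope 3, intercept 4

a, b, lr = 0.0, 0.0, 0.05                    # parameters and learning rate
for step in range(2000):
    i = rng.integers(len(X))                 # pick ONE random sample (the "stochastic" part)
    err = a * X[i] + b - y[i]                # prediction error on that sample
    a -= lr * 2 * err * X[i]                 # step against the gradient of its squared error
    b -= lr * 2 * err
print(a, b)                                  # noisy, but roughly 3 and 4

Each individual step is noisy, but averaged over many steps the parameters drift toward the minimum, which is exactly the erratic-but-convergent behavior described in point 3.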

In essence, Stochastic Gradient Descent is a faster but noisier version of gradient descent, leveraging the power of randomness to quickly find an approximate solution to optimization problems in machine learning.

Implementing SGD with TensorFlow

import tensorflow as tf
import numpy as np
# Generate synthetic data
np.random.seed(42)
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
# Hyperparameters
learning_rate = 0.1
n_epochs = 50
batch_size = 1  # Since we're doing SGD
# Convert data to tf.data.Dataset for easy batching
dataset = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(buffer_size=100).batch(batch_size)
# Model variables (weights and bias for linear regression)
a = tf.Variable(np.random.randn(), dtype=tf.float64)
b = tf.Variable(np.random.randn(), dtype=tf.float64)
# Linear regression model
def model(X):
    return a * X + b
# Loss function (Mean Squared Error)
def loss_fn(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))
# Training loop
for epoch in range(n_epochs):
    for X_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            y_pred = model(X_batch)
            loss = loss_fn(y_batch, y_pred)
            
        gradients = tape.gradient(loss, [a, b])
        
        # Update weights and bias using SGD
        a.assign_sub(learning_rate * gradients[0])
        b.assign_sub(learning_rate * gradients[1])
        
    if epoch % 10 == 0:
        print(f"Epoch {epoch}, Loss: {loss.numpy()}")
print(f"Final parameters: a = {a.numpy()}, b = {b.numpy()}")

Explanation:

  1. We first create a simple linear dataset based on the equation y = 4 + 3x + noise.
  2. Hyperparameters are set, including a learning rate and the number of epochs.
  3. The data is converted into a tf.data.Dataset format, which helps with batching and shuffling.
  4. We initialize model variables a and b, which represent the weight and bias of our linear regression model, respectively.
  5. The model function computes the linear prediction a * X + b.
  6. The loss function computes the Mean Squared Error (MSE) between the true and predicted values.
  7. The training loop iterates over each epoch and, within an epoch, over each batch. For each batch, we:
    • Compute the predicted values with the model.
    • Compute the loss.
    • Compute the gradients of the loss with respect to a and b using tf.GradientTape.
    • Update a and b with the SGD rule via assign_sub (the sketch after this list shows the same update written with TensorFlow’s built-in optimizer).
  8. Finally, we print the trained parameters a and b.
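As a side note, the assign_sub calls above apply the SGD rule by hand. The same training loop can also be written with TensorFlow’s built-in tf.keras.optimizers.SGD; the following is a sketch of that alternative (it reuses model, loss_fn, dataset, and the variables a and b from the code above, and is not part of the article’s original example).

optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)

for epoch in range(n_epochs):
    for X_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            loss = loss_fn(y_batch, model(X_batch))
        gradients = tape.gradient(loss, [a, b])
        # The optimizer performs the same "parameter -= learning_rate * gradient" update
        optimizer.apply_gradients(zip(gradients, [a, b]))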

By the end of training, a and b should be close to 3 and 4 respectively, the true slope and intercept used to generate the synthetic data.
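Once training has finished, the learned parameters can be used directly for prediction. A small usage sketch (the input values here are arbitrary):

x_new = np.array([[0.0], [2.0]])
print(model(x_new).numpy())  # roughly [[4.], [10.]], since y ≈ 3x + 4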
