Caleb Madrigal

Programming, Hacking, Math, and Art


Linear Regression with Gradient Descent

I'm taking the Stanford Machine Learning class. The first algorithm we covered is Linear Regression using Gradient Descent, which I implemented in Python.
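
For reference (this is the standard formulation in the course's notation, not copied from the class notes): with hypothesis h(x) = \theta_1 x + \theta_0, m training points, and learning rate \alpha, batch gradient descent repeatedly applies the updates

\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h(x^{(i)}) - y^{(i)} \right)

\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h(x^{(i)}) - y^{(i)} \right) x^{(i)}

which step both parameters along the negative gradient of the mean squared error cost. In the code, \theta_0 is y0_guess, \theta_1 is slope_guess, and \alpha is learning_rate. Here's what the implementation looks like: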

import subprocess

def create_hypothesis(slope, y0):
    return lambda x: slope*x + y0

def linear_regression(data_points, learning_rate=0.01, variance=0.001):
    """ Takes a set of data points in the form: [(1,1), (2,2), ...] and outputs (slope, y0). """

    slope_guess = 1.
    y0_guess = 1.

    last_slope = 1000.
    last_y0 = 1000.
    num_points = len(data_points)

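    # Keep updating the guesses until both parameters change by less than the convergence threshold (variance).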
    while (abs(slope_guess-last_slope) > variance or abs(y0_guess - last_y0) > variance):
        last_slope = slope_guess
        last_y0 = y0_guess

        hypothesis = create_hypothesis(slope_guess, y0_guess)

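        # Batch gradient descent step: move the intercept and slope along the negative gradient of the mean squared error.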
        y0_guess = y0_guess - learning_rate * (1./num_points) * sum([hypothesis(point[0]) - point[1] for point in data_points])
        slope_guess = slope_guess - learning_rate * (1./num_points) * sum([(hypothesis(point[0]) - point[1]) * point[0] for point in data_points])

    return (slope_guess, y0_guess)
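
A quick way to try it out (the sample points below are made up for illustration; with the loose default stopping threshold the fit is only approximate):

points = [(1, 2.1), (2, 2.9), (3, 4.2), (4, 5.1), (5, 5.8)]
slope, y0 = linear_regression(points)
print("slope = %f, y0 = %f" % (slope, y0))  # roughly slope 1, intercept 1 for this data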