Introduction to Linear Regression

Adapted from Chapter 3 of An Introduction to Statistical Learning

              continuous           categorical
supervised    regression           classification
unsupervised  dimension reduction  clustering

Motivation

Why are we learning linear regression?

  • widely used
  • runs fast
  • easy to use (not a lot of tuning required)
  • highly interpretable
  • basis for many other methods

Libraries

We will be using Statsmodels for teaching purposes, since it has some nice characteristics for linear modeling. However, we recommend that you spend most of your energy on scikit-learn, since it provides significantly more useful functionality for machine learning in general.

In [1]:
# imports
import pandas as pd
import matplotlib.pyplot as plt

# this allows plots to appear directly in the notebook
%matplotlib inline

Example: Advertising Data

Let’s take a look at some data, ask some questions about that data, and then use linear regression to answer those questions!

In [2]:
# read data into a DataFrame
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
data.head()
Out[2]:
       TV  Radio  Newspaper  Sales
1   230.1   37.8       69.2   22.1
2    44.5   39.3       45.1   10.4
3    17.2   45.9       69.3    9.3
4   151.5   41.3       58.5   18.5
5   180.8   10.8       58.4   12.9

What are the features?

  • TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
  • Radio: advertising dollars spent on Radio
  • Newspaper: advertising dollars spent on Newspaper

What is the response?

  • Sales: sales of a single product in a given market (in thousands of widgets)
In [3]:
# print the shape of the DataFrame
data.shape
Out[3]:
(200, 4)

There are 200 observations, and thus 200 markets in the dataset.

In [4]:
# visualize the relationship between the features and the response using scatterplots
fig, axs = plt.subplots(1, 3, sharey=True)
data.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8))
data.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1])
data.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])
Out[4]: [three scatterplots: Sales vs. TV, Sales vs. Radio, and Sales vs. Newspaper]

Questions About the Advertising Data

Let’s pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?

This general question might lead you to more specific questions:

  1. Is there a relationship between ads and sales?
  2. How strong is that relationship?
  3. Which ad types contribute to sales?
  4. What is the effect of each ad type on sales?
  5. Given ad spending in a particular market, can sales be predicted?

We will explore these questions below!

Simple Linear Regression

Simple linear regression is an approach for predicting a quantitative response using a single feature (or “predictor” or “input variable”). It takes the following form:

$y = \beta_0 + \beta_1x$

What does each term represent?

  • $y$ is the response
  • $x$ is the feature
  • $\beta_0$ is the intercept
  • $\beta_1$ is the coefficient for x

Together, $\beta_0$ and $\beta_1$ are called the model coefficients. To create a model, we must “learn” the values of these coefficients. Once we’ve learned these coefficients, we can use the model to predict Sales!
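As a concrete illustration, suppose the learned coefficients were an intercept of 7 and a TV coefficient of 0.05 (hypothetical values for this sketch); prediction is then just plugging into the equation:

```python
# hypothetical learned coefficients (illustrative values only)
beta_0 = 7.0    # intercept
beta_1 = 0.05   # coefficient for TV

# predicted Sales (in thousands of widgets) for $100,000 of TV spending
tv_spending = 100
predicted_sales = beta_0 + beta_1 * tv_spending
print(predicted_sales)  # 12.0
```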

Estimating (“Learning”) Model Coefficients

Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line (mathematically) that minimizes the sum of squared residuals (or “sum of squared errors”), $\sum_i (y_i - \hat{y}_i)^2$.
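For a single feature, the least squares estimates have a simple closed form; here is a minimal NumPy sketch on a few toy (x, y) pairs (NumPy is not imported elsewhere in this notebook, so the import is added here):

```python
import numpy as np

# toy data: a handful of (x, y) observations
x = np.array([230.1, 44.5, 17.2, 151.5, 180.8])
y = np.array([22.1, 10.4, 9.3, 18.5, 12.9])

# closed-form least squares estimates for simple linear regression
beta_1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta_0 = y.mean() - beta_1 * x.mean()

# the sum of squared residuals that these estimates minimize
residuals = y - (beta_0 + beta_1 * x)
sse = np.sum(residuals ** 2)
print(beta_0, beta_1, sse)
```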

Confidence in our Model

Question: Is linear regression a high bias/low variance model, or a low bias/high variance model?

Answer: High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won’t do a great job capturing the true relationship (high bias). Note that low variance is a useful characteristic when you don’t have a lot of training data!

A closely related concept is confidence intervals. Statsmodels calculates 95% confidence intervals for our model coefficients, which are interpreted as follows: If the population from which this sample was drawn was sampled 100 times, approximately 95 of those confidence intervals would contain the “true” coefficient.
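The cell below refers to a fitted model `lm`; the fitting step is not shown in this excerpt, but with Statsmodels’ formula API it would look roughly like this (using a tiny synthetic stand-in for the Advertising data, so the numbers will differ from the output below):

```python
import pandas as pd
import statsmodels.formula.api as smf

# small synthetic stand-in for the Advertising data
data = pd.DataFrame({'TV':    [230.1, 44.5, 17.2, 151.5, 180.8, 8.7, 57.5, 120.2],
                     'Sales': [22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2]})

# fit a simple linear regression of Sales on TV
lm = smf.ols(formula='Sales ~ TV', data=data).fit()
print(lm.conf_int())  # 95% confidence intervals for Intercept and TV
```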

In [12]:
# print the confidence intervals for the model coefficients
lm.conf_int()
Out[12]:
                  0         1
Intercept  6.129719  7.935468
TV         0.042231  0.052843

Keep in mind that we only have a single sample of data, and not the entire population of data. The “true” coefficient is either within this interval or it isn’t, but there’s no way to actually know. We estimate the coefficient with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is probably within.

Note that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like.
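In Statsmodels, the confidence level is controlled by the `alpha` parameter of `conf_int` (alpha=0.05 gives the default 95% intervals); a sketch on synthetic stand-in data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# small synthetic stand-in for the Advertising data
data = pd.DataFrame({'TV':    [230.1, 44.5, 17.2, 151.5, 180.8, 8.7, 57.5, 120.2],
                     'Sales': [22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2]})
lm = smf.ols(formula='Sales ~ TV', data=data).fit()

ci_90 = lm.conf_int(alpha=0.10)  # 90% intervals: more narrow
ci_95 = lm.conf_int(alpha=0.05)  # 95% intervals (the default)
ci_99 = lm.conf_int(alpha=0.01)  # 99% intervals: wider
print(ci_90, ci_95, ci_99, sep='\n')
```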

Hypothesis Testing and p-values

Closely related to confidence intervals is hypothesis testing. Generally speaking, you start with a null hypothesis and an alternative hypothesis (that is opposite the null). Then, you check whether the data supports rejecting the null hypothesis or failing to reject the null hypothesis.

(Note that “failing to reject” the null is not the same as “accepting” the null hypothesis. The alternative hypothesis may indeed be true, except that you just don’t have enough data to show that.)

As it relates to model coefficients, here is the conventional hypothesis test:

  • null hypothesis: There is no relationship between TV ads and Sales (and thus $\beta_1$ equals zero)
  • alternative hypothesis: There is a relationship between TV ads and Sales (and thus $\beta_1$ is not equal to zero)

How do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval does not include zero. Relatedly, the p-value is the probability of observing a coefficient at least this far from zero if the null hypothesis were true:

In [13]:
# print the p-values for the model coefficients
lm.pvalues
Out[13]:
Intercept    1.406300e-35
TV           1.467390e-42
dtype: float64

If the 95% confidence interval includes zero, the p-value for that coefficient will be greater than 0.05. If the 95% confidence interval does not include zero, the p-value will be less than 0.05. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.)

In this case, the p-value for TV is far less than 0.05, and so we believe that there is a relationship between TV ads and Sales.

Note that we generally ignore the p-value for the intercept.

How Well Does the Model Fit the Data?

The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)
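The definition above translates directly into code: R-squared is one minus the ratio of the model’s squared error to the null model’s squared error. A minimal NumPy sketch with made-up observed and predicted values:

```python
import numpy as np

# toy observed response and hypothetical model predictions
y = np.array([22.1, 10.4, 9.3, 18.5, 12.9])
y_pred = np.array([18.6, 9.2, 7.9, 14.6, 16.1])

sse_model = np.sum((y - y_pred) ** 2)    # squared error of our model
sse_null = np.sum((y - y.mean()) ** 2)   # squared error of the null model (predict the mean)
r_squared = 1 - sse_model / sse_null
print(r_squared)
```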

R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model.
