Data Analysis with Python
We will analyze an automobile dataset using various techniques in Python. The topics covered include data acquisition, wrangling, normalization, and visualization. We will also build a machine learning model and evaluate it.
- Attribute Information
- Data Acquisition
- Data Wrangling
- Data Normalization
- Model Development
- Model Evaluation using Visualization
- Polynomial Regression and Pipelines
- Measures for In-Sample Evaluation
- Prediction and Decision Making
- Model Evaluation and Refinement
We will be using a dataset about cars from 1985. This dataset consists of three types of entities:
- the specification of an auto in terms of various characteristics,
- its assigned insurance risk rating,
- its normalized losses in use as compared to other cars.
The second rating corresponds to the degree to which the auto is more risky than its price indicates. Cars are initially assigned a risk factor symbol associated with their price. Then, if a car is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuaries call this process "symboling". A value of +3 indicates that the auto is risky; -3 indicates that it is probably pretty safe. The third factor is the relative average loss payment per insured vehicle year. This value is normalized for all autos within a particular size classification (two-door small, station wagons, sports/specialty, etc.) and represents the average loss per car per year.
Attribute Information:
symboling: -3, -2, -1, 0, 1, 2, 3
normalized-losses: continuous from 65 to 256
make: alfa-romero, audi, bmw, chevrolet, dodge, honda, isuzu, jaguar, mazda, mercedes-benz, mercury, mitsubishi, nissan, peugot, plymouth, porsche, renault, saab, subaru, toyota, volkswagen, volvo
fuel-type: diesel, gas
aspiration: std, turbo
num-of-doors: four, two
body-style: hardtop, wagon, sedan, hatchback, convertible
drive-wheels: 4wd, fwd, rwd
engine-location: front, rear
wheel-base: continuous from 86.6 to 120.9
length: continuous from 141.1 to 208.1
width: continuous from 60.3 to 72.3
height: continuous from 47.8 to 59.8
curb-weight: continuous from 1488 to 4066
engine-type: dohc, dohcv, l, ohc, ohcf, ohcv, rotor
num-of-cylinders: eight, five, four, six, three, twelve, two
engine-size: continuous from 61 to 326
fuel-system: 1bbl, 2bbl, 4bbl, idi, mfi, mpfi, spdi, spfi
bore: continuous from 2.54 to 3.94
stroke: continuous from 2.07 to 4.17
compression-ratio: continuous from 7 to 23
horsepower: continuous from 48 to 288
peak-rpm: continuous from 4150 to 6600
city-mpg: continuous from 13 to 49
highway-mpg: continuous from 16 to 54
price: continuous from 5118 to 45400
import pandas as pd
import numpy as np
# read the data file and assign it to the dataframe 'df'
path = 'imports-85.data'
df = pd.read_csv(path, header=None)
# print the first 10 rows of the dataset
print('The first 10 rows of the dataframe')
df.head(10)
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
print("headers\n", headers)
df.columns = headers
df.dtypes
df.describe()
df.describe(include='all')
df[['length', 'compression-ratio']].describe()
df.info()
df.replace('?', np.nan, inplace=True)
# use ".isnull()" or ".notnull()"
missing_data = df.isnull() # True stands for missing value
missing_data.head(10)
for column in missing_data.columns.values.tolist():
    print(column)
    print(missing_data[column].value_counts())
    print("")
In this dataset, no column has enough missing values to justify dropping it entirely. Based on the counts below (verified by the one-line check after this list), we handle the missing values as follows:
Replace by mean:
"normalized-losses": 41 missing data, replace them with mean
"bore": 4 missing data, replace them with mean
"stroke": 4 missing data, replace them with mean
"horsepower": 2 missing data, replace them with mean
"peak-rpm": 2 missing data, replace them with mean
Replace by frequency:
"num-of-doors": 2 missing data, replace them with "four"
Reason: 84% of sedans have four doors. Since "four" is the most frequent value, it is the most likely to occur
Drop the whole row:
"price": 4 missing data, simply delete the whole row
Reason: Price is what we want to predict. Any data entry without price data cannot be used for prediction; therefore, any row without price data is not useful to us.
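As a quick check on the counts listed above, the same information can be obtained in one line now that the '?' entries have been replaced with NaN (a minimal sketch):
# count the missing values in each column and show the most affected ones
df.isnull().sum().sort_values(ascending=False).head(10)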
Replace by mean
# calculate average of the column. astype('float') saves the mean value in float dtype.
avg_norm_loss = df['normalized-losses'].astype('float').mean(axis=0)
print('Average of normalized-losses:', avg_norm_loss)
# replace NaN by the mean value
df['normalized-losses'].replace(np.nan, avg_norm_loss, inplace=True)
# calculate average of the column. astype('float') saves the mean value in float dtype.
avg_bore = df['bore'].astype('float').mean(axis=0)
print('Average of bore:', avg_bore)
# replace NaN by the mean value
df['bore'].replace(np.nan, avg_bore, inplace=True)
# calculate average of the column. astype('float') saves the mean value in float dtype.
avg_stroke = df['stroke'].astype('float').mean(axis=0)
print('Average of stroke:', avg_stroke)
# replace NaN by the mean value
df['stroke'].replace(np.nan, avg_stroke, inplace=True)
# calculate average of the column. astype('float') saves the mean value in float dtype.
avg_hp = df['horsepower'].astype('float').mean(axis=0)
print('Average of horsepower: ', avg_hp)
# replace NaN by the mean value
df['horsepower'].replace(np.nan, avg_hp, inplace=True)
# calculate average of the column. astype('float') saves the mean value in float dtype.
avg_peakrpm = df['peak-rpm'].astype('float').mean(axis=0)
print('Average of peak-rpm:', avg_peakrpm)
# replace NaN by the mean value
df['peak-rpm'].replace(np.nan, avg_peakrpm, inplace=True)
df['num-of-doors'].value_counts()
df['num-of-doors'].value_counts().idxmax()
df['num-of-doors'].replace(np.nan, 'four', inplace=True)
df.dropna(subset=['price'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df.head()
df.dtypes
# use double brackets when including multiple columns in one statement
df[['bore', 'stroke', 'price', 'peak-rpm', 'horsepower']] = df[['bore', 'stroke', 'price', 'peak-rpm', 'horsepower']].astype('float')
df['normalized-losses'] = df['normalized-losses'].astype('int')
df.dtypes
# replace (original value) by (original value)/(maximum value)
df['length'] = df['length']/df['length'].max()
df['width'] = df['width']/df['width'].max()
df['height'] = df['height']/df['height'].max()
df[['length', 'width', 'height']].head()
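The code above uses simple feature scaling (dividing each value by the column maximum). As an alternative, min-max normalization maps each column onto the interval [0, 1]; the sketch below illustrates it on 'height' without modifying the dataframe:
# min-max normalization: (value - min) / (max - min); computed for illustration only, df is left unchanged
height_minmax = (df['height'] - df['height'].min()) / (df['height'].max() - df['height'].min())
height_minmax.head()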
df['horsepower'].describe()
In our dataset, "horsepower" is a real-valued variable ranging from 48 to 288, with 58 unique values. What if we only care about the price difference between cars with high horsepower, medium horsepower, and low horsepower (3 types)?
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(df["horsepower"])
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
bins = np.linspace(min(df['horsepower']), max(df['horsepower']), 4)
bins
group_names = ['low', 'medium', 'high']
df['horsepower-binned'] = pd.cut(df['horsepower'], bins, labels=group_names, include_lowest=True)
df[['horsepower', 'horsepower-binned']].head(10)
df['horsepower-binned'].value_counts()
%matplotlib inline
import matplotlib.pyplot as plt
# reindex so the bar heights follow the low/medium/high bin order
plt.bar(group_names, df["horsepower-binned"].value_counts().reindex(group_names))
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
%matplotlib inline
import matplotlib.pyplot as plt
# draw histogram of attribute 'horsepower' with bins=3
plt.hist(df['horsepower'], bins=3)
# set x/y labels and plot title
plt.xlabel('horsepower')
plt.ylabel('count')
plt.title('horsepower bins')
df['fuel-type'].unique()
We see the column "fuel-type" has two unique values, "gas" or "diesel". Regression doesn't understand words, only numbers. To use this attribute in regression analysis, we convert "fuel-type" into indicator variables.
dummy_variable_1 = pd.get_dummies(df['fuel-type'])
dummy_variable_1.head()
df.columns
dummy_variable_1.rename(columns={'gas': 'fuel-type-gas', 'diesel': 'fuel-type-diesel'}, inplace=True)
dummy_variable_1.head()
df.columns
We now have two indicator columns: 'fuel-type-gas' is 1 for gas cars and 0 otherwise, and 'fuel-type-diesel' is 1 for diesel cars and 0 otherwise.
df = pd.concat([df, dummy_variable_1], axis=1)
# drop original column 'fuel-type' from 'df'
df.drop('fuel-type', axis=1, inplace=True)
The last two columns are now the indicator-variable representation of the fuel-type variable; they contain only 0s and 1s.
dummy_variable_2 = pd.get_dummies(df['aspiration'])
dummy_variable_2.rename(columns={'std': 'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
dummy_variable_2.head()
df = pd.concat([df, dummy_variable_2], axis=1)
# drop the column 'aspiration'
df.drop('aspiration', axis=1, inplace=True)
df.head()
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Continuous numerical variables
Continuous numerical variables are variables that may contain any value within some range. Continuous numerical variables can have the type "int64" or "float64". A great way to visualize these variables is by using scatterplots with fitted lines.
Correlation
We can calculate the correlation between variables of type 'int64' or 'float64' using the method 'corr'.
df.corr()
df[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr()
Positive Linear Relationship
sns.regplot(x='engine-size', y='price', data=df)
plt.ylim(0,)
As the engine-size goes up, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.
df[['engine-size', 'price']].corr()
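The regression plot for highway-mpg versus price, which the next observation refers to, can be drawn the same way as the engine-size plot above:
sns.regplot(x='highway-mpg', y='price', data=df)
plt.ylim(0,)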
As the highway-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables.
df[['highway-mpg', 'price']].corr()
sns.regplot(x='peak-rpm', y='price', data=df)
Peak rpm does not seem like a good predictor of the price at all, since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore, it is not a reliable variable.
df[['peak-rpm', 'price']].corr()
sns.boxplot(x='body-style', y='price', data=df)
We see that the distributions of price between the different body-style categories have a significant overlap, and so body-style would not be a good predictor of price.
sns.boxplot(x='engine-location', y='price', data=df)
We see that the distribution of price between these two engine-location categories, front and rear, are distinct enough to take engine-location as a potential good predictor of price.
sns.boxplot(x='drive-wheels', y='price', data=df)
We see that the distribution of price between the different drive-wheels categories differs; as such, drive-wheels could potentially be a predictor of price.
df.describe()
df.describe(include='object')
value_counts is a good way of understanding how many units of each characteristic/variable we have. The method "value_counts" only works on a Pandas Series, not a Pandas DataFrame. As a result, we use single brackets, "df['drive-wheels']", not double brackets, "df[['drive-wheels']]".
df['drive-wheels'].value_counts()
df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()
drive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)
drive_wheels_counts
drive_wheels_counts.index.name = 'drive-wheels'
drive_wheels_counts
engine_loc_counts = df['engine-location'].value_counts().to_frame()
engine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)
engine_loc_counts.index.name = 'engine-location'
engine_loc_counts
Looking at the value counts, engine location would not be a good predictor variable for the price. This is because we only have 3 cars with a rear engine and 198 with an engine in the front, so this result is skewed. Thus, we are not able to draw any conclusions about the engine location.
df['drive-wheels'].unique()
If we want to know, on average, which type of drive wheels is most valuable, we can group by 'drive-wheels' and then average the price within each group.
df_group_one = df[['drive-wheels', 'body-style', 'price']]
# calculate the average price for each of the different categories of data
df_group_one = df_group_one.groupby(['drive-wheels'], as_index=False).mean()
df_group_one
It seems that rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel drive and front-wheel drive are approximately the same price.
df_gptest = df[['drive-wheels', 'body-style', 'price']]
grouped_test1 = df_gptest.groupby(['drive-wheels', 'body-style'], as_index=False).mean()
grouped_test1
This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. We can convert the grouped dataframe to a pivot table using the method 'pivot'.
grouped_pivot = grouped_test1.pivot(index='drive-wheels', columns='body-style')
grouped_pivot
Often, we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well.
grouped_pivot = grouped_pivot.fillna(0)
grouped_pivot
df_gptest_2 = df[['body-style', 'price']]
grouped_test_bodystyle = df_gptest_2.groupby(['body-style'], as_index=False).mean()
grouped_test_bodystyle
Use a heat map to visualize the relationship between body style, drive wheels, and price
plt.pcolor(grouped_pivot, cmap='RdBu')
plt.colorbar()
plt.show()
The heatmap plots the target variable (price) as a colour scale, with 'drive-wheels' on the vertical axis and 'body-style' on the horizontal axis. This allows us to visualize how the price is related to 'drive-wheels' and 'body-style'.
fig, ax = plt.subplots()
im = ax.pcolor(grouped_pivot, cmap='RdBu')
#label names
row_labels = grouped_pivot.columns.levels[1]
col_labels = grouped_pivot.index
#move ticks and labels to the center
ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)
#insert labels
ax.set_xticklabels(row_labels, minor=False)
ax.set_yticklabels(col_labels, minor=False)
#rotate label if too long
plt.xticks(rotation=90)
fig.colorbar(im)
plt.show()
Correlation and Causation
Correlation: a measure of the extent of interdependence between variables.
Causation: the relationship between cause and effect between two variables.
Correlation doesn't imply causation.
Pearson Correlation: measures the linear dependence between two variables X and Y.
The resulting coefficient is a value between -1 and 1 inclusive, where:
- 1: Total positive linear correlation.
- 0: No linear correlation, the two variables most likely do not affect each other.
- -1: Total negative linear correlation
Pearson Correlation is the default method of the function "corr". Like before we can calculate the Pearson Correlation of the 'int64' or 'float64' variables.
df.corr()
df['horsepower'].unique()
To know the significance of the correlation estimate, we calculate the P-value.
The P-value is the probability of observing a correlation this strong by chance if the two variables were actually uncorrelated. Normally, we choose a significance level of 0.05, which means that we are 95% confident that the correlation between the variables is significant.
By convention, when the
- p-value is < 0.001: we say there is strong evidence that the correlation is significant.
- p-value is < 0.05: there is moderate evidence that the correlation is significant.
- p-value is < 0.1: there is weak evidence that the correlation is significant.
- p-value is > 0.1: there is no evidence that the correlation is significant.
from scipy import stats
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585)
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1)
pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691).
pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751).
pearson_coef, p_value = stats.pearsonr(df['curb-weight'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between curb-weight and price is statistically significant, and the linear relationship is quite strong (~0.834).
pearson_coef, p_value = stats.pearsonr(df['engine-size'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between engine-size and price is statistically significant, and the linear relationship is very strong (~0.872).
pearson_coef, p_value = stats.pearsonr(df['bore'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between bore and price is statistically significant, but the linear relationship is only moderate (~0.521).
pearson_coef, p_value = stats.pearsonr(df['city-mpg'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between city-mpg and price is statistically significant, and the coefficient of ~ -0.687 shows that the relationship is negative and moderately strong.
pearson_coef, p_value = stats.pearsonr(df['highway-mpg'], df['price'])
print('The Pearson Correlation Coefficient is ', pearson_coef, ' with a P-value of P=', p_value)
Since the p-value is < 0.001, the correlation between highway-mpg and price is statistically significant, and the coefficient of ~ -0.705 shows that the relationship is negative and moderately strong.
ANOVA (Analysis of Variance)
The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:
F-test score: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.
P-value: tells how statistically significant our calculated score value is.
If our price variable is strongly correlated with the variable we are analyzing, expect ANOVA to return a sizeable F-test score and a small p-value.
Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average beforehand.
# group the data
grouped_test2 = df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_test2.head(2)
df_gptest
grouped_test2.get_group('4wd')['price']
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price'])
print("ANOVA results: F=", f_val, ", P =", p_val)
This is a great result, with a large F-test score showing a strong correlation and a P-value of almost 0 implying almost certain statistical significance. But does this mean all three tested groups are this highly correlated?
f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val )
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val)
f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price'])
print("ANOVA results: F=", f_val, ", P =", p_val)
Conclusion: Important Variables
We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:
Continuous numerical variables:
Length
Width
Curb-weight
Engine-size
Horsepower
City-mpg
Highway-mpg
Wheel-base
Bore
Categorical variables:
Drive-wheels
df.head()
Simple Linear Regression
Simple Linear Regression is a method to help us understand the relationship between two variables:
- The predictor/independent variable (X)
- The response/dependent variable (that we want to predict)(Y)
The result of Linear Regression is a linear function that predicts the response (dependent) variable as a function of the predictor (independent) variable.
Y: Response Variable
X: Predictor Variable
Linear function: Yhat = a + b*X
- a refers to the intercept of the regression line, in other words: the value of Y when X is 0
- b refers to the slope of the regression line, in other words: the value by which Y changes when X increases by 1 unit (both a and b can also be computed directly from the least-squares formulas, as sketched below)
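Before fitting with scikit-learn, it can help to see where a and b come from. A minimal sketch, assuming df still holds the cleaned data, computes them from the least-squares formulas for the highway-mpg example used below:
import numpy as np
x_vals = df['highway-mpg'].values
y_vals = df['price'].values
# least-squares slope: b = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2)
b = ((x_vals - x_vals.mean()) * (y_vals - y_vals.mean())).sum() / ((x_vals - x_vals.mean()) ** 2).sum()
# intercept: a = y_mean - b * x_mean
a = y_vals.mean() - b * x_vals.mean()
print('intercept a:', a, 'slope b:', b)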
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm
X = df[['highway-mpg']]
Y = df['price']
# fit the linear model using highway-mpg
lm.fit(X,Y)
Yhat = lm.predict(X)
Yhat[0:5]
lm.intercept_
lm.coef_
Final estimated linear model:
price = 38423.31 - 821.73 * highway-mpg
X = df[['engine-size']]
Y = df['price']
# fit the linear model using engine-size
lm.fit(X,Y)
Yhat = lm.predict(X)
Yhat[0:5]
lm.intercept_
lm.coef_
Final estimated linear model:
Price = -7963.34 + 166.86 * Engine-size
If we want to use more variables in our model to predict car price, we can use Multiple Linear Regression.
This method is used to explain the relationship between one continuous response (dependent) variable and two or more predictor (independent) variables. Most of the real-world regression models involve multiple predictors.
Yhat = a + b1*X1 + b2*X2 + b3*X3 + b4*X4
From the previous section we know that other good predictors of price could be:
- Horsepower
- Curb-weight
- Engine-size
- Highway-mpg
Z = df[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']]
lm.fit(Z, df['price'])
lm.intercept_
lm.coef_
Final estimated linear model:
Price = -15678.74 + 52.66 * horsepower + 4.70 * curb-weight + 81.96 * engine-size + 33.58 * highway-mpg
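To illustrate how the fitted multiple linear regression model is used for prediction, the sketch below predicts the price of a hypothetical car; the feature values are invented purely for illustration:
# hypothetical car: 100 horsepower, 2500 lb curb weight, 120 engine size, 30 highway mpg
sample_car = pd.DataFrame({'horsepower': [100], 'curb-weight': [2500], 'engine-size': [120], 'highway-mpg': [30]})
lm.predict(sample_car)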
lm.fit(df[['normalized-losses', 'highway-mpg']], df['price'])
lm.intercept_
lm.coef_
Final estimated linear model:
Price = 38201.31 + 1.498 * normalized-losses - 820.45 * highway-mpg
import seaborn as sns
%matplotlib inline
Regression Plot for Simple Linear Regression
This plot shows a combination of scattered data points (a scatter plot) and the fitted linear regression line going through the data. It gives us a reasonable estimate of the relationship between the two variables, the strength of the correlation, and the direction (positive or negative correlation).
width = 12
height = 10
plt.figure(figsize=(width, height))
sns.regplot(x='highway-mpg', y='price', data=df)
plt.ylim(0,)
We can see from this plot that price is negatively correlated to highway-mpg, since the regression slope is negative. One thing to keep in mind when looking at a regression plot is to pay attention to how scattered the data points are around the regression line. This will give you a good indication of the variance of the data, and whether a linear model would be the best fit or not. If the data is too far off from the line, this linear model might not be the best model for this data.
plt.figure(figsize=(width, height))
sns.regplot(x='peak-rpm', y='price', data=df)
plt.ylim(0,)
Comparing the regression plots of "peak-rpm" and "highway-mpg", we see that the points for "highway-mpg" are much closer to the fitted line and decrease on average. The points for "peak-rpm" have more spread around the predicted line, and it is much harder to determine whether the points are increasing or decreasing as "peak-rpm" increases.
df[['peak-rpm', 'highway-mpg', 'price']].corr()
As we can see, highway-mpg is more strongly correlated with price as compared to peak-rpm.
Residual Plot to visualize variance of data
The difference between the observed value (y) and the predicted value (Yhat) is called the residual (e). When we look at a regression plot, the residual is the distance from the data point to the fitted regression line.
A residual plot is a graph that shows the residuals on the vertical y-axis and the independent variable on the horizontal x-axis.
We look at the spread of the residuals:
- If the points in a residual plot are randomly spread out around the x-axis, then a linear model is appropriate for the data.
- Randomly spread out residuals means that the variance is constant, and thus the linear model is a good fit for this data (the sketch after this list computes the residuals explicitly).
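As a sketch of what a residual plot contains, the residuals e = y - Yhat of the simple highway-mpg model can be computed and plotted by hand (lm_hwy is a throwaway model fitted here only for this illustration; the seaborn call below produces the same picture more directly):
# fit a simple model on highway-mpg, compute residuals e = y - yhat, and plot them against the predictor
lm_hwy = LinearRegression()
lm_hwy.fit(df[['highway-mpg']], df['price'])
residuals = df['price'] - lm_hwy.predict(df[['highway-mpg']])
plt.scatter(df['highway-mpg'], residuals)
plt.axhline(0, color='r')  # residuals should scatter randomly around this zero line for a good linear fit
plt.xlabel('highway-mpg')
plt.ylabel('residuals')
plt.show()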
width = 12
height = 10
plt.figure(figsize=(width, height))
sns.residplot(x=df['highway-mpg'], y=df['price'])
plt.show()
We can see from this residual plot that the residuals are not randomly spread around the x-axis, which leads us to believe that maybe a non-linear model is more appropriate for this data.
Distribution Plot for Multiple Linear Regression
You cannot visualize Multiple Linear Regression with a regression or residual plot.
One way to look at the fit of the model is by looking at the distribution plot. We can look at the distribution of the fitted values that result from the model and compare it to the distribution of the actual values.
Z = df[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']]
lm.fit(Z, df['price'])
Y_hat = lm.predict(Z)
plt.figure(figsize=(width, height))
ax1 = sns.distplot(df['price'], hist=False, color='r', label='Actual Value')
sns.distplot(Y_hat, hist=False, color='b', label='Fitted Values', ax=ax1)
plt.title('Actual vs Fitted Values for Price')
plt.xlabel('Price (in dollars)')
plt.ylabel('Proportion of Cars')
plt.show()
plt.close()
We can see that the fitted values are reasonably close to the actual values, since the two distributions overlap a bit. However, there is definitely some room for improvement.
Polynomial Regression and Pipelines
Polynomial regression is a particular case of the general linear regression model or multiple linear regression models.
We get non-linear relationships by squaring or setting higher-order terms of the predictor variables.
There are different orders of polynomial regression:
- Quadratic - 2nd order: Yhat = a + b1*X + b2*X^2
- Cubic - 3rd order: Yhat = a + b1*X + b2*X^2 + b3*X^3
- Higher order: Y = a + b1*X + b2*X^2 + b3*X^3 + ...
We saw earlier that a linear model did not provide the best fit while using highway-mpg as the predictor variable. Let's see if we can try fitting a polynomial model to the data instead.
def PlotPolly(model, independent_variable, dependent_variable, Name):
    x_new = np.linspace(15, 55, 100)
    y_new = model(x_new)
    plt.plot(independent_variable, dependent_variable, '.', x_new, y_new, '-')
    plt.title('Polynomial Fit with Matplotlib for Price ~ ' + Name)
    ax = plt.gca()
    ax.set_facecolor((0.898, 0.898, 0.898))
    fig = plt.gcf()
    plt.xlabel(Name)
    plt.ylabel('Price of Cars')
    plt.show()
    plt.close()
x = df['highway-mpg']
y = df['price']
# we use a polynomial of the 3rd order
f = np.polyfit(x, y, 3)
# use the poly1d function to display the polynomial function
p = np.poly1d(f)
print(p)
PlotPolly(p, x, y, 'highway-mpg')
np.polyfit(x, y, 3)
We can already see from plotting that this polynomial model performs better than the linear model. This is because the generated polynomial function "hits" more of the data points.
f1 = np.polyfit(x, y, 11)
p1 = np.poly1d(f1)
print(p1)
PlotPolly(p1, x, y, 'highway-mpg')
We see that using a very high-order polynomial results in overfitting.
Multivariate Polynomial Function
The analytical expression for a multivariate polynomial function gets complicated. For example, the expression for a second-order (degree=2) polynomial with two variables is given by:
Yhat = a + b1*X1 + b2*X2 + b3*X1*X2 + b4*X1^2 + b5*X2^2
We will now perform a polynomial transform on multiple features.
from sklearn.preprocessing import PolynomialFeatures
pr = PolynomialFeatures(degree=2)
pr
Z_pr = pr.fit_transform(Z)
Z.shape
Z_pr.shape
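To see which terms the degree-2 transform generated (and why the column count jumped from 4 to 15), recent versions of scikit-learn (1.0+) can list the feature names; in older versions the method is get_feature_names instead:
# names of the polynomial features produced from the four original predictors
pr.get_feature_names_out(['horsepower', 'curb-weight', 'engine-size', 'highway-mpg'])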
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# create a list of tuples including the name of the model/estimator and its corresponding constructor
Input = [('scale', StandardScaler()), ('polynomial', PolynomialFeatures(include_bias=False)), ('model', LinearRegression())]
pipe = Pipeline(Input)
pipe
pipe.fit(Z,y)
ypipe = pipe.predict(Z)
ypipe[0:10]
Measures for In-Sample Evaluation
When evaluating our models, not only do we want to visualize the results, but we also want a quantitative measure to determine how accurate the model is.
Two very important measures that are often used in Statistics to determine the accuracy of a model are:
- R^2 / R-squared
- Mean Squared Error (MSE)
R-squared: R-squared, also known as the coefficient of determination, is a measure of how close the data is to the fitted regression line. Its value is the percentage of variation in the response variable (y) that is explained by the linear model.
Mean Squared Error (MSE): The Mean Squared Error measures the average of the squared errors, that is, the squared differences between the actual value (y) and the estimated value (ŷ). Both are computed from simple formulas, sketched below.
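A minimal sketch of both formulas on made-up numbers, purely to illustrate the definitions that scikit-learn computes for us in the following cells:
import numpy as np
y_true = np.array([10.0, 12.0, 14.0, 16.0])   # hypothetical actual values
y_pred = np.array([11.0, 11.5, 14.5, 15.0])   # hypothetical predicted values
# MSE: mean of the squared differences between actual and predicted values
mse_manual = np.mean((y_true - y_pred) ** 2)
# R^2: 1 - (residual sum of squares) / (total sum of squares)
r2_manual = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print('MSE:', mse_manual, 'R^2:', r2_manual)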
X = df[['highway-mpg']]
Y = df['price']
# highway_mpg_fit
lm.fit(X,Y)
# calculate the R^2
print('The R-square is:', lm.score(X,Y))
We can say that ~ 49.659% of the variation of the price is explained by this simple linear model "highway_mpg_fit".
Yhat = lm.predict(X)
print('The output of the first four predicted values is', Yhat[0:4])
from sklearn.metrics import mean_squared_error
# calculate the MSE
mse = mean_squared_error(df['price'], Yhat)
print('The mean square error of price and predicted value is: ', mse)
lm.fit(Z, df['price'])
# find the R^2
print('The R-square value is: ', lm.score(Z, df['price']))
We can say that ~ 80.93 % of the variation of price is explained by this multiple linear regression "multi_fit".
Y_predict_multifit = lm.predict(Z)
print('The mean square error of price and predicted value using multifit is: ', mean_squared_error(df['price'], Y_predict_multifit))
from sklearn.metrics import r2_score
r_squared = r2_score(y, p(x))
print('The R-square value is: ', r_squared)
We can say that ~ 67.419 % of the variation of price is explained by this polynomial fit.
mean_squared_error(df['price'], p(x))
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
new_input = np.arange(1, 100, 1).reshape(-1, 1)
lm.fit(X,Y)
lm
yhat = lm.predict(new_input)
yhat[0:5]
plt.plot(new_input, yhat)
plt.show()
Decision Making: Determining a Good Model Fit
Now that we have visualized the different models, and generated the R-squared and MSE values for the fits, how do we determine a good model fit?
What is a good R-squared value? When comparing models, the model with the higher R-squared value is a better fit for the data.
What is a good MSE? When comparing models, the model with the smallest MSE value is a better fit for the data.
Let's take a look at the values for the different models.
Simple Linear Regression: Using Highway-mpg as a Predictor Variable of Price.
- R-squared: 0.49659118843391759
- MSE: 3.16 x10^7
Multiple Linear Regression: Using Horsepower, Curb-weight, Engine-size, and Highway-mpg as Predictor Variables of Price.
- R-squared: 0.80896354913783497
- MSE: 1.2 x10^7
Polynomial Fit: Using Highway-mpg as a Predictor Variable of Price.
- R-squared: 0.6741946663906514
- MSE: 2.05 x 10^7
Simple Linear Regression model (SLR) vs Multiple Linear Regression model (MLR)
Usually, the more variables you have, the better your model is at predicting, but this is not always true. Sometimes you may not have enough data, you may run into numerical problems, or many of the variables may not be useful or may even act as noise. As a result, you should always check the MSE and R^2.
So to be able to compare the results of the MLR vs SLR models, we look at a combination of both the R-squared and MSE to make the best conclusion about the fit of the model.
- MSE: The MSE of SLR is 3.16 x 10^7 while MLR has an MSE of 1.2 x 10^7. The MSE of MLR is much smaller.
- R-squared: In this case, we can also see that there is a big difference between the R-squared of the SLR and the R-squared of the MLR. The R-squared for the SLR (~0.497) is very small compared to the R-squared for the MLR (~0.809).
This R-squared in combination with the MSE show that MLR seems like the better model fit in this case, compared to SLR.
Simple Linear Model (SLR) vs Polynomial Fit
- MSE: We can see that the Polynomial Fit brought down the MSE, since its MSE is smaller than the one from the SLR.
- R-squared: The R-squared for the Polynomial Fit is larger than the R-squared for the SLR, so the Polynomial Fit also brought up the R-squared quite a bit.
Since the Polynomial Fit resulted in a lower MSE and a higher R-squared, we can conclude that this was a better fit model than the simple linear regression for predicting Price with Highway-mpg as a predictor variable.
Multiple Linear Regression (MLR) vs Polynomial Fit
- MSE: The MSE for the MLR is smaller than the MSE for the Polynomial Fit.
- R-squared: The R-squared for the MLR is also much larger than for the Polynomial Fit.
Conclusion:
Comparing these three models, we conclude that the MLR model is the best model to be able to predict price from our dataset. This result makes sense, since we have 27 variables in total, and we know that more than one of those variables are potential predictors of the final car price.
import pandas as pd
import numpy as np
path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/module_5_auto.csv'
df = pd.read_csv(path)
df = df._get_numeric_data()
df.head()
from IPython.display import display
from ipywidgets import interact, interactive, fixed, interact_manual
def DistributionPlot(RedFunction, BlueFunction, RedName, BlueName, Title):
    width = 12
    height = 10
    plt.figure(figsize=(width, height))
    ax1 = sns.distplot(RedFunction, hist=False, color='r', label=RedName)
    ax2 = sns.distplot(BlueFunction, hist=False, color='b', label=BlueName, ax=ax1)
    plt.title(Title)
    plt.xlabel('Price (in dollars)')
    plt.ylabel('Proportion of Cars')
    plt.show()
    plt.close()
def PollyPlot(xtrain, xtest, y_train, y_test, lr, poly_transform):
    # xtrain, xtest, y_train, y_test: training and testing data
    # lr: linear regression object
    # poly_transform: polynomial transformation object
    width = 12
    height = 10
    plt.figure(figsize=(width, height))
    xmax = max([xtrain.values.max(), xtest.values.max()])
    xmin = min([xtrain.values.min(), xtest.values.min()])
    x = np.arange(xmin, xmax, 0.1)
    plt.plot(xtrain, y_train, 'ro', label='Training Data')
    plt.plot(xtest, y_test, 'go', label='Test Data')
    plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1, 1))), label='Predicted Function')
    plt.ylim([-10000, 60000])
    plt.ylabel('Price')
    plt.legend()
y_data = df['price']
x_data = df.drop('price', axis=1)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.1, random_state=1)
# test_size sets the proportion of data that is split off for testing. The testing set is set to 10% of the total dataset.
# use the same random_state value throughout your code
print('number of test samples: ', x_test.shape[0])
print('number of training samples: ', x_train.shape[0])
from sklearn.linear_model import LinearRegression
# create Linear Regression object
lre = LinearRegression()
# fit the model using the feature 'horsepower'
lre.fit(x_train[['horsepower']], y_train)
lre.score(x_test[['horsepower']], y_test)
lre.score(x_train[['horsepower']], y_train)
We can see that the R^2 is much smaller using the test data.
from sklearn.model_selection import cross_val_score
Rcross = cross_val_score(lre, x_data[['horsepower']], y_data, cv=4)
Rcross
The default scoring is R^2. Each element in the array is the R^2 value for one of the folds.
print('The mean of the folds is', Rcross.mean(), 'and the standard deviation is', Rcross.std())
We can use negative mean squared error as a score by setting the 'scoring' parameter to 'neg_mean_squared_error'.
-1 * cross_val_score(lre, x_data[['horsepower']], y_data, cv=4, scoring='neg_mean_squared_error')
Use the function 'cross_val_predict' to predict the output. The function splits the data into the specified number of folds; in each iteration, one fold is held out to obtain predictions while the remaining folds are used as training data. First import the function.
from sklearn.model_selection import cross_val_predict
yhat= cross_val_predict(lre, x_data[['horsepower']], y_data, cv=4)
yhat[0:5]
Overfitting, Underfitting and Model Selection
It turns out that the test data, sometimes referred to as the out-of-sample data, is a much better measure of how well your model performs in the real world. One reason for this is overfitting; let's go over some examples. It turns out these differences are more apparent in Multiple Linear Regression and Polynomial Regression, so we will explore overfitting in that context.
lr = LinearRegression()
lr.fit(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_train)
y_hat_train = lr.predict(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
y_hat_train[0:5]
y_hat_test = lr.predict(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
y_hat_test[0:5]
Let's perform some model evaluation using our training and testing data separately.
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
Title = 'Distribution Plot of Predicted Value using Training Data vs Training Data distribution'
DistributionPlot(y_train, y_hat_train, 'Actual Values (Train)', 'Predicted Values (Train)', Title)
Figure 1: Plot of predicted values using the training data compared to the training data.
So far the model seems to be doing well in learning from the training dataset. But what happens when the model encounters new data from the testing dataset?
Title = 'Distribution Plot of Predicted Value using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test, y_hat_test, 'Actual Values (Test)', 'Predicted Values (Test)', Title)
Figure 2: Plot of predicted value using the test data compared to the test data.
When the model generates new values from the test data, we see the distribution of the predicted values is much different from the actual target values.
Comparing Figure 1 and Figure 2, it is evident that the model fits the training data in Figure 1 much better. The difference in Figure 2 is apparent in the range from 5000 to 15000, where the distribution shape is exceptionally different.
Let's see if polynomial regression also exhibits a drop in the prediction accuracy when analysing the test dataset.
from sklearn.preprocessing import PolynomialFeatures
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.45, random_state=0)
pr = PolynomialFeatures(degree=5)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
pr
poly = LinearRegression()
# train the model
poly.fit(x_train_pr, y_train)
y_hat = poly.predict(x_test_pr)
y_hat[0:5]
print('Predicted values:', y_hat[0:4])
print('True values', y_test[0:4].values)
PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly, pr)
Figure 3: A polynomial regression model; red dots represent training data, green dots represent test data, and the blue line represents the model prediction.
We see that the estimated function appears to track the data but around 200 horsepower, the function begins to diverge from the data points.
poly.score(x_train_pr, y_train)
poly.score(x_test_pr, y_test)
We see the R^2 for the training data is 0.5567, while the R^2 on the test data is -29.87. The lower the R^2, the worse the model; a negative R^2 is a sign of overfitting.
Let's see how the R^2 changes on the test data for different order polynomials and plot the results.
Rsqu_test = []
order = [1, 2, 3, 4]
for n in order:
    pr = PolynomialFeatures(degree=n)
    x_train_pr = pr.fit_transform(x_train[['horsepower']])
    x_test_pr = pr.fit_transform(x_test[['horsepower']])
    lr.fit(x_train_pr, y_train)
    Rsqu_test.append(lr.score(x_test_pr, y_test))
plt.plot(order, Rsqu_test)
plt.xlabel('order')
plt.ylabel('R^2')
plt.title('R^2 Using Test Data')
plt.text(3, 0.75, 'Maximum R^2 ')
We see the R^2 gradually increases until an order three polynomial is used. Then the R^2 dramatically decreases at four.
def f(order, test_data):
    x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=test_data, random_state=0)
    pr = PolynomialFeatures(degree=order)
    x_train_pr = pr.fit_transform(x_train[['horsepower']])
    x_test_pr = pr.fit_transform(x_test[['horsepower']])
    poly = LinearRegression()
    poly.fit(x_train_pr, y_train)
    PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly, pr)
The following interface allows you to experiment with different polynomial orders and different amounts of data.
interact(f, order=(0, 6, 1), test_data=(0.05, 0.95, 0.05))
pr = PolynomialFeatures(degree=2)
x_train_pr = pr.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg', 'normalized-losses', 'symboling']])
x_test_pr = pr.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg','normalized-losses','symboling']])
from sklearn.linear_model import Ridge
RidgeModel = Ridge(alpha=0.1)
RidgeModel.fit(x_train_pr, y_train)
yhat = RidgeModel.predict(x_test_pr)
print('predicted:', yhat[0:4])
print('test set:', y_test[0:4].values)
Select the value of alpha that minimizes the test error. For example, we can use a loop.
Rsqu_test = []
Rsqu_train = []
dummy1 = []
ALPHA = 10 * np.array(range(0,1000))
for alfa in ALPHA:
    RidgeModel = Ridge(alpha=alfa)
    RidgeModel.fit(x_train_pr, y_train)
    Rsqu_test.append(RidgeModel.score(x_test_pr, y_test))
    Rsqu_train.append(RidgeModel.score(x_train_pr, y_train))
width = 12
height = 10
plt.figure(figsize=(width, height))
plt.plot(ALPHA, Rsqu_test, label='validation data')
plt.plot(ALPHA, Rsqu_train, 'r', label='training data')
plt.xlabel('alpha')
plt.ylabel('R^2')
plt.legend()
Figure 6: The blue line represents the R^2 of the test data, and the red line represents the R^2 of the training data. The x-axis represents the different values of Alpha.
from sklearn.model_selection import GridSearchCV
parameters1 = [{'alpha': [0.001, 0.1, 1, 10, 100, 1000, 10000, 100000]}]
parameters1
RR = Ridge()
RR
Grid1 = GridSearchCV(RR, parameters1, cv=4)
Grid1.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data)
The object finds the best parameter values on the validation data. We can obtain the estimator with the best parameters and assign it to the variable BestRR.
BestRR = Grid1.best_estimator_
BestRR
BestRR.score(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_test)
parameters2 = [{'alpha': [0.001, 0.1, 1, 10, 100, 1000, 10000, 100000], 'normalize': [True, False]}]
Grid2 = GridSearchCV(Ridge(), parameters2, cv=4)
Grid2.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data)
Grid2.best_estimator_
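Assuming the fit above completes, the fitted GridSearchCV object also exposes the selected hyperparameters and the cross-validation score of every candidate, which is a quick way to sanity-check the search (a brief sketch):
# best hyperparameter combination found by the 4-fold grid search
print(Grid2.best_params_)
# mean cross-validated R^2 for each candidate in the grid
print(Grid2.cv_results_['mean_test_score'])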