The Need for Polynomial Regression

Linear Regression is one of the most widely used techniques for fitting a straight line to linear data and is given by:

Y = θ0 + θ1x

where,

  • θ0 is the bias term, which is the y-intercept of the line
  • x is the feature
  • θ1 is the parameter (weight) for x
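For concreteness, here is a minimal sketch of recovering θ0 and θ1 with scikit-learn's LinearRegression; the toy data and the true parameter values (3 and 2) are made up for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up linear data: theta0 = 3, theta1 = 2, plus Gaussian noise.
x = np.linspace(0, 10, 50).reshape(-1, 1)   # a single feature, as a column
y = 3 + 2 * x.ravel() + np.random.normal(0, 1, 50)

model = LinearRegression().fit(x, y)
print(model.intercept_, model.coef_)        # estimates of theta0 and theta1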

But what if our data is non-linear?

Let’s create some non-linear data.

import numpy as np

# 20 random feature values
x = 2 - 1.75 * np.random.normal(0, 1, 20)
# a cubic function of x, plus Gaussian noise
y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 20)

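A quick scatter plot (using matplotlib, and assuming x and y from the snippet above) shows the curvature in the data:

import matplotlib.pyplot as plt

plt.scatter(x, y, s=15)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Non-linear data')
plt.show()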

Although our data is non-linear, we can still fit it with a linear model. This is done by adding polynomial terms of the existing features to the feature set and then training a linear model on the newly created set of features. This technique is called Polynomial Regression. Below is one way to represent a degree-2 polynomial regression for 2 features:

Y = θ0 + θ1x1 + θ2x2 + θ3x1x2 + θ4(x1)^2 + θ5(x2)^2

Polynomial Regression is often referred to as a special case of multiple linear regression, since the model is still linear in its coefficients and the polynomial terms are just additional features.

So, the above polynomial equation can be written as:

Y = θ0 + θ1x1 + θ2x2 + θ3x3 + θ4x4 + θ5x5

where x3 = x1x2, x4 = (x1)^2, and x5 = (x2)^2 are the polynomial terms, treated as ordinary features.
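In practice we rarely write out this expansion by hand; scikit-learn's PolynomialFeatures generates it for us. A minimal sketch (get_feature_names_out assumes scikit-learn 1.0 or later):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])                      # one sample with features x1, x2
poly = PolynomialFeatures(degree=2, include_bias=False)
print(poly.fit_transform(X))                    # [[2. 3. 4. 6. 9.]]
print(poly.get_feature_names_out(['x1', 'x2'])) # ['x1' 'x2' 'x1^2' 'x1 x2' 'x2^2']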

Let’s consider a scenario in which we have a housing-price dataset with only two features, length and breadth, and our goal is to train an accurate model. In this case, we can create a new, relevant feature such as the area of the house, which is exactly the polynomial interaction term length × breadth. Polynomial Regression is really helpful when a dataset lacks informative features.
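As a hypothetical sketch (the column names and values here are made up), deriving the area feature looks like this:

import numpy as np

length = np.array([10.0, 12.0, 9.0])          # hypothetical house lengths
breadth = np.array([8.0, 7.5, 11.0])          # hypothetical house breadths
area = length * breadth                       # the degree-2 interaction term
X = np.column_stack([length, breadth, area])  # expanded feature matrix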

Let’s try fitting curves of different degrees to the non-linear data from above:

  • y = θ0 + θ1x1
  • y = θ0 + θ1x1 + θ2(x1)^2
  • y = θ0 + θ1x1 + θ2(x1)^2 + θ3(x1)^3

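In place of the figures, here is a sketch that produces all three fits with numpy's polyfit (assuming x and y from the earlier snippet):

import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(x.min(), x.max(), 200)          # dense grid for smooth curves
plt.scatter(x, y, s=15, label='data')
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)            # least-squares fit of this degree
    plt.plot(xs, np.polyval(coeffs, xs), label=f'degree {degree}')
plt.legend()
plt.show()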

It turns out that polynomial terms of different degrees produce noticeably different fits to the data. A quadratic term gives the curve a single bend, while a cubic term gives it two bends, one opening upward and the other downward.

You may be wondering: what degree of polynomial terms should we generally add?

Well, that depends on the data and on the kind of output you expect. Adding high-degree polynomial terms may result in over-fitting, i.e., our model learns the noise in the data and fails to generalize to new data; a rough illustration follows the list below. So, before adding polynomial features we need to:

  • Analyze the data.
  • Have an idea of what the output should look like.
  • Last but not least, take care of the bias-variance trade-off.
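Here is a rough sketch of that trade-off, using a simple hold-out split on the data from above (the split and the degrees are arbitrary choices for illustration): training error keeps falling as the degree grows, while error on the held-out points typically starts to rise.

import numpy as np

train, test = slice(0, 15), slice(15, None)      # crude 15/5 hold-out split
for degree in (1, 3, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    for name, idx in (('train', train), ('test', test)):
        mse = np.mean((y[idx] - np.polyval(coeffs, x[idx])) ** 2)
        print(degree, name, round(mse, 2))       # train MSE falls with degree; test MSE often rises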

By: Satvik Tiwari