## Switching from Linear to Polynomial Regression

In the field of Machine Learning, *Regression* is the common term for predicting the value of a *Continuous Dependent* Variable.

Depending on the distribution of the data, we can determine whether to use linear regression or non-linear regression.

Linear regression is used when the relationship between the *dependent* and *independent* variables is linear.

When it comes to non-linear data, the first question that comes to mind is: how can we generate a curve that captures most of the patterns in our data?

To answer it, polynomial regression comes in handy. It is a simple approach to building non-linear models: it adds polynomial terms (squares, cubes, and so on) of the features to the regression.
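As a minimal sketch of this idea, assuming scikit-learn and NumPy are available and using a made-up quadratic dataset (not the Boston Housing data from the video), we can expand the feature `x` into `[x, x^2]` and then fit an ordinary linear regression on the expanded features:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic non-linear data: y = x^2 plus Gaussian noise (hypothetical example)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = x.ravel() ** 2 + rng.normal(scale=0.5, size=50)

# Expand the single feature x into [x, x^2], so a *linear* model
# over these new columns can represent a curve in the original x
poly = PolynomialFeatures(degree=2, include_bias=False)
x_poly = poly.fit_transform(x)

model = LinearRegression().fit(x_poly, y)
print(model.score(x_poly, y))  # R^2 on the training data
```

The key point is that the model itself stays linear; only the feature space changes, which is why polynomial regression is still fitted with ordinary least squares.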

Why use polynomial regression, then?

When linear regression is applied to non-linear data, the outcome is a straight line, i.e. the fit fails to capture the patterns in the data.

This scenario is termed under-fitting (a highly biased model, i.e. one prone to high training error).
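To see the under-fitting described above in numbers, here is a small sketch (again on made-up quadratic data, assuming scikit-learn) comparing the training R^2 of a straight-line fit against a degree-2 polynomial fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 60).reshape(-1, 1)
y = x.ravel() ** 2 + rng.normal(scale=0.5, size=60)

# A straight line under-fits this quadratic data: it is nearly flat,
# so its R^2 stays close to zero...
linear_r2 = LinearRegression().fit(x, y).score(x, y)

# ...while the degree-2 feature expansion captures the curve
x2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)
poly_r2 = LinearRegression().fit(x2, y).score(x2, y)

print(f"linear R^2 = {linear_r2:.3f}, polynomial R^2 = {poly_r2:.3f}")
```

Because the data are symmetric about zero, the best straight line is essentially horizontal, which is exactly the "highly biased" behaviour described above.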

To give a brief example, I have included a video that explains polynomial regression using the Boston Housing dataset (https://www.kaggle.com/c/boston-housing).

Here is the link to it: https://youtu.be/Vgb9XFa7YyQ

Link to code: https://github.com/Gaurav9112/Polynomial-Regression

By: Gaurav
