LASSO – A Regularization Method

February 14, 2019 | DATAcated Challenge

Least Absolute Shrinkage and Selection Operator (LASSO)

The lasso is a regularization method that minimizes the residual sum of squares subject to a constraint on the coefficients, and it tends to shrink the coefficients of some features to exactly zero. Lasso penalties are useful for fitting a wide variety of models.
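As a minimal, illustrative sketch of this selection effect, the following Python snippet fits scikit-learn's Lasso to synthetic data (the sample size, feature counts, and alpha value here are assumptions chosen for demonstration, not taken from this post):

# A minimal sketch: lasso driving some coefficients exactly to zero.
# Dataset shape and alpha are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# 100 samples, 10 features, only 3 of which are truly informative
X, y = make_regression(n_samples=100, n_features=10,
                       n_informative=3, noise=5.0, random_state=0)

model = Lasso(alpha=1.0)  # alpha sets the strength of the L1 penalty
model.fit(X, y)

print("Coefficients:", np.round(model.coef_, 2))
print("Coefficients exactly zero:", int(np.sum(model.coef_ == 0)))

Typically the uninformative features end up with coefficients of exactly zero, which is the feature-selection behaviour described above.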

Data analysts are often not satisfied with OLS (Ordinary Least Squares) estimates. OLS estimates have low bias but high variance, and prediction accuracy can frequently be improved by shrinking some coefficients or setting them to zero. It has also been observed that, with a large number of predictors, a small subset of them usually accounts for the strongest effects, so a sparser model is easier to interpret as well.

Two methods that predate the lasso address these problems: best subset selection and ridge regression.

In best subset selection, regressors are either retained in or dropped from the model entirely. Because selection is a discrete process, small changes in the data can produce very different selected models, which lowers prediction accuracy.

In ridge regression, the regressor coefficients are shrunk by adding a penalty, the sum of the squared coefficients, to the RSS. This reduces variance, but ridge never sets a coefficient exactly to zero, so the resulting models are less interpretable.
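In standard textbook notation (a conventional formulation, not spelled out in the post), the ridge estimate solves

$$\hat{\beta}^{\text{ridge}} = \arg\min_{\beta} \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2,$$

where the tuning parameter λ ≥ 0 controls the amount of shrinkage: larger λ shrinks the coefficients further, but never all the way to zero.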

In lasso regression, a penalty term (constraint) is imposed on the RSS (Residual Sum of Squares) objective: the sum of the absolute values (moduli) of the beta parameters must be less than some constant “t”, also called the tuning parameter. When t is smaller than the corresponding sum for the full least squares estimates, the constraint shrinks the coefficients, setting some exactly to zero and reducing the variance, and hence the overall prediction error, of the model.
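Written out (this is the standard constrained form from Tibshirani's 1996 paper; the notation is conventional rather than quoted from the post), the lasso estimate solves

$$\hat{\beta}^{\text{lasso}} = \arg\min_{\beta} \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le t.$$

The absolute-value (L1) constraint is what allows coefficients to hit exactly zero when t is small enough, unlike the squared (L2) penalty of ridge.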

By: Arshiya Tripathy

