Blog

Decisions Made Easy with Random Forest!

We humans always want a second opinion, so why not our algorithms? Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Let’s understand decision tree…
Read more
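The aggregation step described above (the mode of the classes for classification, the mean prediction for regression) can be sketched in a few lines; the individual tree outputs below are invented purely for illustration:

```python
from collections import Counter
from statistics import mean

# Hypothetical outputs from five individual decision trees for one sample.
tree_class_votes = ["cat", "dog", "cat", "cat", "dog"]   # classification
tree_regression_preds = [3.1, 2.9, 3.4, 3.0, 3.2]        # regression

# Classification: the forest outputs the mode (majority vote) of the trees.
forest_class = Counter(tree_class_votes).most_common(1)[0][0]

# Regression: the forest outputs the mean of the individual predictions.
forest_value = mean(tree_regression_preds)

print(forest_class)  # cat
print(forest_value)
```

Three of the five trees vote "cat", so the forest predicts "cat"; the regression forest averages the five numbers.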

In the end, it’s all a question of balance!

“In the end, it’s all a question of balance!” This quote fits all machine learning and deep learning models (especially supervised ones) perfectly, where we try to find an optimal balance between the input variables and the final output variable. In such scenarios, we can’t learn less & we just can’t cross a certain…
Read more

Discussing the trade-off between bias and variance

Bias is error due to erroneous or overly simplistic assumptions in the learning algorithm you’re using. This can lead the model to underfit your data, making it hard to achieve high predictive accuracy and to generalize from the training set to the test set. Variance is error due to too much…
Read more
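The underfitting-versus-overfitting tension above can be measured numerically. The toy experiment below (my own illustration, not from the post) estimates a known true mean from many small resampled datasets with two estimators: the plain sample mean, which is unbiased but noisier, and a shrunken mean, which has lower variance at the price of a large bias:

```python
import random
from statistics import mean, pvariance

random.seed(0)
TRUE_MEAN = 10.0

def sample_dataset(n=5):
    # Draw a small dataset from a distribution whose true mean is TRUE_MEAN.
    return [random.gauss(TRUE_MEAN, 4.0) for _ in range(n)]

# Repeat the experiment over many independent training sets.
plain = [mean(sample_dataset()) for _ in range(2000)]
shrunk = [0.5 * m for m in plain]  # shrinking toward 0 trades bias for variance

bias_plain, var_plain = mean(plain) - TRUE_MEAN, pvariance(plain)
bias_shrunk, var_shrunk = mean(shrunk) - TRUE_MEAN, pvariance(shrunk)

print(f"plain : bias ~ {bias_plain:+.2f}, variance ~ {var_plain:.2f}")
print(f"shrunk: bias ~ {bias_shrunk:+.2f}, variance ~ {var_shrunk:.2f}")
```

The plain estimator's bias is near zero while the shrunken one sits about 5 below the truth, but its variance is only a quarter of the plain estimator's: exactly the trade the excerpt describes.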

LASSO – A Regularization Method

Least Absolute Shrinkage and Selection Operator (LASSO). The lasso is a regularization method that minimizes the residual sum of squares and tends to shrink the coefficients of some features to exactly zero. The lasso penalties are useful for fitting a wide variety of models. Data analysts are not satisfied with OLS (Ordinary Least Squares)…
Read more
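The "shrink some coefficients to exactly zero" behaviour comes from the lasso's soft-thresholding operator. As a sketch (assuming the simplified orthonormal-features case, where the lasso solution is just soft-thresholding applied to the OLS coefficients):

```python
def soft_threshold(z, lam):
    # Lasso's soft-thresholding operator: shrink z toward 0 by lam,
    # and set it exactly to 0 when |z| <= lam.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Toy OLS coefficients; with penalty lam = 0.5 the small ones vanish.
ols_coefs = [2.5, -0.2, 0.4, -1.75]
lasso_coefs = [soft_threshold(b, 0.5) for b in ols_coefs]
print(lasso_coefs)  # [2.0, 0.0, 0.0, -1.25]
```

Large coefficients survive (merely shrunk by the penalty), while small ones are set to exactly zero, which is how the lasso performs feature selection.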

Vectorization – The number game for text

Introduction: Machine learning has become the hottest topic in the data industry, with increasing demand for professionals who can work in this domain. There is a large amount of textual data on the internet and on giant servers around the world. Just for some facts: 1,209,600 new data-producing social media users each day. 656 million…
Read more
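The "number game" the title refers to, turning text into vectors of numbers, can be sketched as a minimal bag-of-words count vectorizer (the two example documents are made up):

```python
from collections import Counter

docs = ["the cat sat", "the cat sat on the mat"]

# Build a sorted vocabulary across all documents.
vocab = sorted({word for doc in docs for word in doc.split()})

def vectorize(doc):
    # Map a document to its vector of word counts over the vocabulary.
    counts = Counter(doc.split())
    return [counts[word] for word in vocab]

vectors = [vectorize(d) for d in docs]
print(vocab)    # ['cat', 'mat', 'on', 'sat', 'the']
print(vectors)  # [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]
```

Each document becomes a fixed-length numeric vector, one entry per vocabulary word, which is the form most ML algorithms expect.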

Bias/Variance Trade-off

The bias/variance trade-off is an important concept in learning theory. This post discusses the topic from both theoretical and practical perspectives. The main goal of any learning algorithm is to predict and generalise well. More formally, this goal is equivalent to ‘minimise expected error on unseen data’, so it is worth taking a closer look at the components of…
Read more

Trade-off between bias and variance

Bias is the set of simplifying assumptions made by the model to make the target function easier to approximate. Variance is the amount by which the estimate of the target function will change given different training data. The trade-off is the tension between the error introduced by the bias and the variance. By: Sachin Narang

Decision Trees: Simplifying the decision-making process

As kids, you must have played the game of “Guess the animal”. Let’s translate it into a graph. This is exactly how a Decision Tree is created for any ML problem. A Decision Tree is a supervised learning algorithm that uses a set of binary rules to calculate the target. It is…
Read more
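The "Guess the animal" game maps directly onto a tree of binary rules. As a hand-built illustration (the questions and animals are invented, not learned from data):

```python
def guess_animal(has_feathers, barks):
    # A tiny hand-built decision tree: each internal node asks a
    # yes/no question, and each leaf is a final answer.
    if has_feathers:
        return "bird"
    if barks:
        return "dog"
    return "cat"

print(guess_animal(True, False))   # bird
print(guess_animal(False, True))   # dog
print(guess_animal(False, False))  # cat
```

A learned decision tree works the same way, except the algorithm picks the questions (splits) automatically from the training data.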

Random Forest Classifier in Layman Terms

Random Forest Classifier is an ensemble algorithm that builds a set of decision trees from randomly selected subsets of the training set and then aggregates the votes from the different decision trees to decide the final class of the test object. For better understanding, let us say Ram is planning to buy a car. After…
Read more
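The "randomly selected subset of the training set" step is commonly implemented as bootstrap sampling (drawing with replacement). A minimal sketch, with a toy training set of twenty example ids:

```python
import random

random.seed(42)
training_set = list(range(1, 21))  # toy training examples, by id

def bootstrap_sample(data):
    # Each tree is grown on a random sample drawn with replacement,
    # the same size as the original training set, so every tree sees
    # a slightly different view of the data.
    return [random.choice(data) for _ in data]

samples = [bootstrap_sample(training_set) for _ in range(3)]
for i, s in enumerate(samples, 1):
    print(f"tree {i} trains on: {sorted(s)}")
```

Because each sample repeats some examples and omits others, the three trees end up decorrelated, which is what makes averaging their votes worthwhile.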

Bias vs Variance – Much like raising a child

The two errors that are critical to understanding model prediction are bias and variance. These concepts can be applied to nearly all sorts of learning in our life. For example, a higher-level understanding of these two concepts can be gained through the analogy of raising (training) a young child. Let Home = Training set. Instill best…
Read more