Bias vs Variance – Much like raising a child

Two errors are critical to understanding model prediction: bias and variance. These concepts apply to nearly all kinds of learning in our lives; for example, we can build a high-level understanding of both through the analogy of raising (training) a young child.

Let:
Home = Training set
Instill best qualities = Model training
Real world = Test set

Bias: the inability of a model to learn the true relationship in the training data. In our analogy, it is the failure to instill in the child all the qualities that make an ideal human.
Variance: the error in the model's predictions when different (test) data is used, meaning the child's inability to adapt to the unseen real world.
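
For readers who want the formal version, these two quantities come from the standard decomposition of expected squared prediction error (a textbook result for squared loss, not something specific to this post; here f is the true relationship, f-hat the trained model, and sigma-squared the irreducible noise):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{Bias}^2}
  + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{Variance}}
  + \underbrace{\sigma^2}_{\text{Irreducible error}}
```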

Consider the two extremes below:

Underfitting (High Bias): This occurs when a model fails to learn the inherent pattern in the training data, i.e., the child learned very little from all the training at home and makes far-from-correct decisions (predictions) in the real world.

Overfitting (High Variance): This results from a model being too specific, picking up the noise along with the inherent pattern of the data, which stops it from generalizing well. In our case, the child is so rigidly drilled at home that he or she does not change with the changing world, as the sketch below illustrates.
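
To make the two extremes concrete, here is a minimal sketch (the sine curve, noise level, and polynomial degrees are illustrative choices, not anything from the original post): a degree-1 polynomial underfits noisy sine data, while a degree-12 polynomial chases the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Home" (training set) and "real world" (test set): noisy samples of a sine curve.
x_train = np.sort(rng.uniform(0.0, 2 * np.pi, 20))
y_train = np.sin(x_train) + rng.normal(0.0, 0.3, x_train.size)
x_test = np.sort(rng.uniform(0.0, 2 * np.pi, 200))
y_test = np.sin(x_test) + rng.normal(0.0, 0.3, x_test.size)

def mse(y, y_hat):
    """Mean squared error between targets and predictions."""
    return float(np.mean((y - y_hat) ** 2))

# Degree 1 is too simple (underfits); degree 12 is too flexible (overfits).
for degree in (1, 12):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    print(f"degree {degree:2d}: "
          f"train MSE {mse(y_train, model(x_train)):.3f}, "
          f"test MSE {mse(y_test, model(x_test)):.3f}")
```

A typical run shows the degree-1 model with similarly high errors on both sets (high bias), and the degree-12 model with a training error far below its test error (high variance).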

In practice, neither extreme is a good situation. Our model can be neither too simple nor too rigid and complex, meaning we must find the right balance in raising the child. That balance appears in the graph below, where bias and variance (moving in opposite directions) meet at a compromise point where the total error is at its minimum.

[Figure: bias vs variance trade-off curve; total error is minimized where the two curves meet]
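
The shape of that graph can be reproduced with the same toy setup from the sketch above: sweep the model complexity (here, polynomial degree) and pick the point where the test error, standing in for the total error, bottoms out.

```python
# Sweep complexity: training error keeps falling, test error traces a U-shape.
best_degree, best_err = None, float("inf")
for degree in range(1, 13):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    err = mse(y_test, model(x_test))
    if err < best_err:
        best_degree, best_err = degree, err
print(f"compromise point: degree {best_degree} (test MSE {best_err:.3f})")
```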

By: Adil Ahmed

 
