In the end, it’s all a question of balance!

February 14, 2019 | DATAcated Challenge

“In the end, it’s all a question of balance!” This quote fits machine learning and deep learning models (especially supervised ones) perfectly: we try to find an optimal balance in how the model learns the relationship between the input variables and the final output variable. In such scenarios, we can neither learn too little nor cross a certain threshold by overdoing the training. That act of balancing, where we make sure the final model is free from underfitting (high bias) and overfitting (high variance), is what managing the ‘Bias-Variance Tradeoff’ is all about.

In simple words, bias is high when the model is unable to capture the true relationship between the underlying variables, i.e. it cannot even fit the given training data (underfitting). High variance, on the other hand, means the model does very well on a particular training dataset but performs poorly when asked to predict on new data, i.e. a test set (overfitting).
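Before the image example, here is a minimal sketch of both failure modes, assuming scikit-learn is available and using a toy two-class dataset (make_moons) instead of images; the tree depths are illustrative choices, not prescriptions:

```python
# A minimal sketch (assuming scikit-learn) that makes both failure modes
# concrete on a toy classification task: a depth-1 tree is too simple
# (high bias), an unrestricted tree memorizes the training data (high variance).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, depth in [("high bias (underfit)", 1),
                    ("high variance (overfit)", None),
                    ("balanced", 4)]:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, y_train)
    print(f"{name:25s} train acc={clf.score(X_train, y_train):.2f}  "
          f"test acc={clf.score(X_test, y_test):.2f}")
```

Typically, the depth-1 tree scores poorly on both splits (high bias), while the unrestricted tree scores near-perfectly on the training split but noticeably worse on the test split (high variance).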

Let’s take an example: we are using our model to classify images into either class 0 (i.e. Cat) or class 1 (i.e. Dog).

Fig. Cat vs Dog

For classifying the images, our training and test datasets will have many features for both Cat and Dog, but we will not go into that level of detail. We will only consider the errors we get during training and testing, and by comparing those we will try to understand the various possible combinations of bias and variance.

The table below lists the four possible cases:

                 High Bias    High Variance    High Bias – High Variance    Low Bias – Low Variance
Training Error   20%          1%               15%                          0.5%
Testing Error    22%          11%              30%                          1%

As you can see from the table above, all the bias-variance cases are listed here. High bias leads to underfitting, and therefore to high training error as well as high test error. High variance, on the other hand, gives a low training error, but when you evaluate the model on a test dataset the error is high, and that is because of overfitting.
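The four columns of the table can be read as a simple diagnostic rule: compare training error against some acceptable baseline, and compare test error against training error. Here is a minimal sketch of that rule; the baseline and gap thresholds are illustrative assumptions, not values from the article:

```python
# Minimal sketch of the diagnosis implied by the table above: compare training
# error against an acceptable baseline (assumed here, e.g. human-level error)
# and test error against training error. Thresholds are illustrative.
def diagnose(train_error, test_error, baseline=0.02, gap=0.05):
    high_bias = (train_error - baseline) > gap        # cannot even fit the training set
    high_variance = (test_error - train_error) > gap  # fits training set, fails on new data
    if high_bias and high_variance:
        return "high bias & high variance"
    if high_bias:
        return "high bias (underfitting)"
    if high_variance:
        return "high variance (overfitting)"
    return "low bias & low variance"

# Reproducing the four columns of the table:
for tr, te in [(0.20, 0.22), (0.01, 0.11), (0.15, 0.30), (0.005, 0.01)]:
    print(f"train={tr:.1%}  test={te:.1%}  ->  {diagnose(tr, te)}")
```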

With the help of graphs, we can also visualize this trade-off:

Fig. Underfitting vs. Overfitting
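A similar picture can be reproduced with a few lines of code. This is a rough sketch (assuming scikit-learn and matplotlib, and using polynomial degree as a stand-in for model complexity), not the article’s original figure:

```python
# Minimal sketch of the classic trade-off curve: training error keeps falling
# as the model gets more flexible, while test error falls and then rises again.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=80)   # noisy sine curve
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

degrees = range(1, 13)
train_err, test_err = [], []
for d in degrees:
    m = make_pipeline(PolynomialFeatures(d), LinearRegression()).fit(X_tr, y_tr)
    train_err.append(mean_squared_error(y_tr, m.predict(X_tr)))
    test_err.append(mean_squared_error(y_te, m.predict(X_te)))

plt.plot(degrees, train_err, label="training error")
plt.plot(degrees, test_err, label="test error")
plt.xlabel("model complexity (polynomial degree)")
plt.ylabel("MSE")
plt.legend()
plt.show()
```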

On a concluding note, I would like to mention some possible techniques for handling the bias-variance trade-off: regularization, boosting, and bagging.
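As a rough sketch of those remedies, using scikit-learn on a synthetic classification task (an assumption for illustration, not the cat-vs-dog data above), each technique can be tried in a couple of lines:

```python
# Minimal sketch of the three remedies mentioned above, using scikit-learn
# defaults on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    # Regularization: a smaller C penalizes large weights more, curbing variance.
    "regularized logistic regression": LogisticRegression(C=0.1, max_iter=1000),
    # Bagging: averaging many deep trees trained on bootstrap samples reduces variance.
    "bagged decision trees": BaggingClassifier(DecisionTreeClassifier(),
                                               n_estimators=50, random_state=0),
    # Boosting: sequentially combining weak (shallow) learners reduces bias.
    "AdaBoost with stumps": AdaBoostClassifier(n_estimators=50, random_state=0),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:35s} mean CV accuracy = {score:.3f}")
```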

By: Amit Bishnoi

 
