## Bias-Variance Tradeoff

"As a Data Scientist, should I be a specialist or a generalist? After all, data science is an ocean!"

As someone in the first semester of his Master's in Analytics degree, this was the question on my mind as professors introduced a plethora of new terminology in every class, while I tried to figure out what I should focus on.

The answer I have figured out over the course of these eight long months is that you need to hit the "sweet spot" and be both!

Interestingly, the Bias-Variance trade-off follows the same principle: your predictive model should hit a sweet spot between being too specific to your training data and being too general.

Let's start by defining the two terms:

**Bias** – how much the average model, taken over all training sets, differs from the desired "true" model; it reflects how well the algorithm can capture the underlying problem. High bias yields a model with poor predictive power, a problem called "underfitting".

**Variance** – how much models estimated on different training sets differ from each other, i.e., different accuracies on different training data. High variance leads to a problem called "overfitting".

**Goal**

The main goal of a predictive model is to minimize the expected error on test (unseen) data. To achieve this, an ideal model should have **low bias and low variance.**

Generalization error:

Error = Bias^2 + Variance + Noise (irreducible)
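The two definitions and this decomposition can be checked with a small simulation. The sine target, the deliberately simple linear model, and the noise level below are illustrative assumptions, not anything prescribed by the article:

```python
import numpy as np

# Sketch: estimate bias^2 and variance of a simple (high-bias) model at one
# test point, then check Error ≈ Bias^2 + Variance + Noise empirically.
rng = np.random.default_rng(0)
sigma = 0.3                               # assumed noise level
f = np.sin                                # assumed "true" model
x0 = 1.0                                  # test point

preds, test_errors = [], []
for _ in range(5000):                     # many independent training sets
    x = rng.uniform(0, np.pi, 30)
    y = f(x) + rng.normal(0, sigma, 30)
    coeffs = np.polyfit(x, y, deg=1)      # fit a straight line
    pred = np.polyval(coeffs, x0)
    preds.append(pred)
    y0 = f(x0) + rng.normal(0, sigma)     # fresh noisy observation at x0
    test_errors.append((pred - y0) ** 2)

preds = np.array(preds)
bias_sq = (preds.mean() - f(x0)) ** 2     # (average model - truth)^2
variance = preds.var()                    # spread across training sets
noise = sigma ** 2                        # irreducible
print(f"measured test error : {np.mean(test_errors):.4f}")
print(f"bias^2 + var + noise: {bias_sq + variance + noise:.4f}")
```

The two printed numbers should agree closely, with the bias^2 term dominating because a straight line cannot capture the curvature of the sine.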

**Problem** – As model complexity increases, bias decreases, but the model becomes too specific to the data it is trained on (overfitting) and variance increases. Low training error paired with high test error is a telltale sign of high variance.
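One way to see this effect is to sweep model complexity, here polynomial degree, on a small noisy dataset; the target function, noise level, and degrees tried are assumptions chosen for illustration:

```python
import numpy as np

# Sketch: train vs. test error as polynomial degree (model complexity) grows.
rng = np.random.default_rng(2)
f = np.sin
x_tr = rng.uniform(0, np.pi, 15)
y_tr = f(x_tr) + rng.normal(0, 0.3, 15)    # small noisy training set
x_te = rng.uniform(0, np.pi, 200)
y_te = f(x_te) + rng.normal(0, 0.3, 200)   # held-out test set

def mse(deg):
    """Fit a degree-`deg` polynomial; return (train MSE, test MSE)."""
    c = np.polyfit(x_tr, y_tr, deg)
    return (np.mean((np.polyval(c, x_tr) - y_tr) ** 2),
            np.mean((np.polyval(c, x_te) - y_te) ** 2))

for deg in (1, 3, 10):
    tr, te = mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error keeps falling as the degree rises, while test error bottoms out at a moderate degree and then climbs as the high-degree fit chases the noise.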

**Tradeoff** – Lower-bias models tend to have higher variance, and vice versa.

**Solutions** –

- Use more training data.
- Choose your sampling strategy carefully and understand how the data was sampled.
- Use cross-validation techniques.
- Use regularization (penalizes a highly complex model).
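The last two bullets can be combined: use cross-validation to choose how strongly to regularize. Below is a minimal NumPy-only sketch of K-fold cross-validation selecting a ridge penalty; the synthetic data, the fold count, and the lambda grid are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))                       # synthetic features
w_true = np.array([1.5, -2.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(0, 0.5, 60)            # noisy targets

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: (X^T X + lam * I)^-1 X^T y.
    # The lam * I term shrinks the weights, penalizing complex models.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_mse(lam, k=5):
    """Average held-out MSE of ridge with penalty `lam` over k folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[tr], y[tr], lam)
        errs.append(np.mean((X[te] @ w - y[te]) ** 2))
    return np.mean(errs)

best = min([0.01, 0.1, 1.0, 10.0], key=cv_mse)
print("lambda chosen by 5-fold CV:", best)
```

Because each candidate penalty is scored on data the model never trained on, the chosen lambda balances underfitting (too much shrinkage) against overfitting (too little).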

– Archit Shorey