Why you should not blindly trust your deep learning algorithm

January 24, 2019 DATAcated Challenge

Deep learning is the new poster boy of data science; it has become the heart of designing intelligent systems. However, all that glitters is not gold: deep learning models come with a surprising number of limitations. Let us explore them one by one.

1. Deep learning models lack common sense, which hinders their decision making on any problem that requires personal discretion or judgment. As humans, we can apply common sense to things we see for the first time and make reliable estimates and judgments. A deep learning model cannot do so on the fly; it needs prior training data.

2. Deep learning does not guarantee an exact output. Even after being fed training data, a deep learning algorithm can only estimate the answer; there is no 100% assurance that the output is correct. Its predictions are approximations, typically expressed as probabilities rather than certainties.
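To make the point concrete, here is a minimal sketch (all values are illustrative, not from any real model) of how a classifier's final softmax layer turns raw scores into probabilities. Even the model's top prediction comes with less than 100% confidence:

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(z - max(logits)) for z in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 3-class classifier (invented for illustration).
probs = softmax([2.0, 1.0, 0.1])

top = max(range(len(probs)), key=lambda i: probs[i])
print(top, round(probs[top], 2))  # top class wins with roughly 66% confidence, not 100%
```

The output is always a distribution that sums to 1, so "the answer" is really just the most probable guess, and the remaining probability mass goes to the alternatives.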

3. Deep learning suffers from the classic 'garbage in, garbage out' problem. If the algorithm is fed incorrect or incomplete data, it will produce wrong results. It lacks the intelligence to recognize which data is useless, inaccurate, or incomplete for the purpose of producing legitimate and accurate results.
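This effect is easy to demonstrate with even the simplest possible learner. In the sketch below (toy data, with an ordinary least-squares fit standing in for a deep network), a single corrupted measurement skews what the model learns, and the model has no way of knowing that the bad point is garbage:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4]
clean = [0, 1, 2, 3, 4]   # true relation: y = x
dirty = [0, 1, 2, 3, 40]  # same data with one garbage measurement

print(fit_line(xs, clean))  # slope ~1.0: the real trend is recovered
print(fit_line(xs, dirty))  # slope ~8.2: badly skewed by one bad input
```

One bad value out of five is enough to make the learned relationship wildly wrong, and nothing in the fitting procedure flags the problem.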

4. A deep learning algorithm's knowledge and experience come entirely from the training data fed into it. This constrains the domain expertise of deep learning methods: they are only exposed to the domains we allow in the form of training datasets, which severely restricts their ability to make decisions in dynamic situations.

5. Deep learning methods lack global generalization. They cannot anticipate different potential scenarios or transfer knowledge from learned concepts to new, unseen situations. Humans, on the other hand, can think ahead of time and plan for long-term situations.
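A toy illustration of the same idea (a straight-line fit standing in for the learned model, with data invented for the example): trained only on a narrow slice of a curved process, the model extrapolates blindly and misses badly in situations it has never seen.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit, standing in for 'the learned model'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

train_x = [0, 1, 2, 3]
train_y = [x ** 2 for x in train_x]  # true process: y = x^2

slope, intercept = fit_line(train_x, train_y)

x_new = 10                           # far outside the training range
prediction = slope * x_new + intercept
print(prediction, x_new ** 2)        # model predicts 29.0, reality is 100
```

Inside the training range the fit looks acceptable, but the model has no concept of the underlying process, so it cannot anticipate what happens beyond the data it was shown.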

6. Deep learning algorithms seem less effective beyond classification and dimensionality-reduction problems, so there is a danger that their areas of application remain limited.

In conclusion, no technique or method in data science is free from flaws or limitations. Even something as advanced as deep learning has shortcomings, and they must be borne in mind before interpreting the results it produces.

By: Tanmayee Waghmare

 
