Limits of data hungry Deep Learning!

January 22, 2019 DATAcated Challenge

   Deep learning is often compared to the mechanisms that underlie the human mind!

There’s no doubt that machine learning and deep learning are highly effective for many tasks. However, they’re not a silver bullet that will solve all problems and override all previous technologies. Deep neural networks, which power deep learning algorithms, have several hidden layers between the input and output nodes, enabling them to perform much more complex classifications of data.

The top 5 limitations that definitely stand out are:

1. Deep learning thus far is data hungry

Human beings can learn abstract relationships in a few trials. Deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions, or even billions of training examples.

In problems where data are limited, deep learning is often not an ideal solution. So what happens when a deep learning algorithm doesn’t have enough quality training data? It can fail spectacularly, such as mistaking a rifle for a helicopter, or humans for gorillas.
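The point can be caricatured with a toy sketch. The setup below is entirely hypothetical: a 1-nearest-neighbour "model" labelling points on a line, where everything below 7 is truly a "cat". With only two training examples it mislabels a point that a version trained on slightly more data gets right.

```python
# Hypothetical toy setup: 1-nearest-neighbour classification on a line.
# True rule (unknown to the model): inputs below 7 are "cat", above are "dog".

def nearest_label(x, examples):
    """Return the label of the training example closest to x."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

# Tiny training set: one "cat" at 0, one "dog" at 10.
tiny = [(0, "cat"), (10, "dog")]
# Larger training set fills in the region near the boundary.
large = tiny + [(6.5, "cat"), (8, "dog")]

query = 6  # true label: "cat"
print(nearest_label(query, tiny))   # "dog" — misclassified with sparse data
print(nearest_label(query, large))  # "cat" — correct with denser data
```

A human told the rule once ("cats are below 7") would never need the extra examples; the data-driven learner does.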

2. Deep learning is shallow and has limited capacity for transfer

The trouble with deep learning algorithms is that they’re very good at mapping inputs to outputs but not so good at understanding the context of the data they’re handling. In fact, the word “deep” in deep learning is much more a reference to the architecture of the technology and the number of hidden layers it contains than an allusion to any deep understanding of what it does.

While decisions made by rule-based software can be traced back to the last if and else, the same can’t be said of machine learning and deep learning algorithms. This lack of transparency in deep learning is what we call the “black box” problem. Deep learning algorithms sift through millions of data points to find patterns and correlations that often go unnoticed by human experts. The decisions they make based on these findings often confound even the engineers who created them.
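The "traced back to the last if and else" contrast can be made concrete. The loan-approval rules below are hypothetical, but they show what a deep network cannot offer: every decision comes with the exact branch that produced it.

```python
# Hypothetical rule-based loan approval: every outcome is fully traceable
# to a specific rule, unlike a prediction read off a deep network's weights.

def approve_loan(income, debt):
    """Return (decision, reason) so the outcome is explainable."""
    if income < 20_000:
        return False, "income below 20,000"
    if debt / income > 0.5:
        return False, "debt-to-income ratio above 50%"
    return True, "all rules satisfied"

print(approve_loan(50_000, 30_000))  # (False, 'debt-to-income ratio above 50%')
print(approve_loan(50_000, 10_000))  # (True, 'all rules satisfied')
```

A neural network trained on the same decisions might match them case by case, yet it could never return the second element of that tuple.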

3. Deep learning thus far has no natural way to deal with hierarchical structure

Recurrent neural networks (RNNs) can generalize well when the differences between training and test data are small. But when generalization requires systematic composition or creative reasoning, RNNs fail. Humans, by contrast, can generalize to different possible problem cases, devise solutions for them, and perform long-term planning.
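A crude way to picture this failure mode is a pure memorizer. The sketch below is a caricature, not an RNN: a model that stores input-output pairs interpolates fine inside its training range but cannot extrapolate the underlying rule (here, doubling) to unseen inputs.

```python
# Toy caricature of the generalization gap: a lookup "model" trained on
# doubling the numbers 1..5 cannot apply the rule outside that range.

train = {n: 2 * n for n in range(1, 6)}

def memorizer(n):
    """Predict by lookup; fall back to the nearest seen input."""
    if n in train:
        return train[n]
    nearest = min(train, key=lambda k: abs(k - n))
    return train[nearest]

print(memorizer(3))    # 6  — correct inside the training range
print(memorizer(100))  # 10 — the true rule would give 200
```

A human shown the same five pairs would abstract the rule "double it" and answer 200 without hesitation.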

4. Deep learning presumes a largely stable world, in ways that may be problematic 

The logic of deep learning is such that it is likely to work best in highly static worlds. It fails where the rules are dynamic and constantly changing. These neural networks still cannot learn on their own or handle new, unforeseen data.
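This "stable world" assumption can be shown with toy numbers (all hypothetical): a threshold learned from yesterday's data keeps being applied after the world shifts, and its accuracy collapses.

```python
# Toy distribution shift: a learned threshold is only valid for the
# distribution it was trained on.

def fit_threshold(xs, labels):
    """Learn the midpoint between the two class means."""
    pos = [x for x, y in zip(xs, labels) if y == 1]
    neg = [x for x, y in zip(xs, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(t, xs, labels):
    return sum((x > t) == bool(y) for x, y in zip(xs, labels)) / len(xs)

# Training world: class 0 near 0, class 1 near 10.
train_x = [0, 1, 2, 9, 10, 11]
train_y = [0, 0, 0, 1, 1, 1]
t = fit_threshold(train_x, train_y)   # t == 5.5

# Shifted world: both classes moved up by 10; the old model is useless.
shift_x = [10, 11, 12, 19, 20, 21]
shift_y = [0, 0, 0, 1, 1, 1]
print(accuracy(t, train_x, train_y))  # 1.0 on the old world
print(accuracy(t, shift_x, shift_y))  # 0.5 on the shifted world
```

Without retraining on fresh data, the model has no mechanism for noticing that its world has changed.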

5. Deep learning lacks domain knowledge integration

The intelligence of human civilization accelerates through connectivity between people, which goes far beyond solving classification problems.

For example, if I say that B is A’s mother and that B and C are married, we humans can very easily infer that C is A’s father. For a deep learning network, however, even this simple relationship is tough to evaluate.
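With explicit symbolic facts and one rule, the same inference is a two-line lookup. The relation names below are hypothetical, but they show how trivially domain knowledge handles the compositional step that a network would have to learn from raw examples.

```python
# Symbolic encoding of the example: B is A's mother, B and C are married.
mother_of = {"A": "B"}
married_to = {"B": "C", "C": "B"}

def father_of(child):
    """Rule: a child's father is the spouse of the child's mother."""
    mother = mother_of.get(child)
    return married_to.get(mother)

print(father_of("A"))  # C
```

One stated rule generalizes to every family in the knowledge base at once; a deep network would need many labelled examples to approximate the same relation.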

Deep learning thus far works well as an approximation, but its answers often cannot be fully trusted…

By: Antara Basu
