🏆Deep Learning: 3 Limits we face trying to replicate the human brain

Deep Learning is a subfield of machine learning built on algorithms inspired by the structure and function of the brain. It is a particularly interesting area of machine learning because it brings us closer to realizing the true power of AI; however, it has some inherent limitations. There is a lot of debate around this topic, but I believe it boils down to three major challenges:

  1. Deep Learning requires a large amount of data and processing power – To train a deep learning model to distinguish, say, a cat from a cow, you need to “feed” it millions of labeled images of cats and cows so it can learn the difference. This is computationally expensive, as the sketch after this list illustrates.
  2. Calculations and decision-making happen in hidden layers, or a “black box” – Once the images pass through the input layer, feature extraction and the weighting of those features happen in hidden layers that offer no human-readable explanation. That may not matter for telling cats from cows, but for more consequential decisions, such as loan approvals or medical diagnoses, not knowing what happened at this stage could be highly problematic.
  3. Computers don’t have morals and struggle with abstract concepts – In his 2003 paper “Ethical Issues in Advanced Artificial Intelligence,” Nick Bostrom describes a hypothetical paperclip-manufacturing AI. If mistakes are made in its programming or its rules are not explicit, he posits, the AI will be so focused on its goal of manufacturing paperclips that it will do whatever it takes to optimize production, even if that means causing harm or consuming resources far beyond what it needs.
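
To make the first two limits concrete, here is a minimal sketch in Python using PyTorch. This is my own illustration rather than anyone’s production code: random tensors stand in for the millions of labeled cat/cow images a real classifier would need, and the hidden layer’s weight matrix is the “black box” in question.

```python
# Minimal sketch: a tiny binary classifier, illustrating limits 1 and 2.
# The data below is random and stands in for a real labeled image set.
import torch
import torch.nn as nn

# Stand-in dataset: 1,000 fake 64x64 grayscale "images" with 0/1 labels.
# A production model would need orders of magnitude more real examples.
images = torch.randn(1000, 64 * 64)
labels = torch.randint(0, 2, (1000,))

# One hidden layer: the learned weights live here, and nothing about
# them is directly human-readable -- this is the "black box".
model = nn.Sequential(
    nn.Linear(64 * 64, 128),  # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(128, 2),        # hidden layer -> cat/cow scores
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training loop (real training runs far longer on far more data).
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # every update touches all ~525,000 parameters
    optimizer.step()

# The only "explanation" of a decision is this matrix of raw numbers:
print(model[0].weight.shape)  # torch.Size([128, 4096])
```

Even this toy network carries over half a million learned parameters, and inspecting them says nothing about why it labels one image a cat and another a cow; scaling up to real images multiplies both the data and the compute required.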

Although these and other challenges will certainly continue to draw attention, deep learning holds promise for solving many of humanity’s problems. The key will be policing the transition from weak, narrow AI to artificial general intelligence.

By: Jennifer Cooper
