Why AI Is Harder Than We Think

Author: Melanie Mitchell
Year: 2021

Mitchell, 2021. (View Paper →)

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

The paper outlines four reasons why AI is harder than we think: areas where researchers tend to be overconfident in their predictions.

  1. Narrow intelligence might not be on a continuum with general intelligence
    • When we see a computer do something amazing in a narrow area, we might assume we’re much closer to general intelligence than we actually are.
    • This is called the first-step fallacy.
  2. Easy things are hard, and hard things are easy.
    • This reverses the common assumption that ‘easy things are easy and hard things are hard’; the reversal is known as Moravec’s paradox.
    • 🤓
      Moravec’s paradox is named after roboticist Hans Moravec
    • We can make computers exhibit adult-level performance at checkers, but it’s difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.
    • Looking out the window and understanding what we see is easy for us but incredibly difficult for AI. DeepMind conquered the game of Go, but Go is a game that is difficult for humans; conquering charades would be more impressive.
  3. The Lure of Wishful Mnemonics
    • Neural networks: the name suggests they’re more closely based on the brain than they actually are.
    • Machine Learning / Deep Learning: imply more learning than actually happens.
    • You’ll often hear people say the model ‘understands’ that the image should have that label, but ‘understands’ is an incredibly generous word choice (see the first sketch after this list).
    • Transfer learning is needed to apply something narrow to something more general → and it’s really hard (see the second sketch after this list).
  4. What if intelligence isn’t all in the brain?
    • The assumption that intelligence can in principle be “disembodied” is implicit in almost all work on AI throughout its history
    • Maybe Moore’s law has been kidding us into thinking we’re making progress toward intelligence when we haven’t been.
    • Can we make superhuman AI without the things that make us human? We don’t know whether ‘pure rationality’ can be separated from the strongly integrated and interconnected attributes that make us human, including emotions, desires, a strong sense of selfhood and autonomy, and a common-sense understanding of the world.
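
On the ‘wishful mnemonics’ point, here’s a minimal sketch (plain Python with NumPy; the labels and logit values are made up for illustration) of what “the model understands this image is a dog” actually reduces to: an argmax over a vector of probabilities.

```python
import numpy as np

# A trained classifier's output for one image is just a vector of logits,
# one per candidate label. (These values are invented for illustration.)
labels = ["cat", "car", "dog"]
logits = np.array([2.1, 0.3, 5.7])

# Softmax turns logits into probabilities. "The model understands this is
# a dog" really means "index 2 holds the largest number".
probs = np.exp(logits) / np.exp(logits).sum()
print(labels[int(probs.argmax())], round(float(probs.max()), 2))  # dog 0.97
```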
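
And on transfer learning, a minimal sketch of the standard recipe, assuming PyTorch and torchvision (the 10-class target task is hypothetical). Notice that the ‘narrow’ pretrained backbone is frozen and reused wholesale; only a small new head is trained, which hints at how little actually transfers by itself.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a ResNet-18 pretrained on ImageNet (a narrow task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pretrained weight so the backbone is reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for a new, hypothetical
# 10-class problem; this is the only part that will learn anything.
model.fc = nn.Linear(model.fc.in_features, 10)

# Train just the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```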