Why AI is Harder Than We Think
The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The
Guardian predicted that “From 2020 you will become a permanent backseat driver”. In 2016 Business
Insider assured us that “10 million self-driving cars will be on the road by 2020”. Tesla Motors CEO
Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving,
software… everything”. And 2020 was the target announced by several automobile companies to bring
self-driving cars to market [4, 5, 6].
Despite attempts to redefine “full self-driving” into existence, none of these predictions has come true. It’s
worth quoting AI expert Drew McDermott on what can happen when over-optimism about AI systems—in
particular, self-driving cars—turns out to be wrong:
Perhaps expectations are too high, and… this will eventually result in disaster. [S]uppose that five
years from now [funding] collapses miserably as autonomous vehicles fail to roll. Every startup
company fails. And there’s a big backlash so that you can’t get money for anything connected
with AI. Everybody hurriedly changes the names of their research projects to something else.
This condition [is] called the “AI Winter”.
What’s most notable is that McDermott’s warning is from 1984, when, as today, the field of AI was
awash with confident optimism about the near future of machine intelligence. McDermott was writing
about a cyclical pattern in the field: new, apparent breakthroughs would lead AI practitioners to predict
rapid progress, successful commercialization, and the near-term prospects of “true AI.” Governments and
companies would get caught up in the enthusiasm, and would shower the field with research and development
funding. AI Spring would be in bloom. When progress stalled, the enthusiasm, funding, and jobs would dry
up. AI Winter would arrive. Indeed, about five years after McDermott’s warning, a new AI winter set in.