Deep learning excels at learning statistical correlations, but lacks robust ways of understanding how the meanings of sentences relate to their parts.
At TED, in early 2018, the futurist and inventor Ray Kurzweil, currently a director of engineering at Google, announced his latest project, “Google Talk to Books,” which claimed to use natural language understanding to “provide an entirely new way to explore books.” Quartz dutifully hyped it as “Google’s astounding new search tool [that] will answer any question by reading thousands of books.”
If such a tool actually existed and worked robustly, it would be amazing. But so far it doesn’t. If we could give computers one capability that they don’t already have, it would be the ability to genuinely understand language. In medicine, for example, several thousand papers are published every day; no doctor or researcher can possibly read them all. Drug discovery gets delayed because information is locked up in unread literature. New treatments don’t get applied, because doctors don’t have time to discover them. AI programs that could synthesize the medical literature, or even just reliably scan your email for things to add to your to-do list, would be a revolution.
[...] The currently popular approach to AI doesn’t do any of that; instead of representing knowledge, it just represents probabilities, mainly of how often words tend to co-occur in different contexts. This means you can generate strings of words that sound humanlike, but there’s no real coherence there.
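To make the idea of "representing probabilities of how often words co-occur" concrete, here is a minimal sketch (not the authors' example, and far simpler than a real neural language model): a bigram model that records only which words follow which, then samples from those counts. The toy corpus and the `generate` function are invented for illustration. The output is locally word-like but has no knowledge or overall coherence, which is the limitation the passage describes.

```python
import random
from collections import defaultdict

# Invented toy corpus for illustration only.
corpus = (
    "the doctor read the paper . the paper described a new treatment . "
    "the treatment helped the patient ."
).split()

# Record co-occurrence: for each word, which words ever follow it.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(start, n, seed=0):
    """Sample a word sequence using only co-occurrence statistics.

    Each step picks a word that followed the previous word somewhere
    in the corpus; no meaning is represented anywhere.
    """
    random.seed(seed)
    words = [start]
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 12))
```

Every adjacent pair in the output did occur somewhere in the corpus, so the text sounds vaguely plausible word by word, yet the model cannot say what any sentence means or whether the whole passage makes sense.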
[...] We don’t think it is impossible for machines to do better. But mere quantitative improvement—with more data, more layers in our neural networks, and more computers in the networked clusters of powerful machines that run those networks—isn’t going to cut it.
Instead, we believe it is time for an entirely new approach that is inspired by human cognitive psychology and centered around reasoning and the challenge of creating machine-interpretable versions of common sense.
Reading isn’t just about statistics, it’s about synthesizing knowledge: combining what you already know with what the author is trying to tell you. Kids manage that routinely; machines still can’t.
From Rebooting AI: Building Artificial Intelligence We Can Trust [amazon.com], by Gary Marcus and Ernest Davis.