It’s a funny thing, AI. It can identify objects in a fraction of a second, imitate the human voice, and recommend new music, but most machine “intelligence” lacks the most basic understanding of everyday objects and actions — in other words, common sense. DARPA is teaming up with the Seattle-based Allen Institute for Artificial Intelligence to see about changing that.
The Machine Common Sense program aims to both define the problem and engender progress on it, though no one is expecting this to be “solved” in a year or two. But if AI is to escape the prison of the hyper-specific niches where it works well, it’s going to need to grow a brain that does more than execute a classification task at great speed.
“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences. This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future,” explained DARPA’s Dave Gunning in a press release.
Not only is common sense lacking in AIs, but it’s remarkably difficult to define and test, given how broad the concept is. Common sense could be anything from understanding that solid objects can’t intersect to the idea that the kitchen is where people generally go when they’re thirsty. As obvious as those things are to any human more than a few months old, they’re actually quite sophisticated constructs involving multiple concepts and intuitive connections.
It’s not just a set of facts (like that you must peel an orange before you eat it, or that a drawer can hold small items) but the ability to identify connections between them based on what you’ve observed elsewhere. That’s why DARPA’s proposal involves building “computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors).”
But how do you test these things? Fortunately, great minds have been at work on this problem for decades, and one research group has proposed an initial method for testing common sense that should work as a stepping stone to more sophisticated ones.
I talked with Oren Etzioni, head of the Allen Institute for AI, which has been working on common sense AI for quite a while now, among many other projects regarding the understanding and navigation of the real world.
“This has been a holy grail of AI for 35 years or more,” he said. “One of the problems is how to put this on an empirical footing. If you can’t measure it, how can you evaluate it? This is one of the very first times people have tried to make common sense measurable, and certainly the first time that DARPA has thrown their hat, and their leadership and funding, into the ring.”
The AI2 approach is simple but carefully calibrated. Machine learning models will be presented with written descriptions of situations and several short options for what happens next. Here’s one example:
On stage, a woman takes a seat at the piano. She
a) sits on a bench as her sister plays with the doll.
b) smiles with someone as the music plays.
c) is in the crowd, watching the dancers.
d) nervously sets her fingers on the keys.
The answer, as you and I would know in a heartbeat, is d. But the amount of context and knowledge that we put into finding that answer is enormous. And it’s not like the other options are impossible — in fact, they’re AI-generated to seem plausible to machine models while remaining easy for humans to rule out. This really is quite a difficult problem for a machine to solve, and current models are getting it right about 60 percent of the time (25 percent would be chance).
There are 113,000 of these questions, but Etzioni told me this is just the first dataset of several.
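To make the format concrete, here is a minimal sketch of how one of these multiple-choice items might be represented and scored, using the piano example above. The field names and scoring function are illustrative assumptions, not AI2’s actual schema or evaluation code; the random baseline simply demonstrates the 25 percent chance figure.

```python
import random

# Hypothetical representation of one common-sense item, modeled on the
# example in the article (field names are illustrative, not AI2's schema).
question = {
    "context": "On stage, a woman takes a seat at the piano. She",
    "endings": [
        "sits on a bench as her sister plays with the doll.",
        "smiles with someone as the music plays.",
        "is in the crowd, watching the dancers.",
        "nervously sets her fingers on the keys.",
    ],
    "label": 3,  # index of the continuation a human would pick (option d)
}

def accuracy(predictions, labels):
    """Fraction of items where the predicted ending matches the label."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# With four endings per item, guessing at random converges to roughly
# 25% accuracy, which is the chance baseline the article cites.
random.seed(0)
labels = [random.randrange(4) for _ in range(10_000)]
guesses = [random.randrange(4) for _ in range(10_000)]
print(f"random baseline: {accuracy(guesses, labels):.1%}")
```

A real system would replace the random guesses with a model that scores each ending’s plausibility given the context; the gap between that score and human performance is what the benchmark measures.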
“This particular dataset is not that hard,” he said. “I expect to see rapid progress. But we’re going to be rolling out at least four more by the end of the year that will be harder.”
After all, toddlers don’t learn common sense by taking the GRE. As with other AI challenges, you want gradual improvements that generalize to harder versions of similar problems — for example, going from recognizing a face in a photo, to recognizing multiple faces, then identifying the expression on those faces.
There will be a proposers day next week in Arlington for any researcher who wants a little face time with the people running the challenge. A partner selection process will follow, and the selected groups will submit their models for evaluation by AI2’s systems next spring.
The common sense effort is part of DARPA’s big $2 billion investment in AI on multiple fronts. But they’re not looking to duplicate or compete with the likes of Google, Amazon, and Baidu, which have invested heavily in the narrow AI applications we see on our phones and the like.
“They’re saying, what are the limitations of those systems? Where can we fund basic research that will be the basis of whole new industries?” Etzioni suggested. And of course it is DARPA and government investment that set the likes of self-driving cars and virtual assistants on their first steps. Why shouldn’t it be the same for common sense?