AI and Similarity
IEEE Intelligent Systems
As AI moves into the second half of its first century, we certainly have much to cheer about. The field has produced a wealth of solid results on many fronts, including machine learning and knowledge representation. More generally, it has delivered impressive, reliable, and widely applicable techniques we couldn't have dreamed of 50 years ago: constraint-satisfaction problem solving, probabilistic learning techniques, real-time planning, case-based reasoning, market-based models for societies of agents, and so on. AI has made significant headway on developing techniques, computational models, and systems that advance its synergistic twin goals of modeling cognition and building systems that get the job done. In fact, many AI systems get the job done spectacularly.

Yet we still have much work to do on some topics, among them similarity-driven reasoning, analogy, learning, and explanation, especially as they concern open-textured, ever-changing, and exception-riddled concepts. Although some of these challenges were recognized from the field's beginning, AI still cannot deal well enough (at least for me) with the inherent messiness that characterizes much of the world in which humans and AI artifacts operate. There is no way to shrink from this challenge. Even though some subfields, such as my own disciplines of case-based reasoning (CBR) and AI and law, have made significant advances, abundant opportunities exist to push the envelope further. Doing so is necessary both to shed light on cognition and to advance the state of the art of performance systems. In the next half-century, AI can become robust enough not only to cope with our messy world but also to thrive in it.

In this essay, I discuss a few aspects of these topics that I believe are important for realizing truly robust AI. For AI to become truly robust, we must deepen our understanding of similarity-driven reasoning, analogy, learning, and explanation. Here are some suggested research directions.