Decision theory in expert systems and artificial intelligence

Eric J. Horvitz, John S. Breese, Max Henrion
1988, International Journal of Approximate Reasoning
Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems setting, yet are crucial for developing theoretical methods and computational architectures for automated reasoners.

The time seems ripe for a synthesis of AI methods and techniques developed in decision science for addressing resource allocation and decision making under uncertainty. By decision science we mean Bayesian probability and decision theory, the study of the psychology of judgment, and their practical application in operations research and decision analysis. In particular, decision theory can provide a valuable framework for addressing some of the foundational problems in AI, and it forms the basis for a range of practical tools.

Artificial intelligence and the decision sciences emerged from research on systematic methods for problem solving and decision making that blossomed in the 1940s. Both disciplines were stimulated by the new possibilities for automated reasoning unleashed by the development of the computer. Although the fields had common roots, AI soon distinguished itself from the others in its concern with autonomous problem solving, its emphasis on symbolic rather than numeric information, its use of declarative representations, and its interest in analogies between computer programs and human thinking. Some of the earliest AI research centered on an analysis of the sufficiency of alternative approximation strategies and heuristic methods to accomplish the task of more complex decision-theoretic representation and inference (Simon [1]).

However, many AI researchers soon lost interest in decision theory. This disenchantment seems to have arisen, in part, from a perception that decision-theoretic approaches were hopelessly intractable and were inadequate for expressing the rich structure of human knowledge (Gorry [2], Szolovits [3]). This view is reflected in a statement by Szolovits, a researcher who had investigated the application of decision theory in early medical reasoning systems: "The typical language of probability and utility theory is not rich enough to discuss such [complex medical] issues, and its extension with the original spirit leads to untenably large decision problems" (Szolovits [3], p. 7).

Although similar views are still widespread among AI researchers, there has been a recent resurgence of interest in the application of probability theory, decision theory, and decision analysis to AI. In this paper, we examine some of the reasons for this renewed interest, including an increasing recognition of the shortcomings of some traditional AI methods for inference and decision making under uncertainty, and the recent development of more expressive decision-theoretic representations and more practical knowledge-engineering techniques. The potential contributions of decision science for tackling AI problems derive from decision science's explicit theoretical framework and practical methodologies for reasoning about decisions under uncertainty.
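The belief network representation mentioned above encodes probabilistic dependencies among propositions as a directed graph. As a minimal sketch, the smallest such network, Disease → Symptom, supports diagnostic inference by a single application of Bayes' rule; the two-node structure and the numbers below are illustrative assumptions, not taken from the paper:

```python
# A toy two-node belief network (Disease -> Symptom). The probabilities
# are hypothetical values chosen only for illustration. The posterior
# belief in the disease given an observed symptom follows Bayes' rule:
#   P(D | S) = P(S | D) P(D) / [P(S | D) P(D) + P(S | ~D) P(~D)]

p_disease = 0.01            # prior belief P(D)
p_sym_given_d = 0.90        # P(S | D), the symptom's sensitivity
p_sym_given_not_d = 0.05    # P(S | ~D), the false-positive rate

evidence = (p_sym_given_d * p_disease
            + p_sym_given_not_d * (1 - p_disease))   # P(S)
posterior = p_sym_given_d * p_disease / evidence      # P(D | S)

print(f"P(disease | symptom) = {posterior:.3f}")      # ~0.154
```

Larger networks generalize this computation, propagating evidence along the arcs of the graph rather than enumerating the full joint distribution over all propositions.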
Decisions underlie any action that a problem solver may take in structuring problems, in reasoning, in allocating computational resources, in displaying information, or in controlling some physical activity. As AI has moved beyond toy problems to grapple with complex, real-world decisions, adequate treatment of uncertainty has become increasingly important. Attempts to build systems in such areas as medicine, investment, aerospace, and military planning have underscored this need.

Several axiomatizations of probability theory have been proposed. Sets of belief assignments that are consistent with the axioms of probability theory are said to be coherent. In this sense, the theory provides a consistency test for uncertain beliefs. Persuasive examples suggest that a rational person would wish to avoid making decisions based on incoherent beliefs. For example, someone willing to bet according to incoherent beliefs would be willing to accept a "Dutch book": a combination of bets leading to guaranteed loss under any outcome (Lehman [5], Shimony [6]).

A number of researchers have provided lists of fundamental properties that they consider intuitively desirable for continuous measures of belief in the truth of a proposition (Cox [7], Tribus [8], Lindley [9]); a recent reformulation of these desirable properties appears in Horvitz et al. [10].
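To make the Dutch-book argument concrete, consider a hedged numerical sketch (the beliefs and stakes below are ours, chosen only for illustration): an agent whose degrees of belief in a proposition and its negation sum to more than 1 will pay more for a pair of bets than the pair can possibly return.

```python
# A minimal sketch of the Dutch-book argument, not taken from the paper.
# Incoherent beliefs: P(A) + P(not A) = 1.2 > 1, so the agent overpays
# for tickets that, together, pay exactly one dollar in every outcome.

def bet_price(belief, stake=1.0):
    """Fair price the agent accepts for a ticket worth `stake` if the event occurs."""
    return belief * stake

p_a, p_not_a = 0.6, 0.6                       # incoherent belief assignment

cost = bet_price(p_a) + bet_price(p_not_a)    # agent pays 1.20 for both tickets

for outcome in ("A", "not A"):
    payoff = 1.0                              # exactly one ticket pays in each outcome
    net = payoff - cost
    print(f"outcome {outcome:>5}: payoff {payoff:.2f}, net {net:+.2f}")
# Both outcomes yield net -0.20: a guaranteed loss, i.e., a Dutch book.
```

With coherent beliefs, P(A) + P(¬A) = 1, the total price of the pair equals its guaranteed payoff, and no such sure-loss combination of bets exists.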
doi:10.1016/0888-613X(88)90120-X