Adding Explanation Capability to a Knowledge-Based System for Interpretation of Oceanographic Images
[report]
Susan Bridges
1992
unpublished
Approved for public release; distribution is unlimited. Naval Oceanographic and Atmospheric Research Laboratory, Stennis Space Center, Mississippi 39529-5004.

Abstract: An explanation component has been developed for an existing oceanographic expert system that predicts the movement of mesoscale features associated with the Gulf Stream. The information provided by the expert system is used by image processing analysts when the oceanographic features cannot be observed in satellite data due to interference such as cloud cover. The addition of an explanation capability gives users a basis for judging the quality of the system's decision-making process.

The structure of the original system was not amenable to the incorporation of an explanation facility because the knowledge needed for explanation was not explicitly represented in the knowledge base. The system has been restructured with the knowledge represented declaratively rather than procedurally, thus allowing the reasoning process to be recorded and used to produce explanations of decisions. The rules have been rewritten with the knowledge "chunks" in each rule at a finer level of granularity. Each rule corresponds to one decision, and the results of each decision are explicitly asserted into the working memory of the system. The presence of the new information causes other rules to fire and other decisions to be made. An explanation is produced by capturing the chain of rules that have fired. In addition to the reduction in granularity, the rules have also been generalized, which allows the same rules to be used in many different situations with different instantiations of the variables.

The explanation component consists of an introspection module and a presentation module. The introspection module "watches" the reasoning process and records the data that caused each rule to fire and the new information produced as a result of each firing. The presentation module can use this information to present either a detailed natural-language trace of the rules that have fired or a shorter natural-language summary of the reasoning used for the prediction. The trace will be most useful for those who are debugging the system, wish to modify it, or need a detailed account of its reasoning; the summary will be more useful for the analysts who use the system on a daily basis.
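The mechanism the abstract describes can be sketched in a few lines: declarative rules fire when their conditions hold against working memory, each firing asserts a new fact that can trigger further rules, an introspection step records every firing, and a presentation step renders the recorded chain as a trace. The sketch below is illustrative only; the rule names, facts, and the `run`/`present` helpers are assumptions for the example, not taken from the report.

```python
class Rule:
    """A declarative rule: one condition, one decision (asserted fact)."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # facts -> bool
        self.action = action        # facts -> new fact to assert

def run(rules, facts):
    """Forward-chain until quiescence; the introspection step records
    each firing as (rule name, facts at firing time, asserted fact)."""
    trace = []
    fired = set()
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.name in fired or not rule.condition(facts):
                continue
            new_fact = rule.action(facts)
            trace.append((rule.name, sorted(facts), new_fact))  # record firing
            facts.add(new_fact)  # assertion may enable further rules
            fired.add(rule.name)
            changed = True
    return trace

def present(trace):
    """Presentation step: render the recorded chain as a readable trace."""
    return [f"Rule {name} fired on {given}, asserting {fact!r}."
            for name, given, fact in trace]

# Hypothetical rules, loosely in the spirit of Gulf Stream feature prediction.
rules = [
    Rule("eddy-near-wall",
         lambda f: "warm-core eddy" in f and "near north wall" in f,
         lambda f: "likely reabsorption"),
    Rule("predict-path",
         lambda f: "likely reabsorption" in f,
         lambda f: "predict westward drift"),
]

trace = run(rules, {"warm-core eddy", "near north wall"})
for line in present(trace):
    print(line)
```

The key design point mirrors the abstract: because every decision is asserted into working memory rather than buried in procedural code, the trace falls out of the normal reasoning process instead of requiring separate bookkeeping.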
doi:10.21236/ada255887
fatcat:wotvts44bzf4dldyw7owweeysy