Questions and Answers: Theoretical and Applied Perspectives

Raffaella Bernardi, Bonnie Webber
Journal of Applied Logic, 2007. doi:10.1016/j.jal.2005.12.008
Editorial

This special issue is the outcome of dissemination activities carried out within the Network of Excellence in Computational Logic (CoLogNET, http://www.colognet.org). In particular, it results from an exploration of the common ground between the "Logic and Natural Language Processing" area of CoLogNET and ElsNET (the Network of Excellence in Human Language Technologies, http://www.elsnet.org/), an exploration that was the aim of the annual CoLogNET-ElsNET symposia organized from 2001 to 2004.

Motivated by the broad interest that question answering (QA) is receiving in fields such as computational linguistics, computational logic, databases, formal semantics, and information retrieval, and inspired by the success of the second CoLogNET-ElsNET Symposium on "Questions and Answers: Theoretical and Applied Perspectives", we prepared this special issue of the Journal of Applied Logic to address both the logical foundations underlying QA and the technological implications for QA systems.

Abstracting from the different approaches and angles, two central concerns underlie current research in QA: (1) integrating current open-domain QA, based on free "unstructured" text, with its more mature sister, natural language front ends to database systems (NLDBS); and (2) determining what should count as an "answer". Both aspects have been carefully discussed by Spärck Jones [1], who calls for more hospitable and flexible research on QA, research that learns from its own history and its interaction with other disciplines and that gives more weight to the problem of ranking multiple candidate answers. The papers collected in this special issue can be seen as responses to this invitation.

The interaction between open-domain QA and NLDBS is the focus of the papers by Badia and by Frank et al. included in this special issue. In the former, Badia proposes a query language with generalized quantifiers to help bridge the gap between the two fields. The language is meant to provide access to information through a single formal interface, independently of the source from which the answer is retrieved. The need for such a language comes from the observation that QA and NLDBS are converging: in the database community, semistructured and unstructured data are acquiring more and more importance, hence the need for query languages more flexible than SQL and the appeal of tools and techniques from information retrieval.
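To give a rough flavour of such a representation (the example and the predicate names are ours, chosen purely for illustration, and are not drawn from Badia's paper), a question like "Which countries border more than two seas?" might be rendered with a generalized quantifier as

\[
\mathit{answer}(x) \;\leftarrow\; \mathit{country}(x) \,\wedge\, \mathsf{more\_than}_2\bigl(\lambda y.\,\mathit{sea}(y),\; \lambda y.\,\mathit{borders}(x,y)\bigr),
\]

where $\mathsf{more\_than}_2(A,B)$ holds just in case $|A \cap B| > 2$. Because the quantifier is defined simply as a relation between sets, the same query can in principle be evaluated against a relational table or against facts extracted from free text.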
A similar desire to unify QA access to information sources with different degrees of structure lies behind the hybrid architecture described in the paper by Frank et al. They present their approach to domain-restricted question answering from structured knowledge sources and multilingual data (QUETAL). The distinctive characteristics of this system are its hybrid architecture integrating high-quality NLP tools, its focus on the linguistic analysis of questions, and its use of ontologies to interface between question analysis, answer extraction, and knowledge engineering. The system also contains several abstraction layers that reduce the complexity of the mapping rules to database concepts and guarantee portability across languages and across different target knowledge bases. (The paper by Moldovan et al. provides another example of a hybrid architecture for open-domain QA.)
The second concern addressed by the papers in this special issue is what constitutes an answer and how answer relevance can be measured and ranked. Different proposals have been put forward. Research on open-domain QA systems has generally avoided deep logical approaches, preferring instead a combination of information retrieval, information extraction, and statistical methods. Two of the papers in this special issue describe successful open-domain QA systems that take advantage of logic-based approaches. Moldovan et al. use automated reasoning to enhance their QA system: their logic prover, COGEX, is used to check whether a given question is entailed by a candidate answer.
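As a schematic illustration of this kind of entailment check (the example is ours and deliberately simplified, not taken from the paper by Moldovan et al.), a question such as "Who invented the telephone?" can be recast as an existentially quantified statement, and a candidate answer passage asserting that Bell invented the telephone supports it just in case

\[
\mathit{invent}(\mathit{bell}, \mathit{telephone}) \;\models\; \exists x.\, \mathit{invent}(x, \mathit{telephone}).
\]

A prover that establishes the entailment also yields the witness for $x$, here $\mathit{bell}$, which can then be returned as the answer; in realistic settings the left-hand side is the logical form of the retrieved passage together with background axioms.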