6,794 Hits in 4.9 sec

Story Completion with Explicit Modeling of Commonsense Knowledge

Mingda Zhang, Keren Ye, Rebecca Hwa, Adriana Kovashka
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We explore another direction and present a novel method that explicitly incorporates commonsense knowledge from a structured dataset [11] , and demonstrate the potential for improving story completion.  ...  To successfully choose an ending requires not only detailed analysis of the context, but also applying commonsense reasoning and basic knowledge.  ...  Commonsense knowledge base completion Most of the knowledge base completion models predict missing relation links between entities.  ... 
doi:10.1109/cvprw50498.2020.00196 dblp:conf/cvpr/ZhangYHK20 fatcat:iqx6b6h4mravlduym3vlkdufmm

Incorporating Structured Commonsense Knowledge in Story Completion

Jiaao Chen, Jianshu Chen, Zhou Yu
2019 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference
We present a neural story ending selection model that integrates three types of information: narrative sequence, sentiment evolution and commonsense knowledge.  ...  Story ending prediction requires not only the explicit clues within the context, but also the implicit knowledge (such as commonsense) to construct a reasonable and consistent story.  ...  Story completion tasks rely not only on the logic of the story itself, but also requires implicit commonsense knowledge outside the story.  ... 
doi:10.1609/aaai.v33i01.33016244 fatcat:xgw2ux4w4ncunhlikcodbsqjba

Incorporating Structured Commonsense Knowledge in Story Completion [article]

Jiaao Chen, Jianshu Chen, Zhou Yu
2018 arXiv   pre-print
We present a neural story ending selection model that integrates three types of information: narrative sequence, sentiment evolution and commonsense knowledge.  ...  Story ending prediction requires not only the explicit clues within the context, but also the implicit knowledge (such as commonsense) to construct a reasonable and consistent story.  ...  The model outperformed state-of-the-art methods. We found that introducing external knowledge such as structured commonsense knowledge helps narrative completion.  ... 
arXiv:1811.00625v1 fatcat:bktjk7nrwjbdbpxh73n2ednpc4

Understanding Stories with Large-Scale Common Sense

Bryan Williams, Henry Lieberman, Patrick H. Winston
2017 International Symposium on Commonsense Reasoning  
Aspire extends Genesis, a rule-based story understanding system, with tens of thousands of goal-related assertions from the commonsense semantic network ConceptNet.  ...  We have implemented the Aspire system, an application of large-scale commonsense knowledge to story understanding.  ...  Acknowledgements This research was supported, in part, by the Air Force Office of Scientific Research, Award Number FA9550-17-1-0081.  ... 
dblp:conf/commonsense/WilliamsLW17 fatcat:dwjn6gmbczhizc4ke62otwag5a

Reasoning with Heterogeneous Knowledge for Commonsense Machine Comprehension

Hongyu Lin, Le Sun, Xianpei Han
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
kinds of commonsense knowledge.  ...  Reasoning with commonsense knowledge is critical for natural language understanding.  ...  Acknowledgments This work is supported by the National Natural Science Foundation of China under Grants no. 61433015 and 61572477, the National High Technology Development 863 Program of China under Grants  ... 
doi:10.18653/v1/d17-1216 dblp:conf/emnlp/LinSH17 fatcat:22cujx67tfcfvpnsmkepyms36q

SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge

Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, Manfred Pinkal
2018 Proceedings of The 12th International Workshop on Semantic Evaluation  
It contains a high number of questions that require commonsense knowledge for finding the correct answer. 11 teams from 4 different countries participated in this shared task, most of them used neural  ...  This report summarizes the results of the SemEval 2018 task on machine comprehension using commonsense knowledge. For this machine comprehension task, we created a new corpus, MCScript.  ...  This research was funded by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding and EXC 284 Multimodal Computing and Interaction.  ... 
doi:10.18653/v1/s18-1119 dblp:conf/semeval/OstermannRMTP18 fatcat:tlqzjfivarfvraht7jwfbm7rci

Beating Common Sense into Interactive Applications

Henry Lieberman, Hugo Liu, Push Singh, Barbara Barry
2004 The AI Magazine  
spotty coverage and unreliable inference of today's commonsense knowledge systems.  ...  This article surveys several of these applications and reflects on interface design principles that enable successful use of commonsense knowledge.  ...  Commonsense knowledge is used for query expansion, so that a picture of a baby is associated with the mention of milk.  ... 
doi:10.1609/aimag.v25i4.1785 dblp:journals/aim/LiebermanLSB04 fatcat:bm6wkzaabrdi7mi4weti2qceii

Event Transition Planning for Open-ended Text Generation [article]

Qintong Li, Piji Li, Wei Bi, Zhaochun Ren, Yuxuan Lai, Lingpeng Kong
2022 arXiv   pre-print
Open-ended text generation tasks, such as dialogue generation and story completion, require models to generate a coherent continuation given limited preceding context.  ...  The open-ended nature of these tasks brings new challenges to the neural auto-regressive text generators nowadays.  ...  This research is supported in part by the National Natural Science Foundation of China (Grant No. 62106105, 61902219), the Shanghai Committee of Science and Technology, China (Grant No. 21DZ1100100),  ... 
arXiv:2204.09453v1 fatcat:info5c5skndo5mjnp4usas622m

Automatic Story Generation: Challenges and Attempts [article]

Amal Alabdulkarim, Siyan Li, Xiangyu Peng
2021 arXiv   pre-print
The scope of this survey paper is to explore the challenges in automatic story generation. We hope to contribute in the following ways: 1.  ...  Explore how previous research in story generation addressed those challenges. 2. Discuss future research directions and new technologies that may aid more advancements. 3.  ...  Story generation is usually in need of a paragraph-level commonsense inference because combining with context, the inference could be completely different.  ... 
arXiv:2102.12634v1 fatcat:b67pi4zy5fc4dp4edidecwo54a

Commonsense Causal Explanation in a Legal Domain

Rinke Hoekstra, Joost Breuker
2007 Artificial Intelligence and Law  
In this paper, we present an approach to commonsense causal explanation of stories that can be used for automatically determining the liable party in legal case descriptions.  ...  We present design principles for representing commonsense causation, and describe a process-based approach to automatic identification of causal relations in stories, which are described in terms of the  ...  Although it does model the world of commonsense, it does not model using the vocabulary and structure of commonsense: it is rather a framework for describing commonsense things, than a commonsense framework  ... 
doi:10.1007/s10506-007-9033-5 fatcat:qcff7gcfj5hwreqdqwdzvf6z5e

A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation

Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, Minlie Huang
2020 Transactions of the Association for Computational Linguistics  
In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation.  ...  We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding the causal relationships, and planning entities and events with proper temporal order.  ...  Acknowledgments This work was supported by the National Science Foundation of China (grant no. 61936010/618 76096) and the National Key R&D Program of China (grant no. 2018YFC0830200).  ... 
doi:10.1162/tacl_a_00302 fatcat:uiukofqfajcqbjcqijlnbxrzu4

A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [article]

Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, Minlie Huang
2020 arXiv   pre-print
In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation.  ...  We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding the causal relationships, and planning entities and events with proper temporal order.  ...  Acknowledgments This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200).  ... 
arXiv:2001.05139v1 fatcat:h63tgcuzcnetfbg63t27wprj4q

Computational Narrative Intelligence: A Human-Centered Goal for Artificial Intelligence [article]

Mark O. Riedl
2016 arXiv   pre-print
We argue that instilling artificial intelligences with computational narrative intelligence affords a number of applications beneficial to humans.  ...  Narrative intelligence is the ability to craft, tell, understand, and respond affectively to stories.  ...  Humans learn commonsense knowledge and reasoning through a lifetime of experiences in the real world. Learning commonsense knowledge has been an ongoing challenge in AI and machine learning.  ... 
arXiv:1602.06484v1 fatcat:xf23w6rkzvddlmc6evy7mp3bdy

MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge [article]

Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, Manfred Pinkal
2018 arXiv   pre-print
Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge  ...  We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge.  ...  help with the annotations.  ... 
arXiv:1803.05223v1 fatcat:zelyvzypmzcbrmtkjphvwnkc6m

CIS^2: A Simplified Commonsense Inference Evaluation for Story Prose [article]

Bryan Li, Lara J. Martin, Chris Callison-Burch
2022 arXiv   pre-print
In particular, we focus on the domain of commonsense reasoning within story prose, which we call contextual commonsense inference (CCI).  ...  We look at the GLUCOSE (Mostafazadeh et al. 2020) dataset and task for predicting implicit commonsense inferences between story sentences.  ...  We thank the authors of GLUCOSE, in particular Or Biran and Lori Moon, for their helpful assistance in working with the GLUCOSE dataset and codebase.  ... 
arXiv:2202.07880v3 fatcat:yvqbwntjjnhvxarm56ar4oqfpi