225,481 Hits in 4.8 sec

Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback [article]

Duy-Hung Nguyen, Nguyen Viet Dung Nghiem, Bao-Sinh Nguyen, Dung Tien Le, Shahab Sabahi, Minh-Tien Nguyen, Hung Le
2022 arXiv   pre-print
In this paper, we introduce a new framework to train summarization models with preference feedback interactively.  ...  For summarization, human preference is critical to tame outputs of the summarizer in favor of human interests, as ground-truth summaries are scarce and ambiguous.  ...  Acknowledgement: We would like to thank anonymous ACL ARR reviewers and Senior Area Chairs who gave constructive comments for our paper.  ...
arXiv:2204.05512v2 fatcat:4qg5icc2cjcvzlaqoq26j6cftm
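
The preference-feedback training this entry describes generally comes down to fitting a reward model on pairwise comparisons of candidate summaries and then optimizing the summarizer against it. Below is a minimal, hedged sketch of the pairwise (Bradley-Terry style) reward objective; the class and variable names are illustrative and not taken from the paper.

```python
# Sketch: learn a reward model from pairwise preference feedback (illustrative names).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (document, summary) feature vector; higher means more preferred."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.scorer(features).squeeze(-1)

def preference_loss(model: RewardModel, preferred: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: the human-preferred summary should score higher.
    return -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()

# Toy update with random features standing in for encoder outputs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
preferred, rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(model, preferred, rejected)
opt.zero_grad(); loss.backward(); opt.step()
```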

NetReAct: Interactive Learning for Network Summarization [article]

Sorour E. Amiri, Bijaya Adhikari, John Wenskovitch, Alexander Rodriguez, Michelle Dowling, Chris North, B. Aditya Prakash
2020 arXiv   pre-print
NetReAct incorporates human feedback with reinforcement learning to summarize and visualize document networks.  ...  How can we use this feedback to improve the network summary quality?  ...  We proposed a novel and effective network summarization algorithm, NetReAct, which leverages a feedback-based reinforcement learning approach to incorporate human input.  ... 
arXiv:2012.11821v1 fatcat:m2phbutgsba4npxnnpd7fcjxaa

Training Language Models with Language Feedback [article]

Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
2022 arXiv   pre-print
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.  ...  Here, we propose to learn from natural language feedback, which conveys more information per human evaluation. We learn from language feedback on model outputs using a three-step learning algorithm.  ...  We follow prior work on learning from human preferences (Stiennon et al., 2020) and learn to summarize Reddit posts from Völske et al. (2017).  ...
arXiv:2204.14146v3 fatcat:27w63k7wofchdna7cazqbepxey
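
The three-step procedure mentioned in this snippet (generate candidate refinements of a summary conditioned on the written feedback, select the refinement that best incorporates the feedback, then finetune on the selected refinements) relies on a similarity-based selection step. A hedged sketch of that step, assuming a sentence-embedding model; the encoder choice and function names are illustrative, not the authors' code.

```python
# Sketch: pick the candidate refinement whose embedding is closest to the feedback text.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def pick_refinement(feedback: str, refinements: list[str]) -> str:
    """Return the refinement most similar to the language feedback."""
    fb_emb = encoder.encode(feedback, convert_to_tensor=True)
    cand_emb = encoder.encode(refinements, convert_to_tensor=True)
    scores = util.cos_sim(fb_emb, cand_emb)[0]  # cosine similarity to each candidate
    return refinements[int(scores.argmax())]

# The selected refinements would then serve as finetuning targets for the summarizer.
```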

Putting Humans in the Natural Language Processing Loop: A Survey [article]

Zijie J. Wang, Dongjin Choi, Shenyu Xu, Diyi Yang
2021 arXiv   pre-print
How can we design Natural Language Processing (NLP) systems that learn from human feedback?  ...  There is a growing research body of Human-in-the-loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself.  ...  We summarize recent literature on HITL NLP from both NLP and HCI communities, and position each work with respect to its task, goal, human interaction, and feedback learning method.  ... 
arXiv:2103.04044v1 fatcat:bnwj25lwofcwrnjtvlta64niq4

Evaluation of Unsupervised Learning based Extractive Text Summarization Technique for Large Scale Review and Feedback Data

Jai Prakash Verma, Atul Patel
2017 Indian Journal of Science and Technology  
Background/Objectives: Supervised techniques use a human-generated summary to select features and parameters for summarization.  ...  Due to the diversity of large-scale datasets, summarization based on supervised techniques also fails to meet the requirements.  ...  Unsupervised Learning Based Text Summarization: Supervised techniques use human-generated summaries to select features and parameters for summarization.  ...
doi:10.17485/ijst/2017/v10i17/106493 fatcat:qwbaxugabzfclanq4m6ajxc7vy
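
For readers unfamiliar with the unsupervised extractive family this entry evaluates, here is an illustrative TF-IDF centroid ranker. It is a generic sketch of the approach class, not the paper's specific technique.

```python
# Sketch: unsupervised extractive summarization by centroid similarity ranking.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_summary(sentences: list[str], k: int = 3) -> list[str]:
    """Pick the k sentences closest to the document centroid, in original order."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    centroid = np.asarray(tfidf.mean(axis=0))
    scores = cosine_similarity(tfidf, centroid).ravel()
    top = sorted(np.argsort(scores)[::-1][:k])
    return [sentences[i] for i in top]

reviews = [
    "The battery life is excellent and lasts two days.",
    "Shipping was slow but the packaging was fine.",
    "Battery performance is the standout feature of this phone.",
    "The camera struggles in low light conditions.",
]
print(extract_summary(reviews, k=2))
```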

Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback

Avinesh PVS, Christian M. Meyer
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback.  ...  Our methods complement fully automatic methods in producing high-quality summaries with a minimum number of iterations and feedback rounds.  ...  Nevertheless, we expect that in practical use, the human summarizers may give more feedback similar to DBS in comparison to the DUC'04 simulation setting.  ...
doi:10.18653/v1/p17-1124 dblp:conf/acl/AvineshM17 fatcat:xgri2evu7vhe5n3sj3imrwi2xa
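
The joint optimization described here is usually cast as concept-based ILP content selection, with user feedback adjusting concept weights. A hedged sketch under that reading, using PuLP; the weighting scheme, data structures, and solver choice are assumptions, not the authors' implementation.

```python
# Sketch: concept-coverage ILP with feedback-adjusted concept weights (assumed scheme).
import pulp

def select_sentences(sentences, lengths, concept_weights, occurs, accepted, rejected, budget):
    """Pick sentences maximizing covered concept weight under a length budget.

    occurs maps concept -> indices of sentences containing it; accepted/rejected
    are concepts the user marked as wanted/unwanted (illustrative feedback model).
    """
    w = {c: (10.0 if c in accepted else 0.0 if c in rejected else wc)
         for c, wc in concept_weights.items()}

    prob = pulp.LpProblem("content_selection", pulp.LpMaximize)
    s = {i: pulp.LpVariable(f"s_{i}", cat="Binary") for i in range(len(sentences))}
    c = {j: pulp.LpVariable(f"c_{j}", cat="Binary") for j in w}

    prob += pulp.lpSum(w[j] * c[j] for j in w)                  # objective: covered concept weight
    prob += pulp.lpSum(lengths[i] * s[i] for i in s) <= budget  # summary length budget
    for j, sent_ids in occurs.items():
        # a concept counts as covered only if some selected sentence contains it
        prob += c[j] <= pulp.lpSum(s[i] for i in sent_ids)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [sentences[i] for i in s if s[i].value() and s[i].value() > 0.5]
```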

Improving Factual Consistency of Abstractive Summarization on Customer Feedback [article]

Yang Liu, Yifei Sun, Vincent Gao
2021 arXiv   pre-print
E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience.  ...  In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback.  ...  We propose to augment the training data with artificially corrupted summaries and use contrastive learning methods to enhance the model faithfulness.  ... 
arXiv:2106.16188v1 fatcat:4s7xtd7t7jgw5kzv6h2vxaqomq
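
A common way to build the "artificially corrupted summaries" the snippet mentions is to swap entities in a faithful summary, producing negative examples for a contrastive or classification objective. An illustrative corruption step, assuming spaCy for entity detection; the model name and corruption rule are assumptions, not the paper's exact procedure.

```python
# Sketch: create an unfaithful (corrupted) summary by swapping one named entity.
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def corrupt_summary(summary: str, distractor_entities: list[str]) -> str:
    """Replace one named entity with a distractor to make an unfaithful summary."""
    doc = nlp(summary)
    ents = list(doc.ents)
    if not ents or not distractor_entities:
        return summary
    target = random.choice(ents)
    return summary[:target.start_char] + random.choice(distractor_entities) + summary[target.end_char:]

# Faithful/corrupted pairs would then feed a contrastive objective favoring faithful outputs.
print(corrupt_summary("The seller refunded the order on March 3.", ["April 9"]))
```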

Hone as You Read: A Practical Type of Interactive Summarization [article]

Tanner Bohn, Charles X. Ling
2021 arXiv   pre-print
Our approaches range from simple heuristics to preference learning, and their analysis provides insight into this important task. Human evaluation additionally supports the practicality of HARE.  ...  This task is related to interactive summarization, where personalized summaries are produced following a long feedback stage in which users may read the same sentences many times.  ...  The challenge is to learn from unobtrusive user feedback, such as the types in Figure 1, to identify uninteresting content to hop over.  ...
arXiv:2105.02923v1 fatcat:ipvcxzs6jjgdjd2o75b2qpz7ey
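
The unobtrusive-feedback setting the snippet describes can be approximated by an online classifier that is updated after each read-or-skip signal and used to decide whether to show the next sentence. A toy sketch; the features, labels, and threshold are stand-ins, not HARE's actual models.

```python
# Sketch: online interest model updated from read/skip signals while the user reads.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # online logistic regression
classes = np.array([0, 1])              # 0 = skipped, 1 = read

def update(sentence_features: np.ndarray, was_read: int) -> None:
    """Incrementally update the interest model after each observed sentence."""
    model.partial_fit(sentence_features.reshape(1, -1), [was_read], classes=classes)

def keep_sentence(sentence_features: np.ndarray, threshold: float = 0.5) -> bool:
    """Show the sentence only if predicted interest exceeds the threshold."""
    prob_read = model.predict_proba(sentence_features.reshape(1, -1))[0, 1]
    return prob_read >= threshold

# Toy loop: observe a few sentences, then decide about the next one.
rng = np.random.default_rng(0)
for label in [1, 0, 1, 1, 0]:
    update(rng.normal(size=16), label)
print(keep_sentence(rng.normal(size=16)))
```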

Offline Reinforcement Learning from Human Feedback in Real-World Sequence-to-Sequence Tasks [article]

Julia Kreutzer, Stefan Riezler, Carolin Lawrence
2021 arXiv   pre-print
Using such interaction logs in an offline reinforcement learning (RL) setting is a promising approach.  ...  However, due to the nature of NLP tasks and the constraints of production systems, a series of challenges arise. We present a concise overview of these challenges and discuss possible solutions.  ...  This is a concise description of the learning scenario in interactive NLP: It is unrealistic to expect anything other than bandit feedback from a human user interacting with a chatbot, automatic summarization  ...
arXiv:2011.02511v3 fatcat:n5quenu4qfbbtjuyoa7eutn6wa
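
Learning offline from such interaction logs typically means counterfactual (off-policy) estimation: logged rewards are reweighted by the ratio of the current policy's probability to the logging policy's propensity. A minimal, hedged sketch of a clipped inverse-propensity objective; the tensor names and toy log format are assumptions.

```python
# Sketch: clipped inverse-propensity-scored objective over logged bandit feedback.
import torch

def counterfactual_loss(logp_current: torch.Tensor,
                        logp_logged: torch.Tensor,
                        rewards: torch.Tensor,
                        clip: float = 10.0) -> torch.Tensor:
    """Negative reweighted expected reward over a batch of logged outputs."""
    ratio = torch.exp(logp_current - logp_logged).clamp(max=clip)  # importance weights
    return -(ratio * rewards).mean()                               # maximize expected reward

# Toy usage: log-probabilities would come from the seq2seq model scoring the logged
# outputs; rewards are the recorded user feedback (e.g. accept/reject).
logp_current = torch.tensor([-2.1, -3.0, -1.5], requires_grad=True)
logp_logged = torch.tensor([-2.0, -2.8, -1.7])
rewards = torch.tensor([1.0, 0.0, 1.0])
counterfactual_loss(logp_current, logp_logged, rewards).backward()
```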

Developing Summarization Skills through the Use of LSA-Based Feedback

Eileen Kintsch, Dave Steinhart, Gerry Stahl, LSA Research Group LSA Research Group, Cindy Matthews, Ronald Lamb
2000 Interactive Learning Environments  
The feedback allows students to engage in extensive, independent practice in writing and revising without placing excessive demands on teachers for feedback.  ...  We first discuss the underlying educational rationale, then present some results of the trials conducted with the system.  ...  beginning in learning how to summarize!  ... 
doi:10.1076/1049-4820(200008)8:2;1-b;ft087 fatcat:4tckegjnzvhxnf4ztyhbqtcnoy
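
LSA-based feedback of the kind this entry describes is commonly built by projecting the source sections and the student's summary into a latent semantic space and flagging sections the summary covers weakly. A rough sketch with scikit-learn; the topic count and threshold are illustrative, not the system's actual parameters.

```python
# Sketch: LSA-style coverage feedback for a student summary against source sections.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def coverage_feedback(sections: list[str], summary: str,
                      n_topics: int = 2, threshold: float = 0.3):
    """Report, per source section, how well the summary covers it in LSA space."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sections + [summary])
    lsa = TruncatedSVD(n_components=n_topics, random_state=0).fit_transform(X)
    sims = cosine_similarity(lsa[-1:], lsa[:-1]).ravel()  # summary vs. each section
    return [(sec[:40] + "...", round(float(s), 2),
             "covered" if s >= threshold else "needs more")
            for sec, s in zip(sections, sims)]

# Usage: call coverage_feedback(chapter_sections, student_summary) and show the
# "needs more" sections back to the student as revision prompts.
```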

Automatic Summarization of Student Course Feedback [article]

Wencan Luo, Fei Liu, Zitao Liu, Diane Litman
2018 arXiv   pre-print
In this work, we propose a new approach to summarizing student course feedback based on the integer linear programming (ILP) framework.  ...  Experimental results on a student feedback corpus show that our approach outperforms a range of baselines in terms of both ROUGE scores and human evaluation.  ...  We thank Jingtao Wang, Fan Zhang, Huy Nguyen and Zahra Rahimi for valuable suggestions about the proposed summarization algorithm.  ... 
arXiv:1805.10395v1 fatcat:v4dcbpwcnrfndehtjszqmsxaxy

Automatic Summarization of Student Course Feedback

Wencan Luo, Fei Liu, Zitao Liu, Diane Litman
2016 Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  
In this work, we propose a new approach to summarizing student course feedback based on the integer linear programming (ILP) framework.  ...  Experimental results on a student feedback corpus show that our approach outperforms a range of baselines in terms of both ROUGE scores and human evaluation.  ...  We thank Jingtao Wang, Fan Zhang, Huy Nguyen and Zahra Rahimi for valuable suggestions about the proposed summarization algorithm.  ... 
doi:10.18653/v1/n16-1010 dblp:conf/naacl/LuoLLL16 fatcat:y7krdw7h55a2rjwrh5eutodpxa
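
Both versions of this paper report ROUGE against human summaries. As a quick reference, here is a minimal sketch of computing ROUGE with the `rouge-score` package; the package choice and the toy texts are assumptions, not the authors' evaluation code.

```python
# Sketch: ROUGE evaluation of a system summary against a reference.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "students found the lecture pace too fast and asked for more examples"
system = "the lecture pace was too fast; students wanted additional examples"
for name, score in scorer.score(reference, system).items():
    print(name, round(score.fmeasure, 3))
```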

Learning Improvised Chatbots from Adversarial Modifications of Natural Language Feedback [article]

Makesh Narsimhan Sreedhar, Kun Ni, Siva Reddy
2020 arXiv   pre-print
The generator's goal is to convert the feedback into a response that answers the user's previous utterance and to fool the discriminator, which distinguishes feedback from natural responses.  ...  We show that augmenting original training data with these modified feedback responses improves the original chatbot performance from 69.94% to 75.96% in ranking correct responses on the Personachat dataset  ...  FEED2RESP: We use our main model (Section 2) to convert feedback into natural responses and train the chatbot models on the modified feedback along with human conversations.  ...
arXiv:2010.07261v2 fatcat:lc6rhnbqynd77iugpjzu3f6t4u

Entity Summarization with User Feedback [chapter]

Qingxia Liu, Yue Chen, Gong Cheng, Evgeny Kharlamov, Junyou Li, Yuzhong Qu
2020 Lecture Notes in Computer Science  
To address this challenge, in this paper we present the first study of entity summarization with user feedback.  ...  We consider a cooperative environment where a user reads the current entity summary and provides feedback to help an entity summarizer compute an improved summary.  ...  To summarize, our contributions in this paper include: the first research effort to improve entity summarization with user feedback; a representation of entity summarization with iterative user feedback  ...
doi:10.1007/978-3-030-49461-2_22 fatcat:q55eolmgxbh4xm4hbtytxj43jq

How Can Psychology Inform the Design of Learning Experiences?

Milos Kravcik, Ralf Klamma, Zinayda Petrushyna
2011 2011 IEEE 11th International Conference on Advanced Learning Technologies  
We have reviewed literature on human decision making processes, organized a survey and a workshop with PhD students to collect various opinions on these issues, and here we summarize the outcomes.  ...  Our aim is to analyze results of behavioral and cognitive psychology to help designers of learning experiences with specification of requirements.  ...  To achieve this we have organized a workshop with PhD students, who were dealing with technology enhanced learning (TEL).  ... 
doi:10.1109/icalt.2011.92 dblp:conf/icalt/KravcikKP11 fatcat:cos37vrrc5amdek5oesfq5a3gu
Showing results 1 — 15 out of 225,481 results