
Learning Compressed Sentence Representations for On-Device Text Processing

Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.  ...  Moreover, with the learned binary representations, the semantic relatedness of two sentences can be evaluated by simply calculating their Hamming distance, which is more computationally efficient compared  ...  Therefore, here we focus on learning universal binary embeddings based on pretrained continuous sentence representations.  ...
doi:10.18653/v1/p19-1011 dblp:conf/acl/ShenCSZYTCC19 fatcat:34mqfel4snfrtfqpsf4qi43e7e
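A minimal sketch of the idea described above: binarize pretrained continuous sentence embeddings and compare sentences by Hamming distance. The median-thresholding scheme below is an assumption for illustration, not the paper's learned binarization; the efficiency gain comes from replacing floating-point dot products with bitwise comparison.

import numpy as np

# Illustrative only: threshold each embedding dimension at its median
# to obtain {0, 1} codes, then compare codes via Hamming distance.
def binarize(embeddings):
    thresholds = np.median(embeddings, axis=0)  # per-dimension medians
    return (embeddings > thresholds).astype(np.uint8)

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))  # number of differing bits

rng = np.random.default_rng(0)
continuous = rng.normal(size=(3, 256))  # stand-in for pretrained embeddings
codes = binarize(continuous)
print(hamming_distance(codes[0], codes[1]))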

Training Method and Device of Chemical Industry Chinese Language Model Based on Knowledge Distillation

Wen-Ting Li, Shang-Bing Gao, Jun-Qiang Zhang, Shu-Xing Guo, Wei-Chuen Yau
2021 Scientific Programming  
Moreover, there is no pretraining language model for the chemical industry. In this work, we propose a method to pretrain a smaller language representation model for the chemical industry domain.  ...  First, a huge number of chemical industry texts are used as the pretraining corpus, and nontraditional knowledge distillation technology is used to build a simplified model to learn the knowledge in BERT  ...  [13], a part-of-speech-based long short-term memory network for learning sentence representations was proposed by Zhu et al.  ...
doi:10.1155/2021/5753693 fatcat:t4s4rkolfnhtdgcssq7rk726pe
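As a rough sketch of the distillation step the abstract describes, the following is a standard Hinton-style soft-target loss in PyTorch. The paper's "nontraditional" distillation likely differs, so treat the temperature and weighting scheme here as assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # KL divergence between temperature-softened student and teacher outputs
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervised loss
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 10)            # toy student logits
teacher = torch.randn(4, 10)            # toy teacher (e.g., BERT) logits
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))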

Performance Evaluation of Manhattan and Euclidean Distance Measures For Clustering Based Automatic Text Summarization

Shakirat A Salihu, Ifeoma P Onyekwere, Modinat A Mabayoje, Hammed A Mojeed
2019 FUOYE Journal of Engineering and Technology  
...  summary of sentences that made up 94% of the total document all in one cluster, using compression ratio as the performance metric.  ...  The experimental analysis was performed in the Waikato Environment for Knowledge Analysis (WEKA).  ...  For the sentence-level tasks, experiments on human-generated abstractive compression datasets and system evaluation on several newly proposed Machine Translation (MT) evaluation metrics were conducted.  ...
doi:10.46792/fuoyejet.v4i1.316 fatcat:fpg32j6vqvdetim7phgy4vkd2e
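For reference, the two distance measures this paper compares differ only in the norm applied to the difference vector. A small sketch with made-up sentence feature vectors:

import numpy as np

def euclidean(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))  # L2 norm of the difference

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))          # L1 norm of the difference

s1 = np.array([0.1, 0.7, 0.2])  # hypothetical sentence feature vectors
s2 = np.array([0.4, 0.1, 0.5])
print(euclidean(s1, s2), manhattan(s1, s2))

A clustering algorithm such as k-means assigns each sentence to the centroid minimizing the chosen distance, so swapping the metric can change cluster membership and hence the extracted summary.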

A Frequent Term and Semantic Similarity based Single Document Text Summarization Algorithm

Naresh Kumar Nagwani, Shrish Verma
2011 International Journal of Computer Applications  
Text summarization has a number of applications; recently, many applications have used text summarization to improve text analysis and knowledge representation.  ...  Finally, in the third step, all the sentences in the document that contain the frequent and semantically equivalent terms are filtered for summarization.  ...  Text summarization can speed up other information retrieval and text mining processes. It can also be useful for text display on hand-held devices, such as PDAs.  ...
doi:10.5120/2190-2778 fatcat:6hpb3cpnqjh7fcdjnxubzeybka
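A toy sketch of the three-step pipeline the snippet outlines: count term frequencies, expand frequent terms with semantically equivalent ones (a hypothetical synonym map stands in for a real lexical resource such as WordNet), and keep only sentences containing any of those terms.

from collections import Counter

SYNONYMS = {"car": {"automobile", "vehicle"}}  # hypothetical lexical resource

def summarize(sentences, top_k=2):
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    frequent = {w for w, _ in Counter(words).most_common(top_k)}  # step 1
    expanded = set(frequent)
    for term in frequent:
        expanded |= SYNONYMS.get(term, set())                     # step 2
    return [s for s in sentences                                  # step 3
            if expanded & {w.lower().strip(".,") for w in s.split()}]

print(summarize(["The car stalled.", "An automobile passed by.",
                 "Birds sang loudly."]))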

A Simple but Effective BERT Model for Dialog State Tracking on Resource-Limited Systems [article]

Tuan Manh Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara
2020 arXiv   pre-print
Recently, many deep learning based methods have been proposed for the task.  ...  Finally, to make the model small and fast enough for resource-restricted systems, we apply the knowledge distillation method to compress our model.  ...  The input representation of BERT is flexible enough that it can unambiguously represent both a single text sentence and a pair of text sentences in one token sequence.  ... 
arXiv:1910.12995v3 fatcat:zxgcwdtqp5gavpkdeit7pahjyu
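The last point above — that BERT's input can encode either one sentence or a sentence pair in a single token sequence — can be seen directly with the HuggingFace tokenizer (shown as an illustration, not the authors' code; it downloads the bert-base-uncased vocabulary on first use):

from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")

single = tok("Book a table for two.")                      # one sentence
pair = tok("Book a table for two.", "What time is good?")  # sentence pair

# The pair is packed into one sequence: [CLS] A ... [SEP] B ... [SEP]
print(tok.convert_ids_to_tokens(pair["input_ids"]))
# token_type_ids (0s then 1s) disambiguate the two segments
print(pair["token_type_ids"])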

ProSeqo: Projection Sequence Networks for On-Device Text Classification

Zornitsa Kozareva, Sujith Ravi
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
We propose a novel on-device sequence model for text classification using recurrent projections.  ...  This results in fast and compact neural networks that can perform on-device inference for complex short and long text classification tasks.  ...  on-device and deep learning models for short and long text classification;  ...  Our results show that ProSeqo outperformed the state-of-the-art on-device SGNN network (Ravi and Kozareva, 2018) with up to +  ...
doi:10.18653/v1/d19-1402 dblp:conf/emnlp/KozarevaR19 fatcat:iugtv4ls7bbqfm4m4rrpy4qdyi
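A rough sketch of the projection idea behind SGNN-style on-device models: instead of a large embedding table, fixed random hyperplanes map hashed text features to a short bit vector. Dimensions and the hashing scheme here are illustrative assumptions (Python's built-in hash is not stable across runs; a real system would use a fixed hash function).

import numpy as np

def project(text, n_bits=64, n_features=1024, seed=42):
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, n_features))  # fixed random hyperplanes
    features = np.zeros(n_features)
    for token in text.lower().split():
        features[hash(token) % n_features] += 1.0   # hashed bag-of-words
    return (planes @ features > 0).astype(np.uint8) # sign gives the bits

print(project("turn on the living room lights"))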

D6.1 QNLP design and specification

Antonio Villalpando, Lee J. O'Riordan, Kaspars Balodis, Rihards Krišlauks
2021 Zenodo  
In this document we motivate the use of quantum computing models for natural-language processing tasks, focussing on comparison with existing methods in the classical natural language processing (NLP)  ...  Understanding the applicability of NISQ-era devices for a variety of problems is of the utmost importance to better develop and utilise these devices for real-world use-cases.  ...  The goal of the QNLP project is to define the required processes and mappings for natural language processing tasks on quantum computing devices.  ...
doi:10.5281/zenodo.4745442 fatcat:vrqg65igergthnwzjgnrmfuf2y

Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study

Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
We conduct experiments across various seq2seq text generation tasks including machine translation, formality style transfer, sentence compression and simplification.  ...  In this paper, we present a preliminary empirical study on whether and how much automatic grammatical error correction can help improve seq2seq text generation.  ...  Acknowledgments We thank the anonymous reviewers for their valuable comments. In particular, we thank Shujie Liu for the discussion and constructive suggestions on this paper.  ...
doi:10.18653/v1/p19-1609 dblp:conf/acl/GeZWZ19 fatcat:ww5lherf4fau5mxmjrcta3kk64

An Overview of Text Summarization

Laxmi B., P. Venkata
2017 International Journal of Computer Applications  
Hence, the natural language processing research community is developing new methods for summarizing text automatically.  ...  As a vast amount of information is available on every theme on the Internet, shortening the information in the form of a summary would immensely benefit readers.  ...  Text Summarization with Neural Networks: a neural network is a processing system modeled on the human brain that tries to replicate its learning process.  ...
doi:10.5120/ijca2017915109 fatcat:hlpvgrzpd5hord5bfy4zndqexi

Performance Study on Extractive Text Summarization Using BERT Models

Shehab Abdel-Salam, Ahmed Rafea
2022 Information  
The objective of this paper is to produce a study on the performance of variants of BERT-based models on text summarization through a series of experiments, and propose "SqueezeBERTSum", a trained summarization  ...  There are different sizes of BERT, such as BERT-base with 12 encoders and BERT-large with 24 encoders, but we focus on BERT-base for the purpose of this study.  ...  Acknowledgments: Shehab Abdel-Salam would like to acknowledge Ahmed Rafea for the guidance and knowledge that he provided throughout the length of this paper and for giving the inspiration to research  ...
doi:10.3390/info13020067 fatcat:jkbtyeirwjh6lgvznzaf7vzyh4
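SqueezeBERTSum itself is not reproduced here; as a hedged baseline, the sketch below assumes sentence embeddings have already been produced by some BERT variant and extracts the sentences closest to the document centroid.

import numpy as np

def extract(sentence_embeddings, sentences, k=3):
    centroid = sentence_embeddings.mean(axis=0)
    # cosine similarity of each sentence to the document centroid
    norms = (np.linalg.norm(sentence_embeddings, axis=1)
             * np.linalg.norm(centroid))
    scores = sentence_embeddings @ centroid / np.maximum(norms, 1e-9)
    keep = np.argsort(scores)[::-1][:k]          # top-k most central sentences
    return [sentences[i] for i in sorted(keep)]  # restore document order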

Abstractive Compression of Captions with Attentive Recurrent Neural Networks

Sander Wubben, Emiel Krahmer, Antal van den Bosch, Suzan Verberne
2016 Proceedings of the 9th International Natural Language Generation conference  
Recent advances in Recurrent Neural Networks (RNNs) have boosted interest in text-to-text generation tasks (Sutskever et al., 2014). In this paper we focus on abstractive sentence compression  ...  Additionally, we show that automatic measures are not very well suited for evaluating this text-to-text generation task.  ...  Acknowledgements This work is part of the research programmes Discussion Thread Summarization for Mobile Devices and The Automated Newsroom, which are financed by the Netherlands Organisation for Scientific  ...
doi:10.18653/v1/w16-6608 dblp:conf/inlg/WubbenKBV16 fatcat:igxmzrboafh55jj3o5qlfonuza

A Survey of Mobile Computing for the Visually Impaired [article]

Martin Weiss, Margaux Luck, Roger Girgis, Chris Pal, Joseph Paul Cohen
2018 arXiv   pre-print
Based on a series of interviews with the VIB and developers of assistive technology, this paper provides a survey of machine-learning based mobile applications and identifies the most relevant applications  ...  We discuss the functionality of these apps, how they align with the needs and requirements of the VIB users, and how they can be improved with techniques such as federated learning and model compression  ...  Federated learning is a way of training that allows for learning from data stored on a swarm of mobile devices without sacrificing privacy.  ...
arXiv:1811.10120v2 fatcat:7yx4emnjovep5bmwi66ldgv4em
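The federated-learning idea the snippet mentions reduces, at its core, to the FedAvg aggregation step; a schematic version with toy weights (not a real training loop):

import numpy as np

def federated_average(client_weights, client_sizes):
    # Weighted average of per-device weights by local dataset size;
    # raw user data never leaves the device, only these updates do.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([1.5, 1.0]), np.array([0.5, 3.0])]
sizes = [100, 50, 25]                    # examples held on each device
print(federated_average(clients, sizes))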

Searching Personal Photos on the Phone with Instant Visual Query Suggestion and Joint Text-Image Hashing

Zhaoyang Zeng, Jianlong Fu, Hongyang Chao, Tao Mei
2017 Proceedings of the 2017 ACM on Multimedia Conference - MM '17  
...  and sequential deep neural networks to extract representations for both photos and queries, and 3) joint text-image hashing (with compact binary codes) to facilitate binary image search and VQS.  ...  The ubiquitous mobile devices have led to the unprecedented growth of personal photo collections on the phone.  ...  To learn the joint embedding spaces with images, we set d_s = 1200 to make sure that the model is able to learn the sentence representations well and is small enough to run on mobile devices.  ...
doi:10.1145/3123266.3123446 dblp:conf/mm/ZengFCM17 fatcat:uco22nrn6ffujhbxabnt3budkq
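A cross-modal sketch of the joint hashing step, with random projections standing in for the paper's learned hashing networks (dimensions are assumptions): photo and query embeddings are mapped to short binary codes in a shared space, and retrieval is a Hamming-distance scan.

import numpy as np

rng = np.random.default_rng(1)
W_img = rng.normal(size=(64, 512))  # stand-ins for learned hash projections
W_txt = rng.normal(size=(64, 300))

def to_code(embedding, W):
    return (W @ embedding > 0).astype(np.uint8)  # sign binarization

photo_codes = [to_code(rng.normal(size=512), W_img) for _ in range(1000)]
query_code = to_code(rng.normal(size=300), W_txt)
best = min(range(len(photo_codes)),
           key=lambda i: np.count_nonzero(photo_codes[i] != query_code))
print(best)  # index of the nearest photo in Hamming space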

Semantic Vector Machines [article]

Vincent Etter
2011 arXiv   pre-print
We realized that learning the semantics of sentences and documents was the key to solving a lot of natural language processing problems, and thus moved to the second part of our work: sentence compression  ...  We introduce a flexible neural network architecture for learning embeddings of words and sentences that extract their semantics, propose an efficient implementation in the Torch framework, and present embedding  ...  Such data consists of one text file per language, in which the i-th line of each file corresponds to the same sentence in every language, thus allowing the model to learn how to translate from one language to the others.  ...
arXiv:1105.2868v1 fatcat:kbwdcazddnhnho4caebyfhzbdi
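The line-aligned parallel corpus described in the last fragment can be consumed with a few lines; the file names below are hypothetical placeholders.

# The i-th line of every language file is the same sentence, so zipping
# the files yields aligned translation pairs.
with open("corpus.en") as en_file, open("corpus.fr") as fr_file:
    pairs = [(en.strip(), fr.strip()) for en, fr in zip(en_file, fr_file)]
print(pairs[:2])  # first two aligned sentence pairs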

Knowledge integration across multiple texts

Doo Soon Kim, Ken Barker, Bruce Porter
2009 Proceedings of the fifth international conference on Knowledge capture - K-CAP '09  
We have built a Learning-by-Reading system, and this paper focuses on one aspect of it: the task of integrating snippets of knowledge drawn from multiple texts to build a single coherent knowledge  ...  One of the grand challenges of AI is to build systems that learn by reading. The ideal system would construct a rich knowledge base capable of automated reasoning.  ...  The representation indicates that piston-4 is a kind of Device and is the instrument of compresses-5.  ...
doi:10.1145/1597735.1597745 dblp:conf/kcap/KimBP09 fatcat:zeod5d4w4fbzpgtcinvn4mrfpu