A Survey on Sentence Embedding Models Performance for Patent Analysis [article]

Hamid Bekamiri, Daniel S. Hain, Roman Jurowetzki
2022 arXiv   pre-print
The results based on the first claim of patents show that PatentSBERTa, BERT-for-Patents, and TF-IDF weighted word embeddings achieve the best accuracy for computing sentence embeddings at the subclass level  ...  Therefore, in this study, we provide an overview of the accuracy of these algorithms based on patent classification performance and propose a standard library and dataset for assessing the accuracy of  ...  In particular, GPT-family models are mostly pre-trained for machine translation, text generation, and question answering, and the BERT family for document classification and regression tasks.  ... 
arXiv:2206.02690v3 fatcat:f5ycseetrjdwrg2b5mlvhcyquy
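To illustrate the kind of pipeline this survey benchmarks, here is a minimal sketch, assuming the sentence-transformers library and the PatentSBERTa checkpoint the paper names (AI-Growth-Lab/PatentSBERTa on the Hugging Face Hub); the claim texts are hypothetical:

```python
# Hedged sketch (not the survey's code): embed patent first claims with a
# PatentSBERTa-style sentence embedding model and compare them, e.g. for
# nearest-neighbor subclass classification.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AI-Growth-Lab/PatentSBERTa")

claims = [
    "A method for transmitting data over a wireless network...",
    "An apparatus for encoding video frames comprising a processor...",
]
embeddings = model.encode(claims)

# Cosine similarity between the two claim embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```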

Predicting Institution Outcomes for Inter Partes Review (IPR) Proceedings at the United States Patent Trial & Appeal Board by Deep Learning of Patent Owner Preliminary Response Briefs

Bahrad A. Sokhansanj, Gail L. Rosen
2022 Applied Sciences  
A key challenge for artificial intelligence in the legal field is to determine from the text of a party's litigation brief whether, and why, it will succeed or fail.  ...  models, and using SHAP for interpretation. (2) Deep learning of document text in context, using convolutional neural networks (CNN) with attention, and comparing LIME and attention visualization for interpretability  ...  In that work, the authors also vectorized document text using word2vec and doc2vec pre-trained embeddings, along with pre-selected features such as attorney names and coded case types, to train classifiers  ... 
doi:10.3390/app12073656 fatcat:46ximh26hzeull6lgc45lz54bu
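A minimal sketch of the prior approach the snippet describes — combining doc2vec text vectors with categorical metadata features to train a classifier — assuming gensim and scikit-learn; the briefs, case types, and labels are hypothetical stand-ins:

```python
# Hedged sketch (not the paper's code): concatenate doc2vec document vectors
# with one-hot metadata features and fit a classifier.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

briefs = ["patent owner argues claim construction ...",
          "petitioner cites prior art references ..."]
case_types = [["reexam"], ["ipr"]]  # hypothetical coded case types
labels = [0, 1]                     # e.g., institution denied / granted

# Learn document embeddings for the brief texts.
tagged = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(briefs)]
d2v = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=20)
text_vecs = np.vstack([d2v.dv[i] for i in range(len(briefs))])

# One-hot encode the metadata and concatenate with the text vectors
# (sparse_output requires scikit-learn >= 1.2).
meta_vecs = OneHotEncoder(sparse_output=False).fit_transform(case_types)
X = np.hstack([text_vecs, meta_vecs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```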

Surfacing contextual hate speech words within social media [article]

Jherez Taylor, Melvyn Peignon, Yi-Shin Chen
2017 arXiv   pre-print
We also develop a word embedding model that learns the alternate hate speech meaning of words and demonstrate the candidacy of our code words with several annotation experiments, designed to determine  ...  a contextual task and does not depend on a fixed list of keywords.  ...  W_C: a learned word embedding model trained on TwitterClean; W_H: a learned word embedding model trained on HateComm. 4.3.3 Contextual Code Word Search.  ... 
arXiv:1711.10093v1 fatcat:gdquhs5tlzflrggwmcv5st3bha
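A minimal sketch of the W_C / W_H idea, assuming gensim; the two tiny corpora and the candidate word are stand-ins, not the paper's data:

```python
# Hedged sketch (assumed approach, not the authors' code): train skip-gram
# embeddings on a clean corpus and a hate-community corpus, then compare a
# candidate word's nearest neighbors to surface a shifted, coded meaning.
from gensim.models import Word2Vec

clean_corpus = [["video", "call", "on", "skypes", "tonight"]]  # stand-in data
hate_corpus = [["they", "blame", "the", "skypes", "again"]]    # stand-in data

w_c = Word2Vec(clean_corpus, vector_size=50, sg=1, min_count=1)  # W_C
w_h = Word2Vec(hate_corpus, vector_size=50, sg=1, min_count=1)   # W_H

word = "skypes"
print("clean neighbors:", w_c.wv.most_similar(word, topn=3))
print("hate  neighbors:", w_h.wv.most_similar(word, topn=3))
# Diverging neighborhoods flag the word as a potential code word.
```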

Linguistic Patterns for Code Word Resilient Hate Speech Identification

Fernando H. Calderón, Namrita Balani, Jherez Taylor, Melvyn Peignon, Yen-Hao Huang, Yi-Shin Chen
2021 Sensors  
This has prompted increased calls for automatic detection methods, most of which currently rely on a dictionary of hate speech words and supervised classification.  ...  Code words are frequently used and have benign meanings in regular discourse; for instance, "skypes, googles, bing, yahoos" are all examples of words that have a hidden hate speech meaning.  ...  Acknowledgments: The authors would like to thank the Ministry of Science and Technology of the R.O.C. for the funding and support of this work.  ... 
doi:10.3390/s21237859 pmid:34883861 fatcat:o6ruflhtbzalbhvamivkrq4z34

Code Word Detection in Fraud Investigations using a Deep-Learning Approach [article]

Youri van der Zee, Jan C. Scholtes, Marcel Westerhoud, Julien Rossi
2021 arXiv   pre-print
For this purpose, a novel (annotated) synthetic data set is created containing such code words hidden in normal email communication.  ...  In modern litigation, fraud investigators often face an overwhelming number of documents that must be reviewed throughout a matter.  ...  Acknowledgements: The authors are grateful for the extensive support obtained for this research from ZyLAB Technologies BV and Ebben Partners BV, both based in the Netherlands.  ... 
arXiv:2103.09606v1 fatcat:dxswtjtlhfhfdhg3ekfkf3m6ci

Text-mining for Lawyers: How Machine Learning Techniques Can Advance our Understanding of Legal Discourse

Arthur Dyevre
2021 Erasmus Law Review  
The same goes for the pre-trained embeddings and transformers mentioned in this article (except for GPT-3).  ...  Yet the power of BERT for supervised classification lies in the possibility to further fine-tune the pre-trained BERT on a 'local' data set.  ...  Some text-mining tasks, such as authorship identification, require a distinct approach to pre-processing.  ... 
doi:10.5553/elr.000191 fatcat:bcwpxq6dlzffjb42uwwl764ufi
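A minimal sketch of the fine-tuning step the snippet refers to, assuming the Hugging Face transformers and datasets libraries; the texts, labels, and hyperparameters are hypothetical:

```python
# Hedged sketch: fine-tune a pre-trained BERT on a small "local" labeled
# data set for supervised classification.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

data = Dataset.from_dict({
    "text": ["the court granted the motion", "the claim was dismissed"],
    "label": [1, 0],  # hypothetical binary labels
})

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
data = data.map(lambda b: tok(b["text"], truncation=True,
                              padding="max_length", max_length=64),
                batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```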

Actuarial Applications of Natural Language Processing Using Transformers: Case Studies for Using Text Features in an Actuarial Context [article]

Andreas Troxler
2022 arXiv   pre-print
of transfer learning for practical applications.  ...  Finally, the tutorial provides practical approaches for handling classification tasks with little or no labeled data.  ...  Acknowledgements: The authors are very grateful to Mario Wüthrich, Christian Lorentzen and Michael Mayer for their comprehensive reviews and their innumerable inputs which led to substantial improvements  ... 
arXiv:2206.02014v1 fatcat:i2bqzfvfb5ho5kpfjgmfppxccu
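One common approach to classification with no labeled data is zero-shot classification via an NLI model; a minimal sketch, not taken from the tutorial, assuming the transformers pipeline API and a hypothetical insurance example:

```python
# Hedged sketch: zero-shot classification with an NLI-based pipeline,
# one standard way to classify text without labeled training data.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
result = classifier(
    "The policyholder reported water damage in the basement.",
    candidate_labels=["water damage", "fire damage", "theft"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```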

Linguistically Informed Masking for Representation Learning in the Patent Domain [article]

Sophia Althammer, Mark Buckley, Sebastian Hofstätter, Allan Hanbury
2021 arXiv   pre-print
We make the source code as well as the domain-adaptive pre-trained patent language models publicly available at https://github.com/sophiaalthammer/patent-lim.  ...  However, successfully applying such models in highly specific language domains requires domain adaptation of the pre-trained models.  ...  [22] or litigation analysis.  ... 
arXiv:2106.05768v1 fatcat:waf5zc6y6bhxxmwtzrsxtxnphu
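A minimal sketch of standard domain-adaptive pre-training — continuing masked-language-model training on in-domain text — assuming transformers and datasets; note this shows the vanilla MLM objective, whereas the paper's contribution is a linguistically informed modification of the masking step itself:

```python
# Hedged sketch: continue MLM pre-training on in-domain (patent) text to
# adapt a general pre-trained model to the domain.
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = Dataset.from_dict({"text": ["A method for encoding data...",
                                    "An apparatus comprising a sensor..."]})
texts = texts.map(lambda b: tok(b["text"], truncation=True, max_length=64),
                  batched=True, remove_columns=["text"])

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
Trainer(model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=texts, data_collator=collator).train()
```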

Dialogue Inspectional Summarization with Factual Inconsistency Awareness [article]

Leilei Gan, Yating Zhang, Kun Kuang, Lin Yuan, Shuo Li, Changlong Sun, Xiaozhong Liu, Fei Wu
2021 arXiv   pre-print
In this paper, we mainly investigate the factual inconsistency problem for Dialogue Inspectional Summarization (DIS) under non-pretraining and pretraining settings.  ...  However, for professional dialogues (e.g., legal debate and medical diagnosis), semantic/statistical alignment can hardly fill the logical/factual gap between the input dialogue discourse and the summary output  ...  Instead of the exact token matching used in ROUGE, BERTScore computes token similarity with contextualized embeddings from pre-trained language models like BERT [25] .  ... 
arXiv:2111.03284v1 fatcat:gyai2bc74nd63i2yqj64nwwzmq
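A minimal sketch of the idea behind BERTScore (not the official bert-score implementation), assuming transformers and torch; the candidate/reference sentences are hypothetical:

```python
# Hedged sketch: score candidate/reference similarity with contextualized
# token embeddings and greedy matching, instead of ROUGE's exact matches.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    # Contextualized embedding per token (batch of 1; drop [CLS]/[SEP]).
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0, 1:-1]
    return torch.nn.functional.normalize(hidden, dim=-1)

cand = embed("the defendant was found liable")
ref = embed("the defendant lost the case")

sim = cand @ ref.T                     # cosine similarity of all token pairs
recall = sim.max(dim=0).values.mean()  # best match per reference token
precision = sim.max(dim=1).values.mean()  # best match per candidate token
f1 = 2 * precision * recall / (precision + recall)
print(float(f1))
```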

Adversarial Training for Weakly Supervised Event Detection

Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, Peng Li
2019 Proceedings of the 2019 Conference of the North  
The experiments on two real-world datasets show that our candidate selection and adversarial training work together to obtain more diverse and accurate training data for ED, and significantly  ...  However, these methods typically rely on sophisticated pre-defined rules as well as existing instances in knowledge bases for automatic annotation and thus suffer from low coverage, topic bias, and data  ...  Hyperparameter Settings: For DMCNN, following the settings of previous work, we use the pre-trained word embeddings learned by Skip-Gram (Mikolov et al., 2013) as the initial word embeddings.  ... 
doi:10.18653/v1/n19-1105 dblp:conf/naacl/WangHLSL19 fatcat:7rnaokme3rb5lgkjavvhn25rny
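A minimal sketch of using skip-gram vectors as initial word embeddings, assuming gensim and PyTorch; the sentences are stand-ins, and the embedding layer is a generic one rather than the paper's DMCNN:

```python
# Hedged sketch: train (or load) skip-gram word2vec vectors and use them to
# initialize a model's embedding layer.
import torch
import torch.nn as nn
from gensim.models import Word2Vec

sentences = [["police", "arrested", "the", "suspect"],
             ["the", "court", "convicted", "him"]]
w2v = Word2Vec(sentences, vector_size=100, sg=1, min_count=1)  # sg=1: skip-gram

weights = torch.tensor(w2v.wv.vectors)              # vocab_size x 100
embedding = nn.Embedding.from_pretrained(weights, freeze=False)

# Look up a token's index in the word2vec vocabulary, then embed it.
idx = torch.tensor([w2v.wv.key_to_index["court"]])
print(embedding(idx).shape)                         # torch.Size([1, 100])
```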

Artificial Intelligence and Law

ANTONIO A. MARTINO
1994 International Journal of Law and Information Technology  
The ideal system also allows one to use the pre-ordering classification for other analysis techniques such as PLSA or LSA.  ...  Figure Twenty-Three: Selecting from Pre-Classified Data for Higher-Order Models (second-level analysis performed on sub-sets). How Pre-Ordering Can Make Other Techniques Better: this first-order classification  ...  To some extent this reluctance may be based on a lack of understanding, as most lawyers do not receive training in statistical principles.  ... 
doi:10.1093/ijlit/2.2.154 fatcat:ff7kwvvn3vcbblhlkf63oqntyu

Artificial Intelligence and the Law

Charlotte Walker-Osborn, Christopher Chan
2017 ITNOW  
doi:10.1093/itnow/bwx017 fatcat:62ezubai3fcwlpie35xjq3iyem

Caesarean birth in public maternities in Argentina: a formative research study on the views of obstetricians, midwives and trainees

Carla Perrotta, Mariana Romero, Yanina Sguassero, Cecilia Straw, Celina Gialdini, Natalia Righetti, Ana Pilar Betran, Silvina Ramos
2022 BMJ Open  
Providers have conflicting views on the adequacy of training to deal with complex or prolonged labour.  ...  Limited pain management access was deemed a potential contributing factor for CS in adolescents and first-time mothers.  ...  Transcripts were independently coded by two researchers with experience in qualitative analysis.  ... 
doi:10.1136/bmjopen-2021-053419 pmid:35078842 pmcid:PMC8796244 fatcat:tmrk2hyalfcwpidib3tj5dxpia

Pretrained Transformers for Text Ranking: BERT and Beyond

Andrew Yates, Rodrigo Nogueira, Jimmy Lin
2021 Proceedings of the 14th ACM International Conference on Web Search and Data Mining  
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query for a particular task.  ...  We'd like to thank the following people for comments on earlier drafts of this work: Maura Grossman, Sebastian Hofstätter, Xueguang Ma, and Bhaskar Mitra.  ...  However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this survey also attempts to prognosticate where the  ... 
doi:10.1145/3437963.3441667 fatcat:6teqmlndtrgfvk5mneq5l7ecvq
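A minimal sketch of the reranking pattern this survey covers — scoring corpus texts against a query with a pre-trained cross-encoder — assuming the sentence-transformers library and a public MS MARCO checkpoint; the corpus and query are hypothetical:

```python
# Hedged sketch: rank texts by relevance to a query with a pre-trained
# cross-encoder, the core step of transformer-based text ranking.
from sentence_transformers import CrossEncoder

corpus = [
    "BERT is a pre-trained transformer encoder.",
    "Text ranking orders documents by relevance to a query.",
    "Patents are classified into subclasses.",
]
query = "how do transformers rank documents?"

# The cross-encoder jointly encodes each (query, text) pair into a score.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, t) for t in corpus])

for score, text in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.3f}  {text}")
```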

The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications [article]

Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, Stuart M. Shieber
2022 arXiv   pre-print
and embedding semantics.  ...  Finally, we demonstrate how HUPD can be used for three additional tasks: multi-class classification of patent subject areas, language modeling, and summarization.  ...  We used HuggingFace's Tokenizers library for pre-processing and tokenization: for NB classifiers, logistic regression, and CNNs, we adopted the "WordLevel" tokenizer; for each Transformer model, we automatically  ... 
arXiv:2207.04043v1 fatcat:oykce3objvbbtj7a7ciwhvxnpa
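A minimal sketch of training a "WordLevel" tokenizer like the one the snippet mentions, assuming the Hugging Face tokenizers library; the claim texts are hypothetical:

```python
# Hedged sketch: build and train a WordLevel tokenizer for non-Transformer
# baselines (NB, logistic regression, CNN).
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer

tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

claims = ["A method for encoding data", "An apparatus comprising a processor"]
tokenizer.train_from_iterator(claims, WordLevelTrainer(special_tokens=["[UNK]"]))

print(tokenizer.encode("A method comprising a processor").tokens)
```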
Showing results 1 — 15 of 2,121.