
Semi-Supervised Natural Language Approach for Fine-Grained Classification of Medical Reports [article]

Neil Deshmukh, Selin Gumustop, Romane Gauriau, Varun Buch, Bradley Wright, Christopher Bridge, Ram Naidu, Katherine Andriole, Bernardo Bizzo
2019 arXiv   pre-print
The challenge of utilizing natural language data for standard model development stems from the complex nature of the modality.  ...  passed into a decoder-classifier model that requires orders of magnitude less labeled data than previous approaches to differentiate between fine-grained disease classes accurately.  ...  Machine Learning Approaches: While the classification of the medical reports found in the dataset seemed too fine-grained a task for standard ML approaches, they were still attempted with the  ...
arXiv:1910.13573v2 fatcat:c6iyygffqrglrk7kv6dawdlovm

Validating GAN-BioBERT: A Methodology For Assessing Reporting Trends In Clinical Trials [article]

Joshua J Myszewski, Emily Klossowski, Patrick Meyer, Kristin Bevil, Lisa Klesius, Kristopher M Schroeder
2021 arXiv   pre-print
This study develops a three-class sentiment classification algorithm for clinical trial abstracts using a semi-supervised natural language processing model based on the Bidirectional Encoder Representation  ...  Also, previous attempts to develop larger-scale tools, such as those using natural language processing, were limited by both their accuracy and the number of categories used for the classification of their  ...  Conflict of interest: The authors declare that they have no conflict of interest.  ...
arXiv:2106.00665v1 fatcat:oudoupct4zdcfo2jrklbyv65ym

Multilingual Epidemic Event Extraction

Stephen Mutuvi, Emanuela Boros, Antoine Doucet, Gaël Lejeune, Adam Jatowt, Moses Odeo
2021 Zenodo  
Moreover, we perform several preliminary experiments for the low-resourced languages present in the dataset using the mean teacher semi-supervised technique.  ...  Our findings show the potential of pre-trained language models benefiting from the incorporation of unannotated data in the training process.  ...  Supervised Learning Experiments: For document classification, we chose the fine-tuned BERT-multilingual-uncased [11, 30], whose performance on text classification is an F1 of 86.25%.  ...
doi:10.5281/zenodo.5779965 fatcat:jz2vlx77irerbncqt3ncmkjjm4
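The mean teacher technique mentioned in the entry above maintains an exponential moving average (EMA) of the student model's weights and adds a consistency loss on unlabeled examples. The sketch below is a minimal PyTorch illustration of that idea, not the authors' code; the classifier, batches, and hyperparameters (alpha, consistency_weight) are placeholder assumptions.

```python
# Minimal mean-teacher sketch (illustrative; not the authors' implementation).
# `student` and `teacher` are assumed to be any PyTorch classifiers returning
# logits; the cited experiments use a fine-tuned multilingual BERT encoder.
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights become an exponential moving average of the student's."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

def mean_teacher_step(student, teacher, optimizer,
                      labeled_x, labels, unlabeled_x, consistency_weight=1.0):
    """One step: supervised loss on labeled data plus a consistency loss
    between student and teacher predictions on unlabeled data."""
    student.train()
    optimizer.zero_grad()

    # Supervised cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(student(labeled_x), labels)

    # Consistency: the student should match the (frozen) teacher on unlabeled data.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(unlabeled_x), dim=-1)
    student_log_probs = F.log_softmax(student(unlabeled_x), dim=-1)
    cons_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    loss = sup_loss + consistency_weight * cons_loss
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

# Usage sketch: the teacher starts as a copy of the student,
# e.g. teacher = copy.deepcopy(student), and is never updated by gradients.
```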

Validating GAN-BioBERT: A Methodology for Assessing Reporting Trends in Clinical Trials

Joshua J. Myszewski, Emily Klossowski, Patrick Meyer, Kristin Bevil, Lisa Klesius, Kristopher M. Schroeder
2022 Frontiers in Digital Health  
Background: The aim of this study was to validate a three-class sentiment classification model for clinical trial abstracts combining adversarial learning and the BioBERT language processing model as a tool  ...  classification model for clinical trial abstracts that significantly outperforms previous models with greater reproducibility and applicability to large-scale study of reporting trends.  ...  The pretrained language model from the semi-supervised stage of BERT is then fine-tuned for a specific language task by providing task-specific inputs and outputs and then adjusting the parameters of the  ...
doi:10.3389/fdgth.2022.878369 pmid:35685304 pmcid:PMC9170913 fatcat:xzmyddgnbncmjdyw2jrr3oowbu
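For context on the fine-tuning step quoted in the snippet, the sketch below shows a generic three-class sequence classifier built on a public BioBERT checkpoint with the Hugging Face transformers API. The checkpoint name, label set, and helper function are assumptions for illustration; the published GAN-BioBERT method additionally uses adversarial (GAN-BERT-style) training on unlabeled abstracts, which is not reproduced here.

```python
# Minimal fine-tuning/inference sketch for three-class abstract classification.
# The checkpoint name and labels are illustrative assumptions; the published
# GAN-BioBERT pipeline adds an adversarial component that is not shown here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # a public BioBERT checkpoint
LABELS = ["negative", "neutral", "positive"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def classify_abstract(text: str) -> str:
    """Return the predicted sentiment label for one clinical trial abstract."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Fine-tuning itself would minimize cross-entropy over labeled abstracts,
# e.g. with transformers.Trainer or a standard PyTorch training loop.
```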

2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text

Özlem Uzuner, Brett R South, Shuying Shen, Scott L DuVall
2011 Journal of the American Medical Informatics Association (JAMIA)
The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems  ...  Funding: This work was supported in part by the NIH Roadmap for Medical Research, Grant U54LM008748 from the NIH/National Library of Medicine (NLM).  ...
doi:10.1136/amiajnl-2011-000203 pmid:21685143 pmcid:PMC3168320 fatcat:25qp6nztfzc4djy75beorchone

Auto-Encoding Knowledge Graph for Unsupervised Medical Report Generation [article]

Fenglin Liu, Chenyu You, Xian Wu, Shen Ge, Sheng Wang, Xu Sun
2021 arXiv   pre-print
Existing approaches mainly adopt a supervised manner and heavily rely on coupled image-report pairs.  ...  Moreover, KGAE can also work in both semi-supervised and supervised settings, and accept paired images and reports in training.  ...  Semi-Supervised and Supervised Training Details: To further validate the effectiveness of our approach, we fine-tune the unsupervised KGAE using partial and full image-report pairs to acquire the KGAE-Semi  ...
arXiv:2111.04318v2 fatcat:lgo3xh7s75aspnpctaxau6zqlm

Nearly-Unsupervised Hashcode Representations for Biomedical Relation Extraction

Sahil Garg, Aram Galstyan, Greg Ver Steeg, Guillermo Cecchi
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
This nearly unsupervised approach allows fine-grained optimization of each hash function, which is particularly suitable for building hashcode representations generalizing from a training set to a test  ...  We empirically evaluate the proposed approach for biomedical relation extraction tasks, obtaining significant accuracy improvements w.r.t. state-of-the-art supervised and semi-supervised approaches.  ...  As we see, the task is formulated as one of binary classification of natural language substructures extracted from the semantic parse.  ...
doi:10.18653/v1/d19-1414 dblp:conf/emnlp/GargGSC19 fatcat:zhxhvwfbnjh43ep4pdsb54exfi
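As a rough intuition for hashcode representations, the toy baseline below maps dense feature vectors to binary codes with fixed random hyperplanes and trains a linear classifier on those codes. This is only an illustrative locality-sensitive-hashing stand-in on synthetic data; the paper instead optimizes each hash function in a nearly unsupervised, kernelized fashion over parsed biomedical text, which is not shown here.

```python
# Toy hashcode baseline: fixed random hyperplanes + a linear classifier.
# Synthetic data stands in for embeddings of parsed biomedical substructures;
# the paper's kernelized, per-hash-function optimization is NOT implemented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_hasher(dim: int, n_bits: int = 64):
    """Return a function mapping dense vectors to binary hashcodes
    (sign of projections onto fixed random hyperplanes)."""
    planes = rng.standard_normal((dim, n_bits))
    return lambda x: (x @ planes > 0).astype(np.float32)

# Hypothetical stand-in data with binary relation labels.
X_train = rng.standard_normal((200, 128))
y_train = rng.integers(0, 2, size=200)
X_test = rng.standard_normal((50, 128))
y_test = rng.integers(0, 2, size=50)

hasher = make_hasher(dim=128)
clf = LogisticRegression(max_iter=1000).fit(hasher(X_train), y_train)
print("toy accuracy:", clf.score(hasher(X_test), y_test))  # ~chance on random data
```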

Self-Supervised Contextual Language Representation of Radiology Reports to Improve the Identification of Communication Urgency [article]

Xing Meng, Craig H. Ganoe, Ryan T. Sieberg, Yvonne Y. Cheung, Saeed Hassanpour
2019 arXiv   pre-print
In this work, we built a self-supervised contextual language representation model using BERT, a deep bidirectional transformer architecture, to identify radiology reports requiring prompt communication  ...  Our model achieved a precision of 97.0%, recall of 93.3%, and F-measure of 95.1% on an independent test set in identifying radiology reports for prompt communication, and significantly outperformed the  ...  Acknowledgments: The authors would like to thank Lamar Moss for his feedback on this paper. This work was supported in part by a grant from the US National Cancer Institute (R01CA249758).  ...
arXiv:1912.02703v1 fatcat:rbntl5anozdwpdo75q6l6v4ahu

Nearly-Unsupervised Hashcode Representations for Relation Extraction [article]

Sahil Garg, Aram Galstyan, Greg Ver Steeg, Guillermo Cecchi
2019 arXiv   pre-print
We empirically evaluate the proposed approach for biomedical relation extraction tasks, obtaining significant accuracy improvements w.r.t. state-of-the-art supervised and semi-supervised approaches.  ...  This nearly unsupervised approach allows fine-grained optimization of each hash function, which is particularly suitable for building hashcode representations generalizing from a training set to a test  ...  As we see, the task is formulated as one of binary classification of natural language substructures extracted from the semantic parse.  ...
arXiv:1909.03881v1 fatcat:26tanuk5hzgjdltm7ghdqhecbe

Improving Broad-Coverage Medical Entity Linking with Semantic Type Prediction and Large-Scale Datasets [article]

Shikhar Vashishth, Denis Newman-Griffis, Rishabh Joshi, Ritam Dutt, Carolyn Rose
2021 arXiv   pre-print
Most of the existing methods adopt a three-step approach of (1) detecting mentions, (2) generating a list of candidate concepts, and finally (3) picking the best concept among them.  ...  To address the dearth of annotated training data for medical entity linking, we present WikiMed and PubMedDS, two large-scale medical entity linking datasets, and demonstrate that pre-training MedType  ...  We report the results with the oracle type predictors (fine-grained and coarse-grained) and MEDTYPE.  ... 
arXiv:2005.00460v3 fatcat:yn3teuimcbaa5objz6bzsmxw6q
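The three-step pipeline described in the snippet (mention detection, candidate generation, concept selection), plus the semantic-type filtering that MedType contributes, can be sketched schematically as below. Every function and the tiny alias table are hypothetical toy stand-ins, not the paper's components; a real system would plug in a trained mention detector, UMLS-based candidate generation, and a learned type predictor.

```python
# Schematic three-step entity linking with a semantic-type filter.
# All components and the tiny alias table below are toy stand-ins.
from dataclasses import dataclass

@dataclass
class Candidate:
    concept_id: str      # illustrative concept identifiers, not real CUIs
    semantic_type: str
    score: float

TOY_ALIASES = {
    "cold": [Candidate("C-disease-cold", "Disease", 0.6),
             Candidate("C-finding-cold", "Finding", 0.4)],
}

def detect_mentions(text: str) -> list[str]:
    # Step 1 (toy): any known alias occurring in the text counts as a mention.
    return [m for m in TOY_ALIASES if m in text.lower()]

def predict_semantic_type(mention: str, context: str) -> str:
    # Toy type predictor: a clinical context suggests the Disease reading.
    return "Disease" if "patient" in context.lower() else "Finding"

def link(text: str) -> dict[str, str]:
    """Step 2 generates candidates per mention; step 3 keeps the
    type-compatible ones and picks the best-scored concept."""
    links = {}
    for mention in detect_mentions(text):
        mention_type = predict_semantic_type(mention, text)
        candidates = [c for c in TOY_ALIASES[mention]
                      if c.semantic_type == mention_type]
        if candidates:
            links[mention] = max(candidates, key=lambda c: c.score).concept_id
    return links

print(link("The patient reports a cold lasting five days."))  # {'cold': 'C-disease-cold'}
```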

Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing [article]

Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel C. Castro, Anton Schwaighofer, Stephanie Hyland, Maria Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez-Valle, Hoifung Poon, Ozan Oktay
2022 arXiv   pre-print
Further, we propose a self-supervised joint vision–language approach with a focus on better text modelling.  ...  We release a language model that achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and novel language pretraining objective leveraging semantics  ...  While most related approaches use no ground truth, [5] study a semi-supervised edema severity classification setting, and [27] assume sets of seen and unseen labels towards zero-shot classification  ...
arXiv:2204.09817v2 fatcat:wxrlara2wfhotppk4lsbvrvtuy

Semi-supervised and Unsupervised Methods for Categorizing Posts in Web Discussion Forums [article]

Krish Perumal
2016 arXiv   pre-print
A few existing unsupervised and semi-supervised approaches are either focused on identifying a single category or do not report category-specific performance.  ...  A fine-grained analysis is also carried out to discuss their limitations.  ...  Graeme Hirst, whose valuable guidance, feedback and attention to detail was pivotal for the completion of this work. I also thank my second reader, Prof.  ... 
arXiv:1604.00119v3 fatcat:m2flg6lhc5elvorhl2h5xksmdi

Recent advances and clinical applications of deep learning in medical image analysis [article]

Xuxin Chen, Ximin Wang, Ke Zhang, Roy Zhang, Kar-Ming Fung, Theresa C. Thai, Kathleen Moore, Robert S. Mannel, Hong Liu, Bin Zheng, Yuchen Qiu
2021 arXiv   pre-print
Especially, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, which are summarized based on different application scenarios, including classification, segmentation, detection, and image registration.  ...  They modified the semi-supervised approach MixMatch (Berthelot et al., 2019a) from two aspects to make it suitable for 3D medical image detection.  ...
arXiv:2105.13381v2 fatcat:2k342a6rhjaavpoa2qoqxhg5rq
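The snippet refers to adapting MixMatch; for orientation, the fragment below sketches MixMatch's core label-guessing, sharpening, and MixUp steps for a generic classifier. It is an assumed, simplified illustration and does not include the two 3D-detection-specific modifications made in the cited work.

```python
# Core MixMatch-style steps: label guessing, sharpening, and MixUp.
# Illustrative only; the cited work adapts MixMatch to 3D medical image
# *detection*, which requires further changes not shown here.
import torch
import torch.nn.functional as F

def guess_labels(model, unlabeled_augs, temperature=0.5):
    """Average predictions over K augmentations of the same unlabeled batch,
    then sharpen the averaged distribution with a temperature."""
    model.eval()
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for x in unlabeled_augs]
        ).mean(dim=0)
    sharpened = probs ** (1.0 / temperature)
    return sharpened / sharpened.sum(dim=-1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixUp with lambda' = max(lambda, 1 - lambda), as MixMatch prescribes."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```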

NatCat: Weakly Supervised Text Classification with Naturally Annotated Resources [article]

Zewei Chu, Karl Stratos, Kevin Gimpel
2021 arXiv   pre-print
To demonstrate its usefulness, we build general purpose text classifiers by training on NatCat and evaluate them on a suite of 11 text classification tasks (CatEval), reporting large improvements compared  ...  NatCat consists of document-category pairs derived from manual curation that occurs naturally within online communities.  ...  Acknowledgments: We wish to thank the anonymous reviewers for their feedback. This research was supported in part by a Bloomberg data science research grant to KS and KG.  ...
arXiv:2009.14335v2 fatcat:wydh64z5vbfm7afrodq4hw2ram

Cross-Lingual Text Classification with Minimal Resources by Transferring a Sparse Teacher [article]

Giannis Karamanolakis, Daniel Hsu, Luis Gravano
2020 arXiv   pre-print
Existing approaches for transferring supervision across languages require expensive cross-lingual resources, such as parallel corpora, while less expensive cross-lingual representation learning approaches  ...  Cross-lingual text classification alleviates the need for manually labeled documents in a target language by leveraging labeled documents from other languages.  ...  Acknowledgments: We thank the anonymous reviewers for their constructive feedback. This material is based upon work supported by the National Science Foundation under Grant No. IIS-15-63785.  ...
arXiv:2010.02562v1 fatcat:rggtsno3i5fcnnhzobzl6vevmq
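The transfer described above is commonly realized as teacher-student distillation: a teacher scores unlabeled target-language documents and a student is trained to match those soft labels. The sketch below shows one such distillation step under assumed models and inputs; it does not reproduce the paper's sparse-teacher construction.

```python
# Generic cross-lingual teacher-student distillation step (illustrative).
# The cited approach additionally builds a *sparse* teacher from minimal
# cross-lingual resources; that construction is not reproduced here.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, optimizer,
                      docs_student_view, docs_teacher_view, temperature=2.0):
    """Train a target-language student to match the teacher's soft labels on
    unlabeled target-language documents (two encodings of the same batch)."""
    student.train()
    optimizer.zero_grad()
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(docs_teacher_view) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(docs_student_view) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```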
Showing results 1–15 of 4,232