Structured prediction models for RNN based sequence labeling in clinical text
[article]
2016
arXiv
pre-print
Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models with Recurrent Neural Networks. ...
We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities. ...
set used in this work. ...
arXiv:1608.00612v1
fatcat:xrqigaxlgjhyhovcdjfvmq34yy
Structured prediction models for RNN based sequence labeling in clinical text
2016
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
We use these methods for structured prediction in order to improve the exact phrase detection of clinical entities. ...
In this work we experiment with Conditional Random Field-based structured learning models with Recurrent Neural Networks. ...
set used in this work. ...
doi:10.18653/v1/d16-1082
dblp:conf/emnlp/JagannathaY16
fatcat:3ne5lqrcwfckjatpvavebbopna
Structured prediction models for RNN based sequence labeling in clinical text
2016
Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models with Recurrent Neural Networks. ...
We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities. ...
set used in this work. ...
pmid:28004040
pmcid:PMC5167535
fatcat:aak5zmkgkfgoxicybtc2tru6mi
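The three records above are versions (arXiv, EMNLP 2016, PMC) of the same paper on pairing an RNN tagger with a CRF output layer for exact-phrase detection of medical entities. As a rough illustration of that general idea, the sketch below combines a BiLSTM emission model with a linear-chain CRF negative log-likelihood for a single sentence; it is a minimal PyTorch example with illustrative vocabulary and tag sizes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiLSTMCRFTagger(nn.Module):
    """Minimal BiLSTM + linear-chain CRF sequence labeler (one unbatched
    sentence, illustrative sizes); a sketch, not the paper's actual model."""

    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)                  # per-token emission scores
        self.trans = nn.Parameter(torch.zeros(num_tags, num_tags))   # CRF transition scores

    def _emissions(self, tokens):                     # tokens: (T,) token ids
        h, _ = self.lstm(self.embed(tokens).unsqueeze(1))            # (T, 1, hidden_dim)
        return self.emit(h.squeeze(1))                # (T, num_tags)

    def neg_log_likelihood(self, tokens, tags):       # tags: (T,) gold tag ids
        e = self._emissions(tokens)
        # Score of the gold path: emissions plus transitions between consecutive tags.
        gold = e[torch.arange(len(tags)), tags].sum() + self.trans[tags[:-1], tags[1:]].sum()
        # Partition function via the forward algorithm in log space.
        alpha = e[0]
        for t in range(1, len(tags)):
            alpha = e[t] + torch.logsumexp(alpha.unsqueeze(1) + self.trans, dim=0)
        return torch.logsumexp(alpha, dim=0) - gold

# Hypothetical usage: 5 tags could be a small BIO scheme for medical entities.
model = BiLSTMCRFTagger(vocab_size=5000, num_tags=5)
tokens = torch.randint(0, 5000, (12,))
tags = torch.randint(0, 5, (12,))
model.neg_log_likelihood(tokens, tags).backward()
```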
Brundlefly at SemEval-2016 Task 12: Recurrent Neural Networks vs. Joint Inference for Clinical Temporal Information Extraction
2016
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
We submitted two systems to the SemEval-2016 Task 12: Clinical TempEval challenge, participating in Phase 1, where we identified text spans of time and event expressions in clinical notes, and Phase 2, ...
For temporal entity extraction, we find that a joint inference-based approach using structured prediction outperforms a vanilla recurrent neural network that incorporates word embeddings trained on a variety ...
We treat Phase 1 as a sequence labeling task and examine several models for labeling entities. ...
doi:10.18653/v1/s16-1198
dblp:conf/semeval/Fries16
fatcat:lwoeswychrgwdf6kt77kzgbiwq
Brundlefly at SemEval-2016 Task 12: Recurrent Neural Networks vs. Joint Inference for Clinical Temporal Information Extraction
[article]
2016
arXiv
pre-print
We submitted two systems to the SemEval-2016 Task 12: Clinical TempEval challenge, participating in Phase 1, where we identified text spans of time and event expressions in clinical notes, and Phase 2, ...
For temporal entity extraction, we find that a joint inference-based approach using structured prediction outperforms a vanilla recurrent neural network that incorporates word embeddings trained on a variety ...
We treat Phase 1 as a sequence labeling task and examine several models for labeling entities. ...
arXiv:1606.01433v1
fatcat:yu3ve4s375amlfif4aku4w73k4
Natural language processing and recurrent network models for identifying genomic mutation-associated cancer treatment change from patient progress notes
2019
JAMIA Open
NLP and RNN-based text mining solutions have demonstrated advantages in information retrieval and document classification tasks for unstructured clinical progress notes. ...
We obtained 5889 deidentified progress reports (2439 words on average) for 755 cancer patients who had undergone clinical next-generation sequencing (NGS) testing in Wake Forest Baptist Comprehensive ...
ACKNOWLEDGMENTS: We thank all the study patients whose data have been used for the study. We thank the investigators and staff for their contributions to data collection, management, and data analysis. ...
doi:10.1093/jamiaopen/ooy061
pmid:30944913
pmcid:PMC6435007
fatcat:rnp6hkucdveivh6i7wcyfwkll4
End-to-End Models to Imitate Traditional Chinese Medicine Syndrome Differentiation in Lung Cancer Diagnosis: Model Development and Validation
2020
JMIR Medical Informatics
The mean average precision for the word encoding–based RCNN was 10% higher than that of the character encoding–based representation. ...
With the aid of entity-level representation, data augmentation, and model fusion, deep learning–based multilabel classification approaches can better imitate TCM syndrome differentiation in complex cases ...
Acknowledgments: The research in this paper was supported by the National Science ...
doi:10.2196/17821
pmid:32543445
fatcat:xcdwugqidrb2zelbkh4z2xn24a
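The abstract above concerns multilabel prediction of TCM syndromes from clinical records, where one case can carry several labels at once. The sketch below shows only the core multilabel setup, independent sigmoid outputs trained with binary cross-entropy on top of a simple recurrent encoder; names and sizes are illustrative, and it omits the entity-level representation, data augmentation, and model fusion the paper describes.

```python
import torch
import torch.nn as nn

class MultiLabelRecurrentClassifier(nn.Module):
    """Recurrent text encoder with one sigmoid output per label, so a single
    record can be assigned several syndrome labels (illustrative sketch)."""

    def __init__(self, vocab_size, num_labels, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):                   # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))      # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = h.max(dim=1)                    # max-pool over time
        return self.head(pooled)                    # one logit per label

# Hypothetical usage with made-up vocabulary and label counts.
model = MultiLabelRecurrentClassifier(vocab_size=8000, num_labels=12)
logits = model(torch.randint(1, 8000, (4, 50)))
targets = torch.randint(0, 2, (4, 12)).float()      # multi-hot label vectors
loss = nn.BCEWithLogitsLoss()(logits, targets)      # element-wise binary cross-entropy
```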
Semi-Supervised Joint Learning for Longitudinal Clinical Events Classification Using Neural Network Models
2020
Stat
Specifically, our model consists of a sequence generative model and a label prediction model, and the two parts are learned end to end using both labeled and unlabeled data in a joint manner to obtain ...
Our model consists of two parts: a sequence generative network for modeling longitudinal clinical events and a label prediction network which takes the hidden feature representation of the sequence generative ...
doi:10.1002/sta4.305
fatcat:2hasjaot3rdmrlqocjaxuh2hg4
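The snippets above outline the paper's two-part design: a sequence generative model over longitudinal clinical events and a label prediction model, trained jointly on labeled and unlabeled data. One minimal way to express that kind of joint objective, assuming an RNN next-event model as the generative part and using illustrative names throughout, is sketched below.

```python
import torch
import torch.nn as nn

class JointSemiSupervisedModel(nn.Module):
    """Sketch of joint semi-supervised training: an RNN next-event model covers
    all sequences (labeled or not), and a classifier reads the same hidden
    state where labels exist. Names and sizes are illustrative."""

    def __init__(self, num_event_types, num_classes, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, num_event_types)   # next-event prediction head
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, events):                       # (batch, seq_len) event codes
        h, last = self.encoder(self.embed(events))   # h: (batch, seq_len, hidden_dim)
        recon_logits = self.decoder(h[:, :-1])       # predict event t+1 from the prefix
        class_logits = self.classifier(last.squeeze(0))
        return recon_logits, class_logits

def joint_loss(model, events, labels=None, lam=0.5):
    recon_logits, class_logits = model(events)
    # Generative (sequence modeling) term is computed for every sequence.
    recon = nn.functional.cross_entropy(
        recon_logits.reshape(-1, recon_logits.size(-1)), events[:, 1:].reshape(-1))
    if labels is None:                               # unlabeled batch
        return recon
    # Labeled batch adds the supervised prediction term.
    return recon + lam * nn.functional.cross_entropy(class_logits, labels)

# Hypothetical usage mixing labeled and unlabeled batches.
model = JointSemiSupervisedModel(num_event_types=300, num_classes=2)
unlabeled = torch.randint(0, 300, (8, 20))
labeled, y = torch.randint(0, 300, (4, 20)), torch.randint(0, 2, (4,))
loss = joint_loss(model, unlabeled) + joint_loss(model, labeled, y)
```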
Semi-supervised Learning for Information Extraction from Dialogue
2018
Interspeech 2018
We present a method for leveraging the unlabeled data to learn a better model than could be learned from the labeled data alone. ...
First, a recurrent neural network (RNN) encoder-decoder is trained on the task of predicting nearby turns on the full dialogue corpus; next, the RNN encoder is reused as a feature representation for the ...
The authors would like to thank Izhak Shafran, Patrick Nguyen, and Yonghui Wu for helpful discussions and feedback; Chris Co, Nina Gonzalez, Michael Pearson and Jack Po for contributions to data and labeling ...
doi:10.21437/interspeech.2018-1318
dblp:conf/interspeech/Kannan0JR18
fatcat:eux5tchsbnawnej2v45svonoia
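The entry above describes a two-stage recipe: pretrain an RNN encoder-decoder on the unlabeled dialogue corpus by predicting nearby turns, then reuse the encoder as a feature extractor for the supervised extraction task. The sketch below captures that pattern with illustrative sizes and a generic next-turn objective; it is not the authors' code.

```python
import torch
import torch.nn as nn

# Illustrative sizes for a pretrain-then-reuse sketch (not the paper's values).
VOCAB, EMB, HID = 10000, 64, 128

embed = nn.Embedding(VOCAB, EMB)
encoder = nn.GRU(EMB, HID, batch_first=True)
decoder = nn.GRU(EMB, HID, batch_first=True)
out_head = nn.Linear(HID, VOCAB)

def pretrain_step(cur_turn, next_turn):
    """Unsupervised step: encode the current turn, decode the following turn."""
    _, h = encoder(embed(cur_turn))                      # h: (1, batch, HID)
    dec_out, _ = decoder(embed(next_turn[:, :-1]), h)    # teacher forcing
    logits = out_head(dec_out)
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), next_turn[:, 1:].reshape(-1))

def encode_features(turn):
    """Supervised stage: the pretrained encoder state becomes the feature vector."""
    _, h = encoder(embed(turn))
    return h.squeeze(0)                                  # (batch, HID)

# Hypothetical downstream use: a small classifier on top of the reused encoder.
classifier = nn.Linear(HID, 2)
logits = classifier(encode_features(torch.randint(0, VOCAB, (4, 15))))
```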
Chief complaint classification with recurrent neural networks
[article]
2018
arXiv
pre-print
In all instances, the RNN models outperformed the bag-of-word classifiers, suggesting deep learning models could substantially improve the automatic classification of unstructured text for syndromic surveillance ...
For example, the GRU model predicts alcohol-related disorders from chief complaints well (F1=78.91) but predicts influenza poorly (F1=14.80). ...
Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. ...
arXiv:1805.07574v2
fatcat:tq3brexxkjbaliex4x5spz6j6y
Multimodal Learning for Cardiovascular Risk Prediction using EHR Data
[article]
2020
arXiv
pre-print
To exploit the potential information captured in EHRs, in this study we propose a multimodal recurrent neural network model for cardiovascular risk prediction that integrates both medical texts and structured ...
Various machine learning approaches have been developed to employ information in EHRs for risk prediction. ...
Acknowledgments The authors would like to thank Erik-Jan van Kesteren for his comments. ...
arXiv:2008.11979v1
fatcat:4qgn4jtuxncihboeuca3wtxj7q
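The abstract above proposes a multimodal RNN that combines free-text notes with structured EHR variables for cardiovascular risk prediction. One common way to realize such a fusion, sketched below with made-up names and dimensions rather than the paper's actual architecture, is to concatenate an RNN summary of the note with the structured feature vector before the prediction head.

```python
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    """Illustrative fusion of a free-text note and structured EHR features:
    an RNN summarizes the note, and its summary vector is concatenated with
    the structured variables before a small risk-prediction head."""

    def __init__(self, vocab_size, num_structured, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + num_structured, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                       # single risk logit
        )

    def forward(self, note_tokens, structured):     # (batch, seq_len), (batch, num_structured)
        _, h = self.text_rnn(self.embed(note_tokens))
        fused = torch.cat([h.squeeze(0), structured], dim=1)
        return self.head(fused)

# Hypothetical usage with random inputs standing in for a note and lab/vital features.
model = MultimodalRiskModel(vocab_size=20000, num_structured=30)
risk_logit = model(torch.randint(1, 20000, (2, 100)), torch.randn(2, 30))
```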
Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review
2018
JAMIA: Journal of the American Medical Informatics Association
We discuss them in detail, including data and label availability, the interpretability and transparency of the model, and ease of deployment. Review has grown for two reasons. ...
., longitudinal event sequences and continuous monitoring data) are available in healthcare and enable training of complex deep learning models. ...
Sequential prediction of clinical events refers to predicting future clinical events based on past longitudinal event sequences. ...
doi:10.1093/jamia/ocy068
pmid:29893864
fatcat:ne7weiw7xvc2lp7hfgkzltdnri
Clinical big data and deep learning: Applications, challenges, and future outlooks
2019
Big Data Mining and Analytics
Although there are challenges involved in applying deep learning techniques to clinical data, it is still worthwhile to look forward to a promising future for deep learning applications in clinical big ...
In recent years, as a powerful technique for big data, deep learning has gained a central position in machine learning circles for its great advantages in feature representation and pattern recognition ...
For medical texts, CNN can be used to predict diagnosis codes from clinical notes [48]. ...
doi:10.26599/bdma.2019.9020007
dblp:journals/bigdatama/YuLLLW19
fatcat:72fi4naporetvlq4unvlypbzne
Neural Natural Language Processing for Unstructured Data in Electronic Health Records: a Review
[article]
2021
arXiv
pre-print
Well over half of the information stored within EHRs is in the form of unstructured text (e.g. provider notes, operation reports) and remains largely untapped for secondary use. ...
In this survey paper, we summarize current neural NLP methods for EHR applications. ...
Such RNN-based models have been widely applied in tasks including text classification [138] and language understanding [261]. 2.1.4 Sequence-to-sequence models. ...
arXiv:2107.02975v1
fatcat:nayhw7gadfdzrovycdkvzy75pi
Effective Representations of Clinical Notes
[article]
2018
arXiv
pre-print
They are very high-dimensional and sparse, and have complex structure. Furthermore, training data is often scarce because it is expensive to obtain reliable labels for many clinical events. ...
We used the learned representations, along with commonly used bag of words and topic model representations, as features for predictive models of clinical events. ...
Discussion: We have demonstrated that learned representations of clinical text can be effective for predicting clinical events in scenarios where labeled data for supervised learning is scarce. ...
arXiv:1705.07025v3
fatcat:ffsbtdpli5c3nojbecxau34234
Showing results 1 — 15 out of 2,068 results