Word embedding for French natural language in healthcare: a comparative study (Preprint)

Emeric Dynomant, Romain Lelong, Badisse Dahamna, Clément Massonaud, Gaétan Kerdelhué, Julien Grosjean, Stéphane Canu, Stefan J Darmoni
2018 JMIR Medical Informatics  
Word embedding technologies, a set of language modeling and feature learning techniques in natural language processing (NLP), are now used in a wide range of applications. However, no formal evaluation and comparison have been made of the ability of the 3 most widely used unsupervised implementations (Word2Vec, GloVe, and FastText) to preserve the semantic similarities between words when trained on the same dataset. The aim of this study was to compare embedding models trained on a corpus of French health-related documents produced in a professional context. The best method will then help us develop a new semantic annotator. Unsupervised embedding models were trained on 641,279 documents originating from the Rouen University Hospital. These data are unstructured and cover a wide range of documents produced in a clinical setting (discharge summaries, procedure reports, and prescriptions). In total, 4 rated evaluation tasks were defined (cosine similarity, odd one out, analogy-based operations, and human formal evaluation) and applied to each model, alongside embedding visualization. Word2Vec had the highest score on 3 of the 4 rated tasks (analogy-based operations, odd one out, and human validation), particularly with the skip-gram architecture. Although this implementation best preserved semantic properties, each model has its own strengths and weaknesses, such as the very short training time of GloVe or the preservation of morphological similarity observed with FastText. The models and test sets produced by this study will be the first to be made publicly available through a graphical interface to help advance French biomedical research.
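As a minimal sketch of the kind of training setup the abstract describes (not the authors' actual pipeline), the following Python snippet trains skip-gram Word2Vec and FastText models with gensim (assuming gensim ≥ 4). GloVe has no gensim trainer; its vectors are typically produced with the reference C tool and loaded afterwards. The corpus path and hyperparameters are illustrative assumptions, not the study's settings.

```python
# Hypothetical training sketch; "rouen_corpus.txt" is an assumed file with
# one pre-tokenized French clinical document per line, not the study's data.
from gensim.models import Word2Vec, FastText
from gensim.models.word2vec import LineSentence

corpus = LineSentence("rouen_corpus.txt")

# sg=1 selects the skip-gram architecture, which scored best in the study
w2v = Word2Vec(corpus, vector_size=300, window=5, sg=1, min_count=5, workers=4)
ft = FastText(corpus, vector_size=300, window=5, sg=1, min_count=5, workers=4)

# Persist only the word vectors for downstream evaluation
w2v.wv.save("w2v.kv")
ft.wv.save("ft.kv")
```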
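Three of the four rated tasks (cosine similarity, odd one out, and analogy-based operations) map directly onto standard gensim queries. The sketch below illustrates them; the French medical terms are hypothetical examples, not the study's test sets.

```python
from gensim.models import KeyedVectors

wv = KeyedVectors.load("w2v.kv")  # vectors saved in the training sketch above

# Cosine similarity between two terms
print(wv.similarity("médecin", "docteur"))

# Odd one out: which word does not belong with the others?
print(wv.doesnt_match(["coeur", "poumon", "rein", "ordonnance"]))

# Analogy-based operation: "néphrologie" is to "rein" as "cardiologie" is to ?
print(wv.most_similar(positive=["rein", "cardiologie"],
                      negative=["néphrologie"], topn=1))
```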
doi:10.2196/12310 pmid:31359873 pmcid:PMC6690161