3,769 Hits in 2.2 sec

Modelling Semantic Categories Using Conceptual Neighborhood

Zied Bouraoui, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region-based representations.  ...  In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g. fruit and vegetable).  ...  Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.  ...
doi:10.1609/aaai.v34i05.6241 fatcat:e5otvh3ewze77k4coa2casmjda

Modelling Semantic Categories using Conceptual Neighborhood [article]

Zied Bouraoui, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert
2019 arXiv   pre-print
We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region-based representations.  ...  In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g. fruit and vegetable).  ...  Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.  ...
arXiv:1912.01220v1 fatcat:nwy4jiz4vbbl3h2xltkkk7glgq

IsoScore: Measuring the Uniformity of Vector Space Utilization [article]

William Rudman, Nate Gillman, Taylor Rayne, Carsten Eickhoff
2021 arXiv   pre-print
Current metrics suggest that contextualized word embedding models do not uniformly utilize all dimensions when embedding tokens in vector space.  ...  Furthermore, IsoScore is conceptually intuitive and computationally efficient, making it well suited for analyzing the distribution of point clouds in arbitrary vector spaces, not necessarily limited to  ...  ISOTROPY IN CONTEXTUALIZED EMBEDDINGS Recent literature suggests that contextualized word embeddings are anisotropic.  ... 
arXiv:2108.07344v1 fatcat:nruyn5koxfho7c32ck2ridfpaq
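
The anisotropy claim in the IsoScore entry can be illustrated with a much simpler diagnostic than IsoScore itself: average pairwise cosine similarity, which sits near zero for an isotropic point cloud and near one when all vectors occupy a narrow cone. A minimal sketch (this is an assumed illustrative proxy, not the paper's metric):

```python
import math
import random

def avg_pairwise_cosine(vectors):
    """Mean cosine similarity over all vector pairs.

    Values near 0 suggest directions are spread isotropically;
    values near 1 suggest the cloud occupies a narrow cone (anisotropy).
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    n = len(vectors)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)

rng = random.Random(0)
# roughly isotropic cloud: zero-mean Gaussian coordinates
iso = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(200)]
# anisotropic cloud: every vector shares a large common offset
aniso = [[rng.gauss(0, 1) + 5.0 for _ in range(50)] for _ in range(200)]
```

Contextualized embeddings reported as anisotropic in the literature behave like the second cloud: a shared dominant direction drives pairwise cosines toward 1.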

CODER: Knowledge infused cross-lingual medical term embedding for term normalization [article]

Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu
2021 arXiv   pre-print
embeddings, and contextual embeddings.  ...  Training with relations injects medical knowledge into embeddings and aims to provide potentially better machine learning features.  ...  [56] proposed Conceptual-Contextual embedding trained on both biomedical corpora and UMLS relations. Contextual embeddings can generate different embeddings according to the context.  ... 
arXiv:2011.02947v3 fatcat:lyecgcxewzghhgsm3ujsy74o3u

Predicting Detection Events from Bayesian Scene Recognition [chapter]

Georg Ogris, Lucas Paletta
2003 Lecture Notes in Computer Science  
Objects of interest are embedded in their visual context, i.e., in visual events within their spatial neighborhood.  ...  This work is conceptually based on psychological findings in human perception that highlight the utility of scene interpretation in object detection processes.  ...  of the center of the predicted detection event, and ±σ_β designates an angle interval so that the detection event is completely embedded within (Fig. 3f).  ...
doi:10.1007/3-540-45103-x_139 fatcat:n5vu2d7l6vcmdgot35oku5p2ce

CoKE: Contextualized Knowledge Graph Embedding [article]

Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu
2020 arXiv   pre-print
This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation  ...  Previous methods allow a single static embedding for each entity or relation, ignoring their intrinsic contextual nature, i.e., entities and relations may appear in different graph contexts, and accordingly  ...  As future work, we would like to (1) Generalize CoKE to other types of graph contexts beyond edges and paths, e.g., subgraphs of arbitrary forms. (2) Apply CoKE to more downstream tasks, not only those  ... 
arXiv:1911.02168v2 fatcat:p27uz6lk7bepla44e4snltoae4

Interpretable Time-Budget-Constrained Contextualization for Re-Ranking [article]

Sebastian Hofstätter, Markus Zlabinger, Allan Hanbury
2020 arXiv   pre-print
TK employs a very small number of Transformer layers (up to three) to contextualize query and document word embeddings.  ...  To utilize this property, we propose TK (Transformer-Kernel): a neural re-ranking model for ad-hoc search using an efficient contextualization mechanism.  ...  Acknowledgements This work has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 822670.  ... 
arXiv:2002.01854v1 fatcat:m7hmwk3vxrecdawgutpwmcrhhu

Dissociation of Neural Mechanisms Underlying Orientation Processing in Humans

Sam Ling, Joel Pearson, Randolph Blake
2009 Current Biology  
contextual effects.  ...  Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and the discovery of that property has dramatically shaped how we conceptualize visual processing [1] [2]  ...  and Patrick Henry and Jurnell Cockhren for technical assistance.  ... 
doi:10.1016/j.cub.2009.06.069 pmid:19682905 pmcid:PMC2763058 fatcat:jrb3rl55qrf3rpaxomh7bplsoa

Leveraging Concept-Enhanced Pre-Training Model and Masked-Entity Language Model for Named Entity Disambiguation

Zizheng Ji, Lin Dai, Jin Pang, Tingting Shen
2020 IEEE Access  
(ii) masked entity language model, aiming to train the contextualized embedding by predicting randomly masked entities based on words and non-masked entities in the given input-text.  ...  Therefore, the proposed pre-training NED model could merge the advantage of the pre-training mechanism for generating contextualized embeddings with the superiority of lexical knowledge (e.g., concept knowledge  ...  The GELU (Gaussian Error Linear Unit) activation function is adopted in Eq. (2).  ...
doi:10.1109/access.2020.2994247 fatcat:m5e2h3w5gnat3imgre6e5swx5e
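
The GELU activation mentioned in the entry above has a standard closed form, x·Φ(x) with Φ the standard normal CDF, plus a widely used tanh approximation. A minimal sketch of both (the function names here are illustrative, not from the paper):

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Common tanh approximation of GELU, as used in BERT-style models:
    0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```

For moderate inputs the two forms agree to several decimal places, which is why the approximation is often substituted for the exact CDF in transformer implementations.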

Visual Surround Suppression in Schizophrenia

Marc S. Tibber, Elaine J. Anderson, Tracy Bobin, Elena Antonova, Alice Seabright, Bernice Wright, Patricia Carlin, Sukhwinder S. Shergill, Steven C. Dakin
2013 Frontiers in Psychology  
To examine the generality of this phenomenon we measured the ability of 24 individuals with SZ to judge the luminance, contrast, orientation, and size of targets embedded in contextual surrounds that would  ...  FIGURE 1 | Stimuli used to measure surround suppression for judgments of (A) luminance, (B) contrast, (C) orientation, and (D) size.  ...  scores for item P2 on the PANSS test, "conceptual disorganization" (DIS).  ...
doi:10.3389/fpsyg.2013.00088 pmid:23450069 pmcid:PMC3584288 fatcat:th3p6qzv3vcypdt3zpqnzu7pku

ViCE: Self-Supervised Visual Concept Embeddings as Contextual and Pixel Appearance Invariant Semantic Representations [article]

Robin Karlsson, Tomoki Hayashi, Keisuke Fujii, Alexander Carballo, Kento Ohtani, Kazuya Takeda
2021 arXiv   pre-print
Additional contributions are regional contextual masking with nonuniform shapes matching visually coherent patches and complexity-based view sampling inspired by masked language models.  ...  Our method improves on prior work by generating more expressive embeddings and by being applicable for high-resolution images.  ...  The contextual supervisory signal for learning word embeddings in NLP has been mentioned before as a conceptual motivator for pretext tasks for self-supervised computer vision pretraining methods [34  ...
arXiv:2111.12460v1 fatcat:su2xqj77cnbhfhkmvjz3tcywim

Comparison of Bayesian and empirical ranking approaches to visual perception

Catherine Q. Howe, R. Beau Lotto, Dale Purves
2006 Journal of Theoretical Biology  
Here, we compare two different theoretical frameworks for predicting what observers actually see in response to visual stimuli: Bayesian decision theory and empirical ranking theory.  ...  Much current vision research is predicated on the idea (and a rapidly growing body of evidence) that visual percepts are generated according to the empirical significance of light stimuli rather than their  ...  and not necessarily in agreement with the arguments presented here.  ...
doi:10.1016/j.jtbi.2006.01.017 pmid:16537082 fatcat:bth6rk4lv5eujizy7s4x4c3qdu

Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models

Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, Lawrence Carin
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
In particular, a hierarchy of stochastic layers between the encoder and decoder networks is employed to abstract more informative and semantic-rich latent codes.  ...  In this paper, we propose to leverage several multi-level structures to learn a VAE model for generating long and coherent text.  ...  ., 2018), ml-VAE is conceptually simple and easy to implement. We evaluate ml-VAE on language modeling, unconditional and conditional text generation tasks.  ...
doi:10.18653/v1/p19-1200 dblp:conf/acl/ShenCZCWGC19 fatcat:2r3kjiit75grnkz47wsrujlety

Inducing and Embedding Senses with Scaled Gumbel Softmax [article]

Fenfei Guo, Mohit Iyyer, Jordan Boyd-Graber
2019 arXiv   pre-print
Our model produces sense embeddings that are competitive (and sometimes state of the art) on multiple similarity-based downstream evaluations.  ...  While many previous approaches perform well on downstream evaluations, they do not produce interpretable embeddings and learn duplicated sense groups; our method achieves the best of both worlds.  ...  et al. (2018) model word representations as Gaussian Mixture embeddings where each Gaussian component captures different senses; Lee and Chen  ...  Figure 3: Our hard attention mechanism is approximated  ...
arXiv:1804.08077v2 fatcat:opbjqhuxs5acfpgsu77fnx3nlm
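
The Gumbel softmax named in the title above is a standard reparameterization: add Gumbel noise g = -log(-log u), u ~ Uniform(0,1), to the logits and take a temperature-scaled softmax. A minimal sketch of the basic (unscaled) version; the paper's specific scaling is not reproduced here:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """One Gumbel-softmax sample: softmax((logits + Gumbel noise) / tau).

    Lower temperatures `tau` push the sample toward a one-hot vector,
    which is what makes the trick useful for differentiable discrete
    choices such as sense induction.
    """
    eps = 1e-20  # guards against log(0)
    noisy = [(l - math.log(-math.log(rng.random() + eps) + eps)) / tau
             for l in logits]
    # numerically stable softmax
    m = max(noisy)
    exps = [math.exp(v - m) for v in noisy]
    z = sum(exps)
    return [e / z for e in exps]
```

Usage: `gumbel_softmax([0.0, 1.0, 2.0], tau=0.5)` returns a length-3 probability vector; repeated draws concentrate mass on the high-logit entry more sharply as `tau` shrinks.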

A framework for proactive assistance: Summary

Alexandre Armand, David Filliat, Javier Ibanez-Guzman
2014 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC)  
Indeed, it is difficult for an embedded system to understand driving situations, and to predict early enough that a situation is about to become uncomfortable or dangerous.  ...  On one hand, an ontology, which is a conceptual description of entities present in driving spaces, is used to understand how all the perceived entities interact together with the subject vehicle, and govern  ...  Gaussian processes were chosen to learn the driver velocity profiles as described in [1].  ...
doi:10.1109/smc.2014.6973988 dblp:conf/smc/ArmandFG14 fatcat:mdlxcgidevcmppxg6q3w5a2lze
Showing results 1 — 15 out of 3,769 results