41,078 Hits in 6.6 sec

We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos [article]

Alex Andonian, Camilo Fosco, Mathew Monfort, Allen Lee, Rogerio Feris, Carl Vondrick, Aude Oliva
2020 arXiv   pre-print
We propose an approach for learning semantic relational set abstractions on videos, inspired by human learning.  ...  This allows our model to perform cognitive tasks such as set abstraction (which general concept is in common among a set of videos?), set completion (which new video goes well with the set?)  ...  Fig. 1 : Semantic Relational Set Abstraction and its Applications: We propose a paradigm to learn the commonalities between events in a set (the set abstraction) using a relational video model.  ... 
arXiv:2008.05596v1 fatcat:wte6tlsrsbazjiis3tqv3fj5rm

Machine Common Sense Concept Paper [article]

David Gunning
2018 arXiv   pre-print
Its absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general, human-like AI systems we would like to build in the future.  ...  Recent advances in machine learning have resulted in new AI capabilities, but in all of these applications, machine reasoning is narrow and highly specialized.  ...  Joshua Alspector for his expertise in deep learning and insights into its potential for achieving machine common sense; and Ms.  ... 
arXiv:1810.07528v1 fatcat:pntbtkr7tfglxbdznb4t3inhsm

The common aspect proof environment

Shmuel Katz, David Faitelson
2011 International Journal on Software Tools for Technology Transfer (STTT)  
We describe the goals, architecture, design considerations and use of the common aspect proof environment (CAPE).  ...  As one example, verification aspects are used to aid in the abstraction and specification needed for formal analysis in Java Pathfinder.  ...  We also wish to thank Shahar Dag, Eyal Dror, Wilke Havinga, Yael Kalachman, Emilia Katz, Ha Nguyen, Tom Staijen, and Nathan Weston for their help in developing the CAPE and its tools.  ... 
doi:10.1007/s10009-011-0191-0 fatcat:xcyxypjxcrgh5el5u7xuxy626u

Evolution of a common controller

D. Powell, D. Barbour, G. Gilbreath
2012 Unmanned Systems Technology XIV  
the Multi-robot Operator Control Unit (MOCU) to address interoperability, standardization, and customization issues by using a modular, extensible, and flexible architecture built upon a shared-world model  ...  Vehicles have navigation objects where we can set desired heading and speed. Camera objects allow one to set options like iris, shutter, and autofocus.  ... 
doi:10.1117/12.921347 fatcat:cy5k325b4bg6tcb5gzw6zyk63m

A Formal Analysis of Multimodal Referring Strategies Under Common Ground [article]

Nikhil Krishnaswamy, James Pustejovsky
2020 arXiv   pre-print
In doing so, we expose some striking formal semantic properties of the interactions between gesture and language, conditioned on the introduction of content into the common ground between the (computational  ...  In this paper, we present an analysis of computationally generated mixed-modality definite referring expressions using combinations of gesture and linguistic descriptions.  ...  Acknowledgments We would like to thank the reviewers for their helpful comments.  ... 
arXiv:2003.07385v1 fatcat:zjeus7tfurcmpidplosacotulq

Mapping a common geoscientific object model to heterogeneous spatial data repositories

Silvia Nittel, Jiong Yang, Richard R. Muntz
1996 Proceedings of the fourth ACM workshop on Advances in geographic information systems - GIS '96  
A large variety of different data sets are available in various specialized repositories, and users would like to access and manipulate these data sets in a uniform way. Additionally,  ...  Acknowledgements We sincerely acknowledge support from NASA EOS-DIS grant NAGW-4242, and we thank Edmond Mesrobian for his helpful comments on the paper.  ...  In summary, mapping a common spatial data model to a spatial repository introduces inaccuracy for the set of data types that have to be approximated.  ... 
doi:10.1145/258319.258335 dblp:conf/gis/NittelYM96 fatcat:7gbjnujmozhflchi47m3sasrgm

Mining common topics from multiple asynchronous text streams

Xiang Wang, Kai Zhang, Xiaoming Jin, Dou Shen
2009 Proceedings of the Second ACM International Conference on Web Search and Data Mining - WSDM '09  
In many applications, we face multiple text streams which are related to each other and share common topics.  ...  In this paper, we formally address this problem and put forward a novel algorithm based on the generative topic model.  ...  In contrast, in our work we aim to find topics that are common in semantics, while having asynchronous time distributions in different streams.  ... 
doi:10.1145/1498759.1498826 dblp:conf/wsdm/WangZJS09 fatcat:oulqqsxxfffnvalwhtb4t3rsaq

A common type system for clinical natural language processing

Stephen T Wu, Vinod C Kaggal, Dmitriy Dligach, James J Masanz, Pei Chen, Lee Becker, Wendy W Chapman, Guergana K Savova, Hongfang Liu, Christopher G Chute
2013 Journal of Biomedical Semantics  
Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings.  ...  Results: We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse  ...  Acknowledgements We thank Peter Szolovits, Lee Christensen, Scott Halgrim, Cheryl Clark, Jon Aberdeen, Arya Tafvizi, Ken Burford, and Jay Doughty.  ... 
doi:10.1186/2041-1480-4-1 pmid:23286462 pmcid:PMC3575354 fatcat:4zvqlql3wfh2vmombjmcp76dsq

Narrating machines and interactive matrices: a semiotic common ground for game studies

Gabriele Ferri
2007 Conference of the Digital Games Research Association  
diagrams and other recent proposals in semantics of perception.  ...  Between playing a game and enjoying a narration there is a semiotic and semantic common ground: interpretation and meaning-making.  ...  ACKNOWLEDGEMENTS I owe Patrizia Violi and Claudio Paolucci at the University of Bologna much gratitude for their patience and support.  ... 
dblp:conf/digra/Ferri07 fatcat:5c4tlmmpivfedj4pfvqn7674by

Argumentation mining: How can a machine acquire common sense and world knowledge?

Marie-Francine Moens
2018 Argument & Computation  
Then we go deeper into the new field of representation learning that nowadays is very much studied in computational linguistics.  ...  In this article we focus on how the machine can automatically acquire the needed common sense and world knowledge.  ...  Deep (or not so deep) learning models can model context and can be helpful in the acquisition of world knowledge, but we have little experience in capturing such knowledge into reusable representations  ... 
doi:10.3233/aac-170025 fatcat:jiifm7vupncbvb2ovnuwjyd22y

Cross-modal Common Representation Learning by Hybrid Transfer Network [article]

Xin Huang, Yuxin Peng, Mingkuan Yuan
2017 arXiv   pre-print
Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.  ...  Knowledge in the source domain cannot be directly transferred to the two different modalities in the target domain, and the inherent cross-modal correlation contained in the target domain provides key hints for cross-modal  ...  Specifically, we first obtain the common representation for the images and text in the test set with CHTN and all the compared methods.  ... 
arXiv:1706.00153v2 fatcat:mrbbnvji3zdyjafae6mxmcg4ni

A smarter knowledge commons for smart learning

Penelope J. Lister
2018 Smart Learning Environments  
This paper takes the form of a discussion relating to a smarter knowledge commons, having come about due to implications arising from research into the development of a pragmatic pedagogical 'guide to  ...  Considering the future development and pedagogies of open-access smart learning environments, we must ask how the knowledge commons, an integral part of this learning, can become 'smarter' for learning  ...  So the educator sees 'knowledge-push' models in terms of prescribed learning content (e.g. set reading), while 'knowledge-pull' models are seen as personalised and emergent.  ... 
doi:10.1186/s40561-018-0056-z fatcat:7bie5prd6nad5pklrwvtoxzepa

Common spaces: multi-modal-media ecosystem for live performances

Luís Leite et al.
2018 MatLit : Materialidades da Literatura  
Big Data has become an integral part of human cultural behavior, but how can we deal with so much information?  ...  With this concept in mind we have imagined four distinct interaction scenarios, relating the performative space with the media spaces where texts were integrated and manipulated.  ... 
doi:10.14195/2182-8830_6-1_13 fatcat:2dm6qxgewffd5czb6efbydve5y

Semantic constraints to represent common sense required in household actions for multi-modal Learning-from-observation robot [article]

Katsushi Ikeuchi, Naoki Wake, Riku Arakawa, Kazuhiro Sasabuchi, Jun Takamatsu
2021 arXiv   pre-print
In order to extend this paradigm to the household domain, which consists of non-observable constraints derived from a human's common sense, we introduce the idea of semantic constraints.  ...  We then apply our constraint representation to analyze various actions in top-hit household YouTube videos and real home cooking recordings.  ...  We appreciate their providing miso soup and beef stew making videos.  ... 
arXiv:2103.02201v1 fatcat:2fj4hllv3nbhpohp5oaaevtigu

Cross-modal Common Representation Learning by Hybrid Transfer Network

Xin Huang, Yuxin Peng, Mingkuan Yuan
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.  ...  Knowledge in the source domain cannot be directly transferred to the two different modalities in the target domain, and the inherent cross-modal correlation contained in the target domain provides key hints for cross-modal  ...  Specifically, we first obtain the common representation for the images and text in the test set with CHTN and all the compared methods.  ... 
doi:10.24963/ijcai.2017/263 dblp:conf/ijcai/HuangPY17 fatcat:c2u5yzhbcrb7nlwuwuhbbgsnwi
Showing results 1 — 15 of 41,078