2,833 Hits in 4.9 sec

Context-Aware Mixed Reality: A Framework for Ubiquitous Interaction [article]

Long Chen, Wen Tang, Nigel John, Tao Ruan Wan, Jian Jun Zhang
2018 arXiv   pre-print
Our key insight is that building semantic understanding in MR not only greatly enhances user experience through object-specific behaviours, but also paves the way for solving complex interaction design  ...  We demonstrate our approach with a material-aware prototype system for generating context-aware physical interactions between the real and the virtual objects.  ...  Figure 7 shows that the back layer of the interface displays the video stream from the RGB-D camera; a semantic interaction 3D model sits in front of the video layer for handling interactions of different  ...
arXiv:1803.05541v1 fatcat:p6rf7crsprhtdmoslnecyeyp64
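As an editor's illustration of the material-aware idea in this entry (a minimal sketch, not the authors' implementation; the material classes and parameter values are invented), per-pixel semantic labels from the RGB-D stream can index a table of material-specific contact parameters:

```python
# Hypothetical sketch: map per-pixel semantic labels from an RGB-D
# frame to material-specific physics parameters for virtual contact.
import numpy as np

# Invented material classes and contact parameters.
MATERIALS = {
    "wood":  {"friction": 0.50, "restitution": 0.35},
    "metal": {"friction": 0.20, "restitution": 0.60},
    "cloth": {"friction": 0.80, "restitution": 0.05},
}
LABELS = list(MATERIALS)

def contact_params(semantic_map: np.ndarray, u: int, v: int) -> dict:
    """Physics parameters at the pixel a virtual object touches."""
    return MATERIALS[LABELS[semantic_map[v, u]]]

# Stand-in for a per-pixel segmentation of a 640x480 RGB-D frame.
seg = np.random.randint(0, len(LABELS), size=(480, 640))
print(contact_params(seg, u=320, v=240))
```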

An Intelligent Surveillance Platform for Large Metropolitan Areas with Dense Sensor Deployment

Jorge Fernández, Lorena Calavia, Carlos Baladrón, Javier Aguiar, Belén Carro, Antonio Sánchez-Esguevillas, Jesus Alonso-López, Zeev Smilansky
2013 Sensors  
The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and the distribution of alarms and video streams to the emergency teams.  ...  The resulting surveillance system is well suited for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection  ...  The authors would like to thank the Companies C-B4 and C Tech for their valuable collaboration in this paper and in the HuSIMS project.  ...
doi:10.3390/s130607414 pmid:23748169 pmcid:PMC3715256 fatcat:o5qggo6i4zf4nh5tmmw7i2hz3m

Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams [article]

Dave Braines, Federico Cerutti, Marc Roig Vilamala, Mani Srivastava, Lance Kaplan, Alun Preece, Gavin Pearson
2020 arXiv   pre-print
In this paper we describe the initial steps towards this human-agent knowledge fusion (HAKF) environment through a recap of the key requirements, and an explanation of how these can be fulfilled for an  ...  We show how HAKF has the potential to bring value to both human and machine agents working as part of a distributed coalition team in a complex event processing setting with uncertain sources.  ...  The U.S. and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.  ... 
arXiv:2010.12327v1 fatcat:jukoa5qcbvb5bkx2zymiybttxa

A research port test bed based on distributed optical sensors and sensor fusion framework for ad hoc situational awareness

Nick Rüssmeier, Axel Hahn, Daniela Nicklas, Oliver Zielinski
2017 Journal of Sensors and Sensor Systems  
Maritime study sites utilized as a physical experimental test bed for sensor data fusion, communication technology and data stream analysis tools can provide substantial frameworks for design  ...  for further analysis and (iii) reuse of data, e.g. for training or testing of assistant systems.  ...  The authors are grateful for the help of the ICBM workshop in setting up parts of the sensor system as well as the Institute for Applied Photogrammetry and Geoinformatics (IAPG), Oldenburg, for support  ...
doi:10.5194/jsss-6-37-2017 fatcat:pzn26hcgzbetzha6amaptwd5da

Going Deeper: Autonomous Steering with Neural Memory Networks

Tharindu Fernando, Simon Denman, Sridha Sridharan, Clinton Fookes
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
Furthermore, this work investigates optimal feature fusion techniques to combine these multimodal information sources without discarding the vital information that they offer.  ...  The hierarchical fusion structure grants the opportunity for deep, layer-wise fusion of the salient features, allowing the model to pay careful attention to different levels of abstraction.  ...
doi:10.1109/iccvw.2017.34 dblp:conf/iccvw/FernandoDSF17 fatcat:zuqha373pvcutmrcsfn7ou2gse

A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets

Khaled Bayoudh, Raja Knani, Fayçal Hamdaoui, Abdellatif Mtibaa
2021 The Visual Computer  
The growing potential of multimodal data streams and deep learning algorithms has contributed to the increasing universality of deep multimodal learning.  ...  In particular, we summarize six perspectives from the current literature on deep multimodal learning, namely: multimodal data representation, multimodal fusion (i.e., both traditional and deep learning-based  ...  Autonomous systems: Up to now, deep learning has proven to be a powerful tool for generating multimodal data suitable for robotics and autonomous systems [146].  ...
doi:10.1007/s00371-021-02166-7 pmid:34131356 pmcid:PMC8192112 fatcat:jojwyc6slnevzk7eaiutlmlgfe
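The two fusion families this survey contrasts can be shown in a few lines; the sketch below is a schematic editor's illustration with arbitrary dimensions and random classifiers, not code from the paper:

```python
# Sketch: feature-level (early) vs decision-level (late) fusion of
# two modality embeddings, using random linear classifiers.
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.normal(size=128)  # e.g. an image embedding
aud_feat = rng.normal(size=64)   # e.g. an audio embedding

# Early fusion: concatenate features, then apply a single classifier.
W_early = rng.normal(size=(10, 128 + 64))
early_scores = W_early @ np.concatenate([img_feat, aud_feat])

# Late fusion: classify each modality separately, then average scores.
W_img = rng.normal(size=(10, 128))
W_aud = rng.normal(size=(10, 64))
late_scores = 0.5 * (W_img @ img_feat) + 0.5 * (W_aud @ aud_feat)

print(early_scores.argmax(), late_scores.argmax())
```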

Audiovisual Information Fusion in Human–Computer Interfaces and Intelligent Environments: A Survey

Shankar T. Shivappa, Mohan Manubhai Trivedi, Bhaskar D. Rao
2010 Proceedings of the IEEE  
The fusion strategy used tends to depend mainly on the model, probabilistic or otherwise, used in the particular task to process sensory information into higher-level semantic information.  ...  The human brain processes the audio and video modalities, extracting complementary and robust information from them.  ...  We sincerely thank the reviewers for their valuable advice, which has helped us enhance the content as well as the presentation of the paper.  ...
doi:10.1109/jproc.2010.2057231 fatcat:lfzgfmn2hjdq7h6o5txva3oapq
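A common model-agnostic instance of the fusion strategies this survey discusses is reliability-weighted late fusion. The sketch below is an editor's illustration with made-up posteriors and weights; it combines per-modality class posteriors log-linearly:

```python
# Sketch: combine audio and video class posteriors with reliability
# weights, e.g. down-weighting audio under acoustic noise.
import numpy as np

def fuse(posteriors, weights):
    """Weighted log-linear combination, renormalized to a distribution."""
    log_p = sum(w * np.log(posteriors[m] + 1e-12) for m, w in weights.items())
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

audio = np.array([0.6, 0.3, 0.1])  # posterior over 3 classes from audio
video = np.array([0.2, 0.5, 0.3])  # posterior over 3 classes from video
print(fuse({"audio": audio, "video": video}, {"audio": 0.3, "video": 0.7}))
```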

Context‐Aware Mixed Reality: A Learning‐Based Framework for Semantic‐Level Interaction

L. Chen, W. Tang, N. W. John, T. R. Wan, J. J. Zhang
2019 Computer graphics forum (Print)  
Our key insight is that by building semantic understanding in MR, we can develop a system that not only greatly enhances user experience through object-specific behaviours, but also paves the way for  ...  Mixed reality (MR) is a powerful interactive technology for new types of user experience.  ...  Figure 9 shows that the back layer of the interface displays the video stream from an RGB-D camera; a semantic interaction 3D model sits in front of the video layer for handling interactions of different  ...
doi:10.1111/cgf.13887 fatcat:6ennelqspreudiodpmbbsmv5vq

DANS

Gisik Kwon, K. Selçuk Candan
2006 Proceedings of the 14th annual ACM international conference on Multimedia - MULTIMEDIA '06  
Furthermore, physical workflow nodes (operator instances) are able to locate and select the next filter or fusion operator instance autonomously, while ensuring the correct execution of the workflow.  ...  In this paper, we propose a novel decentralized multimedia workflow processing system, DANS, in which operators defined in workflows are mapped into (distributed) physical nodes through Distributed Hash  ...  Filter and fusion operators provide analysis, aggregation, and filtering semantics.  ... 
doi:10.1145/1180639.1180755 dblp:conf/mm/KwonC06 fatcat:ht4fwgxkbrahzl6und7zlqimea
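The snippet's idea of mapping workflow operators onto physical nodes through a distributed hash table can be sketched with a consistent-hash ring. This is an editor's simplification, not DANS's actual protocol; the node and operator names are invented:

```python
# Sketch: DHT-style placement of filter/fusion operator instances
# on physical nodes using a consistent-hash ring with virtual nodes.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=64):
        self.ring = sorted((h(f"{n}#{i}"), n)
                           for n in nodes for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, key: str) -> str:
        """First ring position clockwise of the key's hash."""
        idx = bisect.bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
for op in ["filter:face-detect", "fusion:audio-video", "filter:motion"]:
    print(op, "->", ring.node_for(op))
```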

Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues

Alexander R. T. Gepperth, Sven Rebhan, Stephan Hasler, Jannik Fritsch
2011 Cognitive Computation  
Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional  ...  In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos.  ...  video streams I-V by ROC-like plots.  ... 
doi:10.1007/s12559-010-9092-x pmid:21475682 pmcid:PMC3059758 fatcat:4l4flbtnw5bfvdk2usr7x22kqm

Meetings, gatherings, and events in smart environments

Anton Nijholt
2004 Proceedings of the 2004 ACM SIGGRAPH international conference on Virtual Reality continuum and its applications in industry - VRCAI '04  
This may lead to situations where differences between real, human-controlled, and (semi-)autonomous virtual participants disappear.  ...  We survey our research on smart meeting rooms and its relevance for augmented reality meeting support and virtual reality generation of meetings in real-time or off-line.  ...  Higher-level fusion, where semantic modeling of verbal and nonverbal utterances is taken into account, has not been done yet.  ...
doi:10.1145/1044588.1044636 dblp:conf/vrcai/Nijholt04 fatcat:j5taauuqqbfdpcsae4z4k2ywge

Accurate Single-Stream Action Detection in Real-Time

Yu Liu, Fan Yang, Dominique Ginhac
2019 Proceedings of the 13th International Conference on Distributed Smart Cameras - ICDSC 2019  
Analyzing videos of human actions involves understanding the spatial and temporal context of the scenes.  ...  However, most of them operate in a non-real-time, offline fashion and are thus ill-suited to many emerging real-world scenarios such as autonomous driving and public surveillance.  ...  Under this framework, two CNNs, one for the spatial stream (e.g., RGB images) and the other for the temporal stream (typically optical flows), run separately, followed by a fusion step.  ...
doi:10.1145/3349801.3349821 dblp:conf/icdsc/Liu0G19 fatcat:3dtanwzunjcg7egrr52l47k3ry
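The classic two-stream layout the snippet describes (a spatial CNN on RGB, a temporal CNN on stacked optical flow, then a fusion step) looks roughly like the sketch below; the tiny backbone, input sizes, and class count are stand-ins, not the paper's actual network:

```python
# Sketch: two-stream action recognition with late score fusion.
import torch
import torch.nn as nn

class TinyStreamCNN(nn.Module):
    """Stand-in backbone; a real system would use e.g. a ResNet."""
    def __init__(self, in_channels: int, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

spatial = TinyStreamCNN(in_channels=3)       # RGB frame
temporal = TinyStreamCNN(in_channels=2 * 5)  # 5 stacked flow fields (dx, dy)

rgb = torch.randn(1, 3, 112, 112)
flow = torch.randn(1, 10, 112, 112)

# Fusion step: average the per-stream class scores.
scores = (spatial(rgb) + temporal(flow)) / 2
print(scores.argmax(dim=1))
```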

Semantic Support For Hypothesis-Based Research From Smart Environment Monitoring And Analysis Technologies

T. S. Myers, J. Trevathan
2013 Zenodo  
Currently, there are developments in the Semantic Sensor Web community to explore efficient methods for reuse, correlation and integration of web-based data sets and live data streams.  ...  A framework is presented for how the data fusion concepts from the Semantic Reef architecture map to the Smart Environment Monitoring and Analysis Technologies (SEMAT) intelligent sensor network initiative  ...  The authors would like to thank Professor Ron Johnstone from the University of Queensland, Professor Ian Atkinson from James Cook University and Yong Jin Lee for their assistance, advice and feedback.  ... 
doi:10.5281/zenodo.1087936 fatcat:a62fuyxhcbbanba3naocvapbji

Monocular Instance Motion Segmentation for Autonomous Driving: KITTI InstanceMotSeg Dataset and Multi-task Baseline [article]

Eslam Mohamed, Mahmoud Ewaisha, Mennatullah Siam, Hazem Rashed, Senthil Yogamani, Waleed Hamdy, Muhammad Helmi, Ahmad El-Sallab
2021 arXiv   pre-print
Moving object segmentation is a crucial task for autonomous vehicles as it can be used to segment objects in a class-agnostic manner based on their motion cues.  ...  The model then learns separate prototype coefficients within the class-agnostic and semantic heads, providing two independent paths of object detection for redundant safety.  ...  ACKNOWLEDGEMENTS We would like to thank B Ravi Kiran (Navya), Letizia Mariotti and Lucie Yahiaoui for reviewing the paper and providing feedback.  ...
arXiv:2008.07008v4 fatcat:a54do7k7rrdhdj75dm6wyom5qy

2019 Index IEEE Transactions on Circuits and Systems for Video Technology Vol. 29

2019 IEEE transactions on circuits and systems for video technology (Print)  
...  TCSVT Dec. 2019, 3568-3582. High-Order Statistical Modeling Based on a Decision Tree for Distributed Video Coding.  ...  TCSVT Aug. 2019, 2442-2452. Learning Coupled Convolutional Networks Fusion for Video Saliency Prediction.  ...
doi:10.1109/tcsvt.2019.2959179 fatcat:2bdmsygnonfjnmnvmb72c63tja
Showing results 1 — 15 out of 2,833 results