
Sensor Fusion and Environmental Modelling for Multimodal Sentient Computing

Christopher Town, Zhigang Zhu
<span title="">2007</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2007 IEEE Conference on Computer Vision and Pattern Recognition</a> </i> &nbsp;
Adaptive Multi-modal Fusion of Tracking Hypotheses: The dynamic component of the world model benefits from a high-level fusion of the visual and ultrasonic modalities for robust multi-object tracking and ... The achieved spatial granularity is better than 3 cm for >95% of Bat observations (assuming only small motion), and Bats may be polled using radio base stations and a variable quality of service to give ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2007.383526">doi:10.1109/cvpr.2007.383526</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/TownZ07.html">dblp:conf/cvpr/TownZ07</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dfkkliujlnfxdconr5su6infhm">fatcat:dfkkliujlnfxdconr5su6infhm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20080419013954/http://www-cs.engr.ccny.cuny.edu/~zhu/MMS/WMSC07-11.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d7/4a/d74a1b1d10700673861f71eb185aed0ea8643374.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2007.383526"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Recognize Moving Objects Around an Autonomous Vehicle Considering a Deep-learning Detector Model and Dynamic Bayesian Occupancy

Andres E. Gomez Hernandez, Ozgur Erkent, Christian Laugier
<span title="2020-12-13">2020</span> <i title="IEEE"> 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV) </i> &nbsp;
scenery. • Fusion of an object detection method with a Bayesian filter framework at a later stage to recognize moving objects in the environment. ... In this paper, we aim to recognize moving objects in traffic scenes through the fusion of semantic information with occupancy-grid estimations. ... Acknowledgment: This work has been supported by the French Government within the scope of the FUI STAR project. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icarcv50220.2020.9305328">doi:10.1109/icarcv50220.2020.9305328</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/r7hd3awuw5dc3bfq6w27fba5kq">fatcat:r7hd3awuw5dc3bfq6w27fba5kq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210715180259/https://hal.inria.fr/hal-03038599/file/ICARCV2020.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/55/88/558823465233650dac371240e5648168924c427f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icarcv50220.2020.9305328"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Spatio-Temporal Fusion Of Visual Attention Model

D. Houzet, D. Pellerin, Anis Rahman, Guanghan Song
<span title="2011-08-29">2011</span> <i title="Zenodo"> Zenodo </i> &nbsp;
Publication in the conference proceedings of EUSIPCO, Barcelona, Spain, 2011 ... Here, strong motion contrast increases the weight of the dynamic map, whereas the fusion weight of the spatial information decreases accordingly. ... It is useful for excluding inconsistent regions, and it requires no manual selection of a weighting factor for the spatial and temporal information. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.42631">doi:10.5281/zenodo.42631</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aclcsj2rzbf4jbr3ebss4yghqa">fatcat:aclcsj2rzbf4jbr3ebss4yghqa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20181102062612/https://zenodo.org/record/42631/files/1569427221.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1e/3c/1e3ca4e38fe81d35f139461ee3e0a64e731c0f45.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.42631"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> zenodo.org </button> </a>

Is There Real Fusion between Sensing and Network Technology? — What are the Problems?

Masatoshi Ishikawa
<span title="">2010</span> <i title="Institute of Electronics, Information and Communications Engineers (IEICE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/4dhg4rk2rzdyfllkpx64b2rq6a" style="color: black;">IEICE transactions on communications</a> </i> &nbsp;
On the other hand, network technologies are mainly designed for data exchange in the information world, as seen in packet communications, and do not go well with sensing structures from the viewpoints of real-time properties, spatial continuity, etc. ... This paper clarifies the structures of these technologies, particularly the sensing structures, proposes a design concept for their fusion, and discusses a vision for the real fusion of sensor technologies ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transcom.e93.b.2855">doi:10.1587/transcom.e93.b.2855</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4zxpq54lt5bphmnhramoozdh4u">fatcat:4zxpq54lt5bphmnhramoozdh4u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180726082912/https://www.jstage.jst.go.jp/article/transcom/E93.B/11/E93.B_11_2855/_pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7e/7d/7e7d5f4d05ade4fd65237381898ca1618a87e7a8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transcom.e93.b.2855"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Multisensory information for human postural control: integrating touch and vision

John Jeka, Kelvin S. Oie, Tim Kiemel
<span title="2000-08-23">2000</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/5eqke72ugngsbddlyofvxqjgwu" style="color: black;">Experimental Brain Research</a> </i> &nbsp;
The focus of the present study was to test whether a linear additive model could account for the fusion of touch and vision for postural control.  ...  The visual stimulus was a display of random dots projected onto a screen in front of the standing subject.  ...  For example, gain values for subject 2 (vision dominant) in Fig. 3B were higher in the dynamic vision conditions (i.e., dynamic vision-no touch, dynamic vision-static touch, and dynamic vision-dynamic  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s002210000412">doi:10.1007/s002210000412</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/11026732">pmid:11026732</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/27gd27x4tzaw7glwlpcq6ec5oa">fatcat:27gd27x4tzaw7glwlpcq6ec5oa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20010603155130/http://www.glue.umd.edu:80/~kso3713/EBR2000.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2b/7c/2b7c2c80f6f5711b2980a7c8a00aa6d75f194003.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s002210000412"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Audio-Visual Temporal Saliency Modeling Validated by fMRI Data

Petros Koutras, Georgia Panagiotaropoulou, Antigoni Tsiami, Petros Maragos
<span title="">2018</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</a> </i> &nbsp;
its effectiveness and appropriateness in predicting audio-visual saliency for dynamic stimuli. ... The evaluation of our model using the new fMRI database under a mixed-effect analysis shows that the proposed saliency model has a strong correlation with both the visual and audio brain areas, which confirms ... Figure 1: Overview of the audio-visual temporal saliency model: a) fusion at the level of saliency maps, b) fusion at the level of saliency curves. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00269">doi:10.1109/cvprw.2018.00269</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/KoutrasPTM18.html">dblp:conf/cvpr/KoutrasPTM18</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6hqbdm7dnrbfrjtwmhwbyicvjy">fatcat:6hqbdm7dnrbfrjtwmhwbyicvjy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200318224950/http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w39/Koutras_Audio-Visual_Temporal_Saliency_CVPR_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/f5/3ff58fceb7236887f82da1b96fede72c72f5ad30.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00269"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Multi-Scale Attention 3D Convolutional Network for Multimodal Gesture Recognition

Huizhou Chen, Yunan Li, Huijuan Fang, Wentian Xin, Zixiang Lu, Qiguang Miao
<span title="2022-03-21">2022</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/taedaf6aozg7vitz5dpgkojane" style="color: black;">Sensors</a> </i> &nbsp;
Moreover, for dynamic gesture recognition, it is not enough to consider only the attention in the spatial dimension. ... This paper proposes a multi-scale attention 3D convolutional network for gesture recognition, with a fusion of multimodal data. ... Conflicts of Interest: The authors declare no conflict of interest. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s22062405">doi:10.3390/s22062405</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/35336576">pmid:35336576</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC8950910/">pmcid:PMC8950910</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xbfwwvgrdzfczlksdixesfxgey">fatcat:xbfwwvgrdzfczlksdixesfxgey</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220605214628/https://mdpi-res.com/d_attachment/sensors/sensors-22-02405/article_deploy/sensors-22-02405.pdf?version=1647854344" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/19/e7/19e7f8c2a69e36cf3d62813eec73ca17e9655897.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s22062405"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950910" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Review of dynamic gesture recognition

Yuanyuan Shi, Yunan Li, Xiaolong Fu, Kaibin Miao, Qiguang Miao
<span title="">2021</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wrwdbi2denazdm5k6riguknyke" style="color: black;">Virtual Reality &amp; Intelligent Hardware</a> </i> &nbsp;
Gesture data obtained through special wearable devices, such as data gloves [17, 18] , can detect specific finger curvature information and spatial position information of the hand and arm.  ...  In this review, we discuss the advantages and limitations of existing technologies, focusing on the feature extraction method of the spatiotemporal structure information in a video sequence, and consider  ...  [79] investigated a method for fusing spatial and temporal information in TSN.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.vrih.2021.05.001">doi:10.1016/j.vrih.2021.05.001</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jpddnlf2xbfufnyuf3s6fbxgty">fatcat:jpddnlf2xbfufnyuf3s6fbxgty</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210722100058/https://pdf.sciencedirectassets.com/321628/1-s2.0-S2096579621X00046/1-s2.0-S2096579621000279/main.pdf?X-Amz-Security-Token=IQoJb3JpZ2luX2VjENr%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIQC7ZKwwNSU7KgSeSNfkxeHFnSYMl36uj56uyaMwMp0mCwIgeG6eJJ1M0cKmYK1SX9xl0dPgA7zMRQVDnjJzq%2FOBNHgqgwQI0v%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAEGgwwNTkwMDM1NDY4NjUiDJwCUhl7z2XXlNpJ1irXA1uK8LUgyixWJFZX7tUqdr1LzaSt%2ByrkyyZ07EhTEi99xtTB4wHSQmKrp22FcE5VedWqj%2BMtbJRT%2FhbBsjIj1G6cA07XomdE2KR7Zna7uXtHwTm20GZ5l2y8mR0ICvBOVj48Vj2T9nwQ0ng%2BcsCPdfEYT9ClT6E854khEnS9TGk9wvlQYezLYHL%2FmiTig%2B2EVll8MOL73CALZpWpKX6fJYubmVszsj%2FU00R36R33AsgLVFxkbUo8yZtThWoqjqZBCvTk0foUq1AdsMe0P4ubBi70xEaeclnHyZI20Fjotf4YgXG09P4BU8edRGwAaIxNwZIR1vpsnCV2rd9pVG%2B96ud%2FQ2U50v2voW1yOzAACHCPpo6U3vryPflFG0Y0PqKqjo34m3nAWmmfzZXXiumD6pn76opQyTfOmHZdwGli7lF4yxaCILBqp3yYNqGDxuZTmL%2FQeiWzH4Q7AZxMu5OP4kc6I0kU%2BNLec0lxE0V5vL2NgVUxvVLK9La29iPgWytz3XUOxoKHWsfbJNupnLYaFuyZ9ebyJO6gpDurgB3%2Fcn4lZQfi24zku98C%2FPMSifA%2BEklxzSlkYEZwiokFDtjlha22x3QJUhnVn9FtmcJvws4MXOPmK5a%2BqTDM7%2BSHBjqlASSWR1NXKeNxhueGFF7SYtDs%2BttJ7rheno6GoJ5Npgsj2txhZLIImhERv2YpBxcEWRF7OcDvUxQ0b7vV0RRPCDJpCQZukibZZG%2FniBg5GFVlLRZxQNUC%2BUwkerePEM0gfQ6hxLQiPIBkd%2FpyAZ2cuCf%2BkCC2i%2FkNnH65%2BV%2FTwCNPaWnSLBt8jxrDUe%2FZZFrk0An6udcnrxsH0wF91QjVkJh2P3i1iw%3D%3D&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Date=20210722T100053Z&amp;X-Amz-SignedHeaders=host&amp;X-Amz-Expires=300&amp;X-Amz-Credential=ASIAQ3PHCVTYR6XV7DFF%2F20210722%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Signature=3e1b70ef84e0ce9ad2fd582496fb30701eda695f5403b0d098b5dec5a329c67c&amp;hash=28eb5cec041bed90404ef0fe64f114b3cb5b5d5faf4ab68ddf8f3ccd18799700&amp;host=68042c943591013ac2b2430a89b270f6af2c76d8dfd086a07176afe7c76c2c61&amp;pii=S2096579621000279&amp;tid=spdf-b4e354c9-7b8a-4804-9253-d805ecc1c8b1&amp;sid=cdfb58fb39b1a7488b58a216ed7d1a2e558egxrqa&amp;type=client" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/44/e2/44e2661e92b178494a45ff0200eba94d6d6076b6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.vrih.2021.05.001"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> elsevier.com </button> </a>

Dynamic Gesture Recognition Algorithm Based on 3D Convolutional Neural Network

Yuting Liu, Du Jiang, Haojie Duan, Ying Sun, Gongfa Li, Bo Tao, Juntong Yun, Ying Liu, Baojia Chen, Syed Hassan Ahmed
<span title="2021-08-16">2021</span> <i title="Hindawi Limited"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/3wwzxqpotbc73bzpemzybzg7ee" style="color: black;">Computational Intelligence and Neuroscience</a> </i> &nbsp;
To solve the above problems, a dynamic gesture recognition model based on CBAM-C3D is proposed. ... However, compared with the convolution calculation of a single image, the multiframe images of dynamic gestures require more computation, more complex feature extraction, and more network parameters, which affects ... In general, the recognition effect and average recognition rate of the series fusion model are better than those of the average fusion model. The above experiments show that in dynamic gesture recognition ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/4828102">doi:10.1155/2021/4828102</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/34447430">pmid:34447430</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC8384521/">pmcid:PMC8384521</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bb5m3n3znncwnfjsuhc33pzcxm">fatcat:bb5m3n3znncwnfjsuhc33pzcxm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211011060424/https://downloads.hindawi.com/journals/cin/2021/4828102.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f8/2d/f82d34ffee7d49f41ec52568ec75f304fcd68894.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/4828102"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> hindawi.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8384521" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

A Gated Fusion Network for Dynamic Saliency Prediction [article]

Aysun Kocak, Erkut Erdem, Aykut Erdem
<span title="2021-02-15">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we introduce the Gated Fusion Network for dynamic saliency (GFSalNet), the first deep saliency model capable of making predictions in a dynamic way via a gated fusion mechanism. ... Predicting saliency in videos is a challenging problem due to the complex modeling of interactions between spatial and temporal information, especially when the ever-changing, dynamic nature of videos is considered ... Acknowledgments: This work was supported in part by a TUBA GEBIP fellowship awarded to E. Erdem. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.07682v1">arXiv:2102.07682v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/btyo6jtzpvbixj3bbd4hfnioh4">fatcat:btyo6jtzpvbixj3bbd4hfnioh4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210225112413/https://arxiv.org/pdf/2102.07682v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f8/95/f8956d79071a271929cbc44dc99a4eeced18c4ec.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.07682v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Spatio-Temporal Saliency Networks for Dynamic Saliency Prediction [article]

Cagdas Bak, Aysun Kocak, Erkut Erdem, Aykut Erdem
<span title="2017-11-15">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The key to our models is the architecture of two-stream networks where we investigate different fusion mechanisms to integrate spatial and temporal information.  ...  Motivated by this, in this work, we study the use of deep learning for dynamic saliency prediction and propose the so-called spatio-temporal saliency networks.  ...  ACKNOWLEDGMENT This research was supported in part by TUBITAK Career Development Award 113E497 and Hacettepe BAP FDS-2016-10202.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1607.04730v2">arXiv:1607.04730v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ygysix2pojetliwpodmtngvnxm">fatcat:ygysix2pojetliwpodmtngvnxm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191022161848/https://arxiv.org/pdf/1607.04730v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e5/6c/e56c3381763c5d54fc41d52290244cb4727e8b04.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1607.04730v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Evidential Filter for Indoor Navigation of a Mobile Robot in Dynamic Environment [chapter]

Quentin Labourey, Olivier Aycard, Denis Pellerin, Michèle Rombaut, Catherine Garbay
<span title="">2016</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jyopc6cf5ze5vipjlm4aztcffi" style="color: black;">Communications in Computer and Information Science</a> </i> &nbsp;
This article presents the key stages of the multimodal fusion: an evidential grid is built from each modality using a modified Dempster combination, and a temporal fusion is made using an evidential filter ... Robots are destined to live with humans and perform tasks for them. In order to do that, an adapted representation of the world, including human detection, is required. ... This perception grid is then fused in an evolutive fusion model, in order to extract information about the dynamics of the scene. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-40596-4_25">doi:10.1007/978-3-319-40596-4_25</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lf6tn5il6ffgdleedug4x6f37a">fatcat:lf6tn5il6ffgdleedug4x6f37a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170928031154/https://hal.archives-ouvertes.fr/hal-01341861/document" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/35/b5/35b51941b75607c1e35b822e9e1e2199b0f32630.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-40596-4_25"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Data Fusion in Application of Image Information

M. Piszczek
<span title="">2011</span> <i title="Institute of Physics, Polish Academy of Sciences"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ys4li2r4zvad3pmku3xhtemaui" style="color: black;">Acta Physica Polonica. A</a> </i> &nbsp;
The deliberations on data fusion in vision information systems presented here are based on research activity. ... This paper presents a new look at the meaning of metadata in applications using image information. ... Acknowledgments: The expert opinions concerning issues of image information were prepared for the Ministry of Defence in 2005. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.12693/aphyspola.120.716">doi:10.12693/aphyspola.120.716</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/w5ovkjkdezcbbkr5czsabq6c4i">fatcat:w5ovkjkdezcbbkr5czsabq6c4i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180601225105/http://przyrbwn.icm.edu.pl/APP/PDF/120/a120z4p33.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/40/3d/403d4d6aa6ec248022ee1d06e54fb1e0c05a01e1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.12693/aphyspola.120.716"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Dynamic Context Correspondence Network for Semantic Alignment [article]

Shuaiyi Huang, Qiuyue Wang, Songyang Zhang, Shipeng Yan, Xuming He
<span title="2019-09-08">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Establishing semantic correspondence is a core problem in computer vision and remains challenging due to large intra-class variations and lack of annotated data.  ...  We then develop a novel dynamic fusion strategy based on attention mechanism to weave the advantages of both local and context features by integrating semantic cues from multiple scales.  ...  Acknowledgments This work was supported in part by the NSFC Grant No.61703195 and the Shanghai NSF Grant No.18ZR1425100.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.03444v1">arXiv:1909.03444v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oxewn5s3a5ekbadmmd2f5pk464">fatcat:oxewn5s3a5ekbadmmd2f5pk464</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200823034304/https://arxiv.org/pdf/1909.03444v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/01/72/0172967e6e822cdb89ecd418586008ac831c41cc.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.03444v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor

Shuo Chang, Yifan Zhang, Fan Zhang, Xiaotong Zhao, Sai Huang, Zhiyong Feng, Zhiqing Wei
<span title="2020-02-11">2020</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/taedaf6aozg7vitz5dpgkojane" style="color: black;">Sensors</a> </i> &nbsp;
In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using mmWave radar and a vision sensor, where the sparsity of radar points is considered in the proposed SAF. ... In addition, we build a generation model, which converts radar points to radar images for neural network training. ... [14] introduced a Bayesian network, which can perform fusion dynamically. In contrast to [13, 14], Ćesić et al. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s20040956">doi:10.3390/s20040956</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/32053909">pmid:32053909</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC7070402/">pmcid:PMC7070402</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sib6d6iqxrcq5doxeqqsdy3qpm">fatcat:sib6d6iqxrcq5doxeqqsdy3qpm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200215032959/https://res.mdpi.com/d_attachment/sensors/sensors-20-00956/article_deploy/sensors-20-00956.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/71/ac/71ac33d4c081a93716d50bb7038a4fe3832e67f3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s20040956"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070402" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>
Showing results 1–15 of 35,754