1,398 Hits in 7.2 sec

A Bayesian Approach to Multimodal Visual Dictionary Learning

Go Irie, Dong Liu, Zhenguo Li, Shih-Fu Chang
2013 · 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE)
words through a unified Bayesian inference.  ...  Despite significant progress, most existing visual dictionary learning methods rely on image descriptors alone or together with class labels.  ...  Conclusion Focusing on the scenario where images are associated with textual words, we presented a Bayesian approach to multimodal visual dictionary learning.  ... 
doi:10.1109/cvpr.2013.49 · dblp:conf/cvpr/IrieLLC13 · fatcat:3up72rn5mvebtcqzmaboqjagbm
Fulltext PDF: https://web.archive.org/web/20170706073926/http://www.ee.columbia.edu/%7Edongliu/Papers/Dictionary_CVPR13.pdf

Image Processing for Art Investigation

Bruno Cornelis
2015-12-21 · ELCVIA Electronic Letters on Computer Vision and Image Analysis (Universitat Autonoma de Barcelona)
Our contribution to this research field consists of a set of tools that are based on dimensionality reduction, sparse representations and dictionary learning.  ...  scheme three crack detection techniques: oriented elongated filters, a multiscale extension of the morphological top-hat transformation and a detection method based on dictionary learning.  ...  We adopted a Bayesian approach that estimates for each pixel a posterior probability of belonging to a crack, given a large set of feature vectors extracted over all modalities [4] .  ... 
doi:10.5565/rev/elcvia.715 · fatcat:fzgizoevonfwtei4liimhqggbe
Fulltext PDF: https://web.archive.org/web/20170809142531/http://ddd.uab.cat/pub/elcvia/elcvia_a2015v14n3/elcvia_a2015v14n3p13.pdf

Multimodal Sparse Bayesian Dictionary Learning [article]

Igor Fedorov, Bhaskar D. Rao
2019-05-29 · arXiv (pre-print)
The underlying framework offers a considerable amount of flexibility to practitioners and addresses many of the shortcomings of existing multimodal dictionary learning approaches.  ...  We present an algorithm called multimodal sparse Bayesian dictionary learning (MSBDL). MSBDL leverages information from all available data modalities through a joint sparsity constraint.  ...  A. Contributions We present the multimodal sparse Bayesian dictionary learning algorithm (MSBDL).  ... 
arXiv:1804.03740v3 · fatcat:cyiu7bpcmvfwxixobbbh5ts7sa
Fulltext PDF (not primary version): https://web.archive.org/web/20200912193516/https://arxiv.org/pdf/1804.03740v2.pdf · https://arxiv.org/abs/1804.03740v3

Multimodal Machine Learning: A Survey and Taxonomy [article]

Tadas Baltrušaitis, Chaitanya Ahuja, Louis-Philippe Morency
2017-08-01 · arXiv (pre-print)
Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy.  ...  Multimodal machine learning aims to build models that can process and relate information from multiple modalities.  ...  [154] used a dynamic Bayesian network to align speakers to videos. Naim et al.  ... 
arXiv:1705.09406v2 · fatcat:262fo4sihffvxecg4nwsifoddm
Fulltext PDF: https://web.archive.org/web/20191022030904/https://arxiv.org/pdf/1705.09406v2.pdf · https://arxiv.org/abs/1705.09406v2

Multi-Scale Saliency Detection using Dictionary Learning [article]

Shubham Pachori
2017-07-05 · arXiv (pre-print)
We propose a method to detect saliency in the objects using multimodal dictionary learning which has been recently used in classification and image fusion.  ...  The multimodal dictionary that we are learning is task driven which gives improved performance over its counterpart (the one which is not task specific).  ...  A graph-based visual saliency (GBVS) model based on the Markovian approach is suggested in [15] .  ... 
arXiv:1611.06307v3 · fatcat:ulf3duxoprfr5j3yitutgoawca
Fulltext PDF: https://web.archive.org/web/20200901033040/https://arxiv.org/pdf/1611.06307v3.pdf · https://arxiv.org/abs/1611.06307v3

WAFFLe: Weight Anonymized Factorization for Federated Learning

Weituo Hao, Nikhil Mehta, Kevin J. Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin
2022 · IEEE Access (Institute of Electrical and Electronics Engineers)
To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural  ...  the very data federated learning seeks to protect.  ...  To satisfy the global objective, θ is learned to minimize the loss on average across all clients. This is the approach of many federated learning approaches.  ... 
doi:10.1109/access.2022.3172945 · fatcat:ep2dke72tzezzg2qytkrgvfxcy
Fulltext PDF: https://web.archive.org/web/20220514024555/https://ieeexplore.ieee.org/ielx7/6287639/9668973/09770028.pdf

Continuous visual speech recognition for multimodal fusion

Eric Benhaim, Hichem Sahbi, Guillaume Vitte
2014 · 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Experiments conducted on the standard LIPS2008 dataset, show a clear and a consistent gain of our multimodal approach compared to others.  ...  It is admitted that human speech perception is a multimodal process that combines both visual and acoustic informations.  ...  VISUAL LEARNING FRAMEWORK This section describes our visual learning model which consists in a multi-class SVM and a visemic language model that learns speech unit transitions using a large corpus of phonetic  ... 
doi:10.1109/icassp.2014.6854477 · dblp:conf/icassp/BenhaimSV14 · fatcat:vwfp6gadq5daldcul2xf65i25a
Fulltext PDF: https://web.archive.org/web/20170812151242/http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202014/papers/p4651-benhaim.pdf

Multimodal fusion using learned text concepts for image categorization

Qiang Zhu, Mei-Chen Yeh, Kwang-Ting Cheng
2006 · Proceedings of the 14th annual ACM international conference on Multimedia - MULTIMEDIA '06 (ACM Press)
Specific to each image category, a text concept is first learned from a set of labeled texts in images of the target category using Multiple Instance Learning [1] .  ...  In this paper, we describe a multimodal fusion scheme which improves the image classification accuracy by incorporating the information derived from the embedded texts detected in the image under classification  ...  Winn and Minka [23] learned a universal visual dictionary by pair-wise merging of visual words from an initially large dictionary.  ... 
doi:10.1145/1180639.1180698 · dblp:conf/mm/ZhuYC06 · fatcat:gfnskh6brncl7fz3sva3ja2ag4
Fulltext PDF: https://web.archive.org/web/20100806202205/http://engineering.ucsb.edu/~zhuq/paper/mm06.pdf

Learning Articulated Motion Models from Visual and Lingual Signals [article]

Zhengyang Wu and Mohit Bansal and Matthew R. Walter
2016-07-01 · arXiv (pre-print)
In this paper, we present a multimodal learning framework that incorporates both visual and lingual information to estimate the structure and parameters that define kinematic models of articulated objects  ...  We evaluate our multimodal learning framework on a dataset comprised of a variety of household objects, and demonstrate a 36% improvement in model accuracy over the vision-only baseline.  ...  Our contributions include a multimodal approach to learning kinematic models from visual and lingual signals, the exploration of different language grounding methods to align action verbs and kinematic  ... 
arXiv:1511.05526v2 · fatcat:rrv2bp34vjcjlcdt7q7awlvk3e
Fulltext PDF: https://web.archive.org/web/20200913105858/https://arxiv.org/pdf/1511.05526v2.pdf · https://arxiv.org/abs/1511.05526v2

WAFFLe: Weight Anonymized Factorization for Federated Learning [article]

Weituo Hao, Nikhil Mehta, Kevin J Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin
2020-08-13 · arXiv (pre-print)
To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural  ...  the very data federated learning seeks to protect.  ...  Bayesian Nonparametric Federated Learning Several previous works have applied Bayesian nonparameterics to federated learning, though primarily as a means for parameter matching during aggregation.  ... 
arXiv:2008.05687v1 · fatcat:cbicydlt4jcizltbw6w74zpcza
Fulltext PDF: https://web.archive.org/web/20200820032752/https://arxiv.org/pdf/2008.05687v1.pdf · https://arxiv.org/abs/2008.05687v1

Multimodal Classification of Remote Sensing Images: A Review and Future Directions

Luis Gomez-Chova, Devis Tuia, Gabriele Moser, Gustau Camps-Valls
2015 · Proceedings of the IEEE (Institute of Electrical and Electronics Engineers)
In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems.  ...  In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images.  ...  ACKNOWLEDGEMENTS The authors would like to thank DigitalGlobe Inc. for the optical data on Rio and Haiti, and the Italian Space Agency for the SAR data on Haiti.  ... 
doi:10.1109/jproc.2015.2449668 · fatcat:gaficd2bcrbshcrds3a2wfa25a
Fulltext PDF: https://web.archive.org/web/20151008233419/http://www.geo.uzh.ch/fileadmin/files/content/abteilungen/multimodal_rs/papers/multimodal_paper_procieee_preprint-rdc.pdf

Cross-situational noun and adjective learning in an interactive scenario

Yuxin Chen, David Filliat
2015 · 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
We propose a model of this cross-situational learning capacity and apply it to learning nouns and adjectives from noisy and ambiguous speeches and continuous visual input.  ...  This model uses two different strategy: a statistical filtering to remove noise in the speech part and the Non Negative Matrix Factorization algorithm to discover word-meaning in the visual domain.  ...  ACKNOWLEDGMENT The authors would like to thank Fabio Pardo and Olivier Mangin for their help in implementing the reported experiments. This work was supported by the China Scholarship Council.  ... 
doi:10.1109/devlrn.2015.7346129 · dblp:conf/icdl-epirob/ChenF15 · fatcat:3gj4t2cibrcytjtbkz4mxa7ioi
Fulltext PDF: https://web.archive.org/web/20190502113656/https://hal.archives-ouvertes.fr/hal-01170674/document

Symbol Emergence in Robotics: A Survey [article]

Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata, Hideki Asoh
2015-09-29 · arXiv (pre-print)
Specifically, we describe some state-of-art research topics concerning SER, e.g., multimodal categorization, word discovery, and a double articulation analysis, that enable a robot to obtain words and  ...  Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment  ...  Mangin used a nonnegative matrix factorization algorithm to learn a dictionary of components from multimodal time series data [53] .  ... 
arXiv:1509.08973v1 · fatcat:yg6bscvy2fdpdhapltyonvhs2a
Fulltext PDF: https://web.archive.org/web/20200930174305/https://arxiv.org/pdf/1509.08973v1.pdf · https://arxiv.org/abs/1509.08973v1

Medical images modality classification using multi-scale dictionary learning

M. Srinivas, C. Krishna Mohan
2014 · 2014 19th International Conference on Digital Signal Processing (IEEE)
The ability of On-line dictionary learning (ODL) to achieve sparse representation of an image is exploited to develop dictionaries for each class using multi-scale representation (wavelets) feature.  ...  In this paper, we proposed a method for classification of medical images captured by different sensors (modalities) based on multi-scale wavelet representation using dictionary learning.  ...  This approach was later extended by adding a block incoherence term in their optimization problem to improve the accuracy of sparse coding. Multi-scale dictionary learning is proposed in [22] .  ... 
doi:10.1109/icdsp.2014.6900739 · dblp:conf/icdsp/SrinivasM14 · fatcat:7zg2rsowqrauvpnlifvzk4xew4
Fulltext PDF: https://web.archive.org/web/20171201223500/https://core.ac.uk/download/pdf/38679079.pdf

A two-stream neural network architecture for the detection and analysis of cracks in panel paintings

Roman Sizyakin, Bruno Cornelis, Laurens Meeus, Viacheslav Voronin, Aleksandra Pizurica, Peter Schelkens, Tomasz Kozacki
2020-04-01 · Optics, Photonics and Digital Technologies for Imaging Applications VI (SPIE)
The results show an encouraging performance of the proposed approach compared to traditional machine learning methods and the state-of-the-art Bayesian Conditional Tensor Factorization (BCTF) method for  ...  We validate the proposed method on a multimodal visual dataset from the Ghent Altarpiece, a world famous polyptych by the Van Eyck brothers.  ...  and NSFC according to the research project N o 20-57-53012 and in part by the Research Foundation Flanders (FWO), project 528 G.OA26.17N.  ... 
doi:10.1117/12.2555857 · fatcat:ekpred6msvbqpkhxqntn6477qu
Fulltext PDF: https://web.archive.org/web/20210715015729/https://biblio.ugent.be/publication/8659846/file/8659847
Showing results 1–15 of 1,398