310 Hits in 6.9 sec

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition [article]

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
2013-10-06 · arXiv · pre-print
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to  ...  We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition  ...  Acknowledgements The authors would like to thank Alex Krizhevsky for his valuable help in reproducing the ILSVRC-2012 results, as well as for providing an open source implementation of GPU-based CNN training  ... 
arXiv:1310.1531v1 · fatcat:qmtbozzms5awxocrchoyeg6w54
Fulltext PDF: https://web.archive.org/web/20200824081053/https://arxiv.org/pdf/1310.1531v1.pdf

Deep learning with non-medical training used for chest pathology identification

Yaniv Bar, Idit Diamant, Lior Wolf, Hayit Greenspan, Lubomir M. Hadjiiski, Georgia D. Tourassi
2015-03-20 · Medical Imaging 2015: Computer-Aided Diagnosis (SPIE)
This is a first-of-its-kind experiment that shows that deep learning with large scale non-medical image databases may be sufficient for general medical image recognition tasks.  ...  The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features.  ...  This is a first-of-its-kind experiment that shows that Deep learning with ImageNet training may be sufficient for general medical image recognition tasks.  ... 
doi:10.1117/12.2083124 · dblp:conf/micad/BarDWG15 · fatcat:egiteq5jqjcgxipbwjcfp6mf3q
Fulltext PDF: https://web.archive.org/web/20170824123948/http://www.cs.tau.ac.il/~wolf/papers/SPIE15chest.pdf

Network Comparison Study of Deep Activation Feature Discriminability with Novel Objects [article]

Michael Karnes, Alper Yilmaz
2022-02-08 · arXiv · pre-print
More recently, state-of-the-art computer vision algorithms have incorporated Deep Neural Networks (DNN) in feature extracting roles, creating Deep Convolutional Activation Features (DeCAF).  ...  This study analyzes the general discriminability of novel object visual appearances encoded into the DeCAF space of six of the leading visual recognition DNN architectures.  ...  CONCLUSIONS Deep convolutional activation features (DeCAF) are an efficient way of systematically extending the knowledge domain of pretrained DNNs.  ... 
arXiv:2202.03695v1 · fatcat:kaguggvbfnbvncwsulovcycl4a
Fulltext PDF: https://web.archive.org/web/20220215115929/https://arxiv.org/pdf/2202.03695v1.pdf

Temporal and Fine-Grained Pedestrian Action Recognition on Driving Recorder Database

Hirokatsu Kataoka, Yutaka Satoh, Yoshimitsu Aoki, Shoko Oikawa, Yasuhiro Matsui
2018-02-20 · Sensors (MDPI AG)
It is believed that fine-grained action recognition aids pedestrian intention estimation for helpful advanced driver-assistance systems (ADAS).  ...  We find out how to learn an effective recognition model with only a small-scale database.  ...  Moreover, we assigned the deep convolutional activation features (DeCAF) since the databases have only 10^2-order size in terms of videos.  ... 
doi:10.3390/s18020627 · pmid:29461473 · pmcid:PMC5855092 · fatcat:qybfwjyuebdpzkoay2x53abrrq
Fulltext PDF: https://web.archive.org/web/20180728160012/https://res.mdpi.com/def5020051c3d845af65699f2e765634402ccd437d20a9d6e0e365d442eed4feadafbb1abcef465a39b91c56120f2f7adf99519f8878049ea80e244a467a9c936821775016fef82b79b3ed7615a0ba556da10c00fe2d015365cd01abd3d90acfe834e78d0f4eb2c4d656573639d8f72ec449d0d14f00de6921deb23a262488db074f8772f008de544c1740ecdce4c9e2e2d4e7?filename=&attachment=1

Classification of Artistic Styles Using Binarized Features Derived from a Deep Neural Network [chapter]

Yaniv Bar, Noga Levy, Lior Wolf
2015 · Lecture Notes in Computer Science (Springer International Publishing)
The recent interest in deep neural networks has provided powerful visual features that achieve state-of-the-art results in various visual classification tasks.  ...  Combined with the PiCodes descriptors, these features show excellent classification results on a large scale collection of paintings.  ...  Decaf5 contains the 9216 activations of the last convolutional layer, and Decaf6 contains the 4096 activations of the first fully-connected layer.  ... 
doi:10.1007/978-3-319-16178-5_5 · fatcat:j7n3guyptjf6vmqrkcwwl6os3u
Fulltext PDF: https://web.archive.org/web/20170921213326/http://www.cs.tau.ac.il/~wolf/papers/fusedBinaryCnnFeaturesClassification_cr_final.pdf

Deep Convolution Neural Network with 2-Stage Transfer Learning for Medical Image Classification
2段階転移学習を用いたディープコンボリューションネットの医用画像認識

Hayaru Shouno, Aiga Suzuki, Satoshi Suzuki, Shoji Kido
2017 · The Brain & Neural Networks (Japanese Neural Network Society)
Donahue, J., et al.: DeCAF: A deep convolutional activation feature for generic visual recognition, In ICML  ...  Simonyan, K., Zisserman, A. (2014): Very deep convolutional networks for large-scale image recognition, CoRR, abs/1409.1556. 5) Szegedy, C., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich  ... 
doi:10.3902/jnns.24.3 · fatcat:q4c33prghrei5ptlgwrimj6pha
Fulltext PDF: https://web.archive.org/web/20190505065857/https://www.jstage.jst.go.jp/article/jnns/24/1/24_3/_pdf

One-Shot Adaptation of Supervised Deep Convolutional Models [article]

Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, Trevor Darrell
2014-02-18 · arXiv · pre-print
Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models solved the dataset bias problem?  ...  In general, training or fine-tuning a state-of-the-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available.  ...  For extracting features from the deep source model, we follow the setup of Donahue et al. [9] , which extracts a visual feature DeCAF from the ImageNet-trained architecture of [17] .  ... 
arXiv:1312.6204v2 · fatcat:z3zfoyzyobfpfn2cqg3raj36l4
Fulltext PDF: https://web.archive.org/web/20200904073058/https://arxiv.org/pdf/1312.6204v2.pdf

Understanding Convolutional Neural Networks in Terms of Category-Level Attributes [chapter]

Makoto Ozeki, Takayuki Okatani
2015 · Lecture Notes in Computer Science (Springer International Publishing)
for non-visual ones.  ...  It has been recently reported that convolutional neural networks (CNNs) show good performances in many image recognition tasks.  ...  In the experiments, we use DeCAF (Deep Convolutional Activation Features) of Donahue et al. [4] to analyze a CNN trained for the 1,000 object category recognition task of ILSVRC-2012.  ... 
doi:10.1007/978-3-319-16808-1_25 · fatcat:uxo6jvvpdvaglf53v7zxwvahi4
Fulltext PDF: https://web.archive.org/web/20170829180317/http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/ACCV_2014/pages/PDF/845.pdf

Learning a Semantic Space by Deep Network for Cross-media Retrieval

Zhao Li, Wei Lu, Egude Bao, Weiwei Xing
2015-09-01 · Proceedings of the 21st International Conference on Distributed Multimedia Systems (KSI Research Inc. and Knowledge Systems Institute Graduate School)
visual features for the proposed deep network.  ...  To better bridge the gap between the images and the corresponding semantic concepts, an open-source CNN implementation called Deep Convolutional Activation Feature (DeCAF) is employed to extract input  ...  as general features for various tasks.  ... 
doi:10.18293/dms2015-005 · dblp:conf/dms/LiLBX15 · fatcat:rshxdcbrcveyldjiehh7sj6afa
Fulltext PDF: https://web.archive.org/web/20170829120730/http://ksiresearchorg.ipage.com/seke/dms15paper/dms15paper_5.pdf

Describing Textures in the Wild

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, Andrea Vedaldi
2014 · 2014 IEEE Conference on Computer Vision and Pattern Recognition (IEEE)
We port from object recognition to texture recognition the Improved Fisher Vector (IFV) and Deep Convolutional-network Activation Features (DeCAF), and show that surprisingly, they both outperform specialized  ...  The resulting Describable Textures Dataset (DTD) is a basis to seek the best representation for recognizing describable texture attributes in images.  ...  DeCAF. The DeCAF features [11] are obtained from an image as the output of the deep convolutional neural network of [18] .  ... 
doi:10.1109/cvpr.2014.461 · dblp:conf/cvpr/CimpoiMKMV14 · fatcat:j3qdnbnaqjfxfhcdexzqtxblem
Fulltext PDF: https://web.archive.org/web/20151102222544/https://hal.archives-ouvertes.fr/hal-01109284/file/Texture_CVPR14.pdf

Sociodemographic data and APOE-ε4 augmentation for MRI-based detection of amnestic mild cognitive impairment using deep learning systems

Obioma Pelka, Christoph M. Friedrich, Felix Nensa, Christoph Mönninghoff, Louise Bloch, Karl-Heinz Jöckel, Sara Schramm, Sarah Sanchez Hoffmann, Angela Winkler, Christian Weimar, Martha Jokisch, for the Alzheimer's Disease Neuroimaging Initiative (+1 others)
2020-09-25 · PLoS ONE (Public Library of Science)
Deep convolutional activation features (DeCAF) are extracted from the average pooling layer of the deep learning system Inception_v3.  ...  These features from the fused MRI scans are used as visual representation for the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) classification model.  ...  For visual representation, deep convolutional activation features (DeCAF) [37] were chosen.  ... 
doi:10.1371/journal.pone.0236868 · pmid:32976486 · fatcat:t53bfnfkgjcuriklw3fv6floyy
Fulltext PDF: https://web.archive.org/web/20200927000644/https://journals.plos.org/plosone/article/file?id=10.1371%2Fjournal.pone.0236868&type=printable

PANDA: Pose Aligned Networks for Deep Attribute Modeling

Ning Zhang, Manohar Paluri, Marc'Aurelio Ranzato, Trevor Darrell, Lubomir Bourdev
2014 · 2014 IEEE Conference on Computer Vision and Pattern Recognition (IEEE)
Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems [15] .  ...  We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs.  ...  [8] show that features extracted from the deep convolutional network trained on large datasets are generic and can help in other visual recognition problems.  ... 
doi:10.1109/cvpr.2014.212 · dblp:conf/cvpr/ZhangPRDB14 · fatcat:pbqpvtjh4bdkfe5l6ccgffxuye
Fulltext PDF: https://web.archive.org/web/20161118053132/http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Zhang_PANDA_Pose_Aligned_2014_CVPR_paper.pdf

Semantic Change Detection with Hypermaps [article]

Teppei Suzuki, Soma Shirakabe, Yudai Miyashita, Akio Nakamura, Yutaka Satoh, Hirokatsu Kataoka
2017-03-16 · arXiv · pre-print
We also employ multi-scale feature representation captured by different image patches.  ...  obtained from convolutional neural networks (CNNs).  ...  In the 2015 ImageNet Large Scale Visual Recognition Challenge (ILSVRC2015), Microsoft proposed deep residual networks (ResNet) [13] for image recognition, object detection, and semantic segmentation.  ... 
arXiv:1604.07513v2 · fatcat:3w2r333ryzaobd36yihpcmhalm
Fulltext PDF: https://web.archive.org/web/20200826132559/https://arxiv.org/pdf/1604.07513v2.pdf

FASON: First and Second Order Information Fusion Network for Texture Recognition

Xiyang Dai, Joe Yue-Hei Ng, Larry S. Davis
2017 · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE)
We then build a multi-level deep architecture to exploit the first and second order information within different convolutional layers.  ...  One of the most successful approaches is Bilinear CNN model that explicitly captures the second order statistics within deep features.  ...  To generate visually plausible style transfer results, the networks need to learn a good representation for both content and style.  ... 
doi:10.1109/cvpr.2017.646 · dblp:conf/cvpr/DaiND17 · fatcat:53gvvzme5fc5hhwdltvyvxopya
Fulltext PDF: https://web.archive.org/web/20180508103440/http://openaccess.thecvf.com:80/content_cvpr_2017/papers/Dai_FASON_First_and_CVPR_2017_paper.pdf

[Invited papers] Interactive Face Retrieval Framework for Clarifying User's Visual Memory

Yugo Sato, Tsukasa Fukusato, Shigeo Morishima
2019 · ITE Transactions on Media Technology and Applications (Institute of Image Information and Television Engineers)
Based on the user's selection, our proposed system automatically updates a deep convolutional neural network.  ...  Our system is designed for a situation in which the user wishes to find a person but has only visual memory of the person. We address a critical challenge of image retrieval across the user's inputs.  ...  By extending its features, Donahue et al. introduced the deep convolutional activation feature (DeCAF), which utilizes the representation layers as image descriptors and can compute image representations  ... 
doi:10.3169/mta.7.68 · fatcat:e6f7p3dymbfi3nrwxiynm6tyoy
Fulltext PDF: https://web.archive.org/web/20190503055316/https://www.jstage.jst.go.jp/article/mta/7/2/7_68/_pdf
Showing results 1-15 of 310