Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [article]

Dimitrios Kollias and Viktoriia Sharmanska and Stefanos Zafeiriou
<span title="2021-05-08">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Based on this approach, we build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks. ... By conducting a very large experimental study, utilizing 10 databases, we illustrate that our approach outperforms, by large margins, the state-of-the-art in all tasks and in all databases, even in these ... Negative transfer largely depends on the size of labeled data per task [19]. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.03790v1">arXiv:2105.03790v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dho6nle2szhe3mlv7hhft4exha">fatcat:dho6nle2szhe3mlv7hhft4exha</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210512105302/https://arxiv.org/pdf/2105.03790v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f5/f5/f5f58ce07c0078348f5db6b86d3b0af04c2012a9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.03790v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Recent Advances in Zero-shot Recognition [article]

Yanwei Fu, Tao Xiang, Yu-Gang Jiang, Xiangyang Xue, Leonid Sigal, and Shaogang Gong
<span title="2017-10-13">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
However, scaling recognition to a large number of classes with few or no training samples per class remains an unsolved problem. ... annotated training data. ... Yanwei Fu is supported by The Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.04837v1">arXiv:1710.04837v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u3mp6dgj2rgqrarjm4dcywegmy">fatcat:u3mp6dgj2rgqrarjm4dcywegmy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200902194609/https://arxiv.org/pdf/1710.04837v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/db/9d/db9ddb2c730d75ab741544654c7c227831ed1243.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.04837v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Bimodal Vein Recognition Based on Task-Specific Transfer Learning

Guoqing Wang, Jun Wang, Zaiyu Pan
<span title="">2017</span> <i title="Institute of Electronics, Information and Communications Engineers (IEICE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xosmgvetnbf4zpplikelekmdqe" style="color: black;">IEICE transactions on information and systems</a> </i> &nbsp;
Both the gender and identity recognition tasks with hand-vein information are solved using the proposed cross-selected-domain transfer learning model. ... State-of-the-art recognition results demonstrate the effectiveness of the proposed model for the pattern recognition task, and its capability to avoid over-fitting when fine-tuning a DCNN on a small-scale database ... model, and then it is fine-tuned on the small-scale PolyU NIR face database with only identity attribute annotation, and the shared pattern of face images could help speed up convergence by starting ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transinf.2017edl8031">doi:10.1587/transinf.2017edl8031</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/k5wgfqjzxbda7ce7appcxbe2zy">fatcat:k5wgfqjzxbda7ce7appcxbe2zy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20181102055938/https://www.jstage.jst.go.jp/article/transinf/E100.D/7/E100.D_2017EDL8031/_pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c7/84/c784d4918ad33f4dd2991155ea583b4789ba3c11.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transinf.2017edl8031"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

A Large-scale Attribute Dataset for Zero-shot Learning [article]

Bo Zhao, Yanwei Fu, Rui Liang, Jiahong Wu, Yonggang Wang, Yizhou Wang
<span title="2018-05-16">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To overcome these problems, we propose a Large-scale Attribute Dataset (LAD). Our dataset has 78,017 images across 5 super-classes and 230 classes. ... Previous ZSL algorithms are tested on several benchmark datasets annotated with attributes. However, these datasets are defective in terms of image distribution and attribute diversity. ... Conclusion: In this paper, we present a Large-scale Attribute Dataset (LAD) for zero-shot learning. Many attributes covering visual, semantic, and subjective properties are annotated. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.04314v2">arXiv:1804.04314v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7lf5sdvzc5dlrbbusquwou7z54">fatcat:7lf5sdvzc5dlrbbusquwou7z54</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824201714/https://arxiv.org/pdf/1804.04314v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ec/17/ec1717987f1c9c9ddadcd7df2a50c26e689cff48.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.04314v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deep Multi-task Learning to Recognise Subtle Facial Expressions of Mental States [chapter]

Guosheng Hu, Li Liu, Yang Yuan, Zehao Yu, Yang Hua, Zhihong Zhang, Fumin Shen, Ling Shao, Timothy Hospedales, Neil Robertson, Yongxin Yang
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
It contains 176K images, manually annotated with 13 emotions, and thus provides the first subtle-expression dataset large enough for training deep CNNs. ... In addition, we investigate transferring knowledge learned from the LSEMSW database to traditional (non-subtle) expression recognition. ... To advance subtle expression recognition, we contribute a Large-scale Subtle Emotions and Mental States in the Wild database (LSEMSW). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01258-8_7">doi:10.1007/978-3-030-01258-8_7</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/34twdhjrmbafvjda4bzdtckvy4">fatcat:34twdhjrmbafvjda4bzdtckvy4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200305164354/https://pureadmin.qub.ac.uk/ws/files/161102840/ECCV2018_Subtle_Facial_Expressions.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1d/00/1d00b5326ad6f6321c9a4b4c965deca7e68d0435.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01258-8_7"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Domain Specific, Semi-Supervised Transfer Learning for Medical Imaging [article]

Jitender Singh Virk, Deepti R. Bathula
<span title="2020-05-24">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Limited availability of annotated medical imaging data poses a challenge for deep learning algorithms. ... Although transfer learning minimizes this hurdle in general, knowledge transfer across disparate domains is shown to be less effective. ... DeepLesion [4] is a large-scale and diverse database of lesions identified in CT scans. It has over 32k slices of dimensions 512 × 512. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.11746v1">arXiv:2005.11746v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/z2iwfnqtarao5drhpavhxifyai">fatcat:z2iwfnqtarao5drhpavhxifyai</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930161539/https://arxiv.org/pdf/2005.11746v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/00/a7/00a79ba76b20d0b802a091cc6649977600624b56.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.11746v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

MAAD-Face: A Massively Annotated Attribute Dataset for Face Images

Philipp Terhörst, Daniel Fährmann, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper
<span title="">2021</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xqa4uhvxwvgsbdpllabnuanz6e" style="color: black;">IEEE Transactions on Information Forensics and Security</a> </i> &nbsp;
Our investigation of the annotation quality by three human evaluators demonstrated the superiority of the MAAD-Face annotations over existing databases. ... Consequently, these databases contain a large number of face images but are lacking in the number of attribute annotations and in overall annotation correctness. ... three large-scale annotated face databases: LFW, CelebA, and MAAD-Face. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tifs.2021.3096120">doi:10.1109/tifs.2021.3096120</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lj4uhuzconfzxiuj6dcq4j2t3e">fatcat:lj4uhuzconfzxiuj6dcq4j2t3e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210711234348/https://ieeexplore.ieee.org/ielx7/10206/4358835/09478885.pdf?tp=&amp;arnumber=9478885&amp;isnumber=4358835&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/62/6c/626cda380d4ab86905ef65e6bb058e00f8f3095d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tifs.2021.3096120"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

MAAD-Face: A Massively Annotated Attribute Dataset for Face Images [article]

Philipp Terhörst, Daniel Fährmann, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper
<span title="2021-06-28">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Consequently, these databases contain a large number of face images but are lacking in the number of attribute annotations and in overall annotation correctness. ... In this work, we propose MAAD-Face, a new face annotation database characterized by the large number of its high-quality attribute annotations. ... three large-scale annotated face databases: LFW, CelebA, and MAAD-Face. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01030v2">arXiv:2012.01030v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6zcybjd2yvgshh3dcoeji5f5k4">fatcat:6zcybjd2yvgshh3dcoeji5f5k4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210728220046/https://arxiv.org/pdf/2012.01030v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/94/f4/94f43a464cdb0fb83f09c094e78a06e2d2fa8607.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01030v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

AI Challenger : A Large-scale Dataset for Going Deeper in Image Understanding [article]

Jiahong Wu, He Zheng, Bo Zhao, Yixin Li, Baoming Yan, Rui Liang, Wenjia Wang, Shipei Zhou, Guosen Lin, Yanwei Fu, Yizhou Wang, Yonggang Wang
<span title="2017-11-17">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Significant progress has been achieved in Computer Vision by leveraging large-scale image datasets. ... This paper proposes a large-scale dataset named AIC (AI Challenger) with three sub-datasets: human keypoint detection (HKD), large-scale attribute dataset (LAD), and image Chinese captioning (ICC). ... The second approach retrieves visually similar images from a large database and then transfers the captions of the retrieved images to fit the query image. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1711.06475v1">arXiv:1711.06475v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qrwvpy4rwfehvnwfngqvasd72a">fatcat:qrwvpy4rwfehvnwfngqvasd72a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191012072034/https://arxiv.org/pdf/1711.06475v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d8/b3/d8b3aafb25c235be5c62da07881807872ac3e831.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1711.06475v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Automatic tagging and retrieval of E-Commerce products based on visual features

Vasu Sharma, Harish Karnick
<span title="">2016</span> <i title="Association for Computational Linguistics"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/d5ex6ucxtrfz3clshlkh3f6w2q" style="color: black;">Proceedings of the NAACL Student Research Workshop</a> </i> &nbsp;
In this paper we propose one such approach, based on feature extraction using Deep Convolutional Neural Networks, to learn descriptive semantic features from product images. ... Hence a scalable approach that caters to such a large number of product images and allocates meaningful tags is essential, and could be used to build an efficient tag-based product retrieval system. ... Besides, we directly treat tag annotation as a multi-label problem, allowing our approach to scale to a large number of tag categories. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/n16-2004">doi:10.18653/v1/n16-2004</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/naacl/SharmaK16.html">dblp:conf/naacl/SharmaK16</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/mjk3jnfdn5fqfhchpvkmqrhfta">fatcat:mjk3jnfdn5fqfhchpvkmqrhfta</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200508034454/https://www.aclweb.org/anthology/N16-2004.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0f/96/0f96459447373a32b7cbf40ddcdec9ae5620a4d5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/n16-2004"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

One-Shot Learning of Scene Locations via Feature Trajectory Transfer

Roland Kwitt, Sebastian Hegenbart, Marc Niethammer
<span title="">2016</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
... transferred to new image representations. ... This enables us to synthesize new data along the transferred trajectories with respect to the dimensions of the space spanned by the transient attributes. ... This work has been supported, in part, by the Austrian Science Fund (FWF KLI project 429) and the NSF grant ECCS-1148870. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2016.16">doi:10.1109/cvpr.2016.16</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/KwittHN16.html">dblp:conf/cvpr/KwittHN16</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ntzne4d7ljcdxjgwbq3r5lo5bu">fatcat:ntzne4d7ljcdxjgwbq3r5lo5bu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20161213092826/http://www.cv-foundation.org:80/openaccess/content_cvpr_2016/papers/Kwitt_One-Shot_Learning_of_CVPR_2016_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8a/d3/8ad322faec79ab46dce19657643560d1d35517f0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2016.16"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Unsupervised Category Discovery via Looped Deep Pseudo-Task Optimization Using a Large Scale Radiology Image Database [article]

Xiaosong Wang, Le Lu, Hoo-chang Shin, Lauren Kim, Isabella Nogues, Jianhua Yao, Ronald Summers
<span title="2016-03-25">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Obtaining semantic labels for a large-scale radiology image database (215,786 key images from 61,845 unique patients) is a prerequisite for, yet a bottleneck to, training highly effective deep convolutional neural networks ... This allows for further investigation of the hierarchical semantic nature of the given large-scale radiology image database. ... By learning visually coherent and class-balanced labels through LDPO, we expect that the studied large-scale radiology image database can markedly improve its feasibility in domain transfer to specific ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1603.07965v1">arXiv:1603.07965v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vkwg4itbhfavnhwfzvsiqmzxpq">fatcat:vkwg4itbhfavnhwfzvsiqmzxpq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200924121753/https://arxiv.org/pdf/1603.07965v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/70/dd/70dd83472314201bb49897f01f1e1894b58f360a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1603.07965v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Recognizing Fine-Grained and Composite Activities Using Hand-Centric Features and Script Data

Marcus Rohrbach, Anna Rohrbach, Michaela Regneri, Sikandar Amin, Mykhaylo Andriluka, Manfred Pinkal, Bernt Schiele
<span title="2015-08-22">2015</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hfdglwo5wbbmta6wop52fam7a4" style="color: black;">International Journal of Computer Vision</a> </i> &nbsp;
However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. ... The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. ... Acknowledgments: This work was supported by a fellowship within the FITweltweit-Programme of the German Academic Exchange Service (DAAD), by the Cluster of Excellence "Multimodal Computing and Interaction ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11263-015-0851-8">doi:10.1007/s11263-015-0851-8</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2xck42za7va2dldyvmyfaqrzsq">fatcat:2xck42za7va2dldyvmyfaqrzsq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200924031806/https://arxiv.org/pdf/1502.06648v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3c/20/3c20f8b8efc2226c2a98343d0bf13644b33eba44.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11263-015-0851-8"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network [article]

Dimitrios Kollias and Viktoriia Sharmanska and Stefanos Zafeiriou
<span title="2020-05-29">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
However, it is only recently, with the collection of large-scale datasets and powerful machine learning methods such as deep neural networks, that automatic facial behavior analysis has started to thrive. ... For this we utilize all publicly available datasets in the community (around 5M images) that study facial behaviour tasks in-the-wild. ... The EmotioNet database [15] is a large-scale database with around 1M facial expression images; 950K images were automatically annotated and the remaining 50K images were manually annotated with 11 AUs ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.11111v3">arXiv:1910.11111v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2n2xtyge5fawxj7vlbujkqgidq">fatcat:2n2xtyge5fawxj7vlbujkqgidq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200602013058/https://arxiv.org/pdf/1910.11111v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.11111v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Detecting Smiles of Young Children via Deep Transfer Learning

Yu Xia, Di Huang, Yunhong Wang
<span title="">2017</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6s36fqp6q5hgpdq2scjq3sfu6a" style="color: black;">2017 IEEE International Conference on Computer Vision Workshops (ICCVW)</a> </i> &nbsp;
Thanks to DAN and JAN, knowledge learned by deep models from adults can be transferred to infants, for whom very limited labeled data are available for training. ... However, the challenge posed by age variations has received little attention so far. ... CelebA is a large-scale face database containing 202,599 images of 10,177 identities. Each image in CelebA is annotated with 40 facial attributes, one of which is Smile/Non-Smile. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccvw.2017.196">doi:10.1109/iccvw.2017.196</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/iccvw/Xia0W17.html">dblp:conf/iccvw/Xia0W17</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/i7rxm7sfynaq5hsyn7ximx6qde">fatcat:i7rxm7sfynaq5hsyn7ximx6qde</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190819021609/http://openaccess.thecvf.com:80/content_ICCV_2017_workshops/papers/w23/Xia_Detecting_Smiles_of_ICCV_2017_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/45/82/45824905119ec09447d60e1809434062d5f4c1e4.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccvw.2017.196"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1–15 of 22,732 (8.4 sec)