2,494 Hits in 4.3 sec

DEFINE: Friendship Detection Based on Node Enhancement [chapter]

Hanxiao Pan, Teng Guo, Hayat Dino Bedru, Qing Qing, Dongyu Zhang, Feng Xia
2020 · Lecture Notes in Computer Science (Springer International Publishing)
To bridge the gap, in this paper, we propose a Deep Incomplete Network Embedding method, namely DINE.  ...  Specifically, we first complete the missing part including both nodes and edges in a partially observable network by using the expectation-maximization framework.  ...  To solve the problem, we present a new framework, named DINE for deep incomplete network embedding. DINE intelligently combines network completion and NRL into a unified framework.  ... 
doi:10.1007/978-3-030-39469-1_7 · fatcat:pzflvx2kdzgrvdhnz2zxmuhm6y
Fulltext PDF (Web Archive, not primary version): https://web.archive.org/web/20200904173314/https://arxiv.org/pdf/2008.06311v1.pdf

What's in a word? Contextual diversity, urban ethnography and the linguistic limits of the street

Nick Dines
2018-04-17 · Sociological Review (SAGE Publications)
fading one) for the industrial proletariat (Dines 2015: 81–82). ... The fact that Rettifilo and many other unofficial monikers for the spaces of the Risanamento ... For an in-depth discussion of my research on Piazza Plebiscito, see Dines 2012: 114–168. ...
doi:10.1177/0038026118771289 · fatcat:iu5tkf3t6fb7xddcg2ojf7b6ui
Fulltext PDF (Web Archive): https://web.archive.org/web/20190501221934/http://cadmus.eui.eu/bitstream/handle/1814/60228/Dines_Whats%20in%20a%20word_post-print.pdf;jsessionid=13A4D9DAC09AE2E25E8D8F4ECEF861F3?sequence=2

Image-Driven Furniture Style for Interactive 3D Scene Modeling [article]

Tomer Weiss, Ilkay Yildiz, Nitin Agarwal, Esra Ataer-Cansizoglu, Jae-Woo Choi
2020-10-20 · arXiv (pre-print)
To understand style, we train a deep learning network on a classification task. Based on image embeddings extracted from our network, we measure stylistic compatibility of furniture.  ...  We propose a method for fast-tracking style-similarity tasks, by learning a furniture's style-compatibility from interior scene images.  ...  deep neural network (Section 3.3).  ... 
arXiv:2010.10557v1 · fatcat:v4xfxonx35axxnyp7utgxq24uy
Fulltext PDF (Web Archive): https://web.archive.org/web/20201025112308/https://arxiv.org/pdf/2010.10557v1.pdf

MOLTR: Multiple Object Localisation, Tracking, and Reconstruction from Monocular RGB Videos [article]

Kejie Li, Hamid Rezatofighi, Ian Reid
2021-02-15 · arXiv (pre-print)
Given a new RGB frame, MOLTR firstly applies a monocular 3D detector to localise objects of interest and extract their shape codes that represent the object shapes in a learned embedding space.  ...  We evaluate localisation, tracking, and reconstruction on benchmarking datasets for indoor and outdoor scenes, and show superior performance over previous approaches.  ...  ACKNOWLEDGMENT We gratefully acknowledge the support of the Australian Research Council through the Centre of Excellence for Robotic Vision CE140100016 and Laureate Fellowship FL130100102  ... 
arXiv:2012.05360v2 · fatcat:h7yhnpd6ynft7kdqwcuu6xwqcq
Fulltext PDF (Web Archive): https://web.archive.org/web/20210217105953/https://arxiv.org/pdf/2012.05360v2.pdf

Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction [article]

Manfred Eppe, Sean Trott, Jerome Feldman
2016-04-22 · arXiv (pre-print)
We develop a natural language interface for human robot interaction that implements reasoning about deep semantics in natural language.  ...  We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey on the state of the art.  ...  We believe that the lack of mature computational systems for deep NLU is a major reason for this shortcoming.  ... 
arXiv:1604.06721v1 · fatcat:n6zfaafncrb35dhrx4vgpei7qa
Fulltext PDF (Web Archive): https://web.archive.org/web/20200825012910/https://arxiv.org/pdf/1604.06721v1.pdf

CoupleNet: Coupling Global Structure with Local Parts for Object Detection

Yousong Zhu, Chaoyang Zhao, Jinqiao Wang, Xu Zhao, Yi Wu, Hanqing Lu
2017 · 2017 IEEE International Conference on Computer Vision (ICCV), IEEE
To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection  ...  The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the  ...  As shown in Figure 3 (b), our global FCN shows a large confidence for the dining table.  ... 
doi:10.1109/iccv.2017.444 · dblp:conf/iccv/ZhuZWZWL17 · fatcat:cgtnfznnzne5pe75fy5gclj23u
Fulltext PDF (Web Archive): https://web.archive.org/web/20200309220928/https://scholarworks.iupui.edu/bitstream/handle/1805/17392/zhu_2017_couplenet.pdf;jsessionid=C8D82A37F992C20C4CC73EB6537B8348?sequence=1

CoupleNet: Coupling Global Structure with Local Parts for Object Detection [article]

Yousong Zhu, Chaoyang Zhao, Jinqiao Wang, Xu Zhao, Yi Wu, Hanqing Lu
2017-08-09 · arXiv (pre-print)
To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection  ...  The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the  ...  As shown in Figure 3 (b), our global FCN shows a large confidence for the dining table.  ... 
arXiv:1708.02863v1 · fatcat:ipuq73vvozgebbg4k7ql5sdqpy
Fulltext PDF (Web Archive): https://web.archive.org/web/20191023205101/https://arxiv.org/pdf/1708.02863v1.pdf

A Novel Technique for Evidence based Conditional Inference in Deep Neural Networks via Latent Feature Perturbation [article]

Dinesh Khandelwal, Suyash Agrawal, Parag Singla, Chetan Arora
2019-12-06 · arXiv (pre-print)
Designing such a network, as well as collecting jointly labeled data for training is a non-trivial task.  ...  Multi-modal techniques in Deep Neural Networks (DNNs) can be seen as perturbing the latent feature representation for incorporating evidence from the auxiliary modality.  ...  Ours-ES is able to detect dining table as compared to MR-MT. The dining table is labeled in the ground truth with incomplete segmentation.  ... 
arXiv:1811.09796v6 · fatcat:uwkvzn6xtffvddo66yf7zxbqtq
Fulltext PDF (Web Archive): https://web.archive.org/web/20200824014151/https://arxiv.org/pdf/1811.09796v6.pdf

Multi-Label Zero-Shot Learning with Transfer-Aware Label Embedding Projection [article]

Meng Ye, Yuhong Guo
2018-08-07 · arXiv (pre-print)
... while simultaneously learning a max-margin multi-label classifier with the projected label embeddings. ... In this paper we propose a transfer-aware embedding projection approach to tackle multi-label zero-shot learning. ... Despite the advances in the development of supervised learning techniques such as deep neural network models, the conventional supervised learning setting requires a large number of labelled ...
arXiv:1808.02474v1 · fatcat:dov2w7ofbvdg3kdfiprkb5sm3i
Fulltext PDF (Web Archive): https://web.archive.org/web/20191025052607/https://arxiv.org/pdf/1808.02474v1.pdf

Interact as You Intend: Intention-Driven Human-Object Interaction Detection [article]

Bingjie Xu, Junnan Li, Yongkang Wong, Mohan S. Kankanhalli, Qi Zhao
2019-09-22 · arXiv (pre-print)
The recent advances in instance-level detection tasks lay strong foundation for genuine comprehension of the visual scenes. ... Specifically, the proposed human intention-driven HOI detection (iHOI) framework models human pose with the relative distances from body joints to the object instances. ...
arXiv:1808.09796v2 · fatcat:rae6hhktzrfjbcj5d4nkeguxem
Fulltext PDF (Web Archive): https://web.archive.org/web/20200919004329/https://arxiv.org/pdf/1808.09796v2.pdf

Rule-Enhanced Active Learning for Semi-Automated Weak Supervision

David Kartchner, Davi Nakajima An, Wendi Ren, Chao Zhang, Cassie S. Mitchell
2022-03-16 · AI (MDPI AG)
A major bottleneck preventing the extension of deep learning systems to new domains is the prohibitive cost of acquiring sufficient training labels. ... REGAL (Rule-Enhanced Generative Active Learning) is an improved framework for weakly supervised text classification that performs active learning over labeling functions rather than individual instances ... Training a robust deep learning model generally requires on the order of 10,000+ training examples [1, 2]. ...
doi:10.3390/ai3010013 · fatcat:cf4765c2mrb2lkxkvfdolydio4
Fulltext PDF (Web Archive): https://web.archive.org/web/20220321111104/https://mdpi-res.com/d_attachment/ai/ai-03-00013/article_deploy/ai-03-00013.pdf

VISIR

Sreyasi Nag Chowdhury, Niket Tandon, Hakan Ferhatosmanoglu, Gerhard Weikum
2018 · Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM '18), ACM Press
We consider the semantic coherence between the labels for different objects, leverage lexical and commonsense knowledge, and cast the label assignment into a constrained optimization problem solved by  ...  The social media explosion has populated the Internet with a wealth of images.  ...  TBIR-style tags inferred from query-and-click logs can be used to train a deep-learning network for more informative labels towards better CBIR.  ... 
doi:10.1145/3159652.3159693 · dblp:conf/wsdm/ChowdhuryTFW18 · fatcat:4frchlofofd63o4fdo3utn7zoa
Fulltext PDF (Web Archive): https://web.archive.org/web/20180720042830/http://wrap.warwick.ac.uk/95881/7/WRAP-VISIR-visual-semantic-image-label-refinement-Ferhatosmanoglu-2017.pdf

Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering [chapter]

Arun Mallya, Svetlana Lazebnik
2016 · Lecture Notes in Computer Science (Springer International Publishing)
In this paper, we propose a convolutional deep network model which utilizes local and global context through feature fusion to make human activity label predictions and achieve state-of-the-art performance ... The MIL framework has been widely used in computer vision in problems where training data is often weakly supervised or incompletely labeled such as object detection [21, 22], semantic segmentation ... We train deep networks on the HICO [14] and MPII [15] datasets to predict human activity. ...
doi:10.1007/978-3-319-46448-0_25 · fatcat:ufqf7mklofgzrnpjqvcsrf7u7y
Fulltext PDF (Web Archive): https://web.archive.org/web/20190218110118/http://pdfs.semanticscholar.org/0ae7/4fabc585cfd1cf60ea3f9e218c59a4539091.pdf

Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering [article]

Arun Mallya, Svetlana Lazebnik
2016-07-28 · arXiv (pre-print)
This paper proposes deep convolutional network models that utilize local and global context to make human activity label predictions in still images, achieving state-of-the-art performance on two recent  ...  Unusual use-cases of an object such as swinging around a backpack can confuse the deep network into misclassifying the object as in the leftmost image.  ...  Accordingly, our network gives a high score for skateboard-related activities, and a much lower score for the bicyclist in the background.  ... 
arXiv:1604.04808v2 · fatcat:vypzttecqjdlphrurpabqbv27e
Fulltext PDF (Web Archive): https://web.archive.org/web/20191018064837/https://arxiv.org/pdf/1604.04808v2.pdf

Indoor Scene Recognition in 3D [article]

Shengyu Huang, Mikhail Usvyatsov, Konrad Schindler
2020-07-02 · arXiv (pre-print)
For instance, for a robot operating in indoors it is helpful to be aware whether it is in a kitchen, a hallway or a bedroom.  ...  In a series of ablation studies, we show that successful scene recognition is not just the recognition of individual objects unique to some scene type (such as a bathtub), but depends on several different  ...  Basic 3D learning framework We treat scene recognition as a supervised classification problem and solve it with a neural network.  ... 
arXiv:2002.12819v2 · fatcat:b34j3cpdbrbeldyzw7p7etfxuq
Fulltext PDF (Web Archive): https://web.archive.org/web/20200710052202/https://arxiv.org/pdf/2002.12819v2.pdf
Showing results 1–15 of 2,494