39 Hits in 2.1 sec

Egocentric 6-DoF Tracking of Small Handheld Objects [article]

Rohit Pandey, Pavel Pidlypenskyi, Shuoran Yang, Christine Kaeser-Chen
<span title="2018-04-16">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Virtual and augmented reality technologies have seen significant growth in the past few years. A key component of such systems is the ability to track the pose of head-mounted displays and controllers in 3D space. We tackle the problem of efficient 6-DoF tracking of a handheld controller from egocentric camera perspectives. We collected the HMD Controller dataset, which consists of over 540,000 stereo image pairs labelled with the full 6-DoF pose of the handheld controller. Our proposed Stereo3D model achieves a mean average error of 33.5 millimeters in 3D keypoint prediction and is used in conjunction with an IMU sensor on the controller to enable 6-DoF tracking. We also present results on approaches for model-based full 6-DoF tracking. All our models operate under the strict constraints of real-time mobile CPU inference.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.05870v1">arXiv:1804.05870v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/77fv6el3fvgdnlct5umyzm6jke">fatcat:77fv6el3fvgdnlct5umyzm6jke</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928001825/https://arxiv.org/pdf/1804.05870v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f0/d1/f0d1a2b676de51ac7b22626f2a3e43b1c5cdb793.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.05870v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Real-time Egocentric Gesture Recognition on Mobile Head Mounted Displays [article]

Rohit Pandey, Marie White, Pavel Pidlypenskyi, Xue Wang, Christine Kaeser-Chen
<span title="2017-12-13">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Mobile virtual reality (VR) head-mounted displays (HMDs) have become popular among consumers in recent years. In this work, we demonstrate real-time egocentric hand gesture detection and localization on mobile HMDs. Our main contributions are: 1) a novel mixed-reality data collection tool to automatically annotate bounding boxes and gesture labels; 2) the largest-to-date egocentric hand gesture and bounding box dataset, with more than 400,000 annotated frames; 3) a neural network that runs in real time on modern mobile CPUs and achieves higher than 76% precision on gesture recognition across 8 classes.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1712.04961v1">arXiv:1712.04961v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ib6zldqqgjdofh6izbxmkx2f2i">fatcat:ib6zldqqgjdofh6izbxmkx2f2i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200903162604/https://arxiv.org/pdf/1712.04961v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b9/e4/b9e4a7998738a33253cc3668d4c166f38ff9fe44.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1712.04961v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Neural Naturalist: Generating Fine-Grained Image Comparisons [article]

Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge Belongie
<span title="2019-11-14">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds. The language collected is highly detailed, while remaining understandable to the everyday observer (e.g., "heart-shaped face," "squat body"). Paragraph-length descriptions naturally adapt to varying levels of taxonomic and visual distance, drawn from a novel stratified sampling approach, with the appropriate level of detail. We propose a new model called Neural Naturalist that uses a joint image encoding and comparative module to generate comparative language, and evaluate the results with humans who must use the descriptions to distinguish real images. Our results indicate promising potential for neural models to explain differences in visual embedding space using natural language, as well as a concrete path for machine learning to aid citizen scientists in their effort to preserve biodiversity.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.04101v3">arXiv:1909.04101v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gxjjcg6ox5bpdpmtwowsbcd3qa">fatcat:gxjjcg6ox5bpdpmtwowsbcd3qa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200914025213/https://arxiv.org/pdf/1909.04101v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6d/43/6d4328deb18320173c78ea01455f1f23ded660de.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.04101v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Efficient 6-DoF Tracking of Handheld Objects from an Egocentric Viewpoint [chapter]

Rohit Pandey, Pavel Pidlypenskyi, Shuoran Yang, Christine Kaeser-Chen
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
Virtual and augmented reality technologies have seen significant growth in the past few years. A key component of such systems is the ability to track the pose of head-mounted displays and controllers in 3D space. We tackle the problem of efficient 6-DoF tracking of a handheld controller from egocentric camera perspectives. We collected the HMD Controller dataset, which consists of over 540,000 stereo image pairs labelled with the full 6-DoF pose of the handheld controller. Our proposed Stereo3D model achieves a mean average error of 33.5 millimeters in 3D keypoint prediction and is used in conjunction with an IMU sensor on the controller to enable 6-DoF tracking. We also present results on approaches for model-based full 6-DoF tracking. All our models operate under the strict constraints of real-time mobile CPU inference.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01216-8_26">doi:10.1007/978-3-030-01216-8_26</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/etsbvs2ldvdhbhpwfnw7m6szjm">fatcat:etsbvs2ldvdhbhpwfnw7m6szjm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180922011339/http://openaccess.thecvf.com:80/content_ECCV_2018/papers/Rohit_Pandey_Efficient_6-DoF_Tracking_ECCV_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/25/d6/25d6564a87f7637a99e0bcfaf1ad3768ea4ea22c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01216-8_26"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Neural Naturalist: Generating Fine-Grained Image Comparisons

Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge Belongie
<span title="">2019</span> <i title="Association for Computational Linguistics"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/u3ideoxy4fghvbsstiknuweth4" style="color: black;">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</a> </i> &nbsp;
We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds. The language collected is highly detailed, while remaining understandable to the everyday observer (e.g., "heart-shaped face," "squat body"). Paragraph-length descriptions naturally adapt to varying levels of taxonomic and visual distance, drawn from a novel stratified sampling approach, with the appropriate level of detail. We propose a new model called Neural Naturalist that uses a joint image encoding and comparative module to generate comparative language, and evaluate the results with humans who must use the descriptions to distinguish real images. Our results indicate promising potential for neural models to explain differences in visual embedding space using natural language, as well as a concrete path for machine learning to aid citizen scientists in their effort to preserve biodiversity.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1065">doi:10.18653/v1/d19-1065</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/emnlp/ForbesKSB19.html">dblp:conf/emnlp/ForbesKSB19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cggpdza7avfz7eto6khheqtd6e">fatcat:cggpdza7avfz7eto6khheqtd6e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191203111710/https://www.aclweb.org/anthology/D19-1065.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fd/9e/fd9e67b630d9696096c857f3215715b307424d4a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1065"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Training Machines to Identify Species using GBIF-mediated Datasets

Tim Robertson, Serge Belongie, Hartwig Adam, Christine Kaeser-Chen, Chenyang Zhang, Kiat Chuan Tan, Yulong Liu, Denis Brulé, Cédric Deltheil, Scott Loarie, Grant Van Horn, Oisin Mac Aodha (+4 others)
<span title="2019-06-19">2019</span> <i title="Pensoft Publishers"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/cvn3ubdu7vac5pnihczw5ugcsy" style="color: black;">Biodiversity Information Science and Standards</a> </i> &nbsp;
Advances in machine vision technology are rapidly enabling new and innovative uses within the field of biodiversity. Computers can now use images to identify tens of thousands of species across a wide range of taxonomic groups in real time, notably demonstrated by iNaturalist.org, which suggests species IDs to users as they create observation records (https://www.inaturalist.org/pages/computer_vision_demo). Soon it will be commonplace to detect species in video feeds or use the camera of a mobile device to search for species-related content on the Internet.

The Global Biodiversity Information Facility (GBIF) has an important role to play in advancing and improving this technology, whether in terms of data, collaboration across teams, or citation practice. But in the short term, the most important role may be to initiate a cultural shift in accepted practices for the use of GBIF-mediated data in training artificial intelligence (AI). "Training datasets" play a critical role in achieving species recognition capability in any machine vision system. These datasets compile representative images with explicit, verifiable identifications of the species they include. High-powered computers run algorithms on these training datasets, analysing the imagery and building complex models that characterize defining features for each species or taxonomic group. Researchers can, in turn, apply the resulting models to new images to determine what species or group they likely contain. Current research in machine vision is exploring (a) the use of location and date information to further improve model results, (b) identification beyond the species level into attribute-, character-, trait-, or part-level ID, with an eye toward human interpretability, and (c) expertise modeling for improved determination of "research grade" images and metadata.

The GBIF community has amassed one of the largest datasets of labelled species images available on the internet: more than 33 million species occurrence records on GBIF.org have one or more images (https://www.gbif.org/occurrence/gallery). Machine vision models, when integrated into the data collection tools in use across the GBIF network, can improve the user experience; in citizen science applications like iNaturalist, for example, automated species suggestion helps even novice users contribute occurrence records to GBIF. Perhaps most importantly, GBIF has implemented uniform (and open) data licensing, established guidelines on citation, and provided consistent methods for tracking data use through the Digital Object Identifier (DOI) citation chain. GBIF would like to build on the lessons learned in these activities while striving to assist with this technology research and increase its power and availability. We envisage an approach as follows. To assist in developing and refining machine vision models, GBIF plans to provide training datasets, taking care to ensure that license and citation practice are respected; the training datasets will be issued with a DOI, and the contributing datasets will be linked through the DOI citation graph. To assist application developers, Google and Visipedia plan to build and publish openly licensed models and tutorials for how to adapt them for localized use. Together we will strive to ensure that data is used responsibly and transparently, to close the gap between machine vision scientists, application developers, and users, and to share taxonomic trees capturing the taxon rank to which machine vision models can identify with confidence based on an image's visual characteristics.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3897/biss.3.37230">doi:10.3897/biss.3.37230</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a6uq2izunzfuzn3dqysdn5og3u">fatcat:a6uq2izunzfuzn3dqysdn5og3u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200210110724/https://biss.pensoft.net/lib/ajax_srv/generate_pdf.php?document_id=37230&amp;readonly_preview=1&amp;file_id=0" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0e/6e/0e6e6e34139ee91fae04e4fbf45146c877858e49.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3897/biss.3.37230"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Page 1684 of Psychological Abstracts Vol. 88, Issue 4 [page]

<span title="">2001</span> <i title="American Psychological Association"> <a target="_blank" rel="noopener" href="https://archive.org/details/pub_psychological-abstracts" style="color: black;">Psychological Abstracts </a> </i> &nbsp;
., 11553 Holland, Dwight, 12705 Hollick, Christine, 11530 Holliday, Stephen G., 9661 Hollin, Clive R., 11320 Hollins, Sheila, 12113 Hollis, Chris, 11143 Hollnagel, Hanne, 12053 Holloway, Richard L., 12706 ... Juraska, Janice M., 10240 Jurkovicoca, J., 11482 Jurkovicova, J., 10858 Jurkowlaniec, Edyta, 10138 Jusezyk, Peter W., 10468 Jüttner, Martin, 9831, 12770 Kaczmarek, Leszek, 10067 Kadar, Endre E., 9885 Kaeser ...
<span class="external-identifiers"> </span>
<a target="_blank" rel="noopener" href="https://archive.org/details/sim_psychological-abstracts_2001-04_88_4/page/1684" title="read fulltext microfilm" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Archive [Microfilm] <div class="menu fulltext-thumbnail"> <img src="https://archive.org/serve/sim_psychological-abstracts_2001-04_88_4/__ia_thumb.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a>

Unifying data for fine-grained visual species classification [article]

Sayali Kulkarni, Tomer Gadot, Chen Luo, Tanya Birch, Eric Fegraus
<span title="2020-09-24">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
ACKNOWLEDGMENTS We thank our collaborators Jonathan Huang, Christine Kaeser-Chen, Wildlife Insights partners, Katherine Chou, Sara Beery, Rebecca Moore, Karin Tuxen-Bettman for their contribution to the  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.11433v1">arXiv:2009.11433v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/stjx5xqdprdh5otcotqpoxoa6e">fatcat:stjx5xqdprdh5otcotqpoxoa6e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200926001435/https://arxiv.org/pdf/2009.11433v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.11433v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Index 2009-2010

<span title="2010-01-01">2010</span> <i title="Walter de Gruyter GmbH"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6kwjvoipzvgsplch4zf7ti3dje" style="color: black;">e-Neuroforum</a> </i> &nbsp;
Intracellular ion homeostasis and its impairment in hepatic encephalopathy (Tony Kelly and Christine Rosemarie Rose) 2/10, 181-188; Towards a cognitive neuroscience ... presented by Andreas Püschel, 2/10, 193-195; Dendritic organization of sensory input to cortical neurons in vivo (Jia, H., Rochefort, N.L., Chen, X. and Konnerth, A.), presented by Ulf Eysel, 3/10 ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1515/nf-2010-0414">doi:10.1515/nf-2010-0414</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/su7nbrn57rbp3adfojri7ums6i">fatcat:su7nbrn57rbp3adfojri7ums6i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180724224510/https://www.degruyter.com/downloadpdf/j/nf.2010.16.issue-4/nf-2010-0414/nf-2010-0414.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/34/e7/34e729e566b1ea2ff672ad773ccec7c9270c68f6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1515/nf-2010-0414"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> degruyter.com </button> </a>

Role of primary motor cortex in the control of manual dexterity assessed via sequential bilateral lesion in the adult macaque monkey: A case study

Julie Savidan, Mélanie Kaeser, Abderraouf Belhaj-Saïf, Eric Schmidlin, Eric M. Rouiller
<span title="">2017</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/l52eh66fdzhhbmbsefbbyzwkaq" style="color: black;">Neuroscience</a> </i> &nbsp;
Acknowledgments: The authors wish to thank Véronique Moret, Christine Roulin, and Christiane Marti for technical assistance (histology), and Laurent Bossy and Jacques Maillard (animal care taking), Andre ... Kaeser et al., 2010, 2011, 2013, 2014; Chatagny et al., 2013; Wyss et al., 2013). ... Bashir et al., 2012; Freund et al., 2006, 2009; Kaeser et al., 2010, 2011, 2013, 2014; Liu and Rouiller, 1999; Schmidlin et al., 2004; Schmidlin ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.neuroscience.2017.06.018">doi:10.1016/j.neuroscience.2017.06.018</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/28629845">pmid:28629845</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/z54iu3yeeranfcgnthmr5g7pva">fatcat:z54iu3yeeranfcgnthmr5g7pva</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180722060640/http://doc.rero.ch/record/305113/files/rou_rpm.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d3/d5/d3d5848618892b9b4e6fdc5ff0659ba47dc7f708.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.neuroscience.2017.06.018"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

The Herbarium Challenge 2019 Dataset [article]

Kiat Chuan Tan, Yulong Liu, Barbara Ambrose, Melissa Tulig, Serge Belongie
<span title="2019-06-15">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
... Guha and Kiran Panesar from dataCommons.org for help with preparing the dataset; Christine Kaeser-Chen and Hartwig Adam from Google Research; Maggie Demkin from Kaggle; the Herbarium Challenge 2019 competitors ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.05372v2">arXiv:1906.05372v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5jdi2kd4sbgkld6nwrfqtyzzv4">fatcat:5jdi2kd4sbgkld6nwrfqtyzzv4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191222113157/https://arxiv.org/pdf/1906.05372v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/41/ba/41ba1e4e37a8c6a29de492d073fabe674bce47b1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.05372v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Page 22 of Journal of Surveying Engineering Vol. 110, Issue Annual Combined Index [page]

<i title="American Society of Civil Engineers"> <a target="_blank" rel="noopener" href="https://archive.org/details/pub_journal-of-surveying-engineering" style="color: black;">Journal of Surveying Engineering </a> </i> &nbsp;
Chen, (Engineering Mechanics in Civil rr A.P. ... Nagy and Christine Wiita-Dworkin, ST Oct. 82, p2ien.-2174. Buckling of Coped Steel Beams, Ajaya K. Gupta, ST Sept. 84, p1977-1987. ...
<span class="external-identifiers"> </span>
<a target="_blank" rel="noopener" href="https://archive.org/details/sim_journal-of-surveying-engineering_1984_110_annual-combined-index/page/22" title="read fulltext microfilm" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Archive [Microfilm] <div class="menu fulltext-thumbnail"> <img src="https://archive.org/serve/sim_journal-of-surveying-engineering_1984_110_annual-combined-index/__ia_thumb.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a>

Page 120 of Psychological Abstracts Vol. 85, Issue Author Index [page]

<i title="American Psychological Association"> <a target="_blank" rel="noopener" href="https://archive.org/details/pub_psychological-abstracts" style="color: black;">Psychological Abstracts </a> </i> &nbsp;
., 7842 Kaeser, 8573 Kafka, Helen Kafka, John Kafka, Mart Kafle, K. ... Kao, Shu-Chen, 15648 Kao, Shu-Fen, 15680 Ka'opua, Lana S., 22046 Kapadia, Asha S., 16020 Kapadia, Asha, 12625 Kapadia, Shailesh, 30598 Kapardis, Andreas, 6303 Kape'ahiokalani, Maenette, 32112 Kapell, ...
<span class="external-identifiers"> </span>
<a target="_blank" rel="noopener" href="https://archive.org/details/sim_psychological-abstracts_1998_85_author-index/page/120" title="read fulltext microfilm" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Archive [Microfilm] <div class="menu fulltext-thumbnail"> <img src="https://archive.org/serve/sim_psychological-abstracts_1998_85_author-index/__ia_thumb.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a>

Geo-Aware Networks for Fine-Grained Recognition [article]

Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, Hartwig Adam
<span title="2019-09-04">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Acknowledgements We would like to thank Yanan Qian, Fred Fung, Christine Kaeser-Chen, Professor Serge Belongie, Chenyang Zhang, Grant Van Horn and Oisin Mac Aodha for their help and useful discussions.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.01737v2">arXiv:1906.01737v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/utxyfwqlobcsjf37t3bq2actee">fatcat:utxyfwqlobcsjf37t3bq2actee</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827100927/https://arxiv.org/pdf/1906.01737v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/20/43/2043e0af302a22c4ac5c6bdb4a0c84207f1b9a60.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.01737v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Geo-Aware Networks for Fine-Grained Recognition

Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, Hartwig Adam
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6s36fqp6q5hgpdq2scjq3sfu6a" style="color: black;">2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)</a> </i> &nbsp;
Acknowledgements We would like to thank Yanan Qian, Fred Fung, Christine Kaeser-Chen, Professor Serge Belongie, Chenyang Zhang, Grant Van Horn and Oisin Mac Aodha for their help and useful discussions.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccvw.2019.00033">doi:10.1109/iccvw.2019.00033</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/iccvw/ChuPWHSBLA19.html">dblp:conf/iccvw/ChuPWHSBLA19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ir4bv5bwd5dkxljsjwgkm6gmzm">fatcat:ir4bv5bwd5dkxljsjwgkm6gmzm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200709022403/https://openaccess.thecvf.com/content_ICCVW_2019/papers/CVWC/Chu_Geo-Aware_Networks_for_Fine-Grained_Recognition_ICCVW_2019_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8f/6c/8f6c23d7db59dce6920bcc2170e1c733353804eb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccvw.2019.00033"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1-15 of 39