14 Hits in 4.9 sec

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection [article]

Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger
2017-06-15 · arXiv · pre-print
We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes.  ...  Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.  ...  Eduardo Cordeiro who contributed to the annotation of the dataset and to Ms. Marcia Baptista and Mr. Sergi Caelles for their feedback. The work was partially supported by Prof.  ... 
arXiv:1706.03038v2 · fatcat:hpfgoe2hnzazhapyc6qbfvosse
Fulltext PDF: https://web.archive.org/web/20191021062002/https://arxiv.org/pdf/1706.03038v2.pdf · https://arxiv.org/abs/1706.03038v2

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger
2017 · 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes.  ...  Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.  ...  Eduardo Cordeiro who contributed to the annotation of the dataset and to Ms. Marcia Baptista and Mr. Sergi Caelles for their feedback. The work was partially supported by Prof.  ... 
doi:10.1109/cvprw.2017.267 · dblp:conf/cvpr/BarekatainMSMNM17 · fatcat:qsnokx3i6zbzjmrzqe6dv6a564
Fulltext PDF: https://web.archive.org/web/20190819010349/http://openaccess.thecvf.com:80/content_cvpr_2017_workshops/w34/papers/Barekatain_Okutama-Action_An_Aerial_CVPR_2017_paper.pdf · https://doi.org/10.1109/cvprw.2017.267

Convolutional Neural Networks for Aerial Multi-Label Pedestrian Detection [article]

Amir Soleimani, Nasser M. Nasrabadi
2018-07-16 · arXiv · pre-print
The low resolution of objects of interest in aerial images makes pedestrian detection and action detection extremely challenging tasks.  ...  Second, another deep network is used to learn a latent common sub-space that associates the high-resolution aerial imagery with the pedestrian action labels provided by the human-based sources  ...  Okutama-Action [4] is an aerial-view concurrent human action detection dataset. It contains 43 fully-annotated sequences of 12 human action classes.  ... 
arXiv:1807.05983v1 · fatcat:z5nman7hbbagnjohqkvejwbwvq
Fulltext PDF: https://web.archive.org/web/20200907152937/https://arxiv.org/pdf/1807.05983v1.pdf · https://arxiv.org/abs/1807.05983v1

Multiple Human Tracking using Multi-Cues including Primitive Action Features [article]

Hitoshi Nishimura, Kazuyuki Tasaka, Yasutomo Kawanishi, Hiroshi Murase
2019-09-18 · arXiv · pre-print
Accurate human tracking using PAF helps multi-frame-based action recognition. In the experiments, we verified the effectiveness of the proposed method on the Okutama-Action dataset.  ...  MHT-PAF can perform accurate human tracking in dynamic aerial videos captured by a drone. PAF employs global context, rich information from multi-label actions, and a middle-level feature.  ...  We used the Okutama-Action dataset [25], which is an aerial-view concurrent human action detection dataset.  ... 
arXiv:1909.08171v1 · fatcat:jx63aqne6verdllfie42n7t3xa
Fulltext PDF: https://web.archive.org/web/20200901020037/https://arxiv.org/pdf/1909.08171v1.pdf · https://arxiv.org/abs/1909.08171v1

UAV-GESTURE: A Dataset for UAV Control and Gesture Recognition [article]

Asanka G Perera, Yee Wei Law, Javaan Chahl
2019-01-09 · arXiv · pre-print
Currently, there is no outdoor-recorded public video dataset for UAV commanding signals.  ...  Current UAV-recorded datasets are mostly limited to action recognition and object tracking, whereas gesture-signal datasets were mostly recorded in indoor spaces.  ...  A 4K-resolution video dataset called Okutama-Action was introduced in [1] for concurrent action detection by multiple subjects.  ... 
arXiv:1901.02602v1 · fatcat:hkwx3qn2w5e2rhlohpn2cujmh4
Fulltext PDF: https://web.archive.org/web/20200904015344/https://arxiv.org/pdf/1901.02602v1.pdf · https://arxiv.org/abs/1901.02602v1

A Multiviewpoint Outdoor Dataset for Human Action Recognition

Asanka G. Perera, Yee Wei Law, Titilayo T. Ogunwa, Javaan Chahl
2020 · IEEE Transactions on Human-Machine Systems
Owing to the articulated nature of the human body, it is challenging to detect an action from multiple viewpoints, particularly from an aerial viewpoint.  ...  The dataset consists of 20 dynamic human action classes, 2324 video clips and 503,086 frames.  ...  We thank Anoop Cherian for his help with our kernelized rank pooling implementation.  ... 
doi:10.1109/thms.2020.2971958 · fatcat:q4gs4twbyjbsdgsc7rqc5zbfhy
Fulltext PDF (not primary version): https://web.archive.org/web/20211013224029/https://arxiv.org/pdf/2110.04119v1.pdf · https://doi.org/10.1109/thms.2020.2971958

Drone-Action: An Outdoor Recorded Drone Video Dataset for Action Recognition

Asanka G. Perera, Yee Wei Law, Javaan Chahl
2019-11-28 · Drones (MDPI)
Aerial human action recognition is an emerging topic in drone applications. Commercial drone platforms capable of detecting basic human actions such as hand gestures have been developed.  ...  However, a limited number of aerial video datasets are available to support increased research into aerial human action analysis.  ...  The 4K-resolution Okutama-Action [16] video dataset was introduced to detect 12 concurrent actions by multiple subjects. The dataset was recorded in a baseball field using two drones.  ... 
doi:10.3390/drones3040082 · fatcat:pfxcxeesmbbpzg2jtyibngnqe4
Fulltext PDF: https://web.archive.org/web/20191207212338/https://res.mdpi.com/d_attachment/drones/drones-03-00082/article_deploy/drones-03-00082.pdf · https://doi.org/10.3390/drones3040082

Background Invariant Faster Motion Modeling for Drone Action Recognition

Ketan Kotecha, Deepak Garg, Balmukund Mishra, Pratik Narang, Vipul Kumar Mishra
2021-08-31 · Drones (MDPI)
One of the fundamental challenges in recognizing human actions in crowd-monitoring videos is the precise modeling of an individual's motion features.  ...  Of the few datasets proposed recently, most have multiple humans performing different actions in the same scene, as in a crowd-monitoring video, and hence are not suitable for directly applying to  ...  Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection [13]: 0.18 mAP@0.50IOU. 1. An SSD-based object detection model is used for object detection and action recognition. 2.  ... 
doi:10.3390/drones5030087 · fatcat:amtund2gajbftkzunyyuphipr4
Fulltext PDF: https://web.archive.org/web/20210910112249/https://mdpi-res.com/d_attachment/drones/drones-05-00087/article_deploy/drones-05-00087-v2.pdf · https://doi.org/10.3390/drones5030087

Vision Meets Drones: A Challenge [article]

Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Ling, Qinghua Hu
2018-04-23 · arXiv · pre-print
In particular, we design four popular tasks with the benchmark, including object detection in images, object detection in videos, single object tracking, and multi-object tracking.  ...  All these tasks are extremely challenging in the proposed dataset due to factors such as occlusion, large scale and pose variation, and fast motion.  ...  Barekatain et al. [21] present a new Okutama-Action dataset for concurrent human action detection from the aerial view. The dataset includes 43 minute-long fully-annotated sequences with 12 action classes.  ... 
arXiv:1804.07437v2 · fatcat:ai4czdtqg5bwrp6xxykdr62yi4
Fulltext PDF: https://web.archive.org/web/20200827030858/https://arxiv.org/pdf/1804.07437v2.pdf · https://arxiv.org/abs/1804.07437v2

Real-Time Multiple Object Tracking - A Study on the Importance of Speed [article]

Samuel Murray
2017-10-02 · arXiv · pre-print
By running our model on the Okutama-Action dataset, sampled at different frame-rates, we show that the performance is greatly reduced when running the model - including detecting objects - in real-time  ...  In this project, we implement a multiple object tracker, following the tracking-by-detection paradigm, as an extension of an existing method.  ...  Okutama-Action [11] A recently published dataset is Okutama-Action, which consists of aerial-view video captured from UAVs.  ... 
arXiv:1709.03572v2 · fatcat:adfhseknyregbnlsyrr2lijysq
Fulltext PDF: https://web.archive.org/web/20200913151645/https://arxiv.org/pdf/1709.03572v2.pdf · https://arxiv.org/abs/1709.03572v2

Secured Perimeter with Electromagnetic Detection and Tracking with Drone Embedded and Static Cameras

Pedro Teixidó, Juan Antonio Gómez-Galán, Rafael Caballero, Francisco J. Pérez-Grau, José M. Hinojo-Montero, Fernando Muñoz-Chavero, Juan Aponte
2021-11-06 · Sensors (MDPI)
powered off except when an event has been detected.  ...  human supervision.  ...  In the case of the public dataset, the Okutama Action dataset [29] was used.  ... 
doi:10.3390/s21217379 · pmid:34770685 · pmcid:PMC8587886 · fatcat:jc7wag2y3bfefhq6k3mbukfe5q
Fulltext PDF: https://web.archive.org/web/20211109184707/https://mdpi-res.com/d_attachment/sensors/sensors-21-07379/article_deploy/sensors-21-07379.pdf · https://doi.org/10.3390/s21217379 · https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8587886

Detection and Tracking Meet Drones Challenge [article]

Pengfei Zhu, Longyin Wen, Dawei Du, Xiao Bian, Heng Fan, Qinghua Hu, Haibin Ling
2021-10-04 · arXiv · pre-print
We provide a large-scale drone-captured dataset, VisDrone, which includes four tracks, i.e., (1) image object detection, (2) video object detection, (3) single object tracking, and (4) multi-object tracking.  ...  Being the largest such dataset ever published, VisDrone enables extensive evaluation and investigation of visual analysis algorithms for the drone platform.  ...  ACKNOWLEDGEMENTS We would like to thank Jiayu Zheng and Tao Peng for valuable and constructive suggestions to improve the quality of this paper.  ... 
arXiv:2001.06303v3 · fatcat:q2nekdwiz5gulaaa4b66o6khhy
Fulltext PDF: https://web.archive.org/web/20211006055732/https://arxiv.org/pdf/2001.06303v3.pdf · https://arxiv.org/abs/2001.06303v3

Unsupervised Visual Knowledge Discovery and Accumulation in Dynamic Environments

Ziyin Wang
2019-11-13
Extensive experiments on prevalent aerial video datasets showed that the approaches efficiently and accurately discover salient ground objects.  ...  We give strong provable guarantees of the clustering accuracy from a statistical view.  ...  datasets: UCF Aerial Action Dataset, UCLA Aerial Event Dataset [81], and Okutama Action Dataset [115].  ... 
doi:10.25394/pgs.10298894.v1 · fatcat:hdtebb7vl5herommg75amje7le
Fulltext PDF: https://web.archive.org/web/20200302163259/https://s3-eu-west-1.amazonaws.com/pstorage-purdue-258596361474/18716642/Purdue_University_Thesis_Ziyin_Wang.pdf · https://doi.org/10.25394/pgs.10298894.v1

Multimodal machine learning for intelligent mobility

Jamie Roche
2020-07-21
Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend the application beyond structured environments.  ...  While recent developments of data-driven solutions such as deep learning facilitate machines to learn effectively from large datasets, the application of techniques within safety-critical systems such  ...  in over 77 hours of video footage using a stereo camera. The Okutama-Action dataset [278] (open-source, unstructured, 2017) was captured using a stereo camera from an aerial view.  ... 
doi:10.26174/thesis.lboro.12245483.v1 · fatcat:bf4jrnf54bevra6v7sjf76i6ye
Fulltext PDF: https://web.archive.org/web/20200728210137/https://s3-eu-west-1.amazonaws.com/pstorage-loughborough-53465/22527383/AJRocheMultimodalMachineLearningforIntelligentMobility.pdf · https://doi.org/10.26174/thesis.lboro.12245483.v1