







98,054 hits in 5.0 seconds

Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning [article]

Michael S. Ryoo, Kiyoon Kim, Hyun Jong Yang
<span title="2018-02-04">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper presents an approach for recognizing human activities from extreme low resolution (e.g., 16x12) videos.  ...  Our approach learns the shared embedding space that maps LR videos with the same content to the same location regardless of their transformations.  ...  This research was supported by the Tech Incubator Program for Startup Korea (TIPS), "deep learning-based low resolution video analysis," and the Miraeholdings grant funded by the Korean government (Ministry  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00999v2">arXiv:1708.00999v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/q55axmk2hfhhbiq2mu3erf7pwi">fatcat:q55axmk2hfhhbiq2mu3erf7pwi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191013184832/https://arxiv.org/pdf/1708.00999v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7b/e1/7be10245c4fe699626dd072cd538c0a2c6aff525.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00999v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Automatic Meeting Segmentation Using Dynamic Bayesian Networks

Alfred Dielmann, Steve Renals
<span title="">2007</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/sbzicoknnzc3tjljn7ifvwpooi" style="color: black;">IEEE transactions on multimedia</a> </i> &nbsp;
This results in an effective approach to the segmentation problem, resulting in an action error rate of 12.2%, compared with 43% using an approach based on hidden Markov models.  ...  His research interests concern multimodal signal processing and machine learning, in particular probabilistic graphical models for multiparty interaction modelling and natural language processing.  ...  This multistream model was implemented by considering the whole cartesian product of the two independent stream state spaces (HMMs).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tmm.2006.886337">doi:10.1109/tmm.2006.886337</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l6rtk7yjl5gqvmkaivahry2rey">fatcat:l6rtk7yjl5gqvmkaivahry2rey</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180721223123/http://www.cstr.ed.ac.uk/downloads/publications/2007/dielmann2007-tmm.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d5/47/d5474359fdbcd166673018406acf843c377cbec2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tmm.2006.886337"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Leaving Some Stones Unturned: Dynamic Feature Prioritization for Activity Detection in Streaming Video [article]

Yu-Chuan Su, Kristen Grauman
<span title="2016-04-01">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
in ongoing video.  ...  We propose a new active approach to activity recognition that prioritizes "what to compute when" in order to make timely predictions.  ...  APPENDIX A STREAMING RECOGNITION We show the confidence score improvement during recognition episodes with an 8 fps object detector speed in Sec-  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1604.00427v1">arXiv:1604.00427v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/m5dhizfdcrabnp3k7goac66qcu">fatcat:m5dhizfdcrabnp3k7goac66qcu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191014044544/https://arxiv.org/pdf/1604.00427v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d9/e5/d9e5375cfeb4e3d1490519a0b0ef5e8e7d779360.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1604.00427v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Active Learning for Online Recognition of Human Activities from Streaming Videos [article]

Rocco De Rosa, Ilaria Gori, Fabio Cuzzolin, Barbara Caputo, Nicolò Cesa-Bianchi
<span title="2016-04-11">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Recognising human activities from streaming videos poses unique challenges to learning algorithms: predictive models need to be scalable, incrementally trainable, and must remain bounded in size even when  ...  We present here an approach to the recognition of human actions from streaming data which meets all these requirements by: (1) incrementally learning a model which adaptively covers the feature space with  ...  . (6) Active Learning: in a streaming setting, the system needs to learn from each incoming video stream.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1604.02855v1">arXiv:1604.02855v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nkdr7qu5ondybptxtqshjp5jqq">fatcat:nkdr7qu5ondybptxtqshjp5jqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200831233923/https://arxiv.org/pdf/1604.02855v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c1/5f/c15f367a087a79bb0c430e9bedd6566b72ba5ac0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1604.02855v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Attention-Driven Body Pose Encoding for Human Activity Recognition [article]

B. Debnath, M. O'Brien, S. Kumar, A. Behera
<span title="2020-10-02">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Finally, the RGB video stream is combined with the fused body pose stream to give a novel end-to-end deep model for effective human activity recognition.  ...  In this paper, we propose a novel approach that learns enhanced feature representations from a given sequence of 3D body joints.  ...  The proposed model outperforms the state-of-the-art approaches on three challenging datasets. Typically, pose information consists of human joint positions in 2D/3D and is provided for each frame.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.14326v2">arXiv:2009.14326v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pbfnq5qsk5ajritj3apoz6cyiq">fatcat:pbfnq5qsk5ajritj3apoz6cyiq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201006233728/https://arxiv.org/pdf/2009.14326v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.14326v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Enrich multi-channel P2P VoD streaming based on dynamic replication strategy

K.T. Meena Abarna, T. Suresh
<span title="2020-06-01">2020</span> <i title="Institute of Advanced Engineering and Science"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ojahcxzn5ja27dfxbw3yqgfqee" style="color: black;">International Journal of Advances in Applied Sciences</a> </i> &nbsp;
To grow the streaming capacity, this paper presents an effective helper-based resource balancing scheme that actively recognizes the supply-and-demand imbalance across multiple  ...  Peer-to-Peer Video-on-Demand (VoD) is a favorable solution that offers thousands of videos to millions of users with a fully interactive video watching experience.  ...  A peer has to decide which video to remove from the local cache to make space for newly arrived data.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.11591/ijaas.v9.i2.pp110-116">doi:10.11591/ijaas.v9.i2.pp110-116</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qgqrezxc65et5pnsdv7ofrqkz4">fatcat:qgqrezxc65et5pnsdv7ofrqkz4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200516041217/http://ijaas.iaescore.com/index.php/IJAAS/article/download/20238/12876" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.11591/ijaas.v9.i2.pp110-116"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Scaling Human-Object Interaction Recognition in the Video through Zero-Shot Learning

Vali Ollah Maraghi, Karim Faez, Miguel Cazorla
<span title="2021-06-09">2021</span> <i title="Hindawi Limited"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/3wwzxqpotbc73bzpemzybzg7ee" style="color: black;">Computational Intelligence and Neuroscience</a> </i> &nbsp;
We propose an approach for scaling human-object interaction recognition in video data through the zero-shot learning technique to solve this problem.  ...  Recognition of human activities is an essential field in computer vision. Most human activities consist of interactions between humans and objects.  ...  Verb recognition, or activity understanding in general, is different in video space than in single-image space.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/9922697">doi:10.1155/2021/9922697</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/b6a73bphufcbzjssfyjocs4m4i">fatcat:b6a73bphufcbzjssfyjocs4m4i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210716004907/https://downloads.hindawi.com/journals/cin/2021/9922697.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2a/9c/2a9c5419d182c9d3e8b4c98e1cd2e23afda79998.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/9922697"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> hindawi.com </button> </a>

A Review on Machine Learning Algorithms on Human Action Recognition

Ankush Rai, Jagadeesh Kannan R
<span title="2017-04-01">2017</span> <i title="Innovare Academic Sciences Pvt Ltd"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/exe2cs2mnnd5xor5poxecc5qlu" style="color: black;">Asian Journal of Pharmaceutical and Clinical Research</a> </i> &nbsp;
We examine both the approaches developed for basic human actions and those for abnormal action states.  ...  Next, hierarchical recognition approaches for abnormal action states are introduced and examined.  ...  State model-based approaches: rather than representing human action as a succession of perceptions, state model-based methodologies learn a state model for every action, and every action is represented  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.22159/ajpcr.2017.v10s1.19977">doi:10.22159/ajpcr.2017.v10s1.19977</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pzroxovz75bkpilavebs2ig6fm">fatcat:pzroxovz75bkpilavebs2ig6fm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180720184606/https://innovareacademics.in/journals/index.php/ajpcr/article/download/19977/11864" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/26/3f/263fcb257df719fe5e316ab210de44a7fb55b6a2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.22159/ajpcr.2017.v10s1.19977"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Multimodal Integration for Meeting Group Action Segmentation and Recognition [chapter]

Marc Al-Hames, Alfred Dielmann, Daniel Gatica-Perez, Stephan Reiter, Steve Renals, Gerhard Rigoll, Dong Zhang
<span title="">2006</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
We compare three different multimodal feature sets and four modelling infrastructures: a higher semantic feature approach, multi-layer HMMs, a multistream DBN, as well as a multi-stream mixed-state DBN  ...  for disturbed data.  ...  Starting from this hypothesis we further subdivided the model state space according to the nature of features that are processed, modelling each feature stream independently (multistream approach).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/11677482_5">doi:10.1007/11677482_5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ran5xsrghfha5flhhof5vncwpm">fatcat:ran5xsrghfha5flhhof5vncwpm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170808093129/http://www.idiap.ch/ftp/reports/2005/mlmi-05-joint.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e9/72/e9729ba54f9486359b8eec229d5454c0cd2f967f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/11677482_5"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Hierarchical Hidden Markov Model in detecting activities of daily living in wearable videos for studies of dementia

Svebor Karaman, Jenny Benois-Pineau, Vladislavs Dovgalecs, Rémi Mégret, Julien Pinquier, Régine André-Obrecht, Yann Gaëstel, Jean-François Dartigues
<span title="2012-06-01">2012</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/7inqmh346zfjjizjyieh7ijtma" style="color: black;">Multimedia tools and applications</a> </i> &nbsp;
Our work introduces an automatic motion-based segmentation of the video and a video structuring approach in terms of activities by a hierarchical two-level Hidden Markov Model.  ...  This paper presents a method for indexing activities of daily living in videos obtained from wearable cameras.  ...  Amongst the variety of HMMs, hierarchical and segmental HMMs turned out to be the most popular for modeling activities in video streams.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11042-012-1117-x">doi:10.1007/s11042-012-1117-x</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/iilquwk3yffzjoal2qnnq4jf2e">fatcat:iilquwk3yffzjoal2qnnq4jf2e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170812114851/http://www.sveborkaraman.com/wp-content/papercite-data/pdf/karaman2012hierarchical.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/eb/4f/eb4f887b356f870af65909014dd502471350b5d5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11042-012-1117-x"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Knowledge-based detection of events in video streams from salient regions of activity

Nicolas Moënne-Loccoz, Eric Bruno, Stéphane Marchand-Maillet
<span title="">2004</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/at6h2du27zgp5hrmgtzl2rteea" style="color: black;">Pattern Analysis and Applications</a> </i> &nbsp;
Visual events occurring in video streams (such as human postures or more complex activities) are detected from a robust and generic region-based representation of the visual content and inferred using  ...  Occurrences of events, modelled as assertions of a language representing spatio-temporal relationships between facts, are in-  ...  Such points are extracted for each frame of the video stream. Figure 1 shows examples of salient points extracted in two different video sequences.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10044-004-0235-0">doi:10.1007/s10044-004-0235-0</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fwzbbzgkcfh35gybrmhc6lzgp4">fatcat:fwzbbzgkcfh35gybrmhc6lzgp4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20090709153133/http://viper.unige.ch/documents/pdf/moenneloccoz2004-paa.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/75/03/7503865d35257d7e33eb5d962ba831417fd83a47.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10044-004-0235-0"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Survey Paper on Bandwidth Estimation for Video Streaming

Sumant Deo
<span title="2016-08-20">2016</span> <i title="Valley International"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/7itxqhdltnewtn4ueymnlhnkea" style="color: black;">International Journal Of Engineering And Computer Science</a> </i> &nbsp;
To achieve smooth and high-quality video streaming, we define several actions and reward functions for each state, thus calculating the estimated bandwidth.  ...  In order to maintain high video streaming quality while reducing the wireless service cost, various approaches, such as improving bandwidth usage with adaptive algorithms, are devised.  ...  [1] A pull-based algorithm is required for video streaming, as shown in Fig. 1.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18535/ijecs/v5i1.07">doi:10.18535/ijecs/v5i1.07</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/t3ncw4zde5bath6yl6k756omva">fatcat:t3ncw4zde5bath6yl6k756omva</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180602042348/http://ijecs.in/issue/v5-i1/7%20ijecs.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/58/8a/588a0e063ba5389bfd0c951108bc2494dadca717.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18535/ijecs/v5i1.07"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Evolving Space-Time Neural Architectures for Videos [article]

AJ Piergiovanni, Anelia Angelova, Alexander Toshev, Michael S. Ryoo
<span title="2019-08-20">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present a new method for finding video CNN architectures that capture rich spatio-temporal information in videos.  ...  More importantly, they are both more accurate and faster than prior models, and outperform the state-of-the-art results on multiple datasets we test, including HMDB, Kinetics, and Moments in Time.  ...  Our approach discovers models which outperform the state-of-the-art on public datasets we tested, including HMDB, Kinetics, and Moments in Time.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.10636v2">arXiv:1811.10636v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vixgehilrng3po4r7gsqdh47pm">fatcat:vixgehilrng3po4r7gsqdh47pm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824165925/https://arxiv.org/pdf/1811.10636v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/79/3b/793ba81fa3bc728e9e35e0f82a8c4286d61d3aff.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.10636v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Active Incremental Recognition of Human Activities in a Streaming Context

Rocco De Rosa, Ilaria Gori, Fabio Cuzzolin, Nicolò Cesa-Bianchi
<span title="">2017</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6r4znskbk5h2ngu345slqsm6eu" style="color: black;">Pattern Recognition Letters</a> </i> &nbsp;
Furthermore, as tuning is problematic in a streaming setting, suitable approaches should be parameterless (as initially tuned parameter values may not prove optimal for future streams).  ...  Here, we present an approach to the recognition of human actions from streaming data which meets all these requirements by: (1) incrementally learning a model which adaptively covers the feature space  ...  Next, we illustrate the workflow of our approach in the specific case of activity recognition from streaming videos: 1.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patrec.2017.03.005">doi:10.1016/j.patrec.2017.03.005</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/iei2cdohmncrtjftu565bglxim">fatcat:iei2cdohmncrtjftu565bglxim</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190427202445/https://radar.brookes.ac.uk/radar/file/0042ccbb-0478-44be-a00d-a7eaba376bda/1/derosa2017active(2).pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6f/82/6f82af854ac6e1e90ca836e99139b561a31b3c67.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patrec.2017.03.005"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

Analysis of Deep Neural Networks For Human Activity Recognition in Videos – A Systematic Literature Review

Hadiqa Aman Ullah, Sukumar Letchmunan, M. Sultan Zia, Umair Muneer Butt, Fadratul Hafinaz Hassan
<span title="">2021</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
-What evaluation measures have been used in literature for validating those models/approaches?  ...  [28] reviewed the activity recognition trends with deep learning models. This paper examined various models based on two-stream networks, C3D and RNN, used for activity recognition.  ...  HADIQA AMAN ULLAH received her BS (Hons.) degree in Information Technology from Punjab University in 2017. She is currently pursuing her MS (CS) degree from The University of Lahore.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2021.3110610">doi:10.1109/access.2021.3110610</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ussooxm7azfljpb5prsm7creaa">fatcat:ussooxm7azfljpb5prsm7creaa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210911115953/https://ieeexplore.ieee.org/ielx7/6287639/6514899/09530410.pdf?tp=&amp;arnumber=9530410&amp;isnumber=6514899&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7e/f6/7ef6ffef623ab8600a9dfa65073005ca142420ba.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2021.3110610"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>
Showing results 1–15 of 98,054