360 Hits in 5.2 sec

Dominant Codewords Selection with Topic Model for Action Recognition

Hirokatsu Kataoka, Kenji Iwata, Yutaka Satoh, Masaki Hayashi, Yoshimitsu Aoki, Slobodan Ilic
2016 · 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
In this paper, we propose a framework for recognizing human activities that uses only in-topic dominant codewords and a mixture of inter-topic vectors. … In LDA topic modeling, action videos (documents) are represented by a bag-of-words (input from a dictionary), and these are based on improved dense trajectories [18]. … Conclusion: In this paper, we have proposed topic-based codewords and dominant vector selection for fine-grained action recognition.
doi:10.1109/cvprw.2016.101 · dblp:conf/cvpr/KataokaISHAI16 · fatcat:te5ndoqkyvftfkzryg5bgrm5l4

Dominant Codewords Selection with Topic Model for Action Recognition [article]

Hirokatsu Kataoka, Masaki Hayashi, Kenji Iwata, Yutaka Satoh, Yoshimitsu Aoki, Slobodan Ilic
2016-05-01 · arXiv (pre-print)
In this paper, we propose a framework for recognizing human activities that uses only in-topic dominant codewords and a mixture of inter-topic vectors. … In LDA topic modeling, action videos (documents) are represented by a bag-of-words (input from a dictionary), and these are based on improved dense trajectories. … Conclusion: In this paper, we have proposed topic-based codewords and dominant vector selection for fine-grained action recognition.
arXiv:1605.00324v1 · fatcat:jvwcorpb2fcbpmzp6uxcemb4bm

Fast Supervised LDA for Discovering Micro-Events in Large-Scale Video Datasets

Angelos Katharopoulos, Despoina Paschalidou, Christos Diou, Anastasios Delopoulos
2016 · Proceedings of the 2016 ACM on Multimedia Conference - MM '16 (ACM Press)
In addition to its scalability, our method also overcomes the drawbacks of standard, unsupervised LDA for video, including its focus on dominant but often irrelevant video information (e.g. background, …). … Furthermore, analysis shows that class-relevant topics of fsLDA lead to sparse video representations and encapsulate high-level information corresponding to parts of video events, which we denote "micro-events". … UCF11 is composed of 11 action classes with 1600 videos, the majority of which contain heavy camera motion. UCF101 is one of the state-of-the-art datasets for action recognition.
doi:10.1145/2964284.2967237 · dblp:conf/mm/KatharopoulosPD16 · fatcat:ee5ob32umbhvzg67ttm2af3yaa

Discriminative Dictionary Design for Action Classification in Still Images and Videos [article]

Abhinaba Roy, Biplab Banerjee, Amir Hussain, Soujanya Poria
2020-06-06 · arXiv (pre-print)
In this paper, we address the problem of action recognition from still images and videos. … The underlying visual entities are subsequently represented based on the learned dictionary, and this stage is followed by action classification using the random forest model, followed by label propagation. … Selecting discriminative local descriptors for effective codebook generation for action recognition from images is the core topic of this paper.
arXiv:2005.10149v2 · fatcat:nf2kqeojxvhxzfepohjomyqcae

Trajectory-Based Modeling of Human Actions with Motion Reference Points [chapter]

Yu-Gang Jiang, Qi Dai, Xiangyang Xue, Wei Liu, Chong-Wah Ngo
2012 · Lecture Notes in Computer Science (Springer Berlin Heidelberg)
Human action recognition in videos is a challenging problem with wide applications. … model object relationships. … Introduction: The recognition of human actions in videos is a topic of active research in computer vision.
doi:10.1007/978-3-642-33715-4_31 · fatcat:o6wjwnhstjaobnlwqk3ng4xd5m

Supervised Learning and Codebook Optimization for Bag-of-Words Models

Mingyuan Jiu, Christian Wolf, Christophe Garcia, Atilla Baskurt
2012-04-24 · Cognitive Computation (Springer Nature)
This type of model is frequently used in visual recognition tasks like object class recognition or human action recognition. … In this paper, we present a novel approach for supervised codebook learning and optimization for bag-of-words models. … human action recognition.
doi:10.1007/s12559-012-9137-4 · fatcat:l5mfsv7uxraj5lbdgtvqi5adsy

Review of Action Recognition and Detection Methods [article]

Soo Min Kang, Richard P. Wildes
2016-11-01 · arXiv (pre-print)
The process of action recognition and detection often begins with extracting useful features and encoding them so that the features are specific to the task of action recognition and detection. … In computer vision, action recognition refers to the act of classifying an action present in a given video, while action detection involves locating actions of interest in space and/or time. … Thus, action recognition and detection remain a fundamental problem for related recognition and classification tasks.
arXiv:1610.06906v2 · fatcat:tqq32ml4cjf4nlv6ivf3ydw3ce

A survey on vision-based human action recognition

Ronald Poppe
2010 · Image and Vision Computing (Elsevier BV)
Vision-based human action recognition is the process of labeling image sequences with action labels. … The author wishes to thank the reviewers for their valuable comments and the authors who contributed figures to this survey. … Acknowledgements: This work was supported by the European IST Programme Project FP6-033812 (Augmented Multi-party Interaction with Distant Access), and is part of the ICIS program.
doi:10.1016/j.imavis.2009.11.014 · fatcat:fvg2xksjabggpbkswwxf3q7kzy

Simultaneous segmentation and classification of human actions in video streams using deeply optimized Hough transform

Adrien Chan-Hon-Tong, Catherine Achard, Laurent Lucat
2014 · Pattern Recognition (Elsevier BV)
Most research on human activity recognition does not take into account the temporal localization of actions. In this paper, a new method is designed to model both actions and their temporal domains. … Experiments are performed to select skeleton features adapted to this method and relevant for capturing human actions. … Introduction: Human activity recognition is becoming a major research topic (see [1, 30] for reviews).
doi:10.1016/j.patcog.2014.05.010 · fatcat:lmop2jyem5cybe7o265bzybd3u

Reliable Smart Road Signs [article]

Muhammed O. Sayin, Chung-Wei Lin, Eunsuk Kang, Shinichi Shiraishi, Tamer Basar
2019-06-03 · arXiv (pre-print)
In the recognition of smart road signs, however, humans are out of the loop, since they cannot see or interpret them. … In this paper, we propose a game-theoretical adversarial intervention detection mechanism for reliable smart road signs. … Suppose we have eliminated the actions dominated by Strategy-0 in A. Then, A's action space can be written as …
arXiv:1901.10622v2 · fatcat:cxb76c5qgnct5m42rv3zcmzffy

Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice [article]

Xiaojiang Peng and Limin Wang and Xingxing Wang and Yu Qiao
2014-05-18 · arXiv (pre-print)
We conclude that every step is crucial for contributing to the final recognition rate. … The Bag of Visual Words model (BoVW) with local features has become the most popular method and has obtained state-of-the-art performance on several realistic datasets, such as HMDB51, UCF50, and UCF101. … for action recognition.
arXiv:1405.4506v1 · fatcat:te7zsidp55dsrhry55pv4ma7bq

Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice

Xiaojiang Peng, Limin Wang, Xingxing Wang, Yu Qiao
2016 · Computer Vision and Image Understanding (Elsevier BV)
Fusing these descriptors is crucial for boosting the final performance of an action recognition system. … The bag of visual words model (BoVW) with local features has been very popular for a long time and has obtained state-of-the-art performance on several realistic datasets, such as HMDB51, UCF50, and UCF101. … Thus, these descriptors only vote for a subset of codewords that are highly related to the action class.
doi:10.1016/j.cviu.2016.03.013 · fatcat:tdzxkcodgndmlkacw6g53wsqbe
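The bag-of-visual-words encoding studied in the two entries above can be sketched in a few lines: hard-assign each local descriptor to its nearest codeword and accumulate an L1-normalized histogram over the codebook. The 2-D descriptors and three-codeword codebook below are hypothetical toy data; a real pipeline would learn the codebook (e.g. with k-means over improved-dense-trajectory descriptors) and use high-dimensional features.

```python
def bovw_histogram(descriptors, codebook):
    """Hard-assign each descriptor to its nearest codeword (squared
    Euclidean distance) and return an L1-normalized histogram."""
    counts = [0] * len(codebook)
    for d in descriptors:
        # distance from this descriptor to every codeword
        dists = [sum((a - b) ** 2 for a, b in zip(d, c)) for c in codebook]
        counts[dists.index(min(dists))] += 1
    total = sum(counts) or 1  # avoid division by zero for empty input
    return [c / total for c in counts]

# Toy example: 2-D descriptors, codebook of 3 codewords.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
descriptors = [(0.1, 0.2), (0.9, 1.1), (5.2, 4.8), (0.0, 0.1)]
print(bovw_histogram(descriptors, codebook))  # → [0.5, 0.25, 0.25]
```

Richer encodings such as soft assignment or Fisher vectors replace the hard nearest-codeword step, but this histogram view is the common baseline they extend.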

Video scene categorization by 3D hierarchical histogram matching

Paritosh Gupta, Sai Sankalp Arrabolu, Mathew Brown, Silvio Savarese
2009 · 2009 IEEE 12th International Conference on Computer Vision (ICCV)
A scene is represented by a collection of 3D points with an appearance-based codeword attached to each point. … In this paper we present a new method for categorizing video sequences capturing different scene classes. … Acknowledgements: We thank Andrey Del Pozo for the hard work put into collecting the dataset and for insightful suggestions on a preliminary version of this work.
doi:10.1109/iccv.2009.5459373 · dblp:conf/iccv/GuptaABS09 · fatcat:t6nlctvswffyjewads4ashqvlq

A Performance Evaluation on Action Recognition with Local Features

Xiantong Zhen, Ling Shao
2014 · 2014 22nd International Conference on Pattern Recognition (IEEE)
However, in the video domain, the BoW model still dominates the action recognition field. … The bag-of-words (BoW) model and sparse coding have shown their effectiveness in image and object recognition in the past decades. … Introduction: Human action recognition has been an active topic in the computer vision community for many years.
doi:10.1109/icpr.2014.769 · dblp:conf/icpr/ZhenS14 · fatcat:bbkarqunvbhwvkbnvjlekvcj5i

Visual Search for Objects in a Complex Visual Context: What We Wish to See [chapter]

Hugo Boujut, Aurélie Bugeau, Jenny Benois-Pineau
2014-06-18 · Digital Imaging and Computer Vision (CRC Press)
Despite the fact that visual saliency modeling is an old research topic, object recognition frameworks using such models are a new trend [18, 53]. … Saliency model evaluation results are presented in Section 1.6.1. … Object recognition evaluation: For the evaluation process we separate learning and testing images by random selection.
doi:10.1201/b17080-4 · fatcat:3twtmc5vurb3xezrlmygaastqu
Showing results 1–15 of 360