Nested motion descriptors
2015
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Furthermore, this structure enables an elegant visualization of salient motion using the reconstruction properties of the steerable pyramid. ...
We demonstrate that the quadrature steerable pyramid can be used to pool phase, and that pooling phase rather than magnitude provides an estimate of camera motion. ...
First, bandpass filtering is performed to decompose each image in a video into a set of orientation and scale selective subbands using the complex steerable pyramid [23, 22, 18] . ...
doi:10.1109/cvpr.2015.7298648
dblp:conf/cvpr/Byrne15
fatcat:4iz7sj4znze73bl473dm2m5hbi
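The snippet above mentions pooling phase rather than magnitude from the quadrature steerable pyramid. As a minimal 1D illustration (not the paper's implementation; the filter length, center frequency f0 = 0.1, and sigma = 6 are arbitrary choices for the sketch), a complex bandpass filter splits a signal into local magnitude and phase:

```python
import numpy as np

def complex_gabor(n, f0, sigma):
    """Complex Gabor filter: a Gaussian envelope times a complex exponential.
    Its real and imaginary parts form a quadrature (Hilbert) pair, the same
    property the quadrature steerable pyramid exploits per subband."""
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

# Test signal: a sinusoid inside the filter's passband.
x = np.cos(2 * np.pi * 0.1 * np.arange(256))
g = complex_gabor(33, f0=0.1, sigma=6.0)

y = np.convolve(x, g, mode="same")  # complex-valued subband response
magnitude = np.abs(y)               # local amplitude envelope
phase = np.angle(y)                 # local phase; its change over time tracks motion
```

Away from the borders the magnitude is nearly constant while the phase advances by about 2π·0.1 per sample; in a video pyramid the same decomposition is applied per scale and orientation, and phase shifts between frames carry the motion signal.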
Spatiotemporal saliency for video classification
2009
Signal processing. Image communication
Perceptual decomposition of the input, spatiotemporal center-surround interactions and the integration of heterogeneous feature conspicuity values are described and an experimental framework for video ...
This framework consists of a series of experiments that shows the effect of saliency on classification performance and lets us draw conclusions on how well the detected salient regions represent the visual ...
A multi-scale representation of these volumes is then obtained using Gaussian pyramids. Each level of the pyramid consists of a 3D smoothed and subsampled version of the original video volume. ...
doi:10.1016/j.image.2009.03.002
fatcat:nj3aewdllzejramdeqwwiwv6mq
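The multi-scale construction described in this entry — each pyramid level a 3D smoothed and subsampled version of the video volume — can be sketched in a few lines; the sigma value and the dyadic subsampling below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_3d(volume, levels=3, sigma=1.0):
    """Build a 3D Gaussian pyramid of a video volume shaped (t, y, x):
    each level is a Gaussian-smoothed, 2x-subsampled copy of the previous one."""
    pyr = [volume.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma)  # 3D Gaussian smoothing
        pyr.append(blurred[::2, ::2, ::2])         # subsample every axis by 2
    return pyr

video = np.random.rand(16, 64, 64)   # 16 frames of 64x64 pixels
pyr = pyramid_3d(video)
print([p.shape for p in pyr])        # [(16, 64, 64), (8, 32, 32), (4, 16, 16)]
```

Because time is treated as just another axis, temporal blurring and subsampling happen alongside the spatial ones, which is what makes the representation spatiotemporal rather than frame-by-frame.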
Bottom-up spatiotemporal visual attention model for video analysis
2007
IET Image Processing
Towards this goal we extend a common image-oriented computational model of saliency-based visual attention to handle spatiotemporal analysis of video in a volumetric framework. ...
A video analysis framework based on spatiotemporal saliency calculation is presented. ...
Each of them encodes a certain property of the video. The different scales are created using Gaussian pyramids (Burt et al. ...
doi:10.1049/iet-ipr:20060040
fatcat:bmg66b2uwnepxky6ykyffku6he
Dense saliency-based spatiotemporal feature points for action recognition
2009
2009 IEEE Conference on Computer Vision and Pattern Recognition
Our method uses a multi-scale volumetric representation of the video and involves spatiotemporal operations at the voxel level. ...
Several spatiotemporal feature point detectors have been recently used in video analysis for action recognition. ...
Intensity and color features are based on color opponent theory and spatiotemporal orientation (motion) is computed using 3D steerable filters. ...
doi:10.1109/cvprw.2009.5206525
fatcat:g64qx6tm55eg5jyp7vd4lfxn7y
Movie summarization based on audiovisual saliency detection
2008
2008 15th IEEE International Conference on Image Processing
Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). ...
Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. ...
The video volume is initially decomposed into a set of feature volumes, namely intensity, color and spatiotemporal orientations. ...
doi:10.1109/icip.2008.4712308
dblp:conf/icip/EvangelopoulosRPMZA08
fatcat:4g2rld6bqbconca4l5xeh5jcpa
Estimating the Material Properties of Fabric from Video
2013
2013 IEEE International Conference on Computer Vision
of fabric from a video, and (c) a perceptual study of humans' ability to estimate the material properties of fabric from videos and images. ...
A discriminatively trained regression model is then used to predict the physical properties of fabric from these features. ...
We would also like to thank Adrian Dalca for all of his helpful discussions and feedback. This work was partially supported by NSF CGV-1111415 and NSF CGV-1212928. ...
doi:10.1109/iccv.2013.455
dblp:conf/iccv/BoumanXBF13
fatcat:urpxoj6fofdnbkfo5iojoz6xfa
Jerk-Aware Video Acceleration Magnification
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
The bottom row shows the spatiotemporal slices along a single diagonal red line in the top row of (a), and the green and cyan circles in them respectively indicate the swing phase and impact phase. ...
In this paper, we present a novel use of jerk to make the acceleration method robust to quick large motions. ...
To decompose each video frame into magnitude and phase information, we used a complex steerable pyramid [25] with half-octave bandwidth filters and eight orientations. ...
doi:10.1109/cvpr.2018.00190
dblp:conf/cvpr/TakedaOMIK18
fatcat:arurmns7bvevpbhl5l2injv3hm
Mobile Active-Vision Traffic Surveillance System for Urban Networks
2005
Computer-Aided Civil and Infrastructure Engineering
approach makes use of a recent trend in computer vision research; namely, the active vision paradigm. ...
Mounting active vision systems on buses will have the advantage of providing real-time feedback of the current traffic conditions, while possessing the intelligence and visual skills that allow them to ...
ACKNOWLEDGMENTS The authors would like to acknowledge the valuable support of Dr. Gasser Auda during the early stages of the project. ...
doi:10.1111/j.1467-8667.2005.00390
fatcat:argen6weqnavta2xwhjt6m2x44
Video quality assessment for computer graphics applications
2010
ACM SIGGRAPH Asia 2010 papers on - SIGGRAPH ASIA '10
luminance adaptation, spatiotemporal contrast sensitivity and visual masking. ...
Our work enables new applications including objective evaluation of video tone mapping and HDR compression. ...
Pisa HDR image and RNL HDR video courtesy of Paul Debevec. ...
doi:10.1145/1866158.1866187
fatcat:spq27ikhsjdu5mwjnhplv5urly
Video quality assessment for computer graphics applications
2010
ACM Transactions on Graphics
luminance adaptation, spatiotemporal contrast sensitivity and visual masking. ...
Our work enables new applications including objective evaluation of video tone mapping and HDR compression. ...
Pisa HDR image and RNL HDR video courtesy of Paul Debevec. ...
doi:10.1145/1882261.1866187
fatcat:d35fj4ruavhhrhdsd67zv6jree
Video event detection and summarization using audio, visual and text saliency
2009
2009 IEEE International Conference on Acoustics, Speech and Signal Processing
Visual saliency is measured through a spatiotemporal attention model driven by intensity, color and motion. ...
Detection of perceptually important video events is formulated here on the basis of saliency models for the audio, visual and textual information conveyed in a video stream. ...
For the intensity and color features, we adopt the opponent process color theory while spatiotemporal orientations are computed using steerable filters and measuring their strength along particular directions ...
doi:10.1109/icassp.2009.4960393
dblp:conf/icassp/EvangelopoulosZSRPMA09
fatcat:hvqgfwvuc5d3pndvvsjbctaknq
Spatiotemporal Features for Action Recognition and Salient Event Detection
2011
Cognitive Computation
In this paper, we propose a novel method to compute visual saliency from video sequences by taking into account the actual spatiotemporal nature of the video. ...
The resulting saliency volume is used to detect prominent spatiotemporal regions and consequently applied to action recognition and perceptually salient event detection in video sequences. ...
The orientation feature volume is computed using spatiotemporal steerable filters tuned to respond to moving stimuli. ...
doi:10.1007/s12559-011-9097-0
fatcat:galcmwtdnfhkzl3krvo324424m
Audiovisual Attention Modeling and Salient Event Detection
[chapter]
2008
Multimodal Processing and Interaction
Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). ...
Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. ...
Spatiotemporal orientations are computed using steerable filters [8]. A steerable filter may be of arbitrary orientation and is synthesized as a linear combination of rotated versions of itself. ...
doi:10.1007/978-0-387-76316-3_8
fatcat:kyug5zcilnd7nkbrekru75guie
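The steerability property quoted in this entry — a filter of arbitrary orientation synthesized as a linear combination of rotated versions of itself — holds exactly for the first derivative of a Gaussian, where two basis filters suffice. A small numerical check (the size and sigma values are illustrative assumptions):

```python
import numpy as np

def gauss_deriv(theta, size=31, sigma=4.0):
    """First derivative of a 2D Gaussian, oriented at angle theta (radians)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    u = np.cos(theta) * x + np.sin(theta) * y   # coordinate along theta
    return -u / sigma**2 * g

theta = np.deg2rad(37.0)
# Steering: a linear combination of the two basis filters (0 and 90 degrees)
# reproduces the directly oriented filter exactly.
steered = np.cos(theta) * gauss_deriv(0.0) + np.sin(theta) * gauss_deriv(np.pi / 2)
direct = gauss_deriv(theta)
print(np.allclose(steered, direct))   # True
```

Because steering is linear, filtering an image with the two basis kernels once is enough to obtain the response at any orientation afterwards, which is what makes dense orientation (and, with 3D kernels, motion-energy) maps cheap to compute.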
Showing results 1 — 15 out of 112 results