Red Shift: procedural shift-reduce parsing (vision paper)

Nicolas Laurent
2017 Proceedings of the 10th ACM SIGPLAN International Conference on Software Language Engineering - SLE 2017  
Red Shift is a new design pattern for implementing parsers. The pattern draws ideas from traditional shift-reduce parsing as well as procedural PEG parsers.  ...  Red Shift parsers behave like shift-reduce parsers, but eliminate ambiguity by always prioritizing reductions over shifts.  ...  A Red Shift parser can be extended by adding new reducers.  ... 
doi:10.1145/3136014.3136036 dblp:conf/sle/Laurent17 fatcat:3hlwjqbhhbfx5nd6l23qbin4mu
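
The snippet above states the core idea only at a high level. As rough orientation, the following is a minimal, hypothetical Python sketch of a parsing loop that always tries its reducers before shifting and is extended simply by registering more reducer functions; it is not the paper's implementation, and the function names and toy grammar are assumptions.

```python
# Minimal sketch (not the paper's implementation): a loop that prioritizes
# reductions over shifts and is extended by adding reducer functions.

def parse(tokens, reducers):
    """tokens: list of terminals; reducers: functions that inspect the stack
    and return a new stack if they apply, or None otherwise."""
    stack, pending = [], list(tokens)
    while pending or len(stack) > 1:
        # Prioritize reductions over shifts: reduce as long as any reducer fires.
        fired = True
        while fired:
            fired = False
            for reduce_fn in reducers:
                new_stack = reduce_fn(stack)
                if new_stack is not None:
                    stack, fired = new_stack, True
                    break
        if pending:
            stack.append(pending.pop(0))          # shift only when nothing reduces
        elif len(stack) > 1:
            raise SyntaxError(f"cannot reduce remaining stack: {stack}")
    return stack[0] if stack else None


# Two toy reducers for the hypothetical grammar  expr -> num | expr '+' expr
def reduce_number(stack):
    if stack and isinstance(stack[-1], tuple) and stack[-1][0] == "num":
        return stack[:-1] + [("expr", stack[-1][1])]
    return None

def reduce_addition(stack):
    if (len(stack) >= 3 and isinstance(stack[-3], tuple) and stack[-3][0] == "expr"
            and stack[-2] == "+" and isinstance(stack[-1], tuple) and stack[-1][0] == "expr"):
        return stack[:-3] + [("expr", ("+", stack[-3], stack[-1]))]
    return None

tree = parse([("num", 1), "+", ("num", 2), "+", ("num", 3)],
             [reduce_number, reduce_addition])
# tree == ('expr', ('+', ('expr', ('+', ('expr', 1), ('expr', 2))), ('expr', 3)))
```

Because reductions always run to exhaustion before the next shift, the toy grammar is folded left-associatively without any ambiguity handling at shift time, which is the behavior the entry describes.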

Prior-Constrained Scale-Space Mean Shift

K. Okada, M. Singh, V. Ramesh
2006 Proceedings of the British Machine Vision Conference (BMVC)  
This paper proposes a new variational bound optimization framework for incorporating spatial prior information into the mean shift-based data-driven mode analysis, offering flexible control of the mean shift  ...  This approach is used to propose a mode parsing algorithm using the inhibition-of-return principle.  ...  This procedure traverses from mode to mode starting from an arbitrary initial point, parsing all the blob-like data structures located nearby. This parsing process is efficient because each detected mode  ... 
doi:10.5244/c.20.85 dblp:conf/bmvc/OkadaSR06 fatcat:cr56dv5rsbalndj6w6jynyowdy
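
For readers unfamiliar with the underlying mode-seeking step, here is a minimal sketch of plain mean-shift iteration with a Gaussian kernel. The prior-constrained, scale-space formulation and the inhibition-of-return mode parsing described in the entry are not reproduced; the bandwidth value and function names are assumptions.

```python
import numpy as np

def mean_shift_mode(data, start, bandwidth=1.0, tol=1e-5, max_iter=200):
    """Generic mean-shift mode seeking with a Gaussian kernel.

    Only the data-driven step that the paper builds on; the paper's
    prior-constrained, scale-space variant is not reproduced here.
    """
    x = np.asarray(start, dtype=float)
    data = np.asarray(data, dtype=float)
    for _ in range(max_iter):
        # Gaussian weight of every sample relative to the current estimate.
        d2 = np.sum((data - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        # Mean-shift update: move to the weighted mean of the samples.
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x  # converged (or last) estimate of the local mode
```

In a mode-parsing loop of the kind the snippet mentions, detected modes would be recorded so that subsequent starting points are not attracted back to them (the inhibition-of-return idea); that bookkeeping is omitted here.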

Noninvasive optical inhibition with a red-shifted microbial rhodopsin

Amy S Chuong, Mitra L Miri, Volker Busskamp, Gillian A C Matthews, Leah C Acker, Andreas T Sørensen, Andrew Young, Nathan C Klapoetke, Mike A Henninger, Suhasa B Kodandaramaiah, Masaaki Ogawa, Shreshtha B Ramanlal (+10 others)
2014 Nature Neuroscience  
We here report Jaws, a red-light-sensitive opsin with the most red-shifted spectrum of any optogenetic inhibitor known to us.  ...  We present a red-shifted cruxhalorhodopsin, Jaws, derived from Haloarcula (Halobacterium) salinarum (strain Shark) and engineered to result in red-light-induced photocurrents three times those of earlier  ... 
doi:10.1038/nn.3752 pmid:24997763 pmcid:PMC4184214 fatcat:46hp7ugswrcunphoby32r7qirq

3DSSD: Point-Based 3D Single Stage Object Detector

Zetong Yang, Yanan Sun, Shu Liu, Jiaya Jia
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we present a lightweight point-based 3D single-stage object detector, 3DSSD, to achieve a decent balance of accuracy and efficiency.  ...  This paper is the first attempt not to use FP layers and the refinement module, so as to speed up the whole procedure.  ...  The red dot represents the instance center. We only shift points from F-FPS under the supervision of their distances to the center of an instance.  ... 
doi:10.1109/cvpr42600.2020.01105 dblp:conf/cvpr/YangS0J20 fatcat:aofaer6njbdc7khlfgz53y6g5e
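
The snippet mentions shifting F-FPS points under the supervision of their distances to the instance center. The sketch below shows one plausible form such supervision could take (offset regression with a smooth-L1 penalty toward the center); the paper's actual loss and architecture may differ, and all names and shapes here are assumptions.

```python
import numpy as np

def shift_supervision_loss(points, predicted_offsets, instance_centers, mask):
    """Hypothetical sketch: shifted points should land on the center of the
    instance they belong to, penalized with a smooth-L1 (Huber) loss.

    points:            (N, 3) sampled candidate points
    predicted_offsets: (N, 3) per-point offsets regressed by the network
    instance_centers:  (N, 3) center of the instance each point belongs to
    mask:              (N,)   1 for foreground points, 0 otherwise
    """
    points = np.asarray(points, dtype=float)
    shifted = points + np.asarray(predicted_offsets, dtype=float)
    residual = np.abs(shifted - np.asarray(instance_centers, dtype=float))
    # Smooth-L1 penalty, summed per point and averaged over foreground points.
    loss = np.where(residual < 1.0, 0.5 * residual ** 2, residual - 0.5)
    per_point = loss.sum(axis=1) * np.asarray(mask, dtype=float)
    return per_point.sum() / max(mask.sum(), 1)
```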

Hierarchical face parsing via deep learning

Ping Luo, Xiaogang Wang, Xiaoou Tang
2012 IEEE Conference on Computer Vision and Pattern Recognition  
This paper investigates how to parse (segment) facial components from face images that may be partially occluded.  ...  The proposed hierarchical face parsing is not only robust to partial occlusions but also provides richer information for face analysis and face synthesis compared with face keypoint detection and face alignment  ...  Red points are the positions of parts (a) or components (b). Red boxes are extracted image patches for training.  ... 
doi:10.1109/cvpr.2012.6247963 dblp:conf/cvpr/LuoWT12 fatcat:ccvnsuuz7berva7hn3dhzeztnm

Multi-objective convolutional learning for face labeling

Sifei Liu, Jimei Yang, Chang Huang, Ming-Hsuan Yang
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
This paper formulates face labeling as a conditional random field with unary and pairwise classifiers.  ...  All are kept unchanged throughout the training procedures.  ...  Therefore, we apply a two-stage training procedure with different sampling approaches.  ... 
doi:10.1109/cvpr.2015.7298967 dblp:conf/cvpr/LiuYHY15 fatcat:km4zr53bhjawpfq3f7hqvg3sfq
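
As context for the entry above, which formulates face labeling as a conditional random field with unary and pairwise classifiers, a generic CRF energy of that form is shown below; the weight λ and the neighborhood system 𝒩 are notation introduced here, not taken from the paper.

$$
E(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{i} \psi_{u}(y_i, \mathbf{x}) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \psi_{p}(y_i, y_j, \mathbf{x})
$$

Here $\psi_u$ scores the label $y_i$ of pixel $i$ given the image $\mathbf{x}$, $\psi_p$ scores label pairs on neighboring pixels, and the predicted labeling is the one minimizing $E$.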

Robust Image Segmentation Using Contour-Guided Color Palettes

Xiang Fu, Chien-Yi Wang, Chen Chen, Changhu Wang, C.-C. Jay Kuo
2015 IEEE International Conference on Computer Vision (ICCV)  
colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodologies that focus on samples near decision boundaries, are collected, followed by the mean-shift  ...  The rest of this paper is organized as follows.  ...  Automatic image segmentation is a fundamental problem in computer vision.  ... 
doi:10.1109/iccv.2015.189 dblp:conf/iccv/FuWCWK15 fatcat:m373bj26tfew5fymj5eiiymvna

Temporally Consistent Superpixels

Matthias Reso, Jorn Jachalsky, Bodo Rosenhahn, Jorn Ostermann
2013 IEEE International Conference on Computer Vision  
In this regard, this paper presents a highly competitive approach for temporally consistent superpixels for video content.  ...  Superpixel algorithms represent a very useful and increasingly popular preprocessing step for a wide range of computer vision applications, as they offer the potential to boost efficiency and effectiveness  ...  This reduces to some extent the noisy flickering of the superpixels from one frame to the next.  ... 
doi:10.1109/iccv.2013.55 dblp:conf/iccv/ResoJRO13 fatcat:f7lwdj2q65galfk6ybqz7rzmoi

Online Building Segmentation from Ground-Based LiDAR Data in Urban Scenes

Jizhou Gao, Ruigang Yang
2013 International Conference on 3D Vision  
In this paper, we present an online algorithm to automatically detect and segment buildings from large-scale unorganized 3D point clouds of urban scenes acquired by ground-based LiDAR devices.  ...  Scene parsing algorithms on color images and some point cloud processing techniques like local normal distribution and local plane fitting can help to reduce false detections but might incur high computation  ...  A scanning path is shown as a red line strip.  ... 
doi:10.1109/3dv.2013.15 dblp:conf/3dim/GaoY13 fatcat:63qqfnw5ird5vgdpv64aewtb3e
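
The snippet refers to local normal distribution and local plane fitting as point-cloud cues. Below is a generic PCA-based local plane fit of the kind commonly used for such cues; it is not the paper's specific pipeline, and the flatness measure is an assumption added for illustration.

```python
import numpy as np

def local_plane_fit(neighborhood):
    """Fit a plane to a local point neighborhood by PCA: the plane passes
    through the centroid and its normal is the eigenvector of the covariance
    matrix with the smallest eigenvalue.

    neighborhood: (K, 3) array of 3D points around a query point, K >= 3.
    """
    pts = np.asarray(neighborhood, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)                 # 3x3 local covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    normal = eigvecs[:, 0]                           # smallest-variance direction
    flatness = eigvals[0] / max(eigvals.sum(), 1e-12)  # ~0 for planar patches
    return centroid, normal, flatness
```

A low flatness value indicates a locally planar patch (e.g., a facade candidate), while the normals' distribution over a region is the kind of statistic the entry mentions for filtering false detections.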

Regularizing max-margin exemplars by reconstruction and generative models

Jose C. Rubio, Bjorn Ommer
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
HoG cells) and thus reduces its degrees of freedom.  ...  The red crosses indicate the optimal margin trained using the sequential SOCP optimization (Sect. 3.3).  ... 
doi:10.1109/cvpr.2015.7299049 dblp:conf/cvpr/RubioO15 fatcat:okeul4inh5bhviih6zqdpxucju

Using k-Poselets for Detecting People and Localizing Their Keypoints

Georgia Gkioxari, Bharath Hariharan, Ross Girshick, Jitendra Malik
2014 IEEE Conference on Computer Vision and Pattern Recognition  
For Y&R, we used their released model, which was trained on the PARSE dataset.  ...  We made two major changes to the poselet training procedure with respect to [3] .  ... 
doi:10.1109/cvpr.2014.458 dblp:conf/cvpr/GkioxariHGM14 fatcat:ujo5wzilibfqlm6ie6gr6z4oje

MSFSR: A Multi-Stage Face Super-Resolution with Accurate Facial Representation via Enhanced Facial Boundaries

Yunchen Zhang, Yi Wu, Liang Chen
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
For the preprocessing procedure on Helen and WFLW, we adopt the latest MaskGAN [25] to generate ground-truth face parsings.  ...  In this paper, we have presented a novel MSFSR model for FSR.  ... 
doi:10.1109/cvprw50498.2020.00260 dblp:conf/cvpr/ZhangWC20 fatcat:7hahdn6i6fdqrdjs55bjdc7czu

Geospatial Correspondences for Multimodal Registration

Diego Marcos, Raffay Hamid, Devis Tuia
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
To do so, images of the same geographical areas acquired at different times and, potentially, with different sensors must be efficiently parsed to update maps and detect land-cover changes.  ...  This undesired effect is reduced by using SDSN+SIFT.  ...  For example, a 15-pixel shift in the original image becomes a sub-pixel shift of 0.15 pixels using a downscaling factor of 100.  ... 
doi:10.1109/cvpr.2016.550 dblp:conf/cvpr/GonzalezHT16 fatcat:3qpbc6iezbgy3kbggthlhjf5xy

CrossInfoNet: Multi-Task Information Sharing Based Hand Pose Estimation

Kuo Du, Xiangbo Lin, Yi Sun, Xiaohong Ma
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
This paper focuses on the topic of vision-based hand pose estimation from a single depth map using a convolutional neural network (CNN).  ...  Research on vision-based 3D hand pose estimation is a hotspot in the fields of computer vision, virtual reality, and robotics.  ...  The issues discussed in the competition summary paper [43] are also our concerns.  ... 
doi:10.1109/cvpr.2019.01013 dblp:conf/cvpr/DuLSM19 fatcat:cowijmuzpjcsbhn33nsmvzp34y

Conformative Filter: A Probabilistic Framework for Localization in Reduced Space

Chatavut Viriyasuthee, Gregory Dudek
2011 Canadian Conference on Computer and Robot Vision  
To solve a problem using this scheme, we must reduce the problem to another one for which solutions exist.  ...  Their approach is literally a version of reduction, in which they parse the environments into navigation states.  ...  The left frame shows a rectangular mesh placed on the agent's (red triangle) sensor space.  ... 
doi:10.1109/crv.2011.11 dblp:conf/crv/ViriyasutheeD11 fatcat:jcpwznqflbel7ezjjbqex4tzg4
Showing results 1–15 of 5,514