
Change Detection from a Street Image Pair using CNN Features and Superpixel Segmentation

Ken Sakurada, Takayuki Okatani
2015 Proceedings of the British Machine Vision Conference (BMVC)
This paper proposes a method for detecting changes of a scene using a pair of its vehicular, omnidirectional images.  ...  Figure 1: Results of change detection using the pool-5 feature of a CNN; panels show the ground truth superimposed on the input (query) image, the input (database) image, the binarized change estimate, and the per-grid feature distance.  ...
doi:10.5244/c.29.61 dblp:conf/bmvc/SakuradaO15 fatcat:ytd5mta6yjd2fpvq7frrbbqlmi
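As a rough illustration of the per-grid CNN feature comparison this entry describes (not the authors' implementation), the sketch below extracts pool-5 features from a pretrained VGG-16 for two roughly aligned street images and thresholds the per-cell feature distance. The image paths, the use of VGG-16, and the mean-plus-one-standard-deviation threshold are placeholders, and the torchvision weights API assumes a recent release.

```python
# Minimal sketch: per-grid change score from pool-5 CNN features of an image pair.
# Paths, network choice, and threshold are placeholders, not the paper's settings.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def pool5_features(path):
    """Return the 512x7x7 pool-5 feature map for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vgg(x)[0]  # shape (512, 7, 7)

f_query = pool5_features("query.jpg")        # placeholder path
f_database = pool5_features("database.jpg")  # placeholder path

# L2 distance between the feature vectors at each of the 7x7 grid cells.
dist = torch.linalg.norm(f_query - f_database, dim=0)  # shape (7, 7)
change_mask = dist > dist.mean() + dist.std()          # crude placeholder threshold
print(dist)
print(change_mask)
```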

A Joint Convolutional Neural Networks and Context Transfer for Street Scenes Labeling

Qi Wang, Junyu Gao, Yuan Yuan
2018 IEEE Transactions on Intelligent Transportation Systems
To address these problems, this paper proposes a joint method of prior convolutional neural networks at the superpixel level (termed "priori s-CNNs") and soft restricted context transfer.  ...  Our contributions are threefold: (1) a priori s-CNNs model that learns prior location information at the superpixel level is proposed to describe various objects discriminatively; (2) a hierarchical data  ...  They propose a region CNN (R-CNN), which uses a high-capacity CNN (AlexNet [20]) to process region proposals for localizing and segmenting objects.  ...
doi:10.1109/tits.2017.2726546 fatcat:s3n6mi6xirg5xfzgkni4mgkzye
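To make the "superpixel level" idea concrete, here is a minimal sketch (not the paper's priori s-CNNs) that runs SLIC and assigns each superpixel the majority label of the pixels it covers; the sample image and the random per-pixel labels are stand-ins for a real classifier's output.

```python
# Minimal sketch of superpixel-level labeling: SLIC superpixels plus a majority
# vote over per-pixel predictions inside each superpixel. Labels are synthetic.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()  # stand-in RGB image
segments = slic(image, n_segments=300, compactness=10, start_label=0)

# Placeholder per-pixel labels from some pixel-wise classifier (random here).
pixel_labels = np.random.randint(0, 5, size=segments.shape)

superpixel_labels = np.zeros(segments.max() + 1, dtype=int)
for sp in range(segments.max() + 1):
    votes = np.bincount(pixel_labels[segments == sp])
    superpixel_labels[sp] = votes.argmax()

# Broadcast each superpixel's label back to every pixel it contains.
smoothed = superpixel_labels[segments]
print(smoothed.shape, np.unique(smoothed))
```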

Using Deep Learning in Infrared Images to Enable Human Gesture Recognition for Autonomous Vehicles

Keke Geng, Guodong Yin
2020 IEEE Access  
The saliency maps are obtained by multiscale superpixel segmentation, superpixel block clustering, and cellular automata saliency detection.  ...  The five saliency maps obtained at different scales are fused using a Bayesian fusion algorithm, and the final saliency image is generated.  ...  Figure 11: Pure RGB, pure infrared, and saliency thermal image pair dataset: (a) street light at dusk; (b) dim street light in the evening; (c) no street light in the late  ...
doi:10.1109/access.2020.2990636 fatcat:czvyfs5llzexjga2wijxwatbsu

An Unsupervised Labeling Approach for Hyperspectral Image Classification

J. González Santiago, F. Schenkel, W. Gross, W. Middelmann
2020 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
Ultimately, a CNN classifier is trained using the computed image to predict classes pixel-wise on unseen datasets.  ...  Based on these facts, an unsupervised labeling approach is presented to automatically generate labeled images used during the training of a convolutional neural network (CNN) classifier.  ...  The 3D CNN eases the joint spatial-spectral feature representation from a stack of spectral bands, while the 2D CNN learns more abstract representations at the spatial level.  ...
doi:10.5194/isprs-archives-xliii-b3-2020-407-2020 fatcat:42he4tnuwreybnpu23ru7ko6li

Superpixel-Based Shallow Convolutional Neural Network (SSCNN) for Scanned Topographic Map Segmentation

Tiange Liu, Qiguang Miao, Pengfei Xu, Shihui Zhang
2020 Remote Sensing  
AGWT utilizes information from both linear and area elements to modify detected boundary maps and then obtains superpixels based on the watershed transform.  ...  Based on AGWT, a benchmark for STM segmentation using superpixels and a shallow convolutional neural network (SCNN), termed SSCNN, is proposed.  ...  Change detection from a street image pair using CNN features and superpixel segmentation. In Proceedings of the BMVC, Swansea, UK, 7–10 September 2015; Volume 61, pp. 1–12.  ...
doi:10.3390/rs12203421 fatcat:35mmxq7xqvetpdsfsk4wiqo2wi
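For readers unfamiliar with watershed-based superpixels, the sketch below is a generic stand-in for the idea (not AGWT itself): flood a gradient-magnitude image from a regular grid of markers. The sample image, marker spacing, and gradient choice are assumptions; AGWT instead derives its boundary information from the map's linear and area elements.

```python
# Minimal sketch of watershed-based superpixels on a stand-in grayscale image.
import numpy as np
from skimage.data import coins
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.util import img_as_float

image = img_as_float(coins())  # stand-in grayscale image
gradient = sobel(image)        # boundary-strength map to be flooded

# Regular grid of seed markers (placeholder marker strategy).
markers = np.zeros_like(image, dtype=int)
step, idx = 20, 1
for r in range(step // 2, image.shape[0], step):
    for c in range(step // 2, image.shape[1], step):
        markers[r, c] = idx
        idx += 1

superpixels = watershed(gradient, markers)
print(superpixels.max(), "watershed superpixels")
```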

Co-Segmentation and Superpixel-Based Graph Cuts for Building Change Detection from Bi-Temporal Digital Surface Models and Aerial Images

Shiyan Pang, Xiangyun Hu, Mi Zhang, Zhongliang Cai, Fengzhu Liu
2019 Remote Sensing  
Secondly, for each period of aerial images, semantic segmentation based on a deep convolutional neural network is used to extract building areas, and this is the basis for subsequent superpixel feature  ...  Therefore, with the bi-temporal aerial images and point cloud data obtained by airborne laser scanner (ALS) or DIM as the data source, a novel building change detection method combining co-segmentation  ...  The authors would like to thank Kai Deng for his processing of semantic segmentation. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/rs11060729 fatcat:s4gdb2ornvgc5bxrevqbcdjyby
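One ingredient of superpixel-based building change detection from bi-temporal DSMs is the per-superpixel height difference. The sketch below illustrates only that step on synthetic data (the paper additionally uses co-segmentation and graph cuts); the block-shaped "superpixels", heights, and the 2.5 m threshold are placeholders.

```python
# Minimal sketch: mean bi-temporal DSM height difference per superpixel,
# thresholded to flag candidate building changes. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
dsm_t1 = rng.uniform(0.0, 30.0, size=(100, 100))         # heights at time 1 (m)
dsm_t2 = dsm_t1 + rng.normal(0.0, 0.5, size=(100, 100))  # time 2, mostly unchanged
dsm_t2[20:40, 20:40] += 10.0                              # a synthetic "new building"

# Placeholder superpixels: a regular 10x10 block partition of the scene.
block = 10
segments = (np.arange(100)[:, None] // block) * 10 + (np.arange(100)[None, :] // block)

diff = dsm_t2 - dsm_t1
changed = []
for sp in np.unique(segments):
    mean_dh = diff[segments == sp].mean()
    if abs(mean_dh) > 2.5:  # placeholder height-change threshold in meters
        changed.append(int(sp))
print("changed superpixels:", changed)
```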

Superpixel-wise Assessment of Building Damage from Aerial Images

Lukas Lucks, Dimitri Bulatov, Ulrich Thönnessen, Melanie Böge
2019 Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
Then, 52 spectral and textural features are extracted to classify each superpixel as damaged or undamaged using a Random Forest algorithm.  ...  Thus, to make this process more feasible, we developed an automated approach for assessing roof damage from post-loss close-range aerial images and roof outlines.  ...  Moreover, Fujita et al. (2017) applied Convolutional Neural Networks (CNNs) to analyze pairs of pre- and post-event color images when available, or only post-event images where pre-event images were not  ...
doi:10.5220/0007253802110220 dblp:conf/visapp/LucksBTB19 fatcat:oakstzjzwbbyrccnmjlr6cxnim
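The classification step described above maps a feature vector per superpixel to a damaged/undamaged label with a Random Forest. Below is a minimal, hedged sketch of that step with synthetic features and labels; the 52-dimensional feature vector mirrors the paper's count, but the actual spectral and textural features are not reproduced here.

```python
# Minimal sketch: Random Forest classifying superpixels as damaged/undamaged
# from per-superpixel feature vectors. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_superpixels, n_features = 1000, 52
X = rng.normal(size=(n_superpixels, n_features))
# Synthetic ground truth loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_superpixels) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy on held-out superpixels:", clf.score(X_test, y_test))
```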

Superpixel Segmentation with Fully Convolutional Networks [article]

Fengting Yang, Qian Sun, Hailin Jin, Zihan Zhou
2020 arXiv preprint
In computer vision, superpixels have been widely used as an effective way to reduce the number of image primitives for subsequent processing.  ...  Specifically, we modify a popular network architecture for stereo matching to simultaneously predict superpixels and disparities.  ...  This work is supported in part by NSF award #1815491 and a gift from Adobe.  ... 
arXiv:2003.12929v1 fatcat:muqjv2as3bb3togsae4kowcm4q

Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes

Yang Zhang, Philip David, Boqing Gong
2017 IEEE International Conference on Computer Vision (ICCV)
However, training CNNs requires a considerable amount of data, which is difficult to collect and laborious to annotate.  ...  These are easy to estimate because images of urban scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.).  ...  This work is supported by the NSF award IIS #1566511, a gift from Adobe Systems Inc., and a GPU from NVIDIA. We thank the anonymous reviewers and area chairs for their insightful comments.  ...
doi:10.1109/iccv.2017.223 dblp:conf/iccv/ZhangDG17 fatcat:5j473mwet5aidiny2c45auzhae

Geometry Aware Evaluation of Handcrafted Superpixel-Based Features and Convolutional Neural Networks for Land Cover Mapping Using Satellite Imagery

Dawa Derksen, Jordi Inglada, Julien Michel
2020 Remote Sensing  
using the local class histograms as contextual features.  ...  Alternatively, there are several methods based on the manual selection of contextual features in a chosen neighborhood, guided by the knowledge of the data and past experience from similar problems.  ...  Acknowledgments: We would like to thank Andrei Stoian from Thales ThereSiS Lab, Vincent Poulain from Thales Services, and Victor Poughon from the Centre National d'Etudes Spatiales for providing us with  ... 
doi:10.3390/rs12030513 fatcat:ixqerddjhvdfbenykfnk7leve4
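The "local class histograms as contextual features" mentioned in this entry can be illustrated with a short sketch: for each superpixel, compute the normalized histogram of predicted classes inside a window around its centroid. The label map, block-shaped superpixels, and window size below are placeholders, not the paper's configuration.

```python
# Minimal sketch: normalized class histogram in a window around each
# superpixel's centroid, usable as a contextual feature vector. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_classes = 6
label_map = rng.integers(0, n_classes, size=(200, 200))  # per-pixel class predictions
segments = (np.arange(200)[:, None] // 20) * 10 + (np.arange(200)[None, :] // 20)

def local_class_histogram(sp_id, window=40):
    rows, cols = np.nonzero(segments == sp_id)
    r0, c0 = int(rows.mean()), int(cols.mean())  # superpixel centroid
    r_lo, r_hi = max(r0 - window, 0), min(r0 + window, label_map.shape[0])
    c_lo, c_hi = max(c0 - window, 0), min(c0 + window, label_map.shape[1])
    patch = label_map[r_lo:r_hi, c_lo:c_hi]
    hist = np.bincount(patch.ravel(), minlength=n_classes).astype(float)
    return hist / hist.sum()

print(local_class_histogram(sp_id=0))
```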

A Curriculum Domain Adaptation Approach to the Semantic Segmentation of Urban Scenes [article]

Yang Zhang, Philip David, Hassan Foroosh, Boqing Gong
2019 arXiv preprint
However, training CNNs requires a considerable amount of data, which is difficult to collect and laborious to annotate.  ...  These are easy to estimate because images of urban scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.).  ...  ACKNOWLEDGMENTS This work was supported by the NSF award IIS #1566511, a gift from Adobe Systems Inc., and a GPU from NVIDIA.  ...
arXiv:1812.09953v3 fatcat:do34jsgclzbb5hod2b5m4rove4

Multitemporal Very High Resolution From Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

L. Mou, X. Zhu, M. Vakalopoulou, K. Karantzalos, N. Paragios, B. Le Saux, G. Moser, D. Tuia
2017 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing  
The second place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection.  ...  The 2016 Contest was an open topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and  ...  ACKNOWLEDGMENT The authors would like to express their greatest appreciation to Deimos Imaging and Urthecast, for acquiring and providing the data used in the competition and for indispensable contribution  ... 
doi:10.1109/jstars.2017.2696823 fatcat:v2ilmcthqbcdffsberu72ctxlu

Clothes Co-Parsing Via Joint Image Segmentation and Labeling With Application to Clothing Retrieval

Xiaodan Liang, Liang Lin, Wei Yang, Ping Luo, Junshi Huang, Shuicheng Yan
2016 IEEE Transactions on Multimedia
Furthermore, we apply our method to a challenging task, i.e., cross-domain clothing retrieval: given a user photo depicting a clothing item, retrieving the same clothing items from online shopping stores  ...  ., "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual  ...  For testing, we used 5000 online-offline image pairs, and the offline images from customers are treated as queries and online images are used as the retrieval gallery.  ...
doi:10.1109/tmm.2016.2542983 fatcat:jhgwjoiohvfxxkdl3p7xpktzni

Complex Relations in a Deep Structured Prediction Model for Fine Image Segmentation [article]

Cristina Mata, Guy Ben-Yosef, Boris Katz
2018 arXiv   pre-print
Many deep learning architectures for semantic segmentation involve a Fully Convolutional Neural Network (FCN) followed by a Conditional Random Field (CRF) to carry out inference over an image.  ...  We incorporate two relations that were shown to be useful to human object identification - containment and attachment - into the energy term of the CRF and evaluate their performance on the Pascal VOC  ...  Acknowledgments This work was supported in part by the Center for Brains, Minds, and Machines, NSF STC award 1231216, as well as the MIT-IBM Brain-Inspired Multimedia Comprehension project.  ... 
arXiv:1805.09462v1 fatcat:egvsv2skjbdhxbgfvr2vfajxoi
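To show what "incorporating relations into the energy term of the CRF" can look like in code, here is a toy sketch of a region-level energy with a unary term, a Potts pairwise term, and an extra relation term; the potentials, labeling, "contains" relation, and the rule it enforces are all illustrative assumptions, not the paper's formulation.

```python
# Toy sketch of a CRF-style energy over regions: unary + Potts pairwise + an
# extra relation (containment) term. All values and rules are synthetic.
import numpy as np

n_regions, n_labels = 4, 3
unary = np.random.default_rng(2).uniform(0, 1, size=(n_regions, n_labels))
labels = np.array([0, 1, 1, 2])   # a candidate labeling of the regions

edges = [(0, 1), (1, 2), (2, 3)]  # adjacent region pairs
contains = [(0, 3)]               # toy relation: region 0 contains region 3

def energy(labels, w_pair=0.5, w_contain=1.0):
    e = unary[np.arange(n_regions), labels].sum()
    # Potts pairwise term: penalize different labels on adjacent regions.
    e += w_pair * sum(labels[i] != labels[j] for i, j in edges)
    # Relation term (toy rule): penalize a contained region sharing its
    # container's label.
    e += w_contain * sum(labels[i] == labels[j] for i, j in contains)
    return e

print("energy of candidate labeling:", energy(labels))
```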

Detecting Hands in Egocentric Videos: Towards Action Recognition [chapter]

Alejandro Cartas, Mariella Dimiccoli, Petia Radeva
2018 Lecture Notes in Computer Science  
However, besides extreme illumination changes in egocentric images, hand detection is not a trivial task because of the intrinsic large variability of hand appearance.  ...  Since the hands are involved in a vast set of daily tasks, detecting hands in egocentric images is an important step towards the recognition of a variety of egocentric actions.  ...  A.C. was supported by a doctoral fellowship from the Mexican Council of Science and Technology (CONACYT) (grant-no. 366596).  ... 
doi:10.1007/978-3-319-74727-9_39 fatcat:pb7322k2qzhbhiunfupybcjwxe
Showing results 1–15 of 178 results