4,572 Hits in 5.2 sec

W-Net: Two-stage U-Net with misaligned data for raw-to-RGB mapping [article]

Kwang-Hyun Uhm, Seung-Wook Kim, Seo-Won Ji, Sung-Jin Cho, Jun-Pyo Hong, Sung-Jea Ko
<span title="2019-11-22">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
A challenging data set, namely the Zurich Raw-to-RGB (ZRR) data set, has been released in the AIM 2019 raw-to-RGB mapping challenge. ... Recent research on learning a mapping between raw Bayer images and RGB images has progressed with the development of deep convolutional neural networks. ... The AIM 2019 raw-to-RGB mapping challenge [11] consists of two tracks: a fidelity track (Track 1) and a perceptual track (Track 2). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.08656v3">arXiv:1911.08656v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7g4dm3mtdvhwxnzczufybtc5ca">fatcat:7g4dm3mtdvhwxnzczufybtc5ca</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200823002912/https://arxiv.org/pdf/1911.08656v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/c6/f1c6d961a6bff3f4be73b0be66a1136ed5867be3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.08656v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results [article]

Abdelrahman Abdelhamed, Mahmoud Afifi, Radu Timofte, Michael S. Brown, Yue Cao, Zhilu Zhang, Wangmeng Zuo, Xiaoling Zhang, Jiye Liu, Wendong Chen, Changyuan Wen, Meng Liu (+78 others)
<span title="2020-05-08">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper reviews the NTIRE 2020 challenge on real image denoising, with a focus on the newly introduced dataset, the proposed methods, and their results. ... The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising, which was based on the SIDD benchmark. ... Acknowledgements: We thank the NTIRE 2020 sponsors: Huawei, Oppo, Voyage81, MediaTek, DisneyResearch|Studios, and Computer Vision Lab (CVL) ETH Zurich. A. Teams and ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.04117v1">arXiv:2005.04117v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/iwtpyxikerbqhhvkpmwghqxeke">fatcat:iwtpyxikerbqhhvkpmwghqxeke</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200512044647/https://arxiv.org/pdf/2005.04117v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.04117v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results [article]

Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee
<span title="2020-05-04">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper reviews the first AIM challenge on video temporal super-resolution (frame interpolation) with a focus on the proposed solutions and results. ... The challenge-winning methods achieve the state of the art in video temporal super-resolution. ... Acknowledgments: We thank the AIM 2019 sponsors. A. Teams and affiliations. AIM 2019 team. Title: AIM 2019 Challenge on Video Temporal Super- ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01233v1">arXiv:2005.01233v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aqxpmzat7jcwdiztapx5sttceq">fatcat:aqxpmzat7jcwdiztapx5sttceq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929194221/https://arxiv.org/pdf/2005.01233v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/29/6c/296ca2f9e19a63cdb9dfa054637968a56b6e82c1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01233v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results [article]

Andreas Lugmayr, Martin Danelljan, Radu Timofte, Namhyuk Ahn, Dongwoon Bai, Jie Cai, Yun Cao, Junyang Chen, Kaihua Cheng, SeYoung Chun, Wei Deng, Mostafa El-Khamy (+34 others)
<span title="2020-05-05">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper reviews the NTIRE 2020 challenge on real-world super-resolution. It focuses on the participating methods and final results. ... This is the second challenge on the subject, following AIM 2019, aiming to advance the state of the art in super-resolution. To measure performance, we use the benchmark protocol from AIM 2019. ... Acknowledgements: We thank the NTIRE 2020 sponsors: Huawei, Oppo, Voyage81, MediaTek, DisneyResearch|Studios, and Computer Vision Lab (CVL) ETH Zurich. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01996v1">arXiv:2005.01996v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ewngd7chdve3fbvwis32v64ruq">fatcat:ewngd7chdve3fbvwis32v64ruq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200528032059/https://arxiv.org/pdf/2005.01996v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01996v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Depth Maps Restoration for Human using RealSense

Jingfang Yin, Dengming Zhu, Min Shi, Zhaoqi Wang
<span title="">2019</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
Furthermore, in order to show the effectiveness of the proposed method, we register and measure human 3D models based on the optimized depth maps. ... The experimental results show that our method can effectively restore depth maps of humans captured with RealSense. ... Figure 6. Experimental results of different methods on our testing dataset; for each group of three images, from left to right: input RGB image, the result of [22], and the result of our method. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2019.2934863">doi:10.1109/access.2019.2934863</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/diiicfiwhjcozhcckr7zdrw4py">fatcat:diiicfiwhjcozhcckr7zdrw4py</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210429173247/https://ieeexplore.ieee.org/ielx7/6287639/8600701/08794839.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0c/27/0c27fe27e9bb1e0f74f49e0a79349a7380f86437.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2019.2934863"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Predicting Unobserved Space For Planning via Depth Map Augmentation [article]

Marius Fehr, Tim Taubner, Yang Liu, Roland Siegwart, Cesar Cadena
<span title="2019-11-13">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
... from RGB-D sensors, semi-dense methods, and stereo matchers. ... On real-world MAV data, the augmented system demonstrates superior performance compared to a planner based on very dense RGB-D depth maps. ... Acknowledgment: We would like to thank Helen Oleynikova for her help with the planner, Zachary Taylor for enabling the real-world experiments and providing the VI-LiDAR setup, and Fangchang Ma for his help ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.05761v1">arXiv:1911.05761v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xbdc46dnnzh6jbx7iiqm5sbaca">fatcat:xbdc46dnnzh6jbx7iiqm5sbaca</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200913072815/https://arxiv.org/pdf/1911.05761v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d6/0b/d60bc885e062cf60f25eb2a5ce4de071b9c69c7d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.05761v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Technical Report: Co-learning of geometry and semantics for online 3D mapping [article]

Marcela Carvalho, Maxime Ferrera, Alexandre Boulch, Julien Moras, Bertrand Le Saux, Pauline Trouvé-Peloux
<span title="2019-11-04">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The performance of each step of the proposed method is evaluated on the dataset and multiple tasks of the 3DRMS Challenge, and consistently surpasses state-of-the-art approaches. ... The resulting semantic 3D point clouds are then merged in order to create a consistent 3D mesh, in turn used to produce dense semantic 3D reconstruction maps. ... Fig. 3. Comparison of the semantic segmentation maps generated by a U-Net trained on RGB images and by the proposed multi-task network architecture trained on RGB and raw depth images on input and extra ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.01082v1">arXiv:1911.01082v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ynsc6vrnwnfs5lnhwxe3az62iy">fatcat:ynsc6vrnwnfs5lnhwxe3az62iy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200906104954/https://arxiv.org/pdf/1911.01082v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9e/e2/9ee27240a48ea99bdb4ec9af75c36d821abc2a0b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.01082v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deep Appearance Maps [article]

Maxim Maximov, Laura Leal-Taixé, Mario Fritz, Tobias Ritschel
<span title="2019-10-29">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Finally, we show the example of an appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps. ... First, we show how a DAM can be learned from images or video frames and later be used to synthesize appearance, given new surface orientations and viewer positions. ... On the one hand, this is more than what we do, as it factors out lighting; on the other hand, our approach is more general, as it makes no assumptions on light or geometry and works on raw 4D samples. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.00863v3">arXiv:1804.00863v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2hykyu5rzjaazjbxknkuoqzosu">fatcat:2hykyu5rzjaazjbxknkuoqzosu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200822193034/https://arxiv.org/pdf/1804.00863v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/de/a0/dea0a2108f3a754e8072a3c2b229037ccff07b61.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.00863v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps [article]

Nícolas Rosa, Vitor Guizilini, Valdir Grassi Jr
<span title="2019-10-21">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution.  ...  Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework.  ...  The first one, non-Guided Depth Upsampling, aims to generate denser maps using only sparse maps obtained directly from 3D data or SLAM features.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.09061v3">arXiv:1809.09061v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ajd62exzr5ef7erhedvzb4ulja">fatcat:ajd62exzr5ef7erhedvzb4ulja</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200912144639/https://arxiv.org/pdf/1809.09061v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/66/3b/663bf02e631bd1eed1ab81c5f1d251a815b2aac7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.09061v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Component Decomposition-Based Hyperspectral Resolution Enhancement for Mineral Mapping

Puhong Duan, Jibao Lai, Pedram Ghamisi, Xudong Kang, Robert Jackisch, Jian Kang, Richard Gloaguen
<span title="2020-09-07">2020</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/kay2tsbijbawliu45dnhvyvgsq" style="color: black;">Remote Sensing</a> </i> &nbsp;
Experimental results verify that the fused results can successfully achieve mineral mapping, producing qualitatively and quantitatively better results than single-sensor data. ... Based on this idea, the proposed method comprises several steps. ... Figure 11a,b shows the classification maps on the raw data, i.e., RGB and the original HSI. Figure 11b–j exhibits the mineral mapping results of different resolution enhancement methods on the fused results. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/rs12182903">doi:10.3390/rs12182903</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zw6sebeskbakjhpysnptukqsve">fatcat:zw6sebeskbakjhpysnptukqsve</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201211025236/https://www.mdpi.com/2072-4292/12/18/2903/htm" title="fulltext access" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [HTML] </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/rs12182903"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

Assessment of Burned Area Mapping Methods for Smoke Covered Sentinel-2 Data

Alexandru-Cosmin Grivei, Corina Vaduva, Mihai Datcu
<span title="">2020</span> <i title="IEEE"> 2020 13th International Conference on Communications (COMM) </i> &nbsp;
To improve both the usability of optical remote sensing data and the quality of the obtained information, we compare multiple feature extraction, classification, and visual enhancement methods and algorithms ... for land cover mapping of smoke-covered Sentinel-2 data. ... Acknowledgment: This work has been performed within the frame of the "Multispectral Data Analysis Toolbox for SNAP - ESA's SentiNel Application Platform" project, funded by ESA, and it will be made available ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/comm48946.2020.9141999">doi:10.1109/comm48946.2020.9141999</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dbraajbirfgfzpnadwuiocxfza">fatcat:dbraajbirfgfzpnadwuiocxfza</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210716044925/https://elib.dlr.de/140997/1/09141999.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4e/97/4e9710244335413040ac9f22aac9318de0d6b6de.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/comm48946.2020.9141999"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

A Survey of Simultaneous Localization and Mapping with an Envision in 6G Wireless Networks [article]

Baichuan Huang, Jun Zhao, Jingbin Liu
<span title="2020-02-14">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Simultaneous Localization and Mapping (SLAM) achieves simultaneous positioning and map construction based on self-perception. ... For LiDAR and visual SLAM, the survey covers the basic types and products of sensors, open-source systems by category and history, embedded deep learning, and the challenges and future directions. ... [287] targets the tracking part of SLAM, using an RGB-D camera and a low-cost 2D LiDAR to achieve robust indoor SLAM through mode switching and data fusion. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.05214v4">arXiv:1909.05214v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/itnluvkewfd6fel7x65wdgig3e">fatcat:itnluvkewfd6fel7x65wdgig3e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321164709/https://arxiv.org/pdf/1909.05214v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.05214v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Modelling the Milky Way. I – Method and first results fitting the thick disk and halo with DES-Y3 data [article]

A. Pieres, L. Girardi, E. Balbinot, B. Santiago, L. N. da Costa, A. Carnero Rosell, A. B. Pace, K. Bechtol, M. A. T. Groenewegen, A. Drlica-Wagner, T. S. Li, M. A. G. Maia (+42 others)
<span title="2020-04-01">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Validation tests on synthetic data possessing similar properties to the DES data show that the method is able to recover input parameters with a precision better than 3%.  ...  We present MWFitting, a method to fit the stellar components of the Galaxy by comparing Hess Diagrams (HDs) from TRILEGAL models to real data.  ...  ACKNOWLEDGEMENTS The authors are grateful to James Binney for many useful suggestions and comments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.04350v2">arXiv:1904.04350v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/n3lxqvmplzbq5cqxssutgqd45i">fatcat:n3lxqvmplzbq5cqxssutgqd45i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200403001011/https://arxiv.org/pdf/1904.04350v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.04350v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Innovative Geospatial Solutions for Land Tenure Mapping

M. Koeva, C Stöcker, S Crommelinck, M Chipofya, K Kundert, A Schwering, J Sahib, T Zein, C Timm, M.I Humayun, J Crompvoets, E Tan (+2 others)
<span title="2020-07-10">2020</span> <i title="African Journals Online (AJOL)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/umwpgxfwmbdrppuzj6smh7ihsu" style="color: black;">Rwanda Journal of Engineering Science Technology and Environment</a> </i> &nbsp;
The solutions are based on specific needs, market opportunities, and the readiness of end-users. Moreover, with an eye to scaling up, broader governance implications are examined. ... In response to this need, the consortium of the "its4land" European Commission Horizon 2020 project developed the "its4land toolbox", based on the continuum of land rights and the fit-for-purpose approach. ... The multiresolution combinatorial grouping (MCG) method was selected for the creation of closed contours based on the UAV data (S. Crommelinck et al., 2019). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.4314/rjeste.v3i1.3s">doi:10.4314/rjeste.v3i1.3s</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jtyhy3qc2ra5zoqqi5pfvm7g34">fatcat:jtyhy3qc2ra5zoqqi5pfvm7g34</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200801210519/https://www.ajol.info/index.php/rjeste/article/download/197505/186314" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c0/1e/c01e7366fe7b8e8f3b47c4b4f8478d0244d2c0fa.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.4314/rjeste.v3i1.3s"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Integration of Text-maps in Convolutional Neural Networks for Region Detection among Different Textual Categories [article]

Roberto Arroyo, Javier Tovar, Francisco J. Delgado, Emilio J. Almazán, Diego G. Serrador, Antonio Hurtado
<span title="2019-05-26">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The reported results demonstrate that our approach, which exploits both visual and textual data, outperforms state-of-the-art algorithms based only on appearance, such as standard Faster R-CNN. ... This representation, referred to as a text-map, is integrated with the actual image to provide a much richer input to the network. ... In Section 3, we present several experiments and results for item coding to validate our text-map-based approach against methods based only on appearance, such as standard Faster R-CNN. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.10858v1">arXiv:1905.10858v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rtu6d7g5ynenxmfn466al55ayu">fatcat:rtu6d7g5ynenxmfn466al55ayu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191221210537/https://arxiv.org/pdf/1905.10858v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/39/3f/393fd9901d38214d8948107ae848bb3a12dcc8cc.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.10858v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1 to 15 out of 4,572 results