Photographing long scenes with multi-viewpoint panoramas
2006
ACM Transactions on Graphics
Abstract We present a system for producing multi-viewpoint panoramas of long, roughly planar scenes, such as the facades of buildings along a city street, from a relatively sparse set of photographs captured ...
Figure 1 A multi-viewpoint panorama of a street in Antwerp composed from 107 photographs taken about one meter apart with a hand-held camera. ...
Acknowledgements: We are deeply indebted to Noah Snavely for the use of his structure-from-motion system. ...
doi:10.1145/1141911.1141966
fatcat:fahkcpis4rh4xd4ebep27wc72e
Projective Urban Texturing
[article]
2022
arXiv
pre-print
We demonstrate both quantitative and qualitative evaluation of the generated textures. ...
This paper proposes a method for automatic generation of textures for 3D city meshes in immersive urban environments. ...
... (e.g., between adjacent buildings or streets and buildings) are not modelled, mesh semantic information is rarely available, and mesh training data is rather sparse. ...
arXiv:2201.10938v2
fatcat:naw2zoh25vhxdcw2j6scjks5hi
Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama
[article]
2021
arXiv
pre-print
Conversely, OmniNeRF can generate panorama images for unknown viewpoints given a single equirectangular image as training data. ...
Recent works for novel view synthesis focus on perspective images with limited field-of-view and require sufficient pictures captured in a specific condition. ...
able to train a multi-layer perceptron (MLP) for predicting each pixel in the panorama being viewed from an arbitrary location. ...
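As a rough illustration of how an equirectangular panorama pixel maps to the view ray such an MLP would be queried with (a minimal sketch under an assumed pixel-to-angle convention, not the paper's code):

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit view ray.

    Assumed convention (not taken from the paper):
    u in [0, width) spans azimuth theta in [-pi, pi);
    v in [0, height) spans elevation phi from +pi/2 (top row,
    looking straight up) down to -pi/2.
    """
    theta = (u / width) * 2.0 * math.pi - math.pi   # azimuth
    phi = math.pi / 2.0 - (v / height) * math.pi    # elevation
    # Spherical-to-Cartesian: y is up, z points toward theta = 0.
    x = math.cos(phi) * math.sin(theta)
    y = math.sin(phi)
    z = math.cos(phi) * math.cos(theta)
    return (x, y, z)
```

An MLP in this setting would then take such a ray (plus a 3D viewpoint) as input and regress the pixel color, which is what lets it render the panorama from arbitrary locations.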
arXiv:2106.10859v1
fatcat:m7qtfh55cfau7flgvvmivzitku
Locating key views for image indexing of spaces
2008
Proceeding of the 1st ACM international conference on Multimedia information retrieval - MIR '08
In this paper, we will not only visualize areas with images, but also determine where the most distinct viewpoints should be located. ...
sequence of views can be concatenated with minimum continuity along most-exposed-paths. ...
We can notice that the path has sparse viewpoints at wide and open areas and dense viewpoints at narrow streets. ...
doi:10.1145/1460096.1460103
dblp:conf/mir/CaiZ08
fatcat:boda3v7qsbfbvl2pwy66v3l5jq
Toward Seamless Multiview Scene Analysis From Satellite to Street Level
2017
Proceedings of the IEEE
What makes the combination of overhead and street-level images challenging is the strongly varying viewpoint, scale, illumination, sensor modality, and time of acquisition. ...
In this paper, we discuss and review how combined multi-view imagery from satellite to street-level can benefit scene analysis. ...
[25] locate Google Street View images using a vector layer of building footprints by finding corners and building outlines in both sources. ...
doi:10.1109/jproc.2017.2684300
fatcat:r3hyfdtjzzaknl25cpygxrcgq4
Automated Building Image Extraction from 360-degree Panoramas for Post-Disaster Evaluation
[article]
2019
arXiv
pre-print
Several panoramas are used so that the detected building images provide various viewpoints of the building. ...
By providing a geotagged image collected near the target building as the input, panoramas close to the input image location are automatically downloaded through street view services (e.g., Google or Bing ...
with multiple viewpoints. ...
arXiv:1905.01524v1
fatcat:376jaxcn4za73avtix4cfuzaqq
Key views for visualizing large spaces
2009
Journal of Visual Communication and Image Representation
In this paper, we will not only visualize areas with images, but also propose a general framework to determine where the most distinct viewpoints should be located. ...
The location of image capture is, however, subjective and has relied on the esthetic sense of photographers up until this point. ...
This is because a cylindrical panorama with a limited vertical FOV may cut off high buildings as the viewpoint moves too close to buildings. ...
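The cut-off effect is simple trigonometry: the vertical half-FOV needed to see a building top grows as the viewpoint approaches. A small sketch (heights and distances here are illustrative assumptions, not values from the paper):

```python
import math

def required_vertical_half_fov_deg(building_height_m, camera_height_m, distance_m):
    """Vertical half-FOV (degrees above the horizon) that a
    cylindrical panorama needs so the building top stays in frame.
    Flat-ground, pinhole-style geometry; all inputs are illustrative.
    """
    rise = building_height_m - camera_height_m  # height above the camera
    return math.degrees(math.atan2(rise, distance_m))
```

For example, a 30 m facade seen from 10 m away with a 1.5 m camera needs a half-FOV of over 45 degrees above the horizon, so a panorama limited to +/-45 degrees vertically clips its top; from 100 m away the same facade fits easily.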
doi:10.1016/j.jvcir.2009.04.005
fatcat:nzolm2dhbjb4png3gdxzdlnvhy
A Multi-source Data Based Analysis Framework for Urban Greenway Safety
2021
Tehnički Vjesnik
The results showed that the greenway with high safety has the characteristics of low density of arbor shrubs, low enclosure degree of walls, low distribution density of various buildings, high traffic ...
Through the utilization of big geodata information from each platform, including street view analysis, POI analysis, and sports activity data analysis, four factors including space boundary, maintenance ...
Science Foundation of China (51978050). ...
doi:10.17559/tv-20201101064943
fatcat:pbmxrmjsw5ch5m4jqdprwq3qdi
HoliCity: A City-Scale Data Platform for Learning Holistic 3D Structures
[article]
2021
arXiv
pre-print
Currently, this dataset has 6,300 real-world panoramas of resolution 13312 × 6656 that are accurately aligned with the CAD model of downtown London with an area of more than 20 km^2, in which the median ...
The accurate alignment of the 3D CAD models and panoramas also benefits low-level 3D vision tasks such as surface normal estimation, as the surface normal extracted from previous LiDAR-based datasets is ...
Panorama. HoliCity uses panorama images from Google Street View with a resolution 13312 × 6656. ...
arXiv:2008.03286v2
fatcat:fw3qlfbk2vhjvanmxr65uamtmq
A Unified Model for Near and Remote Sensing
2017
2017 IEEE International Conference on Computer Vision (ICCV)
To evaluate our approach, we created a large dataset of overhead and ground-level images from a major urban area with three sets of labels: land use, building function, and building age. ...
The output of this network is a dense estimate of the geospatial function in the form of a pixel-level labeling of the overhead image. ...
to the University of Kentucky Center for Computational Sciences. ...
doi:10.1109/iccv.2017.293
dblp:conf/iccv/WorkmanZCJ17
fatcat:3c2fcr5hnjazbepiqlwa5aqvte
Differentiable Mapping Networks: Learning Structured Map Representations for Sparse Visual Localization
[article]
2020
arXiv
pre-print
We apply the DMN to sparse visual localization, where a robot needs to localize in a new environment with respect to a small number of images from known viewpoints. ...
The benefit of spatial structure increases with larger environments, more viewpoints for mapping, and when training data is scarce. Project website: http://sites.google.com/view/differentiable-mapping ...
We thank Dan Rosenbaum for fruitful discussions on the approach and for suggesting the Street View dataset. We thank Chad Richards for editing the manuscript. ...
arXiv:2005.09530v1
fatcat:ffwwxekwdbhg7gf5fjgu7ue5ji
Cataloging Public Objects Using Aerial and Street-Level Images — Urban Trees
2016
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Each corner of the inhabited world is imaged from multiple viewpoints with increasing frequency. ...
The main technical challenge is combining test time information from multiple views of each geographic location (e.g., aerial and street views). ...
Multi View Detection: We begin with an input region (left image), where red dots show available street view locations. ...
doi:10.1109/cvpr.2016.647
dblp:conf/cvpr/WegnerBHSP16
fatcat:jshtpklzbna7pmmcf3hmi55olq
A Unified Model for Near and Remote Sensing
[article]
2017
arXiv
pre-print
To evaluate our approach, we created a large dataset of overhead and ground-level images from a major urban area with three sets of labels: land use, building function, and building age. ...
The output of this network is a dense estimate of the geospatial function in the form of a pixel-level labeling of the overhead image. ...
to the University of Kentucky Center for Computational Sciences. ...
arXiv:1708.03035v1
fatcat:44rhyygfgffy3oz2s76bzzuts4
Predicting Ground-Level Scene Layout from Aerial Imagery
[article]
2016
arXiv
pre-print
We show that a model learned using this strategy, with no additional training, is already capable of rough semantic labeling of aerial imagery. ...
Instead of manually labeling the aerial imagery, we propose to predict (noisy) semantic features automatically extracted from co-located ground imagery. ...
Acknowledgements We gratefully acknowledge the support of NSF CA-REER grant (IIS-1553116), a Google Faculty Research Award, and an AWS Research Education grant. ...
arXiv:1612.02709v1
fatcat:est2frmganhqdexqw7nz2ylxwm
Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery
[article]
2022
arXiv
pre-print
Taking a small satellite image patch as input, our method generates a Google-style omnidirectional street-view panorama, as if it were captured from the same geographical location as the center of the satellite ...
With these projected satellite images as network input, we next employ a generator to synthesize realistic street-view panoramas that are geometrically consistent with the satellite images. ...
synthesis, where the camera location of the street-view panorama corresponds exactly to the center of the satellite image. ...
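The geometric link between the two views can be sketched with a flat-ground projection: each panorama pixel below the horizon hits the ground at a point whose offset from the camera tells where to sample the satellite image. A simplified stand-in for the paper's projection, under an assumed equirectangular convention:

```python
import math

def panorama_pixel_to_ground_offset(u, v, width, height, cam_height_m):
    """For a street-view panorama pixel below the horizon, return the
    (east, north) ground offset in meters from the camera, assuming a
    flat ground plane and a camera cam_height_m above it.

    Assumed convention: u spans azimuth [-pi, pi) with theta = 0 facing
    north; v spans elevation from +pi/2 (top row) to -pi/2.
    """
    theta = (u / width) * 2.0 * math.pi - math.pi   # azimuth
    phi = math.pi / 2.0 - (v / height) * math.pi    # elevation
    if phi >= 0.0:
        return None  # at or above the horizon: ray never hits the ground
    dist = cam_height_m / math.tan(-phi)            # ground distance to hit point
    east = dist * math.sin(theta)
    north = dist * math.cos(theta)
    return (east, north)
```

Converting the metric offset to a satellite pixel is then just a scale by the image's ground sampling distance; the paper's actual transformation may differ in conventions and refinements.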
arXiv:2103.01623v4
fatcat:f4tk56sb3rhfjb5ysjw33mxkry
Showing results 1 — 15 out of 539 results