Photometric Modeling for Mixed Reality [chapter]

Katsushi Ikeuchi, Yoichi Sato, Ko Nishino, Imari Sato
1999 Mixed Reality  
The model-based rendering method first analyzes input images of real objects, obtains reflectance parameters from this analysis, and then, using the determined reflectance parameters, generates the virtual  ...  For model creation, we have developed two methods, the model-based rendering method and the eigen-texture method, both of which automatically create such rendering models by observing real objects.  ...  Unfortunately, however, some classes of object surfaces reveal complicated reflectance models, and the model-based rendering method cannot be applied to those classes of objects.  ... 
doi:10.1007/978-3-642-87512-0_8 fatcat:4gxssqd5xrdfjngpephwextoii
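As a loose illustration of the model-based rendering idea in this chapter's snippet, a minimal sketch assuming a purely Lambertian surface with known light directions and surface normal (this is not the chapter's actual reflectance model): fit the diffuse albedo of one pixel from its observed intensities, then reuse it to synthesize the pixel under a novel light.

```python
import numpy as np

# Hypothetical inputs: intensities of one pixel under K known point lights,
# plus the surface normal recovered for that pixel (e.g. from a range scan).
intensities = np.array([0.42, 0.61, 0.18])            # K observations
light_dirs = np.array([[0.0, 0.0, 1.0],
                       [0.6, 0.0, 0.8],
                       [0.0, 0.8, 0.6]])              # unit light directions
normal = np.array([0.0, 0.0, 1.0])                    # unit surface normal

# Lambertian model: I_k = albedo * max(0, n . l_k).
# Least-squares estimate of the diffuse albedo from all K observations.
shading = np.clip(light_dirs @ normal, 0.0, None)
albedo = shading @ intensities / (shading @ shading)

# Re-render the pixel under a novel light direction using the fitted parameter.
novel_light = np.array([0.3, 0.3, 0.9]) / np.linalg.norm([0.3, 0.3, 0.9])
rendered = albedo * max(0.0, normal @ novel_light)
print(albedo, rendered)
```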

Open Logo Detection Challenge [article]

Hang Su, Xiatian Zhu, Shaogang Gong
2018 arXiv   pre-print
OpenLogo contains 27,083 images from 352 logo classes, built by aggregating/refining 7 existing datasets and establishing an open logo detection evaluation protocol.  ...  Such assumptions are often invalid in realistic logo detection scenarios where new logo classes come progressively and need to be detected with little or no budget for exhaustively labelling fine-grained  ...  0.02 scale ratio.  ... 
arXiv:1807.01964v3 fatcat:ih6tgdepbjcctfgj4hrvppueg4

Training Object Detectors on Synthetic Images Containing Reflecting Materials [article]

Sebastian Hartwig, Timo Ropinski
2019 arXiv   pre-print
We investigate the influence of the rendering approach used for image synthesis, the effect of domain randomization, as well as the amount of training data used.  ...  Therefore, within this paper we examine the effect of reflecting materials in the context of synthetic image generation for training object detectors.  ...  s [26] physically-based data set. They sampled roughly 500K images from 45K realistic indoor scenes, varying rendering methods and lighting conditions. Their data set consists of 40 classes (e.g.  ... 
arXiv:1904.00824v1 fatcat:cnap5qxwmrbjlhborkit75ahkq

ABO: Dataset and Benchmarks for Real-World 3D Object Understanding [article]

Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F. Yago Vicente, Thomas Dideriksen, Himanshu Arora, Matthieu Guillaumin, Jitendra Malik
2022 arXiv   pre-print
ABO contains product catalog images, metadata, and artist-created 3D models with complex geometries and physically-based materials that correspond to real, household objects.  ...  3D reconstruction, material estimation, and cross-domain multi-view object retrieval.  ...  In Table 8, we compare the results when using rendered images of test classes as queries against the union of catalog images of test classes and catalog images of train classes.  ... 
arXiv:2110.06199v2 fatcat:k4xmm7dwszf7pdmoen7r3riykm

Fast re-rendering of volume and surface graphics by depth, color, and opacity buffering

A Bhalerao, Hanspeter Pfister, Michael Halle, Ron Kikinis
2000 Medical Image Analysis  
For the knee data (δ = 1), the ratio of re-compositing to rendering is approximately 1 : 25. The timings show a worst-case re-compositing when all object colors are re-composited.  ...  In our experiments, the render/re-compositing time ratio ranged between 7 and 40 for the best- and worst-case approximations.  ...  Four frames from a simulated haptic probing of the surface of a volume rendered tibia from the knee joint data set.  ... 
doi:10.1016/s1361-8415(00)00017-7 pmid:11145311 fatcat:spdytvqm3radxeciivzcefkbry
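The re-compositing that this entry times can be pictured as sorting per-object color/opacity buffers by depth and combining them with the standard "over" operator, which is much cheaper than re-rendering each object. The sketch below is a hypothetical single-pixel illustration, not the paper's buffering scheme.

```python
import numpy as np

# Hypothetical per-object buffers for one pixel: RGB color, opacity (alpha),
# and depth, as produced by rendering each object separately.
colors = np.array([[0.9, 0.1, 0.1],    # object A
                   [0.1, 0.8, 0.2],    # object B
                   [0.2, 0.2, 0.9]])   # object C
alphas = np.array([0.6, 0.4, 0.8])
depths = np.array([2.5, 1.2, 3.7])

# Re-compositing: sort buffers front-to-back by depth and accumulate with the
# "over" operator, tracking the remaining transmittance.
order = np.argsort(depths)
out_rgb, transmittance = np.zeros(3), 1.0
for i in order:
    out_rgb += transmittance * alphas[i] * colors[i]
    transmittance *= (1.0 - alphas[i])
print(out_rgb)
```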

On rendering synthetic images for training an object detector

Artem Rozantsev, Vincent Lepetit, Pascal Fua
2015 Computer Vision and Image Understanding  
Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object.  ...  images in such a way that they look very realistic, as is often done when only limited amounts of training data are available.  ...  , 21, 22, 23], gesture recognition and pose estimation [24, 25], or rendering virtual objects that merge well with real images [26].  ... 
doi:10.1016/j.cviu.2014.12.006 fatcat:hmqt4qumejcehn4qv6nfufvlvm
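A hypothetical, much-simplified sketch of the underlying idea of fitting rendering parameters to a few real images: the stand-in "renderer", the intensity statistic, and the candidate values below are all assumptions for illustration, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
real_images = [rng.normal(0.5, 0.08, (64, 64)) for _ in range(5)]   # placeholder "real" data
target_std = np.mean([im.std() for im in real_images])

def render_synthetic(noise_sigma, n=5):
    # Stand-in renderer: a flat gray object corrupted by sensor-like noise.
    return [np.full((64, 64), 0.5) + rng.normal(0.0, noise_sigma, (64, 64)) for _ in range(n)]

# Pick the rendering parameter whose synthetic images best match the real
# images' intensity statistics.
candidates = [0.02, 0.05, 0.08, 0.12]
best_sigma = min(
    candidates,
    key=lambda s: abs(np.mean([im.std() for im in render_synthetic(s)]) - target_std),
)
print("selected noise parameter:", best_sigma)
```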

Warp and Learn: Novel Views Generation for Vehicles and Other Objects [article]

Andrea Palazzi, Luca Bergamini, Simone Calderara, Rita Cucchiara
2020 arXiv   pre-print
An Image Completion Network (ICN) is then trained to generate a realistic image starting from this geometric guidance.  ...  Differently from parametric (i.e. entirely learning-based) methods, we show how a priori geometric knowledge about the object and the 3D world can be successfully integrated into a deep learning based  ...  Still, these methods require a large amount of data at test time: entire image banks for collaging, multiple photographs and depth data for image-based rendering.  ... 
arXiv:1907.10634v3 fatcat:n4e622lpd5g4bnlo44r54kigpi

A photometric approach to digitizing cultural artifacts

Tim Hawkins, Jonathan Cohen, Paul Debevec
2001 Proceedings of the 2001 conference on Virtual reality, archeology, and cultural heritage - VAST '01  
From this database of recorded images, we compute linear combinations of the captured images to synthetically illuminate the object under arbitrary forms of complex incident illumination, correctly capturing  ...  In this paper we present a photometry-based approach to the digital documentation of cultural artifacts.  ...  We thank Chris Tchou for his extensive contributions to the software employed in creating the renderings and both Chris Tchou and Dan Maas for writing the interactive reflectance field visualization program  ... 
doi:10.1145/584993.585053 dblp:conf/vast/HawkinsCD01 fatcat:mskxnogwf5ehxcvde66e4djguu
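The relighting-by-linear-combination step described in this abstract reduces to a weighted sum of the captured basis images, by linearity of light transport. The sketch below uses placeholder array shapes and random data purely for illustration; the environment-map sampling is an assumption.

```python
import numpy as np

# The object is photographed under N known basis lighting directions; a novel
# illumination is reproduced as a linear combination of those photographs.
N, H, W = 64, 128, 128
basis_images = np.random.rand(N, H, W, 3)   # placeholder captured images, one per light
env_weights = np.random.rand(N)             # novel illumination sampled at the same N directions

# Relit image = sum_i w_i * I_i.
relit = np.tensordot(env_weights, basis_images, axes=1)
print(relit.shape)   # (128, 128, 3)
```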

Towards peer-assisted rendering in networked virtual environments

Minhui Zhu, Sebastien Mondet, Géraldine Morin, Wei Tsang Ooi, Wei Cheng
2011 Proceedings of the 19th ACM international conference on Multimedia - MM '11  
Second, by combining three different rendering methods, each contributing to rendering of different classes of objects in the scene, we show that it is possible for a client to render the scene efficiently  ...  This approach is more scalable than previous solutions based on server-based pre-rendering.  ...  The assistee partitions the scene into three classes of rendering elements: background objects, near static objects, and dynamic objects.  ... 
doi:10.1145/2072298.2072324 dblp:conf/mm/ZhuMMOC11 fatcat:f5md2otqbbfahb2cylwk423yn4
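A hypothetical sketch of the three-way partitioning mentioned in this snippet; the distance threshold and the rendering-path names are invented for illustration, not the paper's terminology.

```python
# Assign each scene object to one of three rendering paths based on whether it
# moves and how far it is from the viewer.
def classify(obj, near_threshold=10.0):
    if obj["dynamic"]:
        return "local_rendering"          # dynamic objects: render locally every frame
    if obj["distance"] <= near_threshold:
        return "peer_assisted_rendering"  # near static objects: request views from peers
    return "background_impostor"          # background: reuse a pre-rendered impostor

scene = [
    {"name": "avatar",   "dynamic": True,  "distance": 3.0},
    {"name": "building", "dynamic": False, "distance": 6.0},
    {"name": "skyline",  "dynamic": False, "distance": 80.0},
]
for obj in scene:
    print(obj["name"], "->", classify(obj))
```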

Seeing Beyond Appearance - Mapping Real Images into Geometrical Domains for Unsupervised CAD-based Recognition [article]

Benjamin Planche, Sergey Zakharov, Ziyan Wu, Andreas Hutter, Harald Kosch, Slobodan Ilic
2018 arXiv   pre-print
synthetic data, and still perform better than methods trained with domain-relevant information (e.g. real images or realistic textures for the 3D models).  ...  a more refined mapping for unseen color images.  ...  and their annotations, nor captured textures for the 3D models to render realistic images.  ... 
arXiv:1810.04158v1 fatcat:vhywhkfbjnbsndpjjpabf537b4

Automatic classification of images on the Web

Alexander Hartmann, Rainer W. Lienhart
2001 Storage and Retrieval for Media Databases 2002  
In the subset of photo-like images, true photos could be separated from ray-traced/rendered images with an accuracy of 87.3%, while with an accuracy of 93.2% the subset of graphical images was successfully  ...  On a large image database, our classification algorithm achieved an accuracy of 97.3% in separating photo-like images from graphical images.  ...  RELATED WORK Only recently has automatic semantic classification of images based on broad and general-purpose classes been the topic of some research, i.e., automatic classification of images into semantic  ... 
doi:10.1117/12.451108 dblp:conf/spieSR/HartmannL02 fatcat:nteprhbi7jbaholojleoi2ascm

DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort [article]

Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, Sanja Fidler
2021 arXiv   pre-print
Our method relies on the power of recent GANs to generate realistic images. We show how the GAN latent code can be decoded to produce a semantic segmentation of the image.  ...  As only a few images need to be manually segmented, it becomes possible to annotate images in extreme detail and generate datasets with rich object and part segmentations.  ...  Key to our approach is an observation that GANs trained to synthesize images must acquire rich semantic knowledge in their ability to render diverse and realistic examples of objects.  ... 
arXiv:2104.06490v2 fatcat:rtw46jinvbegxmdlf53rsvpapi
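A rough sketch of the DatasetGAN-style labelling loop described above, using a generic per-pixel classifier in place of the paper's decoder; the array shapes, the feature source, and the scikit-learn model are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-pixel feature vectors taken from the generator's intermediate activations
# are decoded into semantic labels by a small classifier trained on only a
# handful of manually annotated images.
H, W, C = 32, 32, 256
rng = np.random.default_rng(0)
gan_features = rng.normal(size=(H, W, C))        # placeholder generator features for one image
manual_labels = rng.integers(0, 4, size=(H, W))  # placeholder human-annotated part labels

# Train the label decoder on the few annotated pixels.
decoder = LogisticRegression(max_iter=500)
decoder.fit(gan_features.reshape(-1, C), manual_labels.ravel())

# Any newly sampled image comes with generator features, so its segmentation
# (and hence a new labelled training example) is produced automatically.
new_features = rng.normal(size=(H, W, C))
predicted_labels = decoder.predict(new_features.reshape(-1, C)).reshape(H, W)
print(predicted_labels.shape)
```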

Classification of Illumination Methods for Mixed Reality

Katrien Jacobs, Celine Loscos
2006 Computer graphics forum (Print)  
A mixed reality (MR) represents an environment composed of both real and virtual objects.  ...  For some of these applications it is important to merge the real and virtual elements using consistent illumination.  ...  One successful technique is image-based modeling, in which objects are rendered with textures based on real images. • The illumination of the virtual object(s) needs to resemble the illumination of the  ... 
doi:10.1111/j.1467-8659.2006.00816.x fatcat:mcc4xiwf2jdeblcgn4t36kjbhe

In-Place Scene Labelling and Understanding with Implicit Scene Representation [article]

Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, Andrew J. Davison
2021 arXiv   pre-print
Semantic labelling is highly correlated with geometry and radiance reconstruction, as scene entities with similar shape and appearance are more likely to come from similar classes.  ...  and then sort the label maps in the sequence based on this occupied area ratio.  ...  Figure 6 shows the qualitative results of the re-rendered semantic labels after training.  ... 
arXiv:2103.15875v2 fatcat:dkjq7aafl5f2rbunnkwxbtn64q
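A hedged sketch of how semantic labels can be rendered with the same volume-rendering weights used for radiance (NeRF-style accumulation along a ray); the sampling scheme and per-sample outputs below are placeholders, not the paper's exact formulation.

```python
import numpy as np

n_samples, n_classes = 64, 5
rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 2.0, n_samples)          # volume density at each sample along a ray
logits = rng.normal(size=(n_samples, n_classes))  # per-sample semantic logits
delta = np.full(n_samples, 0.05)                  # distance between consecutive samples

# Standard volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)).
alpha = 1.0 - np.exp(-sigma * delta)
transmittance = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
weights = transmittance * alpha

# The per-pixel class distribution is the weighted sum of the per-sample
# softmax probabilities, accumulated exactly as radiance is.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pixel_class_probs = weights @ probs
print(pixel_class_probs, pixel_class_probs.argmax())
```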