107,722 Hits in 5.6 sec

Towards understanding what makes 3D objects appear simple or complex

Sreenivas R. Sukumar, David L. Page, Andreas F. Koschan, Mongi A. Abidi
2008 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
Towards the goal of understanding why the geometry of some 3D objects appears more complex than others, we conducted a psychophysical study and identified contributing attributes.  ...  Humans perceive some objects as more complex than others, and learning or describing a particular object is directly related to the judged complexity.  ...
doi:10.1109/cvprw.2008.4562975 dblp:conf/cvpr/SukumarPKA08 fatcat:ztiavyj4w5gmvew742n567xrpq

Native browser support for 3D rendering and physics using WebGL, HTML5 and Javascript

Rovshen Nazarov, John Galletly
2013 Balkan Conference in Informatics  
In the last few years, JavaScript libraries have been developed to enable developers to create and manipulate 3D objects in the browser.  ...  These JavaScript libraries incorporate physics and 3D processing algorithms, HTML5 elements and technologies (such as canvas and background workers), and the Web Graphics Library (WebGL).  ...  Creating a 3D object, such as a box, is as simple as a few lines of code (a minimal sketch of this follows the entry below).  ...
dblp:conf/bci/NazarovG13 fatcat:htfmoxezazc6df6hi6cvvvdqbe
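
The abstract does not name a specific library; as an illustration only, and assuming the Three.js library (not quoted from the paper), the "few lines of code" for a box rendered with WebGL might look like this TypeScript sketch:

```typescript
// Illustration only (Three.js is assumed, not taken from the paper):
// render a single spinning box with WebGL in the browser.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The 3D object itself: geometry + material + mesh, then add it to the scene.
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial());
scene.add(box);

// Rotate the box each frame so its 3D shape is visible.
function animate(): void {
  requestAnimationFrame(animate);
  box.rotation.x += 0.01;
  box.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```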

Three-dimensional widgets

Brookshire D. Conner, Scott S. Snibbe, Kenneth P. Herndon, Daniel C. Robbins, Robert C. Zeleznik, Andries van Dam
1992 Proceedings of the 1992 symposium on Interactive 3D graphics - SI3D '92  
Direct interaction with 3D objects has been limited thus far to gestural picking, manipulation with linear transformations, and simple camera motion.  ...  Our widgets are first-class objects in the same 3D environment used to develop the application.  ...  Comparing common 2D widgets and 3D widgets: despite their often complex appearance, most 2D widgets have very simple behavior.  ...
doi:10.1145/147156.147199 dblp:conf/si3d/ConnerSHRZD92 fatcat:7s6avargk5bkrbijoqaztho7uq

Three Dimensional Auditory Display: Issues in Applications for Visually Impaired Students

Martyn Cooper, Helen Petrie
2004 International Conference on Auditory Display  
This paper discusses issues arising from both practical investigations and conceptual work directed towards applications of three-dimensional (3D) audio displays for blind students.  ...  In the second area, this paper outlines how various learning objectives may be achieved in 3D rendered audio, and issues emerging from this are discussed with reference to an illustrative example of a sonic  ...  Here an important learning objective might be to understand particular alignments of atoms within a given molecule. Can these alignments be perceived from just a 3D sonic representation?  ...
dblp:conf/icad/CooperP04 fatcat:mwqkyr5cujgppge6zy7laicbfu

Towards a Tracking Algorithm based on the Clustering of Spatio-temporal Clouds of Points

Andrea Cavagna, Chiara Creato, Lorenzo Del Castello, Stefania Melillo, Leonardo Parisi, Massimiliano Viale
2016 Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
Experimental data in this field are generally noisy and at low spatial resolution, so that individuals appear as small featureless objects and trajectories must be retrieved by making use of epipolar information  ...  We can then use a simple connected-components labeling routine, which is linear in time, to solve optical occlusions, hence lowering the complexity of the problem from NP to P (a sketch of such a routine follows the entry below).  ...  In this respect, we deal with hard 3D proximity occlusions similarly to how former methods deal with simple optical occlusions: we formulate the problem in terms of NP optimization, whose complexity is  ...
doi:10.5220/0005770106790685 dblp:conf/visapp/CavagnaCCMPV16 fatcat:vovi7ras3bfunbjoxkt5bti3qa
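
As an illustration only (this is not the authors' code, and the proximity-graph construction and occlusion handling of the paper are out of scope), a linear-time connected-components labeling pass over a graph of cloud points might look like the following TypeScript sketch:

```typescript
// Illustration only, not the authors' implementation: label the connected
// components of an undirected proximity graph whose nodes are cloud points.
// Breadth-first flood fill visits each vertex and edge once, so the pass is
// linear in the size of the graph.
function labelComponents(numPoints: number, edges: Array<[number, number]>): number[] {
  // Adjacency lists: O(V + E) to build.
  const adj: number[][] = Array.from({ length: numPoints }, () => []);
  for (const [a, b] of edges) {
    adj[a].push(b);
    adj[b].push(a);
  }

  const label = new Array<number>(numPoints).fill(-1);
  let nextLabel = 0;

  for (let start = 0; start < numPoints; start++) {
    if (label[start] !== -1) continue;   // already reached from an earlier seed
    const queue: number[] = [start];
    let head = 0;                        // index pointer keeps the queue O(1) per pop
    label[start] = nextLabel;
    while (head < queue.length) {
      const v = queue[head++];
      for (const w of adj[v]) {
        if (label[w] === -1) {
          label[w] = nextLabel;
          queue.push(w);
        }
      }
    }
    nextLabel++;
  }
  return label;                          // label[i] = component id of point i
}

// Example: two clusters of points -> two component labels.
console.log(labelComponents(5, [[0, 1], [1, 2], [3, 4]])); // [0, 0, 0, 1, 1]
```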

Things, tags, topics: Thingiverse's object-centred network

Robbie Fordyce, Luke Heemsbergen, Thomas Apperley, Michael Arnold, Thomas Birtchnell, Michael Luo, Bjorn Nansen
2016 Communication Research and Practice  
By granting value to the collected work of users and the objects they create, patterns of use and understanding emerge from the tagging of objects. 3D printing has a complex relationship towards human  ...  Yet tags are not merely a second-tier means for users to try to understand what these objects are here to do.  ...
doi:10.1080/22041451.2016.1155337 fatcat:gy6hutpo7zbbhjhto3slgkwumy

Vision for Autonomous Vehicles and Probes (Dagstuhl Seminar 15461)

André Bruhn, Atsushi Imiya, Ales Leonardis, Tomas Pajdla, Marc Herbstritt
2016 Dagstuhl Reports  
Continuing topics of interest in computer vision are scene and environmental understanding using single- and multiple-camera systems, which are fundamental techniques for autonomous driving, navigation in  ...  Therefore, we strictly focus on mathematical, geometrical and computational aspects of autonomous vehicles and autonomous vehicular technology which make use of computer vision and pattern recognition  ...  Towards 3D Scene Understanding: This talk highlights recent progress on some essential components (such as object recognition and person detection), on our attempt towards 3D scene understanding, as well  ...
doi:10.4230/dagrep.5.11.36 dblp:journals/dagstuhl-reports/BruhnILP15 fatcat:l2nqd45tnrabpdqmwex6enkxei

Towards a tracking algorithm based on the clustering of spatio-temporal clouds of points [article]

Andrea Cavagna, Chiara Creato, Lorenzo Del Castello, Stefania Melillo, Leonardo Parisi, Massimiliano Viale
2015 arXiv   pre-print
Experimental data in this field are generally noisy and at low spatial resolution, so that individuals appear as small featureless objects and trajectories must be retrieved by making use of epipolar information  ...  We can then use a simple connected-components labeling routine, which is linear in time, to solve optical occlusions, hence lowering the complexity of the problem from NP to P.  ...  In this respect, we deal with hard 3D proximity occlusions similarly to how former methods deal with simple optical occlusions: we formulate the problem in terms of NP optimization, whose complexity is  ...
arXiv:1511.01293v1 fatcat:c2m7etqtafh7zoorowohie3ymu

What Vision Can, Can't and Should Do [chapter]

Michael Zillich
2014 Cognitive Systems Monographs  
What Vision Should Do: So what should be done to alleviate the above problems? There is, of course, no simple answer to that. But let us first look at some of the apparent solutions. It isn't 3D.  ...  Methods of the 1990s allowed recognition of complex textured objects in cluttered scenes; however, objects were now essentially 2D appearance models of specific instances (and even views), thus closing  ...
doi:10.1007/978-3-319-06614-1_9 fatcat:anzjef4ipfcy3g5by7etefokui

Toward human-centric deep video understanding

Wenjun Zeng
2020 APSIPA Transactions on Signal and Information Processing  
We also discuss the future perspectives of video understanding.  ...  Human-computer interaction plays a significant role in human-machine hybrid intelligence, and human understanding becomes a critical step in addressing the tremendous challenges of video understanding.  ...  Robustness requires that a tracker not lose the target when its appearance changes due to illumination, motion, view angle, or object deformation.  ...
doi:10.1017/atsip.2019.26 fatcat:rtrqzokr6bc4lj6vs6megf5xru

Looking at people: sensing for ubiquitous and wearable computing

A. Pentland
2000 IEEE Transactions on Pattern Analysis and Machine Intelligence  
... who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions.  ...  Four areas will receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces.  ...  The shape or appearance of a face, the presence or absence of a person, their position, and body pose are all simple physical observations.  ...
doi:10.1109/34.824823 fatcat:emk266gdc5eudj5olzzfxfclmy

Biological Models for Active Vision: Towards a Unified Architecture [chapter]

Kasim Terzić, David Lobato, Mário Saleiro, Jaime Martins, Miguel Farrajota, J. M. F. Rodrigues, J. M. H. du Buf
2013 Lecture Notes in Computer Science  
We present some of the experiments from our ongoing work, where our system leverages a combination of algorithms to solve complex tasks.  ...  We apply a number of biologically plausible algorithms which address different aspects of vision, such as edge and keypoint detection, feature extraction, optical flow and disparity, shape detection, object  ...  Long-Term Memory and High-Level Reasoning: Our 3D world model is object-based and represented in a 3D coordinate system.  ...
doi:10.1007/978-3-642-39402-7_12 fatcat:lll4p2mahzalbau23ewmxz5uhu

Is there a wave excitation in the Thalamus? [article]

R.P. Worden
2020 arXiv   pre-print
To represent positions in space only by neural firing rates would be complex and inefficient. It is possible that the brain represents 3D space in a direct and natural way, by a 3D wave  ...  The things include your own limbs, so you can make appropriate actions towards the other things you perceive: move towards them, move around them, bite them, grasp them, or strike them.  ...  Any representation other than a simple 'single firing rate' representation makes this a complex computation.  ...
arXiv:2006.03420v1 fatcat:zvcqqpbjx5bjpdaoushjgtcvwe

David Marr's Vision: floreat computational neuroscience

E. T. Rolls
2011 Brain  
The important characteristics of this type of organization are: (i) each 3D model is a self-contained unit of shape information and has a limited complexity; (ii) information appears in shape contexts  ...  It is very hard to extract all the cylinders or shape components that describe objects from a complex scene; very hard to know which shape primitives belong to a single object; very hard to represent the  ... 
doi:10.1093/brain/awr013 fatcat:fxjeyb7kajdsxpov5nr4lzvk6m

Metaview

James R. Miller
2012 Proceedings of the 43rd ACM technical symposium on Computer Science Education - SIGCSE '12  
Metaview is packaged with a set of built-in 3D models used to demonstrate major concepts. In addition, external and/or student-programmed models are easily imported into the tool.  ...  Metaview is an interactive tool that helps to teach concepts related to nested 3D coordinate systems (a small sketch of such nested frames follows the entry below), especially in the context of defining and establishing views of 3D scenes in common graphics APIs like  ...  Unfortunately, the students are left at this point without a real understanding of what they have done or why it worked.  ...
doi:10.1145/2157136.2157178 dblp:conf/sigcse/Miller12 fatcat:jstz3kwa5fb2tfjybjp23f7ywe
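
As an illustration only (Metaview's own code is not shown here, and Three.js is assumed purely for convenience), nested 3D coordinate systems of the kind the tool visualizes can be expressed as a scene-graph hierarchy, with each child transform given in its parent's frame:

```typescript
// Illustration only: nested coordinate frames as scene-graph parenting.
import * as THREE from "three";

const world = new THREE.Group();            // world frame

const arm = new THREE.Group();              // frame nested in the world frame
arm.position.set(2, 0, 0);
arm.rotation.z = Math.PI / 4;               // 45 degrees about z
world.add(arm);

const hand = new THREE.Object3D();          // frame nested in the arm frame
hand.position.set(1, 0, 0);
arm.add(hand);

// Resolve the hand's local origin into world coordinates by composing the
// nested transforms root-down, the operation a nested-coordinate-systems
// lesson makes explicit.
world.updateMatrixWorld(true);
const handInWorld = new THREE.Vector3();
hand.getWorldPosition(handInWorld);
console.log(handInWorld);                   // approximately (2.707, 0.707, 0)
```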
Showing results 1 — 15 out of 107,722 results