Interacting above and beyond the Display [Guest editorial]

Andy Wilson, Hrvoje Benko
2014 IEEE Computer Graphics and Applications  
It's surprising that for all the computer mouse's popularity and utility, it reduces the entirety of the user's input to a single 2D motion in a plane. As human beings, we might bemoan this vast simplification of ourselves. We are, after all, more than a point running around on a flat display! In the real world, we use much of our bodies in everyday tasks, and we communicate powerfully by gesture, gaze, and speech. But the point cursor continues to be a useful input abstraction, even long after our machines can do much more than simple point-rectangle hit testing. The classic event-driven mouse interface is now easy to program, and the mouse's precision is hard to beat, although many have tried. When was the last time you blamed your computer when you tried to click a button and missed?
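To make concrete how little this classic model asks of the machine, consider what a point-rectangle hit test inside a mouse-event handler amounts to. The sketch below is purely illustrative and not tied to any particular GUI toolkit; the Rect class, the on_mouse_click handler, and the coordinates are hypothetical. The whole of the user's expression arrives as a single (x, y) pair and is resolved with a few comparisons.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        # Point-rectangle hit test: the user's entire action reduces
        # to two comparisons per axis.
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

def on_mouse_click(px: float, py: float, widgets: list[Rect]) -> Rect | None:
    # Event-driven dispatch: return the first widget under the cursor, if any.
    for w in widgets:
        if w.contains(px, py):
            return w
    return None

# Hypothetical example: a single "button" at (100, 100) sized 80 x 24 pixels.
button = Rect(100, 100, 80, 24)
print(on_mouse_click(139, 110, [button]) is button)  # True: the click hits
print(on_mouse_click(50, 50, [button]))              # None: the click misses
```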
Touch interfaces expand on the bandwidth of mouse input by adding multitouch capability and ease of use through direct manipulation. Their recent success is due partly to the hardware advances necessary to rapidly sense, process, and render fluid manipulation of onscreen objects. This movement has spurred a wave of innovation around interaction models, form factors, and hardware design. Fundamentally, however, even multitouch systems model our input as a small number of contact points confined to a flat screen. Might the next leap in human-computer interaction use more complex models of input to finally liberate us from the display plane?

Transcending 2D Input

Just as the touchscreen-computing era was enabled by refined sensing and signal-processing techniques, the next shift in human-computer interaction might be driven by even more sophisticated sensing. For example, by using cameras and other sensors, future interfaces might exploit knowledge of users' 3D position and shape as they move in front of the display. Such interfaces might leverage knowledge of the user's pose to enable gesture-based input from a distance. Applications involving the rendering and manipulation of 3D graphics abound, including CAD, data visualization, and augmented reality. Recently, commodity depth cameras such as the Microsoft Kinect sensor have put sophisticated 3D sensing technology within millions of computer users' reach.

Yet sensing hardware is just one piece of the puzzle. What signal-processing algorithms and interaction models can we use to approach and exceed the touchscreen's precision, performance, and utility? How can we use the more detailed, nuanced information made available by new sensors to enable more expressive interfaces, going beyond what a mouse can do while preserving its familiar predictability? As we explore these questions in this special issue, it becomes clear that the variety of sensing platforms and interaction models available with today's technology doesn't deliver easy answers. As we give our systems an increasing capability to sense the world, we perhaps shouldn't be surprised to find that just as a tremendous variety of ways exist to interact with the real world, so too are there many modes of interaction above and beyond the display.

In This Issue

The five articles in this issue cover the spectrum from specialized sensing hardware to high-level interaction models, across multiple physical scales and applications. Many touchscreens would have us write with our fingers, and camera-based interfaces tout the ability to interact without a hardware device in hand. However, there's still value in familiar tangible tools such as the stylus, particularly where precise input is needed.
doi:10.1109/mcg.2014.54