Integrating information appliances into an interactive workspace

A. Fox, B. Johanson, P. Hanrahan, T. Winograd
2000 IEEE Computer Graphics and Applications  
Most of today's computing environments, by design, support interaction between one person and one computer. The user sits at a workstation or laptop, or holds a personal digital assistant (PDA), focusing on a single device at a time, even when several linked and synchronized devices are nearby. Collaboration occurs over the network using e-mail, shared files, or in some cases explicitly designed groupware. In noncomputerized work settings, on the other hand, people interact in a rich environment
that includes information from many sources: paper, whiteboards, computers, physical models, and so on. They can use these simultaneously and move among them flexibly and quickly. The few integrated multidevice computer environments that exist today tend to be highly specialized and based on application-specific software.

The Interactive Workspaces Project at Stanford explores new possibilities for people to work together in technology-rich spaces with computing and interaction devices on many different scales. It includes faculty and students from the areas of graphics, human-computer interaction (HCI), networking, ubiquitous computing, and databases, and draws on previous work in all those areas. We design and experiment with multidevice, multiuser environments based on a new architecture that makes it easy to create and add new display and input devices, to move work of all kinds from one computing device to another, and to support and facilitate group interactions. In the same way that today's standard operating systems make it feasible to write single-workstation software that uses multiple devices and networked resources, we are constructing a higher-level operating system for the world of ubiquitous computing. We combine research on infrastructure (ways of flexibly configuring and connecting devices, processes, and communication links) with research on HCI (ways of interacting with heterogeneous, changing collections of devices through multiple modalities).

The Interactive Room (iRoom) infrastructure described in this article is brand new: the physical plant for the room was constructed during the summer of 1999, and the room became operational for the first time in late September 1999. We report here on our very early work on our strategy for integrating PDAs into this infrastructure.
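The idea of an infrastructure that lets work move between loosely coupled devices can be illustrated with a toy publish/subscribe coordination layer. This is only a minimal sketch, not the project's actual software; the class name, event types, and fields below are hypothetical.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Toy coordination layer: any device posts events, and any device
    subscribed to that event type reacts, so work can move between
    machines without point-to-point wiring. (Hypothetical API.)"""

    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def post(self, event_type: str, **fields) -> None:
        for handler in self._subscribers[event_type]:
            handler(fields)

# Example: a PDA posts a "show_url" event; the PC driving a wall
# display has subscribed to it and renders the page.
broker = EventBroker()
shown = []
broker.subscribe("show_url", lambda ev: shown.append(ev["url"]))
broker.post("show_url", url="http://example.org/slides", display="front-screen")
```

The point of the indirection is that neither side names the other: adding a new display or input device means adding a subscriber, not rewiring existing applications.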
Application target areas

We chose to focus our current work on an augmented dedicated space (a meeting room, rather than an individual's office or home, or a tele-connected set of spaces) and to concentrate on task-oriented work rather than entertainment, personal communication, or ambient information. In this section we describe some of our initial research goals in terms of specific applications we developed. These applications also serve as motivating examples for the programming mechanisms described in later sections.

The photo of the current iRoom configuration (Figure 1) illustrates the basic room hardware: three touch-sensitive SmartBoard displays; a bottom-projected table; a front-projected (non-input-responsive) screen; a variety of wireless mice, keyboards, and PDAs for interacting with the screens; and approximately eight PCs (not visible) providing computing, rendering, and display server capabilities.

Our interest in multimodal input forms the basis of our investigation of human-centric interaction,1 in which contextual information provided by software observers is integrated to identify user intent based on multiple input sources and modalities.
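Integrating evidence from several observers to infer intent can be sketched as a weighted vote over candidate targets. This is an illustrative simplification, not the system's actual inference mechanism; the observer names and confidence values are invented for the example.

```python
from collections import Counter

def fuse_observations(observations):
    """Combine (target, confidence) votes from independent software
    observers (e.g. a speech recognizer and a pointing tracker) and
    return the target with the highest total confidence."""
    scores = Counter()
    for target, confidence in observations:
        scores[target] += confidence
    target, _ = scores.most_common(1)[0]
    return target

# Example: two observers agree on a display, one is stale.
intent = fuse_observations([
    ("whiteboard-2", 0.6),   # speech: "put it on the second board"
    ("whiteboard-2", 0.3),   # gaze tracker
    ("table", 0.2),          # stale pointer position
])
```

A real system would weight observers by reliability and handle conflicts and time decay, but the sketch shows the shape of the problem: no single modality is authoritative, so intent is resolved by combining them.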
doi:10.1109/38.844373