Visual Sensor Networks

Deepa Kundur, Ching-Yung Lin, Chun-Shien Lu
2007 EURASIP Journal on Advances in Signal Processing  
Research into the design, development, and deployment of networked sensing devices for high-level inference and surveillance of the physical environment has grown tremendously in the last few years. This trend has been motivated, in part, by recent technological advances in electronics, communication networking, and signal processing. Sensor networks commonly comprise lightweight distributed sensor nodes, such as low-cost video cameras, with inherent redundancy in the number of nodes deployed and in the corresponding networking topology. Operation of the network requires autonomous peer-based collaboration among the nodes and intermediate data-centric processing among local sensors. This intermediate processing, known as in-network processing, is application-specific. Often, the sensors are untethered, so they must communicate wirelessly and be battery-powered.

Initial focus was placed on the design of sensor networks in which scalar phenomena such as temperature, pressure, or humidity were measured. It is envisioned that much societal use of sensor networks will also be based on content-rich vision-based sensors. The volume of data collected, as well as the sophistication of the necessary in-network stream content processing, presents a diverse set of challenges in comparison to generic scalar sensor network research. Applications that will be facilitated by the development of visual sensor networking technology include automatic tracking, monitoring, and signaling of intruders within a physical area; assisted living for the elderly or physically disabled; environmental monitoring; and command and control of unmanned vehicles.

Many current video-based surveillance systems have centralized architectures that collect all visual data at a central location for storage or real-time interpretation by a human operator. The use of distributed processing for automated event detection would significantly relieve human operators of mundane or time-critical activities and would provide better network scalability. Thus, it is expected that video surveillance solutions of the future will successfully utilize visual sensor networking technologies. Given that the field of visual sensor networking is still in its infancy, it is critical that researchers from diverse disciplines, including signal processing, communications, and electronics, address the many challenges of this emerging field.
This special issue aims to bring together a diverse set of research results that are essential for the development of robust and practical visual sensor networks. In the first paper, entitled "Determining vision graphs for distributed camera networks using feature digests" by Chen et al., the authors present a new framework for determining image relationships in a large network of visual sensors in which communication between sensor nodes is constrained. The work focuses, in part, on the problem of estimating the vision graph for an ad hoc visual sensor network, in which a node represents each camera and an edge appears between a node pair if the two cameras jointly image a sufficiently large part of the observation area. The approach is decentralized, requires no prior ordering of the cameras, and works under limited communication. The authors demonstrate how camera calibration algorithms that exploit the vision graph can operate in a distributed manner. In the next paper by Devarajan and Radke, entitled "Calibrating distributed camera networks using belief propagation," a fully distributed 3D camera calibration approach that leverages belief propagation is presented. Here, each camera node communicates only with its neighbors that image a sufficient number of scene points. The authors demonstrate how the natural geometry of the system and the formulation of the estimation problem give rise to statistical dependencies
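To make the vision-graph notion concrete, the following is a minimal illustrative sketch (not taken from the papers above): each camera is a node, and an edge is added between two cameras when they share enough feature correspondences to suggest a sufficiently large jointly imaged region. The function name, the match-count representation, and the threshold are hypothetical choices for illustration only.

```python
# Hypothetical sketch of vision-graph construction: cameras are nodes;
# an edge joins two cameras whose views overlap "enough", here measured
# by a count of shared feature matches. All names/thresholds are assumed.
from itertools import combinations

def build_vision_graph(matched_features, n_cameras, min_shared=20):
    """matched_features[(i, j)] -> number of feature correspondences
    between cameras i and j (i < j). Returns an adjacency-set dict."""
    graph = {c: set() for c in range(n_cameras)}
    for i, j in combinations(range(n_cameras), 2):
        if matched_features.get((i, j), 0) >= min_shared:
            graph[i].add(j)  # overlap is symmetric, so add both directions
            graph[j].add(i)
    return graph

# Example: four cameras; only pairs (0,1) and (1,2) overlap strongly.
matches = {(0, 1): 45, (1, 2): 30, (0, 2): 5, (2, 3): 2}
g = build_vision_graph(matches, 4)
```

In a real deployment the match counts would themselves be estimated under communication constraints (e.g., by exchanging compressed feature digests rather than raw images), which is precisely the problem the first paper addresses.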
doi:10.1155/2007/21515