Affective Stack — A Model for Affective Computing Application Development

Nik Thompson, Tanya Jane McGill
2015 Journal of Software  
Affective computing applications hold promise to revolutionize human-computer interaction by enabling more natural and intuitive interaction modalities. These may include the communication of vocal expressions, physiological signals or other non-verbal indicators of underlying affective state. Although the field has experienced substantial expansion in recent years, the tools and techniques utilized are not yet mature or well established. A notable issue is the one-off nature of currently implemented affective computing systems. There is as yet no notion of standardized program architecture, and there is no straightforward way to extend the functionality of existing software to include affective components. This paper introduces a new model which describes the affective computing application in terms of a set of loosely coupled functional components. This model encourages a uniform and replicable approach to affective application development in which functional components can be improved independently and subsequently re-used. The model also incorporates existing third-party software as a functional component, highlighting the potential to build upon existing, well-established software packages. It is hoped that this model and discussion spur further, focused development in this growing field.

Enabling affective communication holds the potential to greatly improve the quality, usability and experience of human-computer interactions by making the interaction more natural and intuitive. However, human-computer interaction is generally both explicit (via traditional input mechanisms such as the keyboard) and asymmetric. This means that whilst there are numerous ways in which the computer may provide rich and multi-modal information to the user, the user may not always possess equivalent means of communicating his or her status to the computer [1]. This disparity is most pronounced when we consider non-standard forms of communication, such as affective state, as there are no well-established or widely deployed means for communicating this information. Therefore, for affective computing to be successful, this disparity in communication must be reduced by enabling implicit and bidirectional communication. The creation of additional input modalities beyond the traditional keyboard and mouse can support this goal [2].
For instance, these modalities may be based on physiological signals, observable traits such as image recognition of facial expressions, audio analysis of vocal patterns, or a variety of other sensor-based input means which can signify changes in the user's emotional state and can be interpreted by affective computing applications. McMillan, Egglestone and Anderson [3] suggested that a different interaction paradigm is required for sensor-based human-computer interaction (e.g. affective feedback from a new input modality) than for the traditional and widely adopted electromechanical human-computer interaction. Unfortunately, there has been little systematic exploration of how this type of sensor-based interaction is best utilized in applications such as affective computing [4].

Affective computing applications are often built in the same way as more traditional applications, with the affective functionality inserted into the program architecture in an ad-hoc manner wherever the developer may deem it to be appropriate. There are also no structured ways to extend the functionality of existing or third-party software with affective components, and thus the vast range of established software packages may not be able to benefit from affective computing. Consequently, it could be argued that the current trend for ad-hoc development in affective computing is hampering progress. It has also been noted that research in the area is disparate and uneven, and it seems that little progress has been made in this area [e.g. 4; 5]. One goal that has been identified in the affective computing literature may be termed device independence: any successful solution to handle new user input modalities must be capable of abstracting over multiple sensing devices which may have different outputs, manufacturers and operating requirements [6].
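Device independence of this kind is commonly achieved by placing a single uniform interface in front of each sensing device, so that application code never depends on a particular manufacturer's output format. The following Python sketch illustrates one way such an abstraction might look; all class, channel and calibration details here are illustrative assumptions, not part of the model described in this paper.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AffectiveReading:
    """A device-independent sample: named channel, value, timestamp."""
    channel: str
    value: float
    timestamp: float


class AffectiveSensor(ABC):
    """Common interface hiding manufacturer-specific outputs and protocols."""

    @abstractmethod
    def read(self) -> AffectiveReading:
        ...


class SkinConductanceSensor(AffectiveSensor):
    """Adapter for a hypothetical GSR device exposing raw 10-bit ADC counts."""

    def __init__(self, raw_adc_count: int):
        self._raw = raw_adc_count

    def read(self) -> AffectiveReading:
        # Scale the raw count to microsiemens (illustrative calibration).
        return AffectiveReading("gsr", self._raw / 1023 * 20.0, timestamp=0.0)


class FacialExpressionSensor(AffectiveSensor):
    """Adapter for a hypothetical camera pipeline reporting a smile score."""

    def __init__(self, smile_score: float):
        self._score = smile_score

    def read(self) -> AffectiveReading:
        return AffectiveReading("smile", self._score, timestamp=0.0)


def poll(sensors: list[AffectiveSensor]) -> list[AffectiveReading]:
    """Application code sees only the uniform interface, never the hardware."""
    return [s.read() for s in sensors]
```

Under this arrangement an application consumes readings from a skin-conductance electrode and a camera through the same call, and supporting a new device means writing one adapter class rather than modifying the application.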
Therefore there is a clear need for a reusable design approach or model which is sufficiently abstracted from implementation considerations to be applicable to a wide range of operating and sensing environments.

Affective computing applications have potential uses in practically any situation where a human-computer interaction is taking place. Technology that can recognize and even express affect can provide insights into human-computer (and in some cases human-human) interactions. This may allow the system to be improved by being able to respond in a more natural and realistic way. Measuring the stress or difficulty caused by a system may also make it possible for developers to evaluate various configurations and to pinpoint potential usability problems in new systems. These technologies have been successfully implemented in very diverse environments, including robotic personas [7], wearable computers [8], learning companions [9], [10] and games [11].

Wearable computers provide a rich and diverse ground for evaluating and implementing affective technologies. The close contact with the user enables easy communication of subtle non-verbal cues that may be valuable indicators of affective state. In some cases, the affect detection capabilities may even be used to improve the user's own abilities to perceive emotions in others, and thus improve human-human communication. For example, "expression glasses", developed at MIT, provide the wearer with feedback.

Hardware independence considers the physical interface between the user and the computer (i.e. sensor design) and the hardware used to capture this raw signal (e.g. the implementation of an analogue to digital converter). Achieving hardware independence is a significant undertaking, as most implementations described in the literature to date appear to be tied, often inseparably, to the choice of hardware made by the researchers.
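One way to pursue the hardware independence described above is to confine all device-specific code to a capture stage, with feature extraction and affect interpretation as separate, loosely coupled stages. The minimal Python sketch below is one possible reading of that separation; the stage names, the simulated driver and the arousal threshold are all illustrative assumptions.

```python
from statistics import mean


def capture_layer(device_read, n_samples: int = 5) -> list[float]:
    """Hardware-dependent stage: the only code that talks to the device/ADC."""
    return [device_read() for _ in range(n_samples)]


def feature_layer(samples: list[float]) -> dict[str, float]:
    """Hardware-independent stage: summarizes the raw signal window."""
    return {"mean": mean(samples), "range": max(samples) - min(samples)}


def affect_layer(features: dict[str, float], threshold: float = 10.0) -> str:
    """Interpretation stage: knows nothing about sensors or converters."""
    return "elevated arousal" if features["mean"] > threshold else "baseline"


# A simulated driver stands in for real hardware; swapping in a real device
# driver would change only the callable passed to capture_layer.
simulated_gsr = iter([11.0, 12.5, 11.8, 13.0, 12.2])
samples = capture_layer(lambda: next(simulated_gsr))
state = affect_layer(feature_layer(samples))
```

Because each stage depends only on the data passed between stages, the hardware-bound code can be replaced without touching the feature extraction or interpretation logic, which is the practical payoff of the loose coupling advocated in this paper.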
This is an issue, as sensor design is an area with a vast amount of flexibility.

Nik Thompson is the academic chair of cyber forensics and information security at Murdoch University in Western Australia. He holds MSc and PhD degrees and teaches in the areas of computer security and data resource management. His research interests include affective computing, human-computer interaction and information security.
doi:10.17706/jsw.10.8.919-930