A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2017.
The increase in connected mobile computing devices has created the need for ubiquitous Web access. In many usage scenarios, it would be beneficial to interact multimodally. Current Web user interface description languages, such as HTML and VoiceXML, concentrate on only one modality. Some languages, such as SALT and X+V, allow combining aural and visual modalities, but they lack ease of authoring, since both modalities have to be authored separately. Thus, for ease of authoring and

doi:10.1145/1145581.1145624 dblp:conf/icwe/HonkalaP06 fatcat:bssvnu3cgzardpx72fogqeptzm