With ever more sensors being integrated into today’s devices, multimodality becomes a real option for interacting with assistance systems. Cameras, microphones, motion sensors and GPS can provide an application with valuable information about users and their environments, allowing for fine-tuned, situation-aware assistance. The ubiquitous availability of information through the Internet makes it possible to access specific data as it is required in any given situation. All this, combined with the ability of devices to interconnect and to use each other’s sensors, processing capabilities and interaction means, opens new possibilities for the design of assistance systems and supportive environments.
The opportunity to use multiple modalities in the real world confronts the community with a number of very practical challenges that have only partially been addressed in prior research. These include the following questions:
– Which modalities should be used in which situations?
– Which information should be displayed, through which modalities, and at which granularity?
– Which interactions should be offered, through which modalities?
– When and how should the system switch between the different modalities?
– How should modalities be combined?
We believe that pervasive supportive environments will benefit most from truly user-centric, situation-aware multimodal interfaces, where multimodality applies not only to the user’s interaction with the system or environment but also to the system’s inference of the user’s likely affective state and intentions.