MMI Architecture is a loosely coupled, event-based architecture for integrating multiple modalities into applications. A bit hard to understand, I bet. Let's start with a Hollywood movie example. Did you see the movie Minority Report? Do you remember the scene where Tom Cruise controls a UI with both hands and voice? He is using different sources of input to the system (voice and two hands), which are combined to interact with it and get the right information.
Back in the real world, what does that mean? Imagine a car with a GPS navigation system. The system receives information from the map, live input from the GPS (geolocation), and the actual car data (speed, direction, etc.). Again, multiple inputs from different sources with different modalities.
Using a mobile phone with an integrated GPS, a Web interface, voice commands and finger tracking on the screen to interact with the system is yet another example.
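The common thread in all these examples is independent input sources sending events to a central component that combines them. Here is a minimal sketch of that idea in Python; the class and event names are my own invention for illustration, not the actual W3C MMI API:

```python
# Illustrative sketch only (hypothetical names, not the W3C MMI event set):
# modality components publish events to a loosely coupled interaction
# manager, which fuses them into a single user intent.
from dataclasses import dataclass, field

@dataclass
class Event:
    modality: str                          # e.g. "voice", "gesture"
    action: str                            # e.g. "open", "point"
    data: dict = field(default_factory=dict)

class InteractionManager:
    """Receives events from independent modality components and combines them.
    The components never talk to each other directly: that is the loose coupling."""
    def __init__(self):
        self.pending = []

    def receive(self, event):
        self.pending.append(event)
        return self.fuse()

    def fuse(self):
        # Naive fusion rule: a voice command plus a gesture target = one action.
        voice = next((e for e in self.pending if e.modality == "voice"), None)
        gesture = next((e for e in self.pending if e.modality == "gesture"), None)
        if voice and gesture:
            self.pending.clear()
            return f"{voice.action} {gesture.data['target']}"
        return None            # not enough information yet, keep waiting

im = InteractionManager()
im.receive(Event("voice", "open"))          # voice alone: nothing happens yet
result = im.receive(Event("gesture", "point", {"target": "file-42"}))
print(result)  # open file-42
```

Because the manager only sees events, a new modality (say, eye tracking) can be added without touching the existing components, which is exactly the point of the loose coupling.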
Deborah Dahl introduced the multimodal architecture topic (pdf) in more technical detail this morning.