Design of Multimodal Mobile Interfaces

Book description

The “smart mobile” has become an essential and inseparable part of our lives. This powerful tool lets us perform multiple tasks through different modalities such as voice, text, and gesture. Because the user plays a central role in choosing the mode of operation, multimodal interaction offers the user new, richer ways of interfacing with a system, combining speech, touch, typing, and more.

The book will discuss the new world of mobile multimodality, focusing on innovative technologies and design that create a state-of-the-art user interface. It will examine the practical challenges entailed in meeting commercial deployment goals and offer new approaches to designing such interfaces.

A multimodal interface for mobile devices requires the integration of several recognition technologies together with a sophisticated user interface and distinct tools for data input and output. The book will address the challenge of designing such devices in a synergetic fashion that neither burdens the user nor creates technological overload.

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Preface
  5. Table of Contents
  6. List of contributing authors
  7. 1 Introduction to the evolution of Mobile Multimodality
    1. 1.1 User Interfaces: Does vision meet reality?
    2. 1.2 Discussion of terms: Mobility and User Interface
      1. 1.2.1 Mobility
      2. 1.2.2 User Interface
      3. 1.2.3 User-centered design
      4. 1.2.4 Teamwork
      5. 1.2.5 Context
    3. 1.3 System interaction: Moving to Multimodality
      1. 1.3.1 User input and system output
      2. 1.3.2 Multimodality
      3. 1.3.3 Combining modalities
    4. 1.4 Mobile Multimodality: The evolution
      1. 1.4.1 Technology compliance to user needs
      2. 1.4.2 Technology readiness and availability
      3. 1.4.3 The readiness of multimodal technology
      4. 1.4.4 User requirements and needs
      5. 1.4.5 Cycle of mutual influence
    5. 1.5 Conclusion
  8. 2 Integrating natural language resources in mobile applications
    1. 2.1 Natural language understanding and multimodal applications
      1. 2.1.1 How natural language improves usability in multimodal applications
      2. 2.1.2 How multimodality improves the usability of natural language interfaces
    2. 2.2 Why natural language isn’t ubiquitous already
    3. 2.3 An overview of technologies related to natural language understanding
    4. 2.4 Natural language processing tasks
      1. 2.4.1 Accessing natural language technology: Cloud or client?
      2. 2.4.2 Existing natural language systems
      3. 2.4.3 Natural language processing systems
      4. 2.4.4 Selection Criteria
    5. 2.5 Standards
      1. 2.5.1 EMMA
      2. 2.5.2 MMI Architecture and Interfaces
    6. 2.6 Future directions
    7. 2.7 Summary
  9. 3 Omnichannel Natural Language
    1. 3.1 Introduction
    2. 3.2 Multimodal interfaces built with omnichannel Natural Language Understanding
    3. 3.3 Customer care and natural language
    4. 3.4 Limitations of standard NLU solutions
    5. 3.5 Omnichannel NL architecture
      1. 3.5.1 Omni-NLU training algorithm
      2. 3.5.2 Statistical-language model
      3. 3.5.3 Input transformation
      4. 3.5.4 Predictive omnichannel classifier
      5. 3.5.5 Score normalization
      6. 3.5.6 Conversation manager
    6. 3.6 Experimental results
      1. 3.6.1 Current analysis segment
    7. 3.7 Summary
  10. 4 Wearable computing
    1. 4.1 Introduction to Wearable Ecology
    2. 4.2 Human-computer symbiosis
    3. 4.3 Interactional considerations behind wearable technology
    4. 4.4 Training of end users
    5. 4.5 Wearable technology in the medical sector
    6. 4.6 Human-centered design approach
    7. 4.7 Context of wearable computing applications
    8. 4.8 State of the art in context-aware wearable computing
    9. 4.9 Project examples
    10. 4.10 Towards the TZI Context Framework
    11. 4.11 Conclusion
    12. 4.12 Discussion and considerations for future research
  11. 5 Spoken dialog systems adaptation for domains and for users
    1. 5.1 Introduction
    2. 5.2 Language adaptation
      1. 5.2.1 Lexicon adaptation
      2. 5.2.2 Adapting cloud ASR for domain and users
      3. 5.2.3 Summary
    3. 5.3 Intention adaptation
      1. 5.3.1 Motivation
      2. 5.3.2 Data collection
      3. 5.3.3 Observation and statistics
      4. 5.3.4 Intention recognition
      5. 5.3.5 Personalized interaction
      6. 5.3.6 Summary
    4. 5.4 Conclusion
  12. 6 The use of multimodality in Avatars and Virtual Agents
    1. 6.1 What are A&VA – Definition and a short historical review
      1. 6.1.1 First Avatars – Bodily interfaces and organic machines
      2. 6.1.2 Modern use of avatars
      3. 6.1.3 From virtual “me” to virtual “you”
    2. 6.2 A relationship framework for Avatars and Virtual Agents
      1. 6.2.1 Type 1 – The Avatar as virtual me
      2. 6.2.2 Type 2 – The interaction with a personalized/specialized avatar
      3. 6.2.3 Type 3 – Me and a virtual agent that is random
    3. 6.3 Multimodal features of A&VA – categorizing the need, the challenge, the solutions
      1. 6.3.1 About multimodal interaction technologies
      2. 6.3.2 Why use multimodality with Avatars?
      3. 6.3.3 Evaluation of the quality of Avatars and Virtual Agents
    4. 6.4 Conclusion and future directions: The vision of A&VA multimodality in the digital era
  13. 7 Managing interaction with an in-car infotainment system
    1. 7.1 Introduction
    2. 7.2 Theoretical framework and related literature
    3. 7.3 Methodology
    4. 7.4 Prompt timing and misalignment – A formula for interruptions
    5. 7.5 Interactional adaptation
    6. 7.6 Norms and premises
    7. 7.7 Implications for design
  14. 8 Towards objective method in display design
    1. 8.1 Introduction
    2. 8.2 Method
      1. 8.2.1 Listing of informational elements
      2. 8.2.2 Domain expert rating
      3. 8.2.3 Measurement of integrative interrelationships
      4. 8.2.4 Clustering algorithm
      5. 8.2.5 Comparison of the two hierarchical structures
      6. 8.2.6 Comparisons between the domain expert and the design expert analyses
    3. 8.3 Analysis of an instrument display
    4. 8.4 Conclusion
      1. 8.4.1 Extension of the approach to sound- and haptic-interfaces
      2. 8.4.2 Multimodal presentation
  15. 9 Classification and organization of information
    1. 9.1 Introduction
      1. 9.1.1 Head up displays
      2. 9.1.2 Objectives
    2. 9.2 Characterization of vehicle information
      1. 9.2.1 Activity
      2. 9.2.2 Information Type
      3. 9.2.3 Urgency
      4. 9.2.4 Timeliness
      5. 9.2.5 Duration of interaction
      6. 9.2.6 Importance
      7. 9.2.7 Frequency of use
      8. 9.2.8 Type of user response required
      9. 9.2.9 Activation mode
    3. 9.3 Allocation of information
    4. 9.4 Head up display (HUD) and its information organization
      1. 9.4.1 Information completeness and conciseness
    5. 9.5 Principles of HUD information organization
    6. 9.6 Review of existing Head Up Displays (HUDs)
      1. 9.6.1 “Sporty” head up display
      2. 9.6.2 Simplistic HUD
      3. 9.6.3 Colorful head up display
      4. 9.6.4 Graphically-rich head up display
    7. 9.7 Conclusion
  16. Index
  17. Footnotes

Product information

  • Title: Design of Multimodal Mobile Interfaces
  • Author(s): Nava Shaked, Ute Winter, Kathy Brown, Deborah Dahl, Asaf Degani, Alexander Rudnicky, Brion van Over, Michael Lawo, Yael Shmueli-Friedland
  • Release date: April 2016
  • Publisher(s): De Gruyter
  • ISBN: 9781501502750