Coverbal Synchrony in Human-Machine Interaction

Book description

Embodied conversational agents (ECAs) and speech-based human–machine interfaces can together enable more advanced and more natural human–machine interaction. Fusing the two remains a challenging agenda in both research and industry. A key goal of human–machine interfaces is to deliver content or functionality through a dialog that resembles face-to-face conversation. Natural interfaces therefore exploit communication strategies that add meaning to the content, whether they are interfaces for controlling an application or ECA-based interfaces that directly simulate face-to-face conversation.

Coverbal Synchrony in Human-Machine Interaction presents state-of-the-art concepts for advanced, environment-independent multimodal human–machine interfaces that can be used in a range of contexts, from simple multimodal web browsers (for example, a multimodal content reader) to more complex interfaces for ambient intelligent environments (such as supportive environments for the elderly and agent-guided households). These interfaces can also be deployed across computing environments, from pervasive computing to the desktop. Within these concepts, the contributors discuss the communication strategies used to support different aspects of human–machine interaction.

Table of contents

  1. Cover
  2. Preface
  3. Contents
  4. List of Contributors
  5. CHAPTER 1: Speech Technology and Conversational Activity in Human-Machine Interaction
  6. CHAPTER 2: A Framework for Studying Human Multimodal Communication
  7. CHAPTER 3: Giving Computers Personality? Personality in Computers is in the Eye of the User
  8. CHAPTER 4: Multi-Modal Classifier-Fusion for the Recognition of Emotions
  9. CHAPTER 5: A Framework for Emotions and Dispositions in Man-Companion Interaction
  10. CHAPTER 6: French Face-to-Face Interaction: Repetition as a Multimodal Resource
  11. CHAPTER 7: The Situated Multimodal Facets of Human Communication
  12. CHAPTER 8: From Annotation to Multimodal Behavior
  13. CHAPTER 9: Co-speech Gesture Generation for Embodied Agents and its Effects on User Evaluation
  14. CHAPTER 10: A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents
  15. CHAPTER 11: Human and Virtual Agent Expressive Gesture Quality Analysis and Synthesis
  16. CHAPTER 12: A Distributed Architecture for Real-time Dialogue and On-task Learning of Efficient Co-operative Turn-taking
  17. CHAPTER 13: TTS-driven Synthetic Behavior Generation Model for Embodied Conversational Agents
  18. CHAPTER 14: Modeling Human Communication Dynamics for Virtual Human
  19. CHAPTER 15: Multimodal Fusion in Human-Agent Dialogue
  20. Color Plate Section
  21. Back Cover
  96. Back Cover

Product information

  • Title: Coverbal Synchrony in Human-Machine Interaction
  • Author(s): Matej Rojc, Nick Campbell
  • Release date: October 2013
  • Publisher(s): CRC Press
  • ISBN: 9781466598263