modality), the design of new multimodal integration strategies, and the application of cues and models to other social scenarios. Improved technological means of recording real interaction, both in multisensor spaces and with wearable devices, are opening up the possibility of analysing many social situations in which interest emerges and correlates with concrete social outcomes, and of developing new applications for self-assessment and group awareness. Given the increasing attention that signal processing and machine learning are devoting to social interaction analysis, there is much to look forward to regarding advances in the computational modelling of social interest and related concepts.
Acknowledgments
The author acknowledges the support of the Swiss National Center of Competence in Research on Interactive Multimodal Information Management (IM2) and the EC project Augmented Multi-Party Interaction with Distant Access (AMIDA). He also thanks Nelson Morgan (ICSI) and Sandy Pentland (MIT) for permission to reproduce some of the pictures presented in this chapter (Figure 15.2).
