Chapter 3: Multimodal perception for dexterous manipulation

Guanqun Cao; Shan Luo    smARTLab, Department of Computer Science, University of Liverpool, Liverpool, United Kingdom

Abstract

Humans perceive the world multimodally: vision, touch, and sound are combined to understand the surroundings from multiple dimensions. These senses achieve a synergistic effect in which the combined perception is more effective than the sum of each sense used separately. For robots, vision and touch are likewise two key sensing modalities for dexterous manipulation. Vision enables robots to observe global object features such as shape and color, whereas touch provides local information about objects, such as friction, slip, and texture. As vision and tactile ...
