Learning dexterity from humans

The early vision for the field of robotics was of machines – automata and humanoids – that assisted humans in daily tasks. Imagine, for instance, robots that could fold our laundry or load our dishwashers. Yet modern robots remain incapable of these seemingly simple tasks, which demand manual dexterity. Achieving human-like dexterity has been a longstanding, foundational challenge in robotics, and roboticists today still dream of building robots with the dexterity and poise of a gifted violinist, whose precise, agile fingers apply just the right amount of force.

The human ability to apply precisely controlled forces while grasping objects is aided by a network of sensors called mechanoreceptors, which provide continuous tactile feedback. Creating an artificial analogue of this sensor network is critically important for designing future prosthetics. Moreover, our inability to cover the human hand with artificial sensors has limited our understanding of the collective role these tactile signals play in the human grasp itself.

Currently, robot grasping strategies increasingly rely on computer vision-based artificial intelligence (AI) tools. This is in large part due to the ubiquity of cameras – scalable devices that can easily churn out the large datasets required by machine learning algorithms. The field of computer vision has benefitted from large datasets, such as ImageNet with its millions of images, collected by many users. No comparably large tactile datasets exist today.

Traditionally, the major focus in tactile sensing hardware has been on building better-performing sensors – and less on scaling them to large numbers, improving spatial coverage, or collecting large datasets. This must change, given the new spotlight on scalable hardware brought about by the demands of successful machine learning tools.

With this backdrop, we report a scalable tactile glove, constructed using readily available materials and simple fabrication tools, in a paper appearing in Nature. The glove consists of a sensing sleeve with 548 sensors, uniformly distributed over the hand, that respond to normal forces (see figure). This sensor architecture can also be readily translated to cover other surfaces, without size restrictions imposed by the fabrication methods used. Using this tactile glove, we recorded a large-scale dataset of tactile maps (about 135,000 frames), each covering the full hand, while interacting with 26 different objects. Using convolutional neural networks (CNNs), we could identify objects or estimate their weights from the tactile information recorded during these interactions. The trained CNNs learned to look for simple geometric features, like edges or pointed forces, and were also activated by basic features of the grasp, such as the use of the thumb. Finally, we were able to examine the collaborative roles of different regions of the hand when interacting with objects. This cooperation is intuitively familiar, but it can now be quantified purely from the collective set of object interactions.
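To make the classification step concrete, here is a minimal sketch in PyTorch of the kind of CNN-based object classifier described above. The architecture, array shapes, and names are illustrative assumptions rather than the exact model from the paper; it assumes each tactile frame is a 32 x 32 map of normal-force readings and that there are 26 object classes.

```python
# A minimal sketch of a CNN that classifies objects from tactile frames.
# Assumptions (not from the paper): frames are 32x32 force maps, 26 classes.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local force patterns (edges, point contacts)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of tactile frames, shape (N, 1, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Usage: score a batch of (stand-in) tactile frames against the 26 objects.
model = TactileCNN()
frames = torch.rand(8, 1, 32, 32)  # placeholder for recorded pressure maps
logits = model(frames)             # shape (8, 26), one score per object
```

Trained on labelled frames with a standard cross-entropy loss, a small network of this sort is enough to test whether the grasp signatures in the recorded force maps are discriminative.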

There are several promising opportunities to pursue here. The tactile sensing system itself can be improved by incorporating a more diverse variety of sensors, mimicking those found in nature. Improving the system architecture (by reducing the number of wires) and covering larger surfaces of the body would help us understand the dynamics and control strategies humans use when performing tasks that demand great mobility and agility, such as parkour. And understanding how tactile signals, which vary spatially and temporally, are stitched together is important for truly understanding the full role of tactile feedback in enabling dexterity.

Ultimately, I believe the path to achieving dexterity in robotics and better prosthetics will involve scalable sensors and hardware that work in concert with machine learning algorithms. The impact of our work, and of others in the future, will increasingly depend on the ease with which our hardware, datasets and results can be used by others. We therefore see strong reasons to share our designs, datasets and results; they are available at http://humangrasp.io.

My co-authors and I benefitted from discussions with several colleagues at MIT during the course of our work. I want to thank professors Ted Adelson and Russ Tedrake, in whose offices we had many enjoyable discussions on the amazing abilities of humans when it comes to grasping and manipulating objects. Professor Jeff Lang generously shared his insights on scalable designs with me. Professors Marc Baldo and Vladimir Bulovic gave me valuable advice as part of the committee for my PhD thesis, of which this work is a part. And finally, I am immensely grateful to all my co-authors for their collective support!
