Now robots can also peel bananas

This development was presented by researchers from the University of Tokyo.


Robots are able to serve in restaurants, perform stunts and dance, but one of the biggest challenges is getting them to do activities that require fine motor skills.

That is why the model presented by researchers at the University of Tokyo, in which a robot picks up and peels a banana with both arms in about three minutes, was so striking.

While the two-armed machine succeeds only 57% of the time, that rate is quite good considering how difficult these kinds of tasks are for a robot.

The most interesting thing about this development is not that artificial intelligence can successfully peel a fruit, but that it opens up many possibilities for the future. This kind of motor skill could let robots perform tasks that require meticulous care, such as moving small pieces from one place to another or picking up and storing delicate objects.

Researchers Heecheol Kim, Yoshiyuki Ohmura, and Yasuo Kuniyoshi trained the robot using a machine learning process. In this type of training, numerous demonstrations are recorded to produce data that the robot then uses to replicate the action.
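The idea of learning from recorded demonstrations can be sketched very simply. The following is a minimal illustrative example of behavior cloning, in which a policy is fitted to (observation, action) pairs; the linear model and synthetic data are assumptions for illustration, not the researchers' actual method.

```python
# Minimal sketch of learning-from-demonstration (behavior cloning).
# Demonstrations are assumed to be recorded as (observation, action)
# pairs; the linear model and synthetic data are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "demonstrations": observations (e.g. sensor readings) and the
# actions a human demonstrator took in response to each one.
true_policy = np.array([[0.5, -1.0], [2.0, 0.3]])  # hidden mapping
observations = rng.normal(size=(200, 2))           # 200 recorded samples
actions = observations @ true_policy               # demonstrated actions

# Fit a policy that maps observations to actions (least squares).
learned_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The robot can now replicate the demonstrated behavior on new input.
new_obs = np.array([1.0, 0.5])
predicted_action = new_obs @ learned_policy
```

Real systems like the Tokyo robot use far richer models and hours of demonstration data, but the training loop follows this same pattern: record, fit, replicate.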

Kuniyoshi believes that his training method could help AI systems perform all kinds of tasks that may be simple for humans but require a lot of coordination and motor skills. This would favor the use of this type of technology in homes, factories and all kinds of environments.

The robot received 13 hours of training

In recent years, several developments have emerged that aim to enhance the capabilities of robots so that these machines can alleviate many repetitive or routine activities. The focus has been, as in this case, on the training of coordination, stability and fine motor skills.

Such is the case of researchers at the University of California, Berkeley, who created the Motion2Vec algorithm, which aims to make a robot capable of suturing patients with the precision of a human.

To this end, they used a semi-supervised deep learning system with which the robot learns by watching videos of surgical interventions in which sutures are performed. With this information, the AI system learns to imitate the movements of health professionals accurately.

The developers used a Siamese neural network: two identical networks that receive two sets of data separately, process them, and then compare the results to produce a final output.

On the one hand, the system receives the video of the doctor performing the sutures; on the other, the recordings of the robot practicing. It compares these two clips and thus learns how to improve the accuracy of its movements.
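The Siamese comparison described above can be sketched in a few lines: the same network (shared weights) embeds both inputs, and a distance between the embeddings serves as the comparison. The names and tiny architecture here are illustrative assumptions, not Motion2Vec itself.

```python
# Minimal sketch of a Siamese network: one set of weights applied to
# two inputs, then a distance between the embeddings compares them.
# Architecture and data are illustrative, not the actual Motion2Vec model.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))  # shared weights: one network, used twice

def embed(x):
    """The 'twin' network: identical weights for either input."""
    return np.tanh(x @ W)

def embedding_distance(clip_a, clip_b):
    """Compare two inputs (e.g. surgeon vs. robot motion features)."""
    return np.linalg.norm(embed(clip_a) - embed(clip_b))

surgeon_clip = rng.normal(size=16)
robot_close = surgeon_clip + 0.01 * rng.normal(size=16)  # similar motion
robot_far = rng.normal(size=16)                          # unrelated motion

# Similar motions map to nearby embeddings; dissimilar ones end up apart.
d_close = embedding_distance(surgeon_clip, robot_close)
d_far = embedding_distance(surgeon_clip, robot_far)
```

In training, the weights would be adjusted so that this distance is small for matching surgeon/robot clips and large for mismatched ones, which is how the robot's movements are pulled toward the demonstrated ones.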

The videos used in the training are part of the JIGSAWS database, which gathers information on surgical activity for the modeling of human movement. The data that is part of JIGSWAS was collected through a collaboration between Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (Sunnyvale, CA. ISI) within a study approved by the IRB.

Robots capable of handling fragile objects are being developed (Toyota)

Also in this vein are robot butlers: models capable of picking up objects from the floor and tidying the chaos of the house, or chef robots to have as allies in the kitchen. There are options for almost anything you can imagine, but the truth is that this technology has not yet become part of everyday life, in part because it still needs to mature, optimize some functions and come down in price, something that will happen as the use of these devices spreads.
