AI Technology Breakthrough: Robot Mimicking Human Walking and Running

2024-05-09

An international research team has developed a new technology that mimics human movement by combining central pattern generators (CPGs) with deep reinforcement learning (DRL). The technology not only successfully imitates walking and running movements, but also generates motion without pre-existing motion data, achieving smooth transitions between walking and running and adapting to unstable ground.

This groundbreaking research was published in IEEE Robotics and Automation Letters on April 15, 2024.

Though we rarely give it much thought, walking and running rely on the body's inherent biological redundancy, which lets us adapt easily to varied environments and change our pace. However, replicating this human-like motion in robots is extremely challenging because of the complexity and variability involved.

Existing models often struggle to adapt when faced with unknown or challenging environments, resulting in low efficiency and poor performance. This is because AI systems typically only generate one or a few correct solutions. However, for organisms and their movements, there is no single correct pattern. In fact, there is a range of possible ways to move, and it is not always clear which one is best or most effective.

Deep reinforcement learning (DRL) is one method researchers are exploring to overcome this challenge. DRL extends traditional reinforcement learning by using deep neural networks to handle more complex tasks and learn directly from raw sensory inputs, giving it greater flexibility and more powerful learning capabilities. Its drawback, however, is the significant computational cost of exploring the vast input space, especially when the system has many degrees of freedom.
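The trial-and-error loop underlying reinforcement learning can be sketched in miniature. The toy below uses tabular Q-learning on a two-action task; DRL replaces the table with a deep network reading raw sensory input, which is precisely what makes the search space so costly to explore. All numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
reward_prob = np.array([0.2, 0.8])  # hidden payoff probability of each action
q = np.zeros(2)                     # learned value estimates, one per action
alpha, epsilon = 0.1, 0.1           # learning rate, exploration rate

for _ in range(3000):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    a = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(q))
    r = float(rng.random() < reward_prob[a])  # stochastic binary reward
    q[a] += alpha * (r - q[a])                # nudge estimate toward the outcome

print(q)  # estimates approach the hidden reward probabilities
```

With a deep network in place of `q`, every update requires forward and backward passes over high-dimensional sensor data, and far more interaction samples are needed — the computational cost the article describes.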

Another approach is imitation learning, in which robots learn from recordings of humans performing the same motion tasks. While imitation learning performs well in stable environments, its effectiveness drops sharply in new situations or environments not encountered during training. Its adaptability and ability to navigate effectively are limited by the narrow range of behaviors it has learned.
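At its simplest, imitation learning is supervised regression from recorded states to expert actions. The sketch below fits a linear policy to synthetic "demonstration" data; everything here is made up for illustration, and real systems use far richer models, but the limitation the article notes is visible in the structure: nothing constrains the policy's behavior outside the demonstrated distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic demonstrations: states (e.g. joint angles and velocities)
# paired with the expert's joint commands in those states.
states = rng.normal(size=(500, 6))
true_weights = rng.normal(size=(6, 2))
actions = states @ true_weights + 0.01 * rng.normal(size=(500, 2))

# Least-squares fit: the learned policy imitates the expert mapping.
weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The policy generalizes to states resembling the training data...
test_state = rng.normal(size=6)
predicted = test_state @ weights
# ...but far outside that distribution its output is unconstrained,
# which is why imitation alone adapts poorly to unseen terrain.
```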

"By combining these two methods, we have overcome many of their limitations," explained Mitsuhiro Hayashibe, a professor at Tohoku University's Graduate School of Engineering. "Instead of applying deep learning to the CPG itself, we applied it to a form of reflex neural network that supports the CPG, training a CPG-like controller through imitation learning."

A CPG is a neural circuit in the spinal cord that acts like a conductor, generating rhythmic patterns of muscle activity. In animals, a reflex circuit works in conjunction with the CPG, providing feedback that lets them adjust their speed and their walking or running movements to the terrain.
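A CPG is often modeled abstractly as a set of coupled oscillators whose stable phase relationship produces a gait rhythm. The sketch below couples two phase oscillators so they settle into antiphase, like alternating legs, and maps their phases to rhythmic joint-angle targets. The frequency, gains, and coupling weight are illustrative choices, not values from the paper.

```python
import numpy as np

def cpg_step(phases, dt=0.01, freq=1.5, coupling=2.0):
    """Advance two antiphase-coupled phase oscillators by one time step."""
    ph1, ph2 = phases
    # Each oscillator runs at its intrinsic frequency, plus a coupling
    # term pulling the pair toward a half-cycle (antiphase) offset.
    dph1 = 2 * np.pi * freq + coupling * np.sin(ph2 - ph1 - np.pi)
    dph2 = 2 * np.pi * freq + coupling * np.sin(ph1 - ph2 - np.pi)
    return (ph1 + dph1 * dt, ph2 + dph2 * dt)

def joint_targets(phases, amplitude=0.4):
    """Map oscillator phases to rhythmic joint-angle targets (radians)."""
    return tuple(amplitude * np.sin(p) for p in phases)

# Start slightly off antiphase; the coupling pulls the pair into lockstep.
phases = (0.0, 0.5)
for _ in range(2000):
    phases = cpg_step(phases)
offset = (phases[0] - phases[1]) % (2 * np.pi)
print(f"phase offset: {offset:.2f} rad")  # settles near pi (antiphase)
print(joint_targets(phases))
```

In the AI-CPG framing described above, the learned reflex network would modulate such a rhythm generator based on sensory feedback, rather than replacing it.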

By adopting the structure of the CPG and its reflex counterpart, the Adaptive Imitation CPG (AI-CPG) approach achieves remarkable generative adaptability and stability in imitating human movement.

Professor Hayashibe further added, "This breakthrough sets a new standard for generating human-like motion in robotics, with unprecedented adaptability to the environment. Our approach represents a significant advancement in the development of generative AI technology for robot control, with potential applications across industries."

The research team consists of members from Tohoku University's Graduate School of Engineering and the Swiss Federal Institute of Technology Lausanne (EPFL).