Neuromechanics-Inspired Control Scheme Enhances Robot Adaptability

2025-02-13

To succeed in large-scale, complex real-world environments, robots need to quickly adapt their actions when interacting with humans and their surroundings, responding effectively to environmental changes. However, many robots developed to date excel in controlled environments but struggle in unstructured settings.

Biological neuromechanics inspires variable stiffness in robotics
Source: Ignacio Abadía

Researchers from the University of Granada in Spain and EPFL in Switzerland recently developed a new control scheme inspired by neuromechanics, specifically the integration function of the central nervous system and human biomechanics.

Their proposed control system, outlined in a paper published in Science Robotics, was found to regulate robot stiffness, improve motion accuracy, and enhance adaptability to environmental changes.

"Our latest article stems from an exciting collaboration during the final phase of the EU flagship project 'Human Brain Project' (HBP)," senior author Niceto R. Luque told Tech Xplore.

"We had the opportunity to collaborate closely with the BioRob Lab at EPFL (Switzerland), led by Professor Auke Ijspeert, whose pioneering work on muscle simulation frameworks influenced our research. Inspired by the paired operation of human muscles (known as antagonistic muscle relationships), we focused on how muscle co-contraction dynamically adjusts stiffness."

The primary goal of Luque and his colleagues' recent study was to develop a new bio-inspired control scheme to overcome the limitations of traditional impedance/admittance control paradigms that underpin industrial robot movements. Their solution draws inspiration from how humans naturally learn to adapt to complex and unpredictable environmental changes.

"Traditional control methods often rely on highly complex mathematical formulas to manage force exchanges between humans and robots (or between robots)," Luque said. "In contrast, our strategy mimics how human muscle co-contraction directly regulates stiffness, eliminating the need for the expensive hardware otherwise required to measure exchange forces and avoiding complex dynamic equations."

"This biomimetic approach aims to enable collaborative robots (or cobots) to exhibit a wide range of adaptive motion behaviors, thereby improving their performance and robustness across various tasks."

The neuromechanics-inspired robotic control scheme developed by these researchers has two key components that mimic the systems through which humans control and adapt their movements. The first component is the muscle model, while the second is what is known as the cerebellar network.

As the name suggests, the muscle model aims to replicate the mechanisms underlying human muscle movement. It particularly reflects how human muscles work in pairs using a process called "co-contraction."

"Simply put, when opposing muscles contract together, they adjust the stiffness of a joint," Luque explained. "This allows the robot to change its motion rigidity or flexibility depending on the task at hand—similar to tightening your muscles for precision or relaxing them for freer movement. This ability to modulate stiffness is crucial for handling delicate tasks and absorbing unexpected forces."
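The co-contraction idea Luque describes can be captured in a few lines: the difference between two opposing activations drives the joint, while their shared component stiffens it. The sketch below is purely illustrative and is not the authors' muscle model; all gains (`T_MAX`, `K_BASE`, `K_CC`) and the linear form are invented for clarity.

```python
# Illustrative antagonistic-pair sketch (NOT the paper's muscle model).
# The activation *difference* produces net torque; the shared activation
# (co-contraction) raises joint stiffness. All constants are made up.

T_MAX = 10.0   # hypothetical peak muscle torque (N·m)
K_BASE = 2.0   # hypothetical passive joint stiffness (N·m/rad)
K_CC = 20.0    # hypothetical stiffness gain per unit co-contraction

def joint_output(a_flexor, a_extensor):
    """Return (net_torque, stiffness) for activations in [0, 1]."""
    net_torque = T_MAX * (a_flexor - a_extensor)
    co_contraction = min(a_flexor, a_extensor)  # shared activation level
    stiffness = K_BASE + K_CC * co_contraction
    return net_torque, stiffness

# Same net torque, very different stiffness:
relaxed = joint_output(0.3, 0.0)  # low co-contraction -> compliant joint
stiff = joint_output(0.8, 0.5)    # high co-contraction -> rigid joint
```

The point mirrors the quote: by raising both activations together, the controller can stiffen the joint for precision without changing the torque it applies, and relax it to absorb unexpected forces.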

The second component of the team's control scheme, complementing the muscle model, is the so-called cerebellar network. This system aims to mimic the function of the human cerebellum, the brain region responsible for fine-tuning human movement and adjusting based on feedback from the body and environment.

"By incorporating this adaptive network, robots can learn from experience and adjust their motions—including their co-contraction and stiffness—when faced with new tasks or unpredictable situations," Luque said. "This means it doesn't solely rely on pre-programmed instructions or complex mathematical equations to operate. Overall, our solution provides collaborative robots with a form of 'muscle memory' and the ability to learn and adapt like humans."
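The "learning from experience" role of the cerebellar network is in the spirit of classic feedback-error learning for cerebellar control: a feedforward term is adapted from the tracking error until it takes over from feedback. The sketch below illustrates only that general idea; the paper's controller is an adaptive cerebellar network, not this toy loop, and the plant gain, learning rate, and feedback gain here are all invented.

```python
# Toy feedback-error-learning loop (illustrative of the general cerebellar-
# control idea only; not the paper's network). A single feedforward weight w
# is adapted from the tracking error; as w converges to the plant's inverse
# gain, the feedforward term takes over and the error vanishes.

PLANT_GAIN = 0.5  # hypothetical static plant: y = PLANT_GAIN * u
TARGET = 1.0      # desired output
KP = 1.0          # feedback gain (invented)
LR = 0.5          # adaptation rate (invented)

w = 0.0           # learned feedforward weight ("muscle memory")
error = 0.0
for _ in range(200):
    u = w * TARGET + KP * error       # feedforward + feedback command
    y = PLANT_GAIN * u                # plant response
    error = TARGET - y                # tracking error
    w += LR * error * TARGET          # error-driven adaptation

# After training, feedforward alone reproduces the target:
y_feedforward_only = PLANT_GAIN * w * TARGET
```

After enough trials the learned weight approximates the plant's inverse gain, so the controller no longer depends on the feedback correction, echoing the idea that the robot need not rely solely on pre-programmed instructions.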

Luque and his colleagues evaluated their control scheme in a series of tests, yielding promising results. Specifically, they demonstrated that the co-contraction mechanism regulates the robot's stiffness and motion accuracy, enhancing its resistance to external disturbances.

"We found that, similar to human learning, training under low co-contraction conditions leads to lower stiffness," Luque explained. "Although learning under these conditions is more challenging for the cerebellum, it can operate effectively at higher co-contraction levels without additional training. This indicates a clear preference for motion learning under low co-contraction, which reduces training time and helps prevent wear."

Because a cerebellum trained under low co-contraction transfers to higher co-contraction levels without additional training, the team's controller can learn in the compliant, low co-contraction regime and switch to higher co-contraction whenever a task demands greater stiffness.

"We don't need to train the cerebellum