active: late 2003 to early 2004

Evolution of Motor Control with Neural Networks

In short: optimizing neural networks with a genetic algorithm to control physically-simulated humanoid creatures.

This article was the original inspiration. It started as a project for an artificial intelligence class taught by Dr. Dimitris Margaritis and eventually became the motivation for my graduate research. The goal was to create motor control systems for virtual humans using artificial neural networks and genetic algorithms. I wanted to take the current physically simulated "ragdolls" in video games and bring them to life, allowing them to learn motor skills entirely on their own.

Basically, a physically simulated humanoid is controlled by an artificial neural network which senses joint angles and controls muscle forces. A genetic algorithm optimizes the neural network weights to improve performance on a given motor control task (standing, jumping, or walking). The most fascinating thing about this method is that the system can learn an appropriate control algorithm with minimal help from the programmer. The main design decision is the fitness function (e.g., for jumping, individuals that jump higher stay in the population and spread their genes).
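The loop below is a minimal sketch of this setup, assuming a fixed-topology feedforward controller whose weight vector is the genome. The physics call and the fitness (peak torso height for jumping) are placeholders, and all names and parameters here (physics_step, N_SENSORS, population sizes, and so on) are illustrative, not the code or settings I actually used.

```python
# Sketch: evolve the weights of a small controller with a genetic algorithm.
# The simulator is a stand-in; swap in a real rigid-body physics engine.
import numpy as np

N_SENSORS = 8          # joint angles fed into the network (assumed)
N_MUSCLES = 6          # muscle force outputs (assumed)
POP_SIZE, GENERATIONS, MUTATION_STD = 50, 200, 0.1

def act(weights, joint_angles):
    """One control step: map sensed joint angles to bounded muscle activations."""
    w = weights.reshape(N_MUSCLES, N_SENSORS)
    return np.tanh(w @ joint_angles)

def physics_step(state, forces):
    # Placeholder for the real simulator: nudge the joint state and report
    # a fake torso height.  Only here so the sketch runs end to end.
    new_state = np.clip(state + 0.05 * np.resize(forces, state.shape), -1.0, 1.0)
    return new_state, float(1.0 + new_state.mean())

def evaluate(weights, sim_steps=500):
    """Run one episode and return the fitness, e.g. the highest torso point reached."""
    state, best_height = np.zeros(N_SENSORS), 0.0
    for _ in range(sim_steps):
        forces = act(weights, state)
        state, height = physics_step(state, forces)
        best_height = max(best_height, height)
    return best_height

# Genetic algorithm: fitter individuals stay in the population and spread
# their genes through mutated copies of their weight vectors.
rng = np.random.default_rng(0)
population = rng.normal(size=(POP_SIZE, N_MUSCLES * N_SENSORS))
for gen in range(GENERATIONS):
    fitness = np.array([evaluate(ind) for ind in population])
    elite = population[np.argsort(fitness)[-POP_SIZE // 2:]]   # keep the better half
    children = elite + rng.normal(scale=MUTATION_STD, size=elite.shape)
    population = np.concatenate([elite, children])
```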

I would usually start a simulation in the evening, go to sleep, and check on the progress in the morning. I was often surprised at the results. Sometimes a suboptimal solution would dominate the population; sometimes the evolved individuals would exploit an instability in the physics simulation and cheat the system; sometimes the results would turn out better than I'd hoped.

During spring 2004 I spent time developing tools (a wxWidgets/SDL application for running evolution experiments and visualizing the results), experimenting with Ken Stanley's NEAT algorithm (modified to use leaky integrator neurons with evolvable time constants), and improving results on the walking task. One of the videos below shows the walking results. The neural networks used for walking have no sensory inputs at all: they are simple central pattern generators that output oscillatory signals on their own.
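Here is a sketch of the leaky integrator (continuous-time recurrent) neuron model, with the time constants exposed as parameters the way the modified NEAT treats them as evolvable. The two-neuron circuit uses parameters from a standard CTRNN oscillator example, so it settles into a limit cycle and oscillates with zero input, which is the essence of a central pattern generator; the specific numbers are illustrative, not the evolved values.

```python
# Leaky-integrator (CTRNN-style) neurons: a tiny recurrent circuit that
# oscillates with no sensory input at all, i.e. a central pattern generator.
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, tau, W, theta, dt=0.01, inputs=0.0):
    """Euler step of  tau_i * dy_i/dt = -y_i + sum_j W[i,j] * sigma(y_j + theta_j) + I_i."""
    dy = (-y + W @ sigma(y + theta) + inputs) / tau
    return y + dt * dy

tau   = np.array([1.0, 1.0])               # time constants (evolvable parameters)
W     = np.array([[ 4.5, 1.0],
                  [-1.0, 4.5]])            # self-excitation plus one inhibitory cross link
theta = np.array([-2.75, -1.75])           # biases

y, outputs = np.zeros(2), []
for _ in range(5000):                      # 50 simulated time units
    y = ctrnn_step(y, tau, W, theta)
    outputs.append(sigma(y + theta))       # oscillatory signals that would drive the muscles
```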

Related files:

Virtual human learning to stand
Learning to jump
Biped learning to walk
GUI for evolution experiments