The Quest to Make a Robotic Cat Walk With Artificial Neurons

March 20, 2019 | By JohnValbyNation

The little robot finger’s favorite color is blue. Wave a handful of blueberries in front of it and the finger will follow, transfixed. If you’re wearing a blue shirt, congratulations, you’re its new best friend. If you painted everything around it blue, the robo-digit could well have a heart attack.

You can program a robot to fall in love with a color easily enough. But this robot is thinking in a fundamentally different way—not with line after line of complicated code, but with simulated neurons.

“These neurons, each one may cause a little twitch in the muscle,” says the robot’s master, USC biomedical engineer Terry Sanger. “It can push the muscle left, right, up, and down. All the robot knows is when it sees blue things it wants to go toward blue things and avoid everything else.”
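To get a feel for what that means in practice, here’s a toy sketch in Python. Everything in it is an assumption made for illustration: the four twitch directions, the spike probabilities, and the “blueness” signal are invented rather than taken from Sanger’s actual controller. The point is just that nothing plans a trajectory; lots of tiny, stochastic twitches, biased toward blue, add up to blue-seeking behavior.

```python
import random

# Hypothetical sketch: each simulated neuron nudges the fingertip in one
# direction and fires more often when that direction points toward blue.
# All names and numbers are illustrative, not Sanger's actual model.

DIRECTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

def blueness_toward(direction, position, blue_target):
    """Crude drive signal: how much a twitch this way closes in on the blue blob."""
    dx, dy = blue_target[0] - position[0], blue_target[1] - position[1]
    return max(0.0, dx * direction[0] + dy * direction[1])

def step(position, blue_target, neurons_per_direction=50, twitch=0.01):
    """One time step: neurons spike stochastically; each spike adds a tiny twitch."""
    x, y = position
    for vec in DIRECTIONS.values():
        drive = blueness_toward(vec, position, blue_target)
        p_spike = min(1.0, 0.05 + 0.1 * drive)  # more blue that way -> more spikes
        for _ in range(neurons_per_direction):
            if random.random() < p_spike:
                x += twitch * vec[0]
                y += twitch * vec[1]
    return (x, y)

position = (0.0, 0.0)
for _ in range(200):
    position = step(position, blue_target=(3.0, 1.5))
print(position)  # on average, the fingertip drifts toward the blue target
```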

The system is a glimpse at a potentially powerful approach to robotic intelligence: To create machines that move more naturally, maybe the trick is to make them not so much clever as simple. Maybe you replicate the primitive functioning of neurons, governed by a relatively simple supervisory code, instead of relying on complicated algorithms.

Across the USC campus from Sanger’s office lives Kleo the robotic cat. Well, more like struggles than lives—Kleo is a remotely piloted machine that ambles awkwardly. But biomedical engineer Francisco Valero-Cuevas has big plans for Kleo: Get it walking on its own with the help of simulated neurons on a chip that can mimic the operation of neurons in a biological spinal cord.

But why not just mimic the brain? “The spinal cord is not just some cables that go from brain to muscle,” says Valero-Cuevas. “The spinal cord has its own low level circuits that do a lot of the micromanagement of muscles. So our goal is to reverse engineer the entire system.”

That begins with the neurons. Scientists know generally how neurons are arranged in a spinal cord. What’s less clear are the strengths of the connections between neurons as they form into networks that drive, say, the movement of legs.

So Kleo would start with lots of simulated neurons connected to each other with random strengths, or perhaps the same strengths. “You have Kleo just sitting there doing nothing and the neurons are spiking at random,” says Valero-Cuevas. “And then one of these random spike patterns causes an accelerometer to feel forward progression. That minute forward progression is fed back to the system and says, Hey, for that spiking pattern, reinforce the connections among neurons that did that.”

This is known as reinforcement learning. Bit by bit, Kleo’s artificial spinal cord learns which neuronal connections, and which connection strengths, trigger the desired outcome. Spike patterns that nudge the robot ever so slightly forward are rewarded. Over many, many iterations, Kleo could begin to crawl and eventually walk.
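In code, that loop can be sketched in a handful of lines. The snippet below is a deliberately crude illustration rather than the team’s actual spinal-cord chip: connection weights double as spiking probabilities, a stand-in “accelerometer” scores forward progress, and connections that fired on better-than-average trials are strengthened while those that fired on worse-than-average trials are weakened.

```python
import random

# Minimal reinforcement-learning sketch under assumed details; the reward
# function and connection model are invented for illustration only.

NUM_CONNECTIONS = 20
weights = [random.uniform(0.05, 0.95) for _ in range(NUM_CONNECTIONS)]

def spike_pattern(weights):
    """Neurons spike at random; stronger connections spike more often."""
    return [1 if random.random() < w else 0 for w in weights]

def forward_progress(spikes):
    """Stand-in for the accelerometer: pretend the first half of the
    connections drive useful leg motion and the rest mostly fight it."""
    useful = sum(spikes[: NUM_CONNECTIONS // 2])
    wasted = sum(spikes[NUM_CONNECTIONS // 2:])
    return useful - 0.5 * wasted + random.gauss(0, 0.5)

LEARNING_RATE = 0.01
baseline = 0.0
for trial in range(20000):
    spikes = spike_pattern(weights)
    reward = forward_progress(spikes)
    advantage = reward - baseline            # did this pattern beat the average?
    baseline += 0.05 * (reward - baseline)   # running estimate of typical reward
    for i, fired in enumerate(spikes):
        if fired:                            # reinforce only connections that fired
            weights[i] = min(0.95, max(0.05, weights[i] + LEARNING_RATE * advantage))

print([round(w, 2) for w in weights])  # the "useful" half ends up near 0.95
```

After enough trials, the connections the toy reward function treats as useful end up strong and the rest fade, which is the flavor of what Kleo’s chip would be doing with real legs and a real accelerometer.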

Yeah, it’s not exactly brilliance in motion. It’ll be awkward at first. “Where is the intelligence here?” Valero-Cuevas asks. “You realize there is no intelligence. It’s all dumb parts, but put together, the emergent behavior is at the very least useful.” A single neuron means nothing, but formed into a whole network, the neurons together build something special.

Such a system could be big for robotics. To get a robot to move, typically you’ve had to program its actions. Move leg, balance, move other leg, etc. It’s hard as hell to do, as evidenced by the bumbling antics of entrants in Darpa's Robotics Challenge.

By reverse-engineering how the spinal cord drives movement in biological beings, roboticists could get lower-level behavior like walking to develop automatically without complicated algorithms. “The way we walk through the world is not by estimating the contact forces with our feet or trying to identify every single thing in the field of view or trying to estimate to precise levels what our velocity is,” Sanger says. “We don't do that. We just see and we feel and we move.”

Not that robots of the future will be able to rely entirely on this system to navigate their world. Neurons on a chip help something like a robotic finger fall in love with the color blue or a robotic cat learn the basics of locomotion. “Then you add a brain to the system,” Valero-Cuevas says. “The brain is the one who starts saying, Well, now that I can walk, why don’t I walk right or left, why don’t I chase a mouse?”

So having an underlying neuron-like system to handle basic movements could be critical for developing truly useful machines that can adapt to their environments. Oh, and for laughing at poor Kleo as it struggles like hell to learn to walk. After all, if cats are good for anything, it’s comic relief.

More robot learning

—Kleo may one day have simulated neurons, but in a UC Berkeley lab, Brett the robot has been teaching itself to play children’s games with reinforcement learning.

—Researchers from that lab have also launched a company that’s exploring using VR to teach robots how to handle objects.

—If this is all making you nervous about the pending robot singularity, fear not. We’re actually living in the harmless multiplicity.

Related Video

How Brett the Robot is Learning by Failing: Brett, a robot at UC Berkeley, is learning to put a square peg in a square hole the same way a child does, slowly and with trial and error.