Legacy Forum: Preserving Nearly 20 Years of Community History - A Time Capsule of Discussions, Memories, and Shared Experiences.

learning robot?

Bioloid robot kit from Korean company Robotis; CM-5 controller block, AX-12 servos.
13 posts • Page 1 of 1

learning robot?

Post by smile_nik » Fri Jan 12, 2007 7:16 am

Hi all,
Did anyone try to implement some kind of learning abilities in the Bioloid?
Is it even possible?
smile_nik
Robot Builder
Posts: 13
Joined: Sun Jan 07, 2007 1:00 am

Post by JonHylands » Fri Jan 12, 2007 1:35 pm

You won't get much learning with the out-of-the-box hardware - it's got a very simple microcontroller that you can upload really simple behavior to.

I'm in the process of putting together a high-bandwidth wireless link between my PC and the Bioloid. This will give me the hardware required in order to start playing with AI, including learning.

http://www.bioloid.info

- Jon
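
For reference, the wireless link Jon describes boils down to relaying raw Bioloid-bus packets between the PC and the robot's serial bus. A minimal sketch of such a relay, assuming a hypothetical UDP transport, port number and serial device name (this is not Jon's actual implementation):

```python
# Minimal UDP <-> serial relay sketch (hypothetical transport and port names).
# The PC sends raw Bioloid-bus packets over UDP; this relay forwards them to
# the serial bus and returns whatever the servos answer.
import socket
import serial  # pyserial

ROBOT_PORT = "/dev/ttyUSB0"      # assumption: USB-to-serial adapter on the robot side
LISTEN_ADDR = ("0.0.0.0", 9000)  # assumption: arbitrary UDP port

def run_bridge():
    bus = serial.Serial(ROBOT_PORT, baudrate=1000000, timeout=0.01)  # AX-12 default is 1 Mbps
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        packet, sender = sock.recvfrom(256)   # one bus packet per datagram
        bus.write(packet)                     # forward the command to the servo bus
        reply = bus.read(64)                  # read back any status packet
        if reply:
            sock.sendto(reply, sender)

if __name__ == "__main__":
    run_bridge()
```
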
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

Post by savuporo » Fri Jan 12, 2007 4:18 pm

The interesting stuff starts to happen once you integrate inverse kinematics, some kind of vision, and general localization algorithms into a functioning system.
Having an inverse dynamics solver is a huge plus, for extra interesting stuff like jumping over fences :)
I say let's try to get a semi-accurate model of the Bioloid into one of the available open-source physics simulators, and collaborate to get all of this going.
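
One way to act on savuporo's simulator suggestion today would be an open-source engine such as PyBullet. A rough sketch, assuming someone produces an approximate URDF model of the Bioloid (the bioloid.urdf file, base height and torque value are placeholders):

```python
# Sketch: load an approximate Bioloid model into PyBullet and step the physics.
# "bioloid.urdf" is a hypothetical file; an accurate model would need measured
# link masses, inertias, and AX-12 joint limits.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                      # headless; use p.GUI to visualize
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("bioloid.urdf", basePosition=[0, 0, 0.25])

num_joints = p.getNumJoints(robot)
for step in range(240):                  # 1 simulated second at the default 240 Hz
    for j in range(num_joints):
        p.setJointMotorControl2(robot, j, p.POSITION_CONTROL,
                                targetPosition=0.0, force=1.5)  # roughly AX-12 stall torque, N*m
    p.stepSimulation()

pos, orn = p.getBasePositionAndOrientation(robot)
print("torso position after 1 s:", pos)
```
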
savuporo
Savvy Roboteer
Posts: 26
Joined: Sun Dec 17, 2006 1:00 am

Post by JonHylands » Fri Jan 12, 2007 4:36 pm

I actually don't think that inverse kinematics is the way to go. We as people certainly don't do it. The only reason robots do it is because people haven't figured out how to program them to interpret their sensory data well enough to use dynamic feedback techniques to learn how to do motion.

One of the main thrusts of my MicroRaptor project is to do just that - to be able to get enough sensor feedback to allow it to learn how to walk better. I will teach it the basics, but after that it will experiment and, based on feedback, learn how to do it better/faster/smoother.

- Jon
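
A toy illustration of the feedback idea Jon outlines: score each trial gait, keep parameter changes that score better. The gait parameters and scoring stub below are invented for illustration and are not MicroRaptor's design:

```python
# Sketch: hill-climbing a gait parameter vector using a trial-and-error score.
# evaluate_gait() would run the gait on the robot (or in simulation) and return,
# for example, forward distance minus a penalty for IMU tilt; here it is stubbed.
import random

def evaluate_gait(params):
    # Placeholder: pretend the best stride length is 0.6 and the best foot lift is 0.3.
    stride, lift = params
    return -((stride - 0.6) ** 2 + (lift - 0.3) ** 2)

def learn_gait(trials=200, step=0.05):
    best = [0.2, 0.2]                      # initial stride length, foot lift (arbitrary)
    best_score = evaluate_gait(best)
    for _ in range(trials):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = evaluate_gait(candidate)
        if score > best_score:             # keep changes that walked better
            best, best_score = candidate, score
    return best, best_score

print(learn_gait())
```
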
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

Post by Robo1 » Fri Jan 12, 2007 4:53 pm

I agree with JonHylands.

Using an ANN would give a better learning environment than trying to use inverse kinematics; the only trouble is producing the training set. If you can't produce one, then you will have to make it learn from scratch. That would work, but it would take a very long time, as the robot will fall over and thus spend a lot of time on the floor.

My current robot will probably have 12 pressure sensors, 16 distance sensors and 6 three-axis accelerometers just in the legs/feet. This should make a very big ANN, with something in the region of 48 inputs and 14 outputs.

All I have to do now is finish building it and programming it. It's going to be a busy year :cry: :cry: :cry: :cry:

bren
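
To give a feel for the size of network bren describes, here is a minimal forward pass for a 48-input, 14-output net in plain numpy. The hidden-layer width is an arbitrary assumption, and training would still need either a dataset or the trial-and-error approach discussed above:

```python
# Sketch: a tiny feed-forward ANN with 48 sensor inputs and 14 actuator outputs.
# The hidden-layer width (32) is an arbitrary assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, size=(32, 48))   # input -> hidden weights
b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, size=(14, 32))   # hidden -> output weights
b2 = np.zeros(14)

def forward(sensors):
    """sensors: vector of 48 sensor readings (pressure, distance, accelerometer axes)."""
    h = np.tanh(W1 @ sensors + b1)        # hidden activations
    return np.tanh(W2 @ h + b2)           # 14 outputs, e.g. joint position offsets

# Example: one control tick with made-up sensor readings.
sensor_frame = rng.uniform(-1, 1, size=48)
commands = forward(sensor_frame)
print(commands.shape)  # (14,)
```
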
Robo1
Savvy Roboteer
Posts: 501
Joined: Fri Jun 30, 2006 1:00 am
Location: UK - Bristol

Post by DerekZahn » Fri Jan 12, 2007 5:20 pm

It's cool that there are a few people here thinking about these kinds of issues and working toward getting them built. I'm a little ahead of you guys in terms of construction, but you're ahead of me in terms of having some definite ideas about how to do the software. I think I'll end up spending the first couple of months just working on low level concepts and getting a feel for controlling my robot.

I imagine that some of the Japanese guys have made some progress at working beyond the "keyframe animation" type of control... I'd be curious to understand what they've accomplished.
DerekZahn
Savvy Roboteer
Posts: 141
Joined: Wed Mar 16, 2005 1:00 am
Location: Boulder CO, USA

Post by JonHylands » Fri Jan 12, 2007 6:11 pm

I actually have a big chunk of the software written, at least in terms of getting started. I've got all the code for communicating with devices on the Bioloid bus. I've also got code set up to handle the mechanics of sequence capture and playback. That will get me to the point where (with a little sequence optimization) MicroRaptor should be able to walk, albeit not very nicely.

After that, I will start working on the learning feedback/reinforcement system, which actually shouldn't be that hard. I'm using the IMU as the main success/failure sensor for walking, with inputs from a couple other places. Once I get the walking to a decent state, I will introduce a sense of balance, which will involve reacting to the IMU and force feedback from the leg actuators with appropriate actions to try and maintain balance. That will be another feedback/reinforcement system.

I think the key to building dynamic systems like this is getting the right sensors, figuring out how to classify the sensor input you have, and responding to that input in a reasonable way.

- Jon
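
For reference, the Bioloid bus devices Jon mentions speak the Dynamixel instruction-packet format. A minimal sketch of framing such a packet; the register address follows the published AX-12 control table, but verify the details against the Robotis documentation before relying on them:

```python
# Sketch: frame a Dynamixel-style instruction packet for an AX-12 on the Bioloid bus.
# Format: 0xFF 0xFF ID LENGTH INSTRUCTION PARAMS... CHECKSUM
# where CHECKSUM = ~(ID + LENGTH + INSTRUCTION + sum(PARAMS)) & 0xFF.
WRITE_DATA = 0x03
GOAL_POSITION_L = 0x1E   # control-table address of the goal position (low byte)

def make_packet(servo_id, instruction, params):
    length = len(params) + 2
    body = [servo_id, length, instruction] + list(params)
    checksum = (~sum(body)) & 0xFF
    return bytes([0xFF, 0xFF] + body + [checksum])

def goal_position_packet(servo_id, position):
    """position: 0..1023, spanning the AX-12's roughly 300 degree range."""
    lo, hi = position & 0xFF, (position >> 8) & 0xFF
    return make_packet(servo_id, WRITE_DATA, [GOAL_POSITION_L, lo, hi])

# Example: tell servo 3 to move to mid-range (512).
pkt = goal_position_packet(3, 512)
print(pkt.hex())
```
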
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

Post by savuporo » Fri Jan 12, 2007 7:54 pm

I actually didn't mention inverse kinematics in the context of nice walking gaits. I mentioned it in the context of the robot actually being able to do something useful.

Consider this task for a humanoid bot: walk to (removed by spam filter), find the fridge, open it, fetch a can of beer, close it, and return.

This involves walking in indoor environments, possibly up and down stairs. Walking has been done. Good walking stability involves good sensory input and balancing, like you guys described.

Finding the way and locating the fridge involves mapping and localization algorithms, currently usually employed on two-wheeled rovers; these are quite advanced already.

Now, getting close to the fridge and locating the beer involves adequate vision (it's required for obstacle avoidance when walking as well). Again, vision algorithms have been done and are quite advanced; you just have to have a good arsenal of them at hand for a given task.

But to actually open the door and fetch the beer, you pretty much need inverse kinematics. First you have to map the positions of the items you are going to grab, do the path planning, and then move according to the inverse kinematics solutions.

Note that there is a need for generalized "knowledge" retrieval as well, for storing localization data and what objects like "fridge" and "can of beer" look like.

Integrating all of this in a Bioloid requires a huge amount of patience though :)
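
The reach-and-grasp step savuporo describes is where an IK solver earns its keep. The classic closed-form solution for a two-link planar arm gives the flavor; the link lengths below are placeholder values, and a real Bioloid arm would need a full 3D treatment:

```python
# Sketch: closed-form inverse kinematics for a 2-link planar arm.
# Given a target (x, y) for the wrist, solve for the shoulder and elbow angles.
import math

L1, L2 = 0.068, 0.068   # assumed link lengths in metres (placeholder values)

def ik_2link(x, y):
    d2 = x * x + y * y
    # Law of cosines for the elbow angle.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# Example: reach for a point 10 cm out and 5 cm up.
print([math.degrees(a) for a in ik_2link(0.10, 0.05)])
```
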
savuporo
Savvy Roboteer
Posts: 26
Joined: Sun Dec 17, 2006 1:00 am

Post by JonHylands » Fri Jan 12, 2007 8:48 pm

I still don't agree that you need IK for what you are talking about. We clearly don't solve differential equations in our heads to move our hand towards the fridge door.

The reason we can do it without IK is because we use a combination of muscle training/memory with subtle visually-directed modifications to get our hand to the target, in this case the fridge door.

We don't need to know how far away we are from the fridge, only that we are "within reach". We don't take out a ruler and measure how far we are before deciding that we are within reach. Because we have reached out hundreds of thousands of times to grasp objects, we "know" how far we can reach, and we estimate the distance to the fridge door using a combination of focal divergence and size estimation.

To think about it in robotic terms, the robot needs to position itself "close enough" to the goal that it can reach it. It doesn't need to know how far away it is exactly. It will then start playing back a "grasp for an object" sequence of servo positions, while tracking its hand with its camera as it moves towards the target. As the hand gets closer to the target, it will start applying corrections to the sequence based on visual feedback.

I suppose from one perspective, you could say these subtle corrections are a form of IK, although they will be implemented using sequence overlays. The robot senses that it needs to nudge its hand in a direction, and thus it does a lookup of where it did that in the past successfully, and overlays that sequence on the one that is currently operating.

These modifier sequences will be stored as deltas rather than absolute positions, so they can be overlaid on top of the main sequence. The robot will have to have a lot of these sequences stored in memory, which is one of the reasons I am implementing and will be running MicroRaptor from my PC, with high-speed, low-latency interactions with the robot hardware from the PC. My laptop has a couple gigs of memory, and a 100 gig hard drive. You can fit a lot of sequences in that much space...

This approach should work in any number of areas, including grasping, maintaining balance, visual servoing, and probably a bunch of things I haven't thought about.

- Jon
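
A very small sketch of the overlay idea: store a correction as per-servo deltas and add it frame-by-frame on top of the base sequence. The frame layout (a dict of servo id to goal position per time step) is an assumption for illustration, not MicroRaptor's actual representation:

```python
# Sketch: overlay a delta sequence ("nudge hand left") on a base grasp sequence.
# Each frame maps servo id -> goal position (0..1023 on an AX-12).
def overlay(base_sequence, delta_sequence):
    """Return a new sequence with the deltas added frame-by-frame."""
    combined = []
    for i, frame in enumerate(base_sequence):
        delta = delta_sequence[i] if i < len(delta_sequence) else {}
        new_frame = {
            servo: max(0, min(1023, pos + delta.get(servo, 0)))
            for servo, pos in frame.items()
        }
        combined.append(new_frame)
    return combined

# Example: a 3-frame reach with a small corrective nudge on servos 7 and 8.
base = [{7: 500, 8: 520}, {7: 520, 8: 540}, {7: 540, 8: 560}]
nudge_left = [{7: +4}, {7: +8, 8: -4}, {7: +12, 8: -8}]   # stored as deltas
print(overlay(base, nudge_left))
```
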
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

Post by smile_nik » Fri Jan 12, 2007 11:19 pm

OK,
Too much theory and too little practice :) How about sharing some useful experience with the beginners? :)
For example – what programming language do you intend to use in order to help the Bioloid learn?
How do you plan to connect the different algorithms and the robot?

From the last few posts I get 3 things:
1. We will need a wireless connection to the robot, in order to transfer the data from the robot's sensors and, after interpreting that sensor data, send commands back to the servos that control the robot's movement.
2. We will need some very complicated algorithm to analyze the sensor input from the robot and to create a proper response based on the new sensor data and the input from the database of previously learned things.
3. We will need some way to synchronize the input and output communication between point 2 and the robot.
Now I see there is very nice progress on implementing point 1, a bunch of theoretical ideas about point 2, but no solutions for implementing points 2 and 3.
smile_nik
Robot Builder
Posts: 13
Joined: Sun Jan 07, 2007 1:00 am

Post by JonHylands » Fri Jan 12, 2007 11:30 pm

Well, as you said, it's all theory at this point, but I should have a good idea by this summer how it's working, and of course I'll be posting progress reports here and on my blog.

I will be writing all the "brain" code in Smalltalk, specifically Squeak, which you can download at http://www.squeak.org.

The algorithms are connected to the robot using a system I call "Motion Control" for servo control and feedback, and input from the sensors (vision, IMU, pressure, servo force) will feed through a mechanism I call Sensor Queries, which I talk a bit about on my MicroRaptor sensor page.

http://www.bioloid.info/tiki/tiki-index.php?page=MicroRaptor
http://www.bioloid.info/tiki/tiki-index.php?page=MicroRaptor+Sensing

- Jon
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

Post by limor » Sat Jan 13, 2007 4:49 am

Hi Jon,
I'm 99% in agreement with you on how to go about making a humanoid develop more animal-like motions, and from there gradual intelligence.
(The 1% difference relates to the IK... I think that IK has its merits, in that the robot should have a "long-term" motion plan that is resolved by the IK. This plan can be modified many times per second, but a plan should be there to smooth out the motion. The ANN helps to select a path at regular time-frames. But this is all speculation; let's see how some cool alternative control software evolves over the next few months.)

However, I have a question relating to the Wi-Fi remote closed-loop control:

Wi-Fi has variable latency of 0.5-100ms depending on other wifi traffic, wall reflections and cosmic radiation. If you set up a reliable TCP link with the PC, latency will be on the higher end of this range (and we assume here that Windows O/S will call your TCP/IP handler with relatively low latency).


(software <->PC) <-> [.5..100ms] <-> (Wifi -> buffer) <-> AX12

Buffering at the WiFi modules is probably necessary to ensure that packets are reliably transmitted back and forth. If you want to get information from 18 servos, one at a time, this adds another considerable latency.

Have you thought about how a feedback control loop can be achieved in such a high-latency communication environment (a 5 Hz control cycle?)?

What about the gumstix doing some number crunching at a 120 Hz cycle and the PC doing 'high-level stuff' at 5 Hz (a multirate controller)?
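
limor's latency concern can be made concrete with a quick budget: reading 18 servos one at a time over the 1 Mbps bus, plus one Wi-Fi round trip, bounds the achievable control rate. A back-of-the-envelope sketch in which the packet sizes and per-servo turnaround are rough assumptions, not measurements:

```python
# Sketch: rough control-cycle budget for PC-over-Wi-Fi control of 18 AX-12s.
# All overhead figures are rough assumptions, not measurements.
BUS_BITRATE = 1_000_000          # AX-12 bus, bits per second
BITS_PER_BYTE = 10               # 8 data bits plus start/stop framing
READ_CMD_BYTES = 8               # READ_DATA instruction packet
STATUS_BYTES = 8                 # status packet with 2 bytes of payload
SERVO_TURNAROUND_S = 0.0002      # assumed per-servo response delay
NUM_SERVOS = 18

def cycle_time(wifi_round_trip_s):
    per_servo = (READ_CMD_BYTES + STATUS_BYTES) * BITS_PER_BYTE / BUS_BITRATE
    bus_time = NUM_SERVOS * (per_servo + SERVO_TURNAROUND_S)
    return bus_time + wifi_round_trip_s

for rtt_ms in (1, 5, 20, 100):
    t = cycle_time(rtt_ms / 1000.0)
    print(f"Wi-Fi RTT {rtt_ms:3d} ms -> max control rate ~{1.0 / t:5.1f} Hz")
```
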
limor
Savvy Roboteer
Posts: 1845
Joined: Mon Oct 11, 2004 1:00 am
Location: London, UK

Post by JonHylands » Sat Jan 13, 2007 5:23 am

limor,

In theory, Wi-Fi can have a latency of 100 ms. In practice, that almost never happens. I typically get 1 ms when I ping my router, occasionally 2 ms.

There isn't going to be any other traffic, so I don't really think it will be an issue. If it ends up being an issue, then I'll have to think about doing something else.

What you envision (with the gumstix) isn't really doable with my control structure. Motion derives from and is constantly being updated by information in the robot's brain, so offloading motion control to the gumstix just isn't going to work. I was hoping to be able to offload force-driven control to the ATmega128, but after playing with the AX-12's, I don't think force driven control is possible (because the force sensor is affected by moving the servo, you can't use it in a feedback loop to measure outside force on the actuator).

- Jon
JonHylands
Savvy Roboteer
Posts: 512
Joined: Thu Nov 09, 2006 1:00 am
Location: Ontario, Canada

