by PaulL » Sat Jan 29, 2011 10:09 pm
Thanks!

I've seen ring-type rotary tables, but those are very pricey and much larger; I can't recall anything miniaturized. In miniaturizing something like the neck mechanism, the problem is that I want him to still be able to do headstands without breaking anything. I don't want to add three full-sized servos for motion, that would be way too much, so it has to be micros. Since micro servos don't have a pile of torque, the mechanism needs to give way just before the servo gears break or the holding torque is exceeded, which I will try to accomplish in the linkages to the mechanism.
What I have in mind is a bearing at the base, mounted to the top plate of the chassis, with a universal joint whose bottom half is fixed in the bearing and whose top half is fixed to the head - this gives the head its rotation axis. For tilt, a swashplate would be attached to the top half of the universal joint, allowing tilt forward/back/left/right. I've considered running wires out the bottom/back of his head, since his head will rotate 180 degrees and tilt something like 30 degrees in any direction. There isn't much room to work in, though.
Regarding kinematics, it's a very systemic approach to motion, and from a higher level, yes, as you say, it would be simpler to set an intended end point in 3D space than to set multiple servo positions. I think whether or not to use kinematics boils down to how you want the bot to function in the end, and what your focus in robotics is. I've read enough about inverse kinematics to know it's not something I personally want to focus on.
From a higher level, I want to provide a simpler interface, one that is more explicit than implicit about desired movements. I think inverse kinematics would be more appropriate for more analog types of interaction - the only analogy I can think of at the moment is dance: ballet versus disco.
With inverse kinematics, more than one solution can be realized (such as a multi-axis arm reaching a point in 3D space), but I'm really only looking for the end result.
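To make the "more than one solution" point concrete, here's a quick sketch for a hypothetical two-link planar arm (link lengths and the function name are mine, not anything from my bot). Even in 2D, the same fingertip position can be reached elbow-up or elbow-down:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return both (shoulder, elbow) joint-angle solutions, in radians,
    for a 2-link planar arm reaching point (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle's cosine.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    solutions = []
    for sign in (+1.0, -1.0):  # elbow-up vs. elbow-down
        elbow = sign * math.acos(c2)
        shoulder = math.atan2(y, x) - math.atan2(
            l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
        solutions.append((shoulder, elbow))
    return solutions
```

Both answers put the fingertip in exactly the same place; an IK solver has to pick one, which is part of the machinery I'd rather not carry when all I care about is the end result.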
To explain a bit further, I want to experiment with AI, and let him "figure out" what works and what doesn't. For example: "If you reach out in front of you with both hands, and lean forward, and fall on your face, then you shouldn't do that again." I will give him the ability to adjust his motions and realize the results, much as we humans learned to walk - by falling over, many times.

But I'll start him off with what I know works, and let him improve on that. Call it "hand-holding", if you will.
I checked out that camera; it's bigger than I'm looking for - I can tell from the Lego pieces (I know Legos pretty well). I need something that fits inside an RN-1's head. He will get a clear visor to see through. It's a tight fit in there. I know cameras exist that are small enough, but finding one with an equally small interface board has been tough. Perhaps the best solution is to just buy a few internal laptop cameras and USB webcams and experiment with extending the camera wiring to the interface board.
I still have no clue at this point what his runtime will be like, even with the 2200 mAh LiPo. Probably short, but I hope not too short. If it's unacceptably short, that will probably push me to further pursue the DC-to-DC converter power management aspect. I will almost certainly use DC-to-DC converters for dropping from the LiPo's 7.4 V to 6 V; the questions are how many I will need and whether or not I will set them up to be digitally adjustable (optocoupler, digital potentiometer, etc.).
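For a rough feel of the runtime question, here's the back-of-envelope arithmetic I'd start from. Every number in it is an assumption (the average current draw especially - I haven't measured it), and the efficiency and usable-capacity deratings are just typical ballpark figures for a buck converter and safe LiPo discharge:

```python
def runtime_minutes(capacity_mah, avg_current_ma,
                    converter_eff=0.85, usable_fraction=0.8):
    """Rough runtime estimate: usable pack capacity divided by average
    draw, derated for DC-to-DC converter efficiency and for not
    discharging the LiPo all the way down. All inputs are guesses."""
    usable_mah = capacity_mah * usable_fraction * converter_eff
    return usable_mah * 60.0 / avg_current_ma

# e.g. the 2200 mAh pack at a guessed 1.5 A average draw:
# runtime_minutes(2200, 1500) -> roughly an hour
```

Of course, a humanoid's draw is anything but constant - standing still and walking are very different loads - so this only bounds the answer.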
More info...
In my servo class, I haven't incorporated position feedback as an "always available" function. At first I thought reading servo position would be useful, but I've since come to consider it unnecessary. You can't measure force using position feedback; all position feedback can really tell you is where a "limp" servo is sitting. However, if you measure power consumption, you CAN deduce force. Reading position, IMHO, is only truly useful at power-up, or in a "capture" mode for creating new poses. Viewing servos as their human equivalents, they are just muscles.
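A minimal sketch of the split I'm describing - position reads confined to power-up and pose capture, load inferred from current. This is not my actual class; the names, the callback style, and the stall-current figure are all placeholders for whatever the controller hardware really provides:

```python
class Servo:
    """Sketch: position feedback only at startup and in pose-capture
    mode; force deduced from measured current draw instead."""

    STALL_CURRENT_MA = 900.0  # assumed stall current for a micro servo

    def __init__(self, read_position, read_current_ma):
        # Callbacks standing in for the real hardware interface.
        self._read_position = read_position
        self._read_current = read_current_ma
        self.target = None

    def sync_on_powerup(self):
        # One-time position read so the first commanded move doesn't
        # slam the servo from an unknown starting pose.
        self.target = self._read_position()
        return self.target

    def capture_pose(self):
        # "Limp" capture mode: read where the joint has been posed by hand.
        return self._read_position()

    def estimated_load(self):
        # Position can't tell you force, but current draw scales with
        # torque output; normalize against the assumed stall current.
        return min(self._read_current() / self.STALL_CURRENT_MA, 1.0)
```

Everything else in the class would command positions open-loop, the way a servo is meant to be driven.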
SOMETHING FOR EVERYONE:
Regarding inverse kinematics, let's do a practical experiment. Ready? With your hands starting at the keyboard, raise your hands a bit, and touch the index finger, middle finger, and thumb together on one hand, and touch the tips of these 3 fingers to the palm of the other. Repeat. Note the position of your elbow. Now, do the same, but don't just touch this time, push really hard, lots of force. Where is your elbow now? Why wasn't it in this position with just a touch? Desired result is the difference - a touch versus a push. Same end motion, but different motion in the middle. Try varying levels of pressure, note how your elbow's position changes with intended force. Think about turning a screwdriver or a thumbscrew that is being stubborn - does your elbow change position? Why does your elbow's position change? How did you learn to do this?
OK, one more practical test, more about sensors and motors. Fold your arms, then unfold your arms and reach out with one finger and press the Y key on your keyboard. Simple enough. Do it a few more times, and think about where your muscles position your arms throughout the movement. Now, close your eyes and repeat. With my eyes closed, I keep hitting "U" for some reason.

Our muscles don't give us feedback, they just act. We humans can only vaguely get close on muscle control alone. Now, try with your eyes open to do it faster. Faster. Faster! Me, I move quickly out, then pause before I press the key, when I'm sure I'm at the key. Sensors.
We use our eyes for everything, in this case, for motion feedback. We have crappy muscle control. But, our robots are quite good at positioning via muscle control. If we were as good as our robots at moving muscles, we could do a number of things without having to observe ourselves to be successful. With that, our robots don't NEED to sense their motion, or their balance in order to, for example, walk. Our own responses are very reactionary, depending on senses to fine tune the results.
In my project, I seek to take advantage of the fact that my robot doesn't need a pile of sensory feedback or kinematics algorithms to move in a way that produces the desired result. But then, I wouldn't discourage anyone else from heading down that path if they so choose!
