by PaulL » Mon Oct 15, 2012 12:35 am
Wow, someone actually reads this thread!
I've been keeping this thread up mostly out of personal commitment, but it's nice to know someone is reading...
I've been giving some thought to your post, about kinematics processing and how it fits into the scheme of things.
Compromise, optimization, now we're just playing with words.
What this approach does is move the high-bandwidth, CPU-intensive motion work off the Roboard (it keeps the Roboard from handling interpolation), but it doesn't take away the ability to stream motions for inverse kinematics, sensor-feedback-controlled motions, and so on.
It also offers some benefit in turning something like an STM32 into a PWM interface. Then, any serial / USB / etc. host can plug right in (I'm thinking Atom Z530). As there is no real "standard" for servo controllers out there, I figured I'd try a different piece of hardware and method.
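To give a rough idea of what I mean by "PWM interface", here's a minimal Wiring-style sketch for the Maple Mini. The packet format (sync byte, channel, two-byte pulse width), pin assignments, baud rate, and port name are all made up for illustration - this isn't the protocol I've actually settled on:

[code]
// Minimal servo bridge: host streams 4-byte commands over a UART,
// the STM32 just sets the requested pulse width on the requested channel.
// Packet: 0xFF sync, channel, pulse width high byte, pulse width low byte.
#include <Servo.h>

const int NUM_CH = 8;
const int servoPins[NUM_CH] = {3, 4, 5, 6, 7, 8, 9, 10};  // placeholder pins
Servo servos[NUM_CH];

void setup() {
    Serial1.begin(115200);                                 // whichever port the host is on
    for (int i = 0; i < NUM_CH; i++) servos[i].attach(servoPins[i]);
}

void loop() {
    if (Serial1.available() >= 4 && Serial1.read() == 0xFF) {  // naive framing, resyncs on 0xFF
        int ch = Serial1.read();
        int hi = Serial1.read();
        int lo = Serial1.read();
        uint16_t us = ((uint16_t)hi << 8) | (uint16_t)lo;       // pulse width in microseconds
        if (ch >= 0 && ch < NUM_CH) servos[ch].writeMicroseconds(us);
    }
}
[/code]

The host side is just as simple: open the port and write four bytes per channel per update.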
If the output of inverse kinematics is a point in 3D space, this approach can accommodate that. Regarding kinematics, how fine a resolution do you actually need in a given case? Depending on the precision required, this approach could "fill in the gaps" between points, which isn't necessarily a bad thing.
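By "fill in the gaps" I mean nothing fancier than linear interpolation on the controller between the last streamed keyframe and the next. A sketch of the idea (the function name and the assumption of a fixed update period are mine, just for illustration):

[code]
// Linearly interpolate between two streamed keyframes (pulse widths in microseconds).
// Called once per servo update period with the time elapsed since the last keyframe.
uint16_t interpolate(uint16_t fromUs, uint16_t toUs,
                     uint32_t elapsedMs, uint32_t durationMs) {
    if (elapsedMs >= durationMs) return toUs;            // past the keyframe: hold the target
    int32_t delta = (int32_t)toUs - (int32_t)fromUs;     // signed span, can move either direction
    return (uint16_t)((int32_t)fromUs + delta * (int32_t)elapsedMs / (int32_t)durationMs);
}
[/code]

The host only has to send keyframes at whatever rate the kinematics actually needs; the controller keeps the servos updated every frame in between.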
I think active inverse kinematics is overkill for taking a stroll around the house on a "large footed" bot, but that's just my opinion. Still, a sensor input saying "you're about to lose your balance" can interrupt and offset those rudimentary motions - that's not outside the capabilities of the STM32.
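That kind of reflex can be as simple as adding a correction term on top of the canned motion before it goes to the servo. A sketch, with a made-up gain and clamp limits (real values would come from tuning on the bot):

[code]
// Add a balance correction on top of the planned (interpolated) servo position.
// gyroRateDps is body pitch rate from the gyro in degrees/second; KP is a made-up gain.
const float KP = 2.0f;  // microseconds of servo offset per deg/s of pitch rate

uint16_t applyBalanceOffset(uint16_t plannedUs, float gyroRateDps) {
    int32_t corrected = (int32_t)plannedUs + (int32_t)(KP * gyroRateDps);
    if (corrected < 1000) corrected = 1000;   // clamp to sane servo pulse limits
    if (corrected > 2000) corrected = 2000;
    return (uint16_t)corrected;
}
[/code]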
What we're really talking about is CPU load and the priority of complex functions. If the intent is to experiment in the realm of kinematics, sure, you can do that on the RB-100 or similar, and use the onboard PWM right on top of the Vortex86DX in whatever language makes you happy. If TTS and voice rec are needed, that's a good chunk of CPU (especially on the Roboard), and it doesn't quite leave enough room for motion without some sacrifice (e.g., TTS / voice rec when standing still, off while moving).
This offboard control gives me options, and it doesn't really take any away - I think that's a good thing. If I want to fiddle with kinematics and swamp the CPU, I can, and when I want to walk and talk, I can. That choice is what I think is key here. The heavy hitters are vision, kinematics, TTS / voice rec, and AI. Today, it's simply not possible to do all of them concurrently on the same small bot.
In people, there is a kind of "mode switch" regarding focus. If we're performing a delicate or unique motion, we focus, and sometimes we don't even talk while we're concentrating. That's possible with an offboard controller that can accept streamed position updates while the host does the kinematics. At the same time, many motions we make don't require concentration, and the same offboard controller can do the interpolation for those kinds of motions, even using sensor feedback (gyro, accelerometer). There are other situations, such as walking while holding a glass of water: perhaps we can talk, perhaps we can't if the water is very near the rim. This controller approach supports that as well (interpolations plus discrete position updates).
I agree regarding the complexity of kinematics / inverse kinematics. For the same reason GPUs exist, the same will logically one day be true for kinematics (though practically, will it be of benefit to semiconductor manufacturers? Maybe as an unintended application of gate arrays and the like).
I agree about the STM32 - it's a very slick little processor. The 32U4 isn't bad, but the STM32 is a good bit better. I don't need all that GPIO; what I really want is a pretty specialized board. For now, I'll just use the Maple Mini as-is. I'm waiting on SparkFun to get them back in stock - I need a couple more.
Paul