Legacy Forum: Preserving Nearly 20 Years of Community History - A Time Capsule of Discussions, Memories, and Shared Experiences.

Can we gather around to discuss camera vision?

Anything that doesn't fit our other forums goes here.
5 posts • Page 1 of 1

Can we gather around to discuss camera vision?

Post by sharifk » Fri Aug 17, 2007 4:09 am

Hello all,

This might've been asked numerous times, but I have the question in a different format, if you don't mind.

Say I have two bipeds: one is the Robonova-1, and the other is a custom build from Lynxmotion parts. That means two controllers: the MR-C3024 on one, and the MINI-ABB stacked with the SSC-32 on the other.

I've been trying to figure out ways to give my robot vision for many applications, and I'm not sure what I want to do or how to do it exactly. I'm very much into the latest CMUCAM, as it can recognize faces; however, I'm not sure exactly how it works or how it sends data. Could someone please tell me whether the CMUCAM comes with a processing board or not? If so, how would one bus it to the MR-C3024 or MINI-ABB for some basic intelligence?

I've also checked out the POB-EYE, which looks very promising; the package comes with a servo controller, the camera, the LCD, and the motherboard. However, the camera itself looks quite large and unattractive. Nevertheless, choosing the POB-EYE means there's no reason to ask how to implement the CMUCAM with the two controllers I have.

Just wondering about the possibilities. Feel free to throw out ideas or shut me down if you wish :lol:
sharifk
Robot Builder
Posts: 7
Joined: Fri Aug 17, 2007 3:55 am

Post by NovaOne » Mon Aug 20, 2007 8:59 pm

I don't know much about CMUCAMs, but they seem to have TTL serial ports, so I think you could connect one to the RN via the ETX and ERX connections?
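For reference, CMUcam-family boards report tracking results as ASCII packets over that TTL serial link. Here is a minimal sketch of parsing one; the packet layout follows the CMUcam1-style middle-mass convention ("M mx my x1 y1 x2 y2 pixels confidence"), so treat the exact field order as an assumption to check against your camera's manual:

```python
def parse_m_packet(line: str):
    """Parse a CMUcam1-style middle-mass packet:
    'M mx my x1 y1 x2 y2 pixels confidence'.
    Returns None for anything that isn't an M packet."""
    fields = line.strip().split()
    if len(fields) < 9 or fields[0] != "M":
        return None
    mx, my, x1, y1, x2, y2, pixels, conf = map(int, fields[1:9])
    return {"centroid": (mx, my),        # middle mass of the tracked blob
            "bbox": (x1, y1, x2, y2),    # bounding box corners
            "pixels": pixels,            # pixels in the tracked region
            "confidence": conf}          # tracking confidence

packet = parse_m_packet("M 45 60 30 50 60 70 120 200")
# packet["centroid"] == (45, 60)
```

On a small controller you would do the same split-and-convert on the bytes arriving at the serial pins rather than on a Python string.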

I don't know about the Mini-Atom Bot Board; it must have a serial port?

Chris
NovaOne
Savvy Roboteer
Posts: 405
Joined: Thu Jul 05, 2007 7:30 am

CMUCAM3

Post by MadDogJoe » Tue Aug 21, 2007 6:21 pm

I've interfaced one of these to my RN1; here are a couple of ideas.

1. Yes, the CMU board carries the intelligence to do most basic types of image recognition.

2. After I evaluated the available interface mechanisms (serial communications, digital signals, etc.), I decided that anything I could give the MR-C3024 in terms of image-processing data would be too much for RoboBASIC to use. I then decided that the most important information I needed was: did you find what we are looking for, and where is it?
So, using some simple communication FROM the MR-C3024 TO the CMUcam3 board, I tell the cam what to look for; it feeds back a single digital signal saying that it found the target and then mirrors where the head is looking. There are four servo PWMs: two are connected to the two-servo pan and tilt that I made out of carbon fiber, and two are connected to the MR-C3024 (read with the RCIN command in RoboBASIC) to let me know where the camera is looking when it finds the target.
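The "found flag plus mirrored pan/tilt PWM" scheme boils down to a small decoding step on the controller side. A Python sketch of that logic (on the real MR-C3024 these reads would be RCIN calls in RoboBASIC; the 1000-2000 µs pulse range and the 1500 µs center are assumed hobby-servo conventions, not measured values from MadDogJoe's bot):

```python
def camera_state(found_pin_high: bool, pan_pulse_us: int, tilt_pulse_us: int):
    """Interpret the CMUcam's mirrored signals.

    found_pin_high -- the single digital 'target found' line
    pan_pulse_us, tilt_pulse_us -- mirrored servo pulse widths (1000-2000 us)
    Returns None if nothing is found, else (pan_deg, tilt_deg) as offsets
    from center, assuming 1500 us means the head is centered.
    """
    if not found_pin_high:
        return None  # no target: the PWM values are meaningless
    # Map 1000..2000 us linearly onto -90..+90 degrees.
    to_deg = lambda us: (us - 1500) * 90.0 / 500.0
    return (to_deg(pan_pulse_us), to_deg(tilt_pulse_us))
```

The point of the design is that the controller never sees image data at all, only two pulse widths and one digital line, which is well within what RoboBASIC can poll each loop.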

This of course is just one way to do things, but I can zero in on a target, align the body to it, move toward it until the bot is within reach, and then touch it. I developed this mostly for two operations: automatic recharging and find/retrieve.

I'll have some video and pictures as soon as my HP laptop returns from the repair depot (THE MAGIC SMOKE CAME OUT! LOTS OF IT!); the videos and pics are on it.

MadDog Joe
MadDogJoe
Robot Builder
Posts: 13
Joined: Tue Aug 14, 2007 11:47 pm

Post by NovaOne » Tue Aug 21, 2007 7:11 pm

Very interesting... so please tell me if I understand this:

You're feeding the pan and tilt PWM to the two servos and also to the RN?

The pan PWM is read into RoboBASIC; do you then turn the RN through the required angle, calculated from the PWM information?

How do you use the tilt PWM information?

If the segment of RoboBASIC code is small, could you post it?


Chris
NovaOne
Savvy Roboteer
Posts: 405
Joined: Thu Jul 05, 2007 7:30 am

Post by MadDogJoe » Wed Aug 22, 2007 4:27 pm

Chris,

Of course it is virtually impossible to keep the head aligned perfectly in the pan direction (even when walking very slowly), but the head will continue to track pretty faithfully. So you need to create an acceptable bounding box (in the world of electronics it would be called hysteresis) such that, while the object stays inside it, no correction is required. As you move, you adjust to the left or right if the pan falls out of the box. You also have to have the bot search if the object cannot be found in the available pan and tilt range, but that just requires a short spin to the left or right in a logical pattern.
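That dead-band idea can be sketched like this (a pure-Python illustration of the logic only; the center pulse and box width are hypothetical numbers, not values from MadDogJoe's bot):

```python
def pan_correction(pan_pulse_us: int, center_us: int = 1500, box_us: int = 150) -> int:
    """Decide whether the body needs to turn, given the mirrored pan pulse.

    Returns -1 (turn left), 0 (no correction needed), or +1 (turn right),
    applying a hysteresis-style dead band of +/- box_us around center_us.
    """
    if pan_pulse_us < center_us - box_us:
        return -1   # head panned far left: rotate the body left
    if pan_pulse_us > center_us + box_us:
        return 1    # head panned far right: rotate the body right
    return 0        # inside the acceptable box: keep walking straight
```

Widening `box_us` trades tracking precision for fewer body corrections, which matters on a biped where every turn disturbs the walk.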

The tilt information gives me a guesstimate of how far away I am if it is something to pick up (a basic trig function of head angle versus bot height, although I just use a lookup table): walk until the head is looking down far enough.
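The trig behind that guesstimate is just distance from camera height and downward tilt angle; a sketch (the camera height is an example value, and in practice a lookup table like MadDogJoe's avoids doing trig on the controller):

```python
import math

def distance_from_tilt(camera_height_cm: float, tilt_down_deg: float) -> float:
    """Estimate horizontal distance to an object on the floor from the
    camera's height above the ground and how far down the head is tilted:
    distance = height / tan(tilt).  Valid only for 0 < tilt < 90 degrees."""
    return camera_height_cm / math.tan(math.radians(tilt_down_deg))

# Example: a camera 30 cm up, tilted 45 degrees down, sees the floor
# about 30 cm ahead.
d = distance_from_tilt(30.0, 45.0)
```

The estimate gets coarse at shallow tilt angles (small tan values amplify noise), which is one reason to hand off to rangefinders up close.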

For walking to a target that is basically at the same height, the tilt info doesn't yield much, but when I get close I switch to the Sharp / Devantech sensors to get that info.

I'll try to get code and pics / video up in the near future, although a lot of it was on the laptop, which the senior case manager at HP informs me may be a total replacement!

MadDog Joe
MadDogJoe
Robot Builder
Posts: 13
Joined: Tue Aug 14, 2007 11:47 pm
