by JavaRN » Fri May 25, 2007 12:57 pm
I bought the camera module from eBay at a very reasonable price. It was a wireless camera with a USB receiver, which lets Windows recognise the device as a WDM capture device so that it can interface with the Java Media Framework (JMF). I tried other models of wireless/wired cameras that did not have a USB receiver (they worked through a DVB card), but they were not JMF compatible.
Anyway, through JMF I could capture a frame from the video stream and then do some simple image manipulation: first brightening the image (adding an offset to the RGB values of every pixel) to remove shadows, which would otherwise make the recognition more difficult, and then extracting the "red" pixels out of the image while treating all other pixels as "white". In this way the image of the box was singled out.
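In rough terms, the grab-and-filter step looks something like the sketch below. The capture locator (vfw://0 is typical for JMF on Windows, but it depends on the driver), the brightness offset and the redness thresholds here are only indicative values, not the exact ones from my setup:

[code]
import javax.media.Buffer;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;
import javax.media.control.FrameGrabbingControl;
import javax.media.format.VideoFormat;
import javax.media.util.BufferToImage;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class BoxVision {

    // Open the WDM capture device through JMF; the locator is indicative.
    static Player openCamera() throws Exception {
        Player p = Manager.createRealizedPlayer(new MediaLocator("vfw://0"));
        p.start();
        return p;
    }

    // Grab one frame from the running player and copy it into a BufferedImage.
    static BufferedImage grabFrame(Player player) {
        FrameGrabbingControl fgc = (FrameGrabbingControl)
                player.getControl("javax.media.control.FrameGrabbingControl");
        Buffer buf = fgc.grabFrame();
        Image img = new BufferToImage((VideoFormat) buf.getFormat()).createImage(buf);
        BufferedImage frame = new BufferedImage(
                img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
        frame.getGraphics().drawImage(img, 0, 0, null);
        return frame;
    }

    // Brighten every pixel by a fixed offset to flatten shadows, then keep
    // only "red" pixels; everything else becomes white. Thresholds are guesses.
    static BufferedImage isolateRed(BufferedImage src, int offset) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = Math.min(255, ((rgb >> 16) & 0xFF) + offset);
                int g = Math.min(255, ((rgb >> 8) & 0xFF) + offset);
                int b = Math.min(255, (rgb & 0xFF) + offset);
                boolean red = r > 150 && r > g + 60 && r > b + 60;
                out.setRGB(x, y, red ? 0xFF0000 : 0xFFFFFF);
            }
        }
        return out;
    }
}
[/code]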
From the position of the box in the image one can determine whether the box is far away, near, or in a position to be collected. All this processing is done on the laptop.
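The decision itself can be as simple as looking at where the red pixels sit in the frame. Something along these lines, where the move codes and the band boundaries are only indicative:

[code]
import java.awt.image.BufferedImage;

public class MoveDecider {
    // Indicative move codes; the actual integers sent to the robot may differ.
    static final int CMD_FORWARD = 1, CMD_APPROACH = 2,
                     CMD_PICKUP = 3, CMD_SEARCH = 4;

    // Classify the box from the vertical centroid of the red pixels in the
    // processed frame. A box high in the frame is far away; one near the
    // bottom is close enough to pick up. Band boundaries are guesses.
    static int decideMove(BufferedImage mask) {
        long sumY = 0, count = 0;
        for (int y = 0; y < mask.getHeight(); y++)
            for (int x = 0; x < mask.getWidth(); x++)
                if ((mask.getRGB(x, y) & 0xFFFFFF) == 0xFF0000) {
                    sumY += y;
                    count++;
                }
        if (count == 0) return CMD_SEARCH;        // no box in view
        double cy = (double) sumY / count / mask.getHeight();
        if (cy < 0.4)  return CMD_FORWARD;        // high in frame: far away
        if (cy < 0.75) return CMD_APPROACH;       // lower: near
        return CMD_PICKUP;                        // at the bottom: collect it
    }
}
[/code]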
Through the Bluetooth module the robot receives the moves to make: the laptop sends an integer number, which the robot receives and then performs the corresponding move using RoboBASIC. When the move is complete, the robot replies with a ready signal (the integer 0) and the sensor readings (for now the sonar sensor and the battery status). The sensor readings were not used in this experiment, but I intend to use them in the next one.
When the robot performs the "pickup" instruction, the laptop terminates the communication with the robot.
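On the laptop side the exchange is just bytes back and forth over the serial streams of the Bluetooth link. A sketch of the loop, where the stream setup (the Bluetooth COM port would be opened with a serial library) and the one-byte framing are indicative, and only the ready signal 0 and the two sensor readings are from my protocol:

[code]
import java.io.InputStream;
import java.io.OutputStream;

public class RobotLink {
    static final int READY = 0;   // ready signal sent back by the robot

    private final InputStream in;
    private final OutputStream out;

    // Streams from the Bluetooth serial port, opened elsewhere.
    public RobotLink(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    // Send one move command, then block until the robot reports that it
    // has finished the move and returns its sensor readings.
    public int[] sendMove(int command) throws Exception {
        out.write(command);
        out.flush();
        int reply = in.read();
        if (reply != READY)
            throw new IllegalStateException("unexpected reply: " + reply);
        int sonar = in.read();    // sonar reading (unused in this experiment)
        int battery = in.read();  // battery status
        return new int[] { sonar, battery };
    }

    // After the pickup command the laptop closes the link.
    public void close() throws Exception {
        in.close();
        out.close();
    }
}
[/code]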
Hope this explains better.
Charles.
For this hobby you need two things - MONEY and TIME. Two things that are very scarce, and of which I really have very little!