<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-gb">
<link rel="self" type="application/atom+xml" href="http://forum.robosavvy.com/feed.php?f=4&amp;t=1472" />

<title>RoboSavvy Forum</title>
<subtitle>Robosavvy Forum: The largest online community of Humanoid Robot Builders</subtitle>
<link href="http://forum.robosavvy.com/index.php" />
<updated>2007-07-20T17:10:03+01:00</updated>

<author><name><![CDATA[RoboSavvy Forum]]></name></author>
<id>http://forum.robosavvy.com/feed.php?f=4&amp;t=1472</id>
<entry>
<author><name><![CDATA[DirtyRoboto]]></name></author>
<updated>2007-07-20T17:10:03+01:00</updated>
<published>2007-07-20T17:10:03+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10065#p10065</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10065#p10065"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10065#p10065"><![CDATA[
Most ideas of how a compound eye works are flawed. In my experience, the CNS's image processing blends the compound input into a smooth and detailed construction.<br />All of those demonstrations of compound vision (&quot;this is what a fly sees&quot;) are wrong. Flies see a lower-resolution version of what we see, but we would still be able to use a fly's-eye view to navigate our world.<br /><br />Marcus<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=312">DirtyRoboto</a> — Fri Jul 20, 2007 5:10 pm</p><hr />
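One way to picture the "lower-res version of what we see" claim is block-average downsampling: each ommatidium reports the mean brightness of its patch of the scene. This is a toy sketch, not a biological model; the grid size and scene values are illustrative assumptions.

```python
# Sketch: a compound eye modeled as block-average downsampling.
# The 4x4 tile size (one "ommatidium" per tile) is an illustrative assumption.

def downsample(image, block):
    """Average each block-x-block tile into one coarse sample."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            tile = [image[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            row.append(sum(tile) / len(tile))
        out.append(row)
    return out

# An 8x8 "scene": bright top half (sky), dark bottom half (ground).
scene = [[255] * 8 for _ in range(4)] + [[0] * 8 for _ in range(4)]
coarse = downsample(scene, 4)
# coarse is only 2x2, yet still enough to tell sky from ground -
# which is the point: a fly's-eye view remains navigable.
```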
]]></content>
</entry>
<entry>
<author><name><![CDATA[Humanoido]]></name></author>
<updated>2007-07-20T11:16:44+01:00</updated>
<published>2007-07-20T11:16:44+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10058#p10058</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10058#p10058"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=10058#p10058"><![CDATA[
This is an excellent point regarding vision. In our limited humanoid hobby robotic systems, it's extremely important to scan only the &quot;essential area&quot; using the limited vision resources that are affordable.<br /><br />I don't know about fish vision, but take fly vision as an example. The eye is composed of many elements, each capable of forming an image. This maps well onto the structure of microcontrollers.<br /><br />For example, an input multiplexer such as the 74HC151 would allow multiple lenses and light sensors in a quantity sufficient to form a practical image, and feed the data in for parallel processing at maximum speed. I'm currently building several eye versions with these thoughts in mind. The concept is straightforward, low cost, and uses off-the-shelf components.<br /><br />Incidentally, this technique can &quot;centroid scan&quot; in any direction, or combination of directions, just by changing the byte/word structure. Once a particular algorithm is established, moving the eye, just like a real human eye, becomes important. On the other hand, it's also possible to create virtual movements of the eye without physical movements. Such a technique is used in astronomical CCD imaging, for virtual guidance and tracking of the image.<br /><br />What will be very interesting is creating some peripheral vision using virtual techniques. Peripheral vision could utilize broadband sensors that are cheaper, smaller, and easier to interface to the primary vision centroid. Coupling these simple components and techniques together would make the unit a much more powerful eye.<br /><br />humanoido<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=416">Humanoido</a> — Fri Jul 20, 2007 11:16 am</p><hr />
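A rough software sketch of the &quot;centroid scan&quot; idea: read a small line of light sensors one channel at a time (the way an 8-to-1 multiplexer like the 74HC151 selects inputs), then locate the bright spot as the intensity-weighted mean channel. The sensor values here are simulated stand-ins, not real hardware reads.

```python
# Sketch of a one-dimensional "centroid scan" over 8 photosensors.
# SENSORS simulates ADC readings; read_channel() stands in for
# selecting a mux channel and sampling it.

SENSORS = [5, 8, 20, 120, 200, 90, 15, 6]  # simulated readings, 0-255

def read_channel(ch):
    """Stand-in for selecting mux channel `ch` and reading the ADC."""
    return SENSORS[ch]

def intensity_centroid(n_channels=8):
    """Intensity-weighted mean channel index: where the light is."""
    readings = [read_channel(ch) for ch in range(n_channels)]
    total = sum(readings)
    return sum(ch * v for ch, v in enumerate(readings)) / total

print(intensity_centroid())  # bright spot lands near channel 4
```

Scanning in a different direction, as the post suggests, amounts to reading the channels in a different order, i.e. changing the byte/word structure of the read loop.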
]]></content>
</entry>
<entry>
<author><name><![CDATA[DirtyRoboto]]></name></author>
<updated>2007-06-28T06:32:41+01:00</updated>
<published>2007-06-28T06:32:41+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9671#p9671</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9671#p9671"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9671#p9671"><![CDATA[
Most CCDs I know of scan from top left to bottom right, so you would have to wait for all of the data to be captured before you can analyze the more critical area.<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=312">DirtyRoboto</a> — Thu Jun 28, 2007 6:32 am</p><hr />
]]></content>
</entry>
<entry>
<author><name><![CDATA[cdraptor]]></name></author>
<updated>2007-06-28T02:59:40+01:00</updated>
<published>2007-06-28T02:59:40+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9669#p9669</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9669#p9669"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9669#p9669"><![CDATA[
Well, if you're treating the 50% below the horizon as the most important, you don't need to invert the image. Using standard X,Y coordinates on the image, just run your routine in reverse Y (from Y = MaxHeight down to zero). If you're running routines that work off a scanline, going from the bottom of the image up will hit the objects that are generally closer to the robot first.<br /><br />I plan on doing some research into computer vision, so I hope to have more to offer in the near future.<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=581">cdraptor</a> — Thu Jun 28, 2007 2:59 am</p><hr />
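The reverse-Y scan can be sketched in a few lines: walk rows from the bottom of the frame upward and stop at the first row containing an obstacle pixel, so the nearest obstacle is found first without touching the rest of the frame. The threshold and the toy frame are illustrative assumptions.

```python
# Sketch of a reverse-Y scanline: y runs from MaxHeight-1 down to 0,
# so rows nearest the robot's feet are examined first.

def first_obstacle_row(image, threshold=128):
    """Return the bottom-most row index containing a bright pixel."""
    max_y = len(image)
    for y in range(max_y - 1, -1, -1):
        if any(p >= threshold for p in image[y]):
            return y  # nearest obstacle found; no need to scan higher rows
    return None

# 6-row frame: an obstacle occupies rows 4-5 (closest to the robot).
frame = [[0] * 8 for _ in range(4)] + [[200] * 8 for _ in range(2)]
print(first_obstacle_row(frame))  # -> 5
```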
]]></content>
</entry>
<entry>
<author><name><![CDATA[DanAlbert]]></name></author>
<updated>2007-06-27T17:37:09+01:00</updated>
<published>2007-06-27T17:37:09+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9666#p9666</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9666#p9666"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9666#p9666"><![CDATA[
That makes no sense to me. The inverted image is a product of the concave shape of the eye. Look inside a spoon and you will see your image inverted.<br /><br />Do fish also have inverted images? They have no horizon.<br /><br />I think for simple biped robot imaging, consider using rectangular shapes such as windows and tables to determine a level condition.<br /><br />This may sound easy at first, but if the surface is not at eye level it will appear as if one end were up when the Z axis is rotated. Try it with a pencil. Hold it at eye level and rotate about Z: it becomes a point. Then hold it above your head and rotate: it appears as if one end is rising.<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=5">DanAlbert</a> — Wed Jun 27, 2007 5:37 pm</p><hr />
]]></content>
</entry>
<entry>
<author><name><![CDATA[JavaRN]]></name></author>
<updated>2007-06-27T17:31:44+01:00</updated>
<published>2007-06-27T17:31:44+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9664#p9664</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9664#p9664"/>
<title type="html"><![CDATA[Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9664#p9664"><![CDATA[
Remember that robots are not humans! Anyway, if you leave the image as it is, without inverting it, you will also get an idea of how &quot;close&quot; an object is to the robot. Also, if you flip the image upside down, you have to store additional upside-down images to match against when it comes to image recognition.<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=546">JavaRN</a> — Wed Jun 27, 2007 5:31 pm</p><hr />
]]></content>
</entry>
<entry>
<author><name><![CDATA[DirtyRoboto]]></name></author>
<updated>2007-06-26T19:12:27+01:00</updated>
<published>2007-06-26T19:12:27+01:00</published>
<id>http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9647#p9647</id>
<link href="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9647#p9647"/>
<title type="html"><![CDATA[My Theory of Robot Vision]]></title>

<content type="html" xml:base="http://forum.robosavvy.com/viewtopic.php?t=1472&amp;p=9647#p9647"><![CDATA[
I was just thinking about how to make vision make sense to a robot. I thought about how we see, and all of a sudden a fact popped out at me: our true sight is inverted, so that the floor is the sky and the sky is the floor. Our brain flips this image before processing it. I asked WHY?<br /><br />What is so important about this inversion? Well, this....<br />Most of the important information we need is in the 50% below the horizon, relative to our head position. Gravity also makes things settle on the ground, so in the case of anticipating a falling object, one knows where it will end up.<br /><br />So the lower 50% of vision is the most important. This is the reason for the inversion: so that the most important data is processed first.<br />All reproduced moving images scan from top to bottom, as do books (please don't quote variances in direction). There is a preferred direction for humans to process visual info: left to right, top to bottom.<br /><br />This is why I believe that processing visual info for robots should be done by first inverting the image (or flipping the camera's POV by 180°), so that the info that gets processed first is in the more vital area.<br /><br />I don't do much psych reading, so I might be covering already-used ideas.<br /><br />Marcus.<p>Statistics: Posted by <a href="http://forum.robosavvy.com/memberlist.php?mode=viewprofile&amp;u=312">DirtyRoboto</a> — Tue Jun 26, 2007 7:12 pm</p><hr />
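The proposal above can be sketched directly: rotate the captured frame 180°, so that an ordinary top-to-bottom, left-to-right raster scan reads the ground-side pixels (originally the lower half) first. No camera is assumed here; the tiny labeled frame is a stand-in.

```python
# Sketch: rotate the frame 180 degrees so a normal raster scan
# delivers the (originally lower) ground-side pixels first.

def rotate_180(image):
    """Flip rows and reverse each row: equivalent to a 180-degree turn."""
    return [row[::-1] for row in reversed(image)]

def scan_order(image):
    """Yield pixels in raster order: top-left to bottom-right."""
    for row in image:
        yield from row

# Tiny 2x2 frame: 'S' = sky pixels, 'G' = ground pixels.
frame = [['S', 'S'],
         ['G', 'G']]
flipped = rotate_180(frame)
print(list(scan_order(flipped)))  # ground pixels stream out first
```

Note that the same priority can be had without flipping at all, by iterating Y in reverse; flipping only matters when the scan order is fixed by hardware, as with the CCD readout mentioned earlier in the thread.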
]]></content>
</entry>
</feed>