Spend enough time around robot hobbyists or their message forums and you will come across the two "How Do I" topics that pop up over and over again. Which one is more popular depends on the time of year and climate, but the first is "How do I build a flying robot?" To be honest, this question made the mode of transportation for the You Design It project a foregone conclusion before voting even started. Flying is really cool and the number two dream of every man, woman and child, hence the reason so many roboticists want to create a flying robot (the number one involves Rebecca Romijn and the Mystique costume).
That is all well and good (Mmmmm, Mystique), but this entry is more concerned with the second of those questions, "How do I implement robotic vision?" It seems like everyone in the robotic world is obsessed with hooking up a 500 gigapixel camera to their robot and letting their robot see exactly like we humans do. Even more so, they all want object recognition thrown in there as well. This is such a popular request that there are a dozen open-source and inexpensive retail projects out there dedicated to allowing hobbyists to do exactly this. Of course none of these projects ever carries the disclaimer that the hobbyist is going to be incredibly disappointed with the outcome, but they will be. Oh yes, they will.
In order to see (ha! a pun) why robot hobbyists are going to be disappointed, let us back up a moment and look at the human brain. The human brain is arguably the most complex and powerful logical processor in the known universe (some more than others). Even if you made a silicon processor the same size as the human brain, it would still not compare in power because organic brains are analog processors, not digital (they all lied to you in school when they told you the brain uses binary). In addition, a brain is made up of multiple sections dedicated to performing specific tasks, with some of the larger sections being dedicated to visual processing (striate cortex, prestriate cortex, etc.). Basically, a brain is a lot of very powerful analog computers working in parallel, and roboticists want to make a single 8-bit 16MHz processor accomplish the same functionality, plus handle all the other sensors, motor control and logic programming. Disappointmentville, here we come.
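The arithmetic behind that disappointment is easy to sketch. Here is a hedged back-of-envelope calculation; the figures are illustrative assumptions (a hypothetical 16MHz 8-bit chip at roughly one instruction per cycle, a modest VGA camera, a very optimistic processing budget), not the specs of any particular board:

```python
# Back-of-envelope: can a 16 MHz 8-bit MCU keep up with even modest vision?
# Every figure below is an illustrative assumption, not a real board's spec.

cpu_hz = 16_000_000          # 16 MHz clock, ~1 instruction/cycle (optimistic)
width, height = 640, 480     # a modest VGA frame, nowhere near "500 gigapixels"
fps = 30                     # typical camera frame rate
ops_per_pixel = 10           # wildly optimistic budget for any useful processing

pixels_per_second = width * height * fps
ops_needed = pixels_per_second * ops_per_pixel

print(f"Operations needed per second: {ops_needed:,}")    # → 92,160,000
print(f"Operations available:         {cpu_hz:,}")        # → 16,000,000
print(f"Shortfall factor:             {ops_needed / cpu_hz:.1f}x")  # → 5.8x
```

And that shortfall is before the same chip spends a single cycle on motors, other sensors, or actual decision-making.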
I can fully understand why someone would want to build a flying robot that can see and fully appreciate Rebecca Romijn, but it is not going to happen easily at the hobby level. Throw in a few more processors, reduce the pixel count and limit it to a 16-color palette, and suddenly you are in the realm of possibility. Rebecca is not going to look good at that resolution though, so let's look at other options instead.
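To make the "fewer pixels, sixteen colors" compromise concrete, here is a minimal pure-Python sketch: it downsamples a frame by block-averaging and quantizes the result to 16 levels. The input here is a made-up gradient standing in for camera data; function names are mine, not from any vision library:

```python
# A minimal sketch of the "low resolution, few colors" compromise:
# downsample a synthetic 8-bit grayscale frame, then quantize to 16 levels.

def downsample(frame, factor):
    """Average factor x factor blocks of an 8-bit grayscale frame."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

def quantize16(frame):
    """Map 0-255 pixel values down to 16 levels (0-15)."""
    return [[p // 16 for p in row] for row in frame]

# A fake 8x8 gradient "frame" standing in for camera input.
frame = [[(x * 255) // 7 for x in range(8)] for _ in range(8)]
small = quantize16(downsample(frame, 2))
print(small[0])  # → [1, 5, 10, 14]
```

A tiny frame like this is something even a small microcontroller can chew through, which is exactly why the compromise works (and exactly why Rebecca suffers for it).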
I said in the Herbert 1701 Species C Gen 1 & 2 entry that sensors get skimped on for robots, and to explain what I mean by this I am going to once again jump to biology. We all know the five senses, but most biologists can tell you there are more (and none of them belong anywhere near the X-Files). Magnetic field detection is well documented in migratory animals, many snakes (and other animals) have specialized sensors for detecting heat (thermoception), everyone knows bats deal with ultrasonic sound waves, and the list goes on. Robots have all these senses available to them and more, yet rarely will you see more than a couple of sensors on a given robot.
I do understand that the organisms people associate with just the five basic senses fall into the "very complex evolved species" classifications. So now it is time to shame that belief with a little more biology. Most single-celled bacteria (yes, we are talking micro-organisms here) have both a wider variety of senses and a higher count of sensors than ASIMO, one of the most advanced robots in the world. There are bacteria that are not only covered with touch sensors, but some can even tell you which direction is north from south, know which way is up from down in pitch-black liquid at neutral buoyancy, can sense temperature, know whether there is light or not and how bright it is, and can even sense minute chemical changes in the surrounding environment. Single-celled organisms. And you want to put two IR sensors on your robot and say that is "enough"?
If the robotic community (hobby and professional) is going to have any hope of making complex robots, we are going to have to stop skimping on the sensors. There is a limit to the number of I/O (input/output) ports available on a microprocessor, and thus a limit to the number of sensors, but that just screams that maybe you should add more microprocessors to support more sensors. Ugobe understood this a bit when they designed Pleo: more sensors meant more "life-like", which also meant more processors. Granted, Ugobe just went belly-up, but that has nothing to do with the sensor count and how much more realistic Pleo was compared to other "toy" robots.
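The "more processors for more sensors" idea can be sketched as an architecture: each small co-processor module owns its own handful of sensors and does the per-sensor work locally, and the main controller only polls module summaries over a single link per module. This is a toy Python model of that layout, with all names and readings invented for illustration:

```python
# Toy model of distributing sensors across co-processor modules so the
# main controller needs one link per module, not one pin per sensor.
# Module names, sensor names, and readings are all illustrative.

class SensorModule:
    """Stands in for a small MCU that owns a few sensors."""
    def __init__(self, name, sensors):
        self.name = name
        self.sensors = sensors  # dict: sensor name -> read function

    def poll(self):
        # The module reads its own sensors and reports one summary.
        return {name: read() for name, read in self.sensors.items()}

class MainController:
    """The central brain talks to modules, never to individual sensors."""
    def __init__(self, modules):
        self.modules = modules

    def snapshot(self):
        return {m.name: m.poll() for m in self.modules}

touch = SensorModule("touch", {f"bumper_{i}": (lambda: 0) for i in range(8)})
env = SensorModule("environment", {"temp_c": lambda: 21.5, "light": lambda: 0.8})
robot = MainController([touch, env])

state = robot.snapshot()
print(len(state["touch"]), "touch sensors behind one module link")  # → 8
```

In hardware this is typically a handful of cheap microcontrollers hanging off a shared bus such as I2C, so ten sensors cost the main processor two pins instead of ten.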
The artificial robotic life forms in the evolution project are currently very limited in the number and type of their sensors, but it is an evolution project. These robots are starting off very simple and evolving into more complex organisms, where, I imagine, the number and variety of sensors will increase with the growth. I am intentionally evolving the Herberts in this manner to increase my own understanding of robotics, and also to generate the best options for each generation. That's my excuse for not having more sensors and processors in each robot (yet); what is yours? Really, when you design a multi-thousand-dollar robot (yes, I am talking to you, universities like MIT, Carnegie Mellon and Stanford, as well as companies like Honda and everyone who enters the DARPA Grand Challenge), you have given up the right to any excuse.