Tacto, ergo Sum. I touch, therefore I am.
I was trying to mentally compose this post earlier today, but found myself lost for words. Basically, what it comes down to is that intelligence seems to depend as much on being able to sense the environment around one's self as it does on sheer processing power.
A robot requires sensors, both proprioceptive and exteroceptive - it needs to know about itself and about the world around it. Of course, this is potentially a lot of information - hence my interest in distributing such processing. Mind is a function of brain and the information it can process.
The simplest robots have only touch sensors and wander aimlessly. Such a robot cannot have much awareness of what's going on around it; navigation, planning, and so on are all impossible.
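To make the point concrete, here is a minimal sketch of that touch-sensors-only robot. Everything here is hypothetical - the sensor reading and the action names are placeholders, since a real robot would be reading and driving actual hardware - but it shows how little such a robot has to reason with:

```python
import random

def wander_step(bumped: bool) -> str:
    """Decide the next action from a single bump sensor reading.

    With only touch to go on, the robot cannot navigate or plan;
    all it can do is drive forward until it hits something, then
    pick a new heading at random.
    """
    if bumped:
        return random.choice(["turn_left", "turn_right"])
    return "forward"
```

The entire "behaviour" is one branch on one bit of input - which is exactly why such robots wander aimlessly.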
We introduce proprioception. With some simple additions, a robot can learn when its batteries are low - it becomes hungry. It learns how far and how fast it is moving, and it can start to sense when its motors are overworking. We add additional sensors, and it becomes more aware of what's going on around it...
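The proprioceptive state described above can be sketched as a small data structure. The field names and the voltage/current thresholds are assumptions for illustration (real values would depend on the battery chemistry and motors used), but the idea is simply that internal sensors give the robot facts about its own body:

```python
from dataclasses import dataclass

@dataclass
class Proprioception:
    """A snapshot of the robot's internal (self-directed) senses."""
    battery_volts: float   # from a voltage divider on the battery
    wheel_speed: float     # m/s, e.g. from wheel encoders
    motor_current: float   # amps drawn by the drive motors

    def is_hungry(self, low_volts: float = 6.5) -> bool:
        # "Hunger": battery voltage has sagged below a chosen threshold.
        return self.battery_volts < low_volts

    def motor_overworking(self, max_amps: float = 2.0) -> bool:
        # High current draw suggests a stalled or overloaded motor.
        return self.motor_current > max_amps
```

Each new field is another sense - and, as the next paragraph notes, another input stream that has to be processed.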
The more sensors we add, the more processing the inputs require, but the more the robot knows about what's going on around it. I will write more later... but the thrust of it is this: we interact with the environment, therefore we are.
On a different note, I actually made some progress on the motor controllers - the interfacing between the microcontroller(s) and the MOSFETs that switch power to the motors. Thinking about it, though, I should perhaps use optoisolators to eliminate the chance of motor-side noise interfering with the logic side.