I just purchased a licence for Roborealm and got it all connected up last night.
After running through the tutorial I rebuilt everything before the Centre of Gravity calculation to detect (for the most part) large skin areas. Providing you've got a shirt on and it's not skin coloured, it works quite well.
Here is a video of my hexapod tracking my face as I walk around my (tiny) room. I was happy to see the MSR-H01 had no problems moving around on my mattress!
I went out and purchased another webcam that would install onto my OQO2, and this is the result! As you can see, I have made it much faster at tracking and turning.
The OQO2 is sitting on the back of the hex; it's a Windows 7 Ultimate UMPC running RoboRealm with the COG image displayed in full screen. I still need to make a "mount", as it moved a little during the shooting of this video. The ideal solution, though, is to have a wifi webcam connected ad-hoc to the OQO, but I need to find one first! That way I can have the OQO in my pocket/bag giving the MSR-H01 its brains while remaining out of sight! (Oh, and it's HEAVY.) :p
I purchased the Logitech Webcam C120 for AU$19. Once you pull it apart, it's got a thin PCB with the camera and a detachable focus rim. The PCB is just slightly larger than the gap between the two spacers on the tilt frame.
Rather than typing it all out again, here is a copy of the description of how this works from the RoboRealm forum:
This uses Canny edge detection to find edges like walls and things, then side fill plus Harris corners to get points. The VBScript turns the Harris corner array into an array of x,y positions for all the points, then finds the highest point overall, the lowest point overall, the right-side highest point and the left-side highest point. The robot then checks to see if the lowest point is too low. If it is, it backs away, turning away from the point at the same time. Otherwise the robot checks to see if the highest point is 75% of the way up the image. If it is, the robot moves towards that point while turning to face it. If the highest point is too close, it then checks to see if the left highest point is higher than the right one, and turns left if it is, right if it isn't.
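The decision logic above could be sketched roughly like this in Python (the original is a VBScript module inside RoboRealm; the resolution, thresholds and command names here are my own assumptions, not the actual script):

```python
IMG_W, IMG_H = 320, 240   # assumed camera resolution
LOW_LIMIT = 0.85          # fraction of image height that counts as "too low"
HIGH_LIMIT = 0.25         # top 25% of the image = "75% of the way up"

def decide(points):
    """points: list of (x, y) Harris corners; y=0 is the top of the image."""
    if not points:
        return "stop"
    highest = min(points, key=lambda p: p[1])   # nearest the top edge
    lowest = max(points, key=lambda p: p[1])    # nearest the bottom edge
    left = [p for p in points if p[0] < IMG_W / 2]
    right = [p for p in points if p[0] >= IMG_W / 2]
    left_high = min(left, key=lambda p: p[1]) if left else None
    right_high = min(right, key=lambda p: p[1]) if right else None

    if lowest[1] > IMG_H * LOW_LIMIT:
        # An edge is very close: back away, turning away from the point.
        return "reverse_left" if lowest[0] >= IMG_W / 2 else "reverse_right"
    if highest[1] < IMG_H * HIGH_LIMIT:
        # The furthest open edge is far away: drive towards it, facing it.
        return "forward_left" if highest[0] < IMG_W / 2 else "forward_right"
    # Otherwise turn towards whichever half has the higher point.
    if left_high and right_high:
        return "turn_left" if left_high[1] < right_high[1] else "turn_right"
    return "turn_left" if left_high else "turn_right"
```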
It is simple but works excellently, providing the Canny edge detector doesn't screw up! I have found that using the camera's built-in contrast and brightness settings to tweak it for each use is the best method of compensating for light changes.
I have noticed that sometimes the Harris Corners point finder returns more than 100 points (which is the limit I've put on the point arrays) and can crash the VB script. When this happens the robot stops moving (it shouldn't ever be stationary); just open the VBScript plugin to get it going again.
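One way to guard against that crash would be to clamp the corner list before the script touches it. A rough sketch, assuming the points arrive as a flat [x0, y0, x1, y1, ...] array the way RoboRealm variable arrays tend to (the helper name is mine):

```python
MAX_POINTS = 100  # the limit the point arrays were sized for

def clamp_corners(flat):
    """Pair up a flat [x0, y0, x1, y1, ...] array into (x, y) tuples and
    drop anything past MAX_POINTS instead of letting the script die."""
    pairs = list(zip(flat[0::2], flat[1::2]))
    return pairs[:MAX_POINTS]
```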
I have made some updates to my program to also use crab movement. Here is a visualisation of how the AI decides what to do:
All movement uses linear interpolation rather than direct numbers, for smooth movement; this is especially good on the crabbing. The closer the lowest point inside the crab area is to the outer edge of the image, the less the robot crabs left or right!
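A minimal sketch of the interpolation and the crab scaling described here (the image width, blend factor and crab direction are my own guesses, not the actual VBScript):

```python
def lerp(current, target, factor=0.2):
    """Move a fraction of the way towards the target each frame
    instead of jumping straight to it - this is what smooths the motion."""
    return current + (target - current) * factor

def crab_amount(lowest_x, img_w=320):
    """Scale crab strength by where the lowest point sits: full crab when
    the point is at the centre, fading to none at the outer edge.
    Assumes the robot crabs away from the point (my guess)."""
    half = img_w / 2
    strength = 1.0 - abs(lowest_x - half) / half  # 1 at centre, 0 at edge
    direction = 1 if lowest_x < half else -1      # point on left -> crab right
    return direction * strength
```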