I figured I would take Matt's advice and create a project thread for my robot!
The primary purpose of purchasing the MSRH01 was so that I could finally move my AI development out of computer games and into the real world. After getting it up and running, it was clear that the MSRH01 is capable of displaying a surprising amount of emotion through movement and body position. This gave me the idea to build a highly interactive AI system modelled on the kinds of behaviours shown by the average pet (dogs in particular). At the moment I am restricting the development to:
Roaming: the robot should move around its environment, avoiding both moving and stationary objects
Following: if the robot identifies a human in view, follow the human where possible
Watching: if the robot can identify a human face in view, watch the face (follow it without moving)
I have already started on Roaming and have made some significant progress:
I have made leaps and bounds on the above since making that video. The system now uses crabbing to help avoid upcoming walls, and samples multiple points to decide where to go (rather than just using a single highest/lowest for each side). The linear interpolation for forward/turning/crab movement has been improved as well. Most of the headway was made late last night, and I kept having battery problems (it seems the battery that came with the MSRH01 isn't powerful enough to run it while it has my OQO on its back). Tonight I will finish up this code and post a link to my .robo and VBS files for anyone to play with.
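For anyone curious what I mean by linear interpolation here: each control tick the command is ramped a fraction of the way toward its target rather than jumping straight to it. A minimal Python sketch of the idea (the real code lives in my VBS files; the names and the alpha value here are made up for illustration):

```python
def lerp(current, target, alpha):
    """Move `current` a fraction `alpha` of the way toward `target`.

    Applying this every control tick ramps the command smoothly,
    which stops the hexapod lurching when the desired speed changes.
    """
    return current + (target - current) * alpha

# Example: ramp walking speed toward a new target over several ticks.
speed = 0.0
target_speed = 100.0  # hypothetical stand-in for cMAX_WALK_SPEED
for _ in range(5):
    speed = lerp(speed, target_speed, 0.5)
print(round(speed, 3))  # 96.875 after 5 ticks at alpha=0.5
```

The closer alpha is to 1, the snappier (and jerkier) the response; small values trade responsiveness for smooth, pet-like motion.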
Here is a video of the face tracking program so far:
Significant improvement is needed to make this more robust. I have been working on using masks and object detection to pick up face shapes, but that's in its infancy. This video uses good old-fashioned RGB filters (lots of them) to isolate skin. I'll update this with more information when I get home!
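To give a flavour of what cascaded RGB filters for skin isolation look like, here is a rough Python sketch using one commonly cited rule set. To be clear, these thresholds are illustrative, not the exact filters in the video, and real values need tuning for your camera and lighting:

```python
def is_skin(r, g, b):
    """Classic RGB skin heuristic (Peer/Kovac-style rules).

    Illustrative thresholds only -- not the exact filter chain used
    in the video; tune per camera and lighting conditions.
    """
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15  # enough colour spread
            and abs(r - g) > 15                   # red clearly above green
            and r > g and r > b)                  # red dominant overall

print(is_skin(200, 120, 90))  # typical skin tone -> True
print(is_skin(90, 200, 90))   # green wall -> False
```

Stacking several such per-pixel rules is cheap enough to run on modest hardware, which is why it works at all on the OQO, but it is fragile under lighting changes, hence the move toward masks and object detection.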
Special thanks to Matt and Eriklely for creating the Blob Tracking Tutorial which makes all of this possible!
This is looking great! I can't wait to test your code at some point; I think I may try it on the uBug for fun!
Could you do a breakdown of your hardware? I presume the OQO is some kind of embedded computer so that you can run the app locally? Any reason why you did this locally and not on a remote machine with a wireless video link? I presume this is due to the break-up of the video signal, which I have seen myself in the past. One thing I was considering was adding a wifi-enabled network device to the Hexapod to send both PIP commands and video back to a control PC...
I have used 5000mAh batteries in the past with my hexapods; I presume you are using a 3700mAh pack? One option could be to use a good lithium pack and a small DC-DC converter to provide 5V for the servos; this way the run time could be improved and the weight reduced! I haven't tried this yet, but I'm sure the MSR-H01 would run on one of these Turnigy devices.
Again, great work. Let's all have a hexapod pet in the home!
I desperately need a better power source than the battery I have; it's just not powerful enough to carry the OQO while walking. Thanks to my good electronics-master friend in Japan, I am getting myself two of these packs: http://www.sparkfun.com/products/8484. Each one outputs 3.7V at 2000mAh with a 6Ahr rating! These things are incredible. With two of these powering the little devil, it will never stop!
Of course the intention is for it not to have to carry the OQO at all, and to go wireless over IP. I am going to purchase a FOSCAM wireless IP camera off eBay, as they are cheap (for IP cameras) and it looks like, once you pull it apart, the camera module is relatively small and connected to the mainboard through wiring in the pan/tilt mechanism. Then I will be able to run RoboRealm on my home PC (quad core), which will give better response. Currently the OQO runs the object avoidance program at around 7-10 frames per second, which is OK as long as the MSRH01 is moving slowly. Being able to power the MSRH01 through the OQO (connecting to the MSRH01 with Bluetooth and the camera via an ad-hoc IP connection) will allow me to take it anywhere.
I am just finalizing some settings in my avoidance program and I will upload the files here along with a video!
Last edited by Matt Denton on Fri Jul 01, 2011 7:47 am, edited 2 times in total.
Reason: Took the liberty of adding links to your hardware/software setup :)
Hey all, a quick update. I need to work on a better way to remove the lines of grout on the ground, as the contrast changes mean my robot can't distinguish the walls from the ground, as you can see in this video. However, the actual avoidance works surprisingly well considering the head does not currently do any panning!
I have decided to use what I have learned from object avoidance so far and start again from scratch, incorporating panning this time. Once I work out how to get the highest white y pixel in each x pixel column, I will store every 5th pixel into the point array. With this ability I can have the head pan left/centre/right and create an array of y heights for each pixel over the entire panning movement range. Provided the forward momentum isn't too fast, the delay in having to read the image in blocks shouldn't cause an issue. This MUCH larger sample of the movement limits will allow the system to make much better decisions on left/right turning and prevent it getting stuck in corners!
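The column scan described above can be sketched in a few lines of Python. This is only an illustration of the idea, assuming the thresholded image is a simple 0/255 mask where white is clear floor; the real version would read pixels out of RoboRealm:

```python
def column_heights(image, step=5):
    """For every `step`-th x column, find the highest white pixel
    measured from the bottom of the image.

    `image` is a list of rows (top row first) of 0/255 values, a
    stand-in for the thresholded floor mask. Returns (x, height)
    points; height 0 means no white pixel in that column.
    """
    rows = len(image)
    cols = len(image[0])
    points = []
    for x in range(0, cols, step):
        height = 0
        for y in range(rows):            # scan from the top down
            if image[y][x] == 255:
                height = rows - y        # first white found is the highest
                break
        points.append((x, height))
    return points

# Tiny 3x3 mask: white floor along the bottom, one column reaching higher.
mask = [
    [0,   0,   0],
    [0,   255, 0],
    [255, 255, 255],
]
print(column_heights(mask, step=1))  # [(0, 1), (1, 2), (2, 1)]
```

A taller white column means the clear floor extends further away in that direction, which is exactly the signal the turning logic needs.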
I have been thinking further on the processor to use to run the RoboRealm app. I like the idea of having a network IP camera and a Bluetooth link back to a base-station PC that controls the Hexapod, but I also love the idea of on-board processing, which, although it uses more battery power and has a slower processor, will have lower latency on the video capture, which is important for face tracking. So I wonder if it's worth looking at a Fit-PC, which has already been used to great effect by Matt Bunting on his hexapod.
In particular I noticed they have released a Fit-PC2i which has a serial port on it, which could be used to connect directly to the HexEngine without the need for a USB/serial converter. They also have a 2GHz model; power consumption is 6 to 8W, and it weighs 370 grams including the aluminium case and hard disk. I'm thinking strip off the case and boot from an SD card? For development purposes you could plug in a DVI monitor and keyboard as needed, or with a USB/VGA monitor converter both keyboard and monitor could be plugged into one port on a hub?? Price of the Ultra model at 2GHz without OS is 375.00 (GBP) + VAT.
In order to replace the Bluetooth link to the HexEngine, I have considered using one of these MatchPort devices. It's a shame video couldn't be streamed over the same WLAN device though...
A couple of quick questions about the MatchPort device you mention. Do I understand you correctly, that instead of the ESD2000 Bluetooth module you'll fit the MatchPort module, then use one of the serial channels for PIP commands and hook a cheap wired webcam up to the MatchPort's Ethernet interface? Your remark seems to indicate that the webcam part is not possible?
It's 1am here, and my last battery just ran out after 6 hours of programming LOL
I will be booting it all up in the morning and continuing with the object avoidance code. I started from scratch and implemented head panning. The new process (so far) is:
1. Look left. Record the highest white pixel from the bottom of every 10th x column in the image into an array (takes 1 second).
2. Look centre. Do the same and store into a separate array (takes 1 second).
3. Look right. Do the same and store into a separate array (takes 1 second).
4. Capture from each array: highest x,y / lowest x,y / average y above 50% / average y below 50%.
5. Choose the direction with the highest average y above 50%.
6. Use the highest x to set turning (scaled so x=0 on the left would be -cMAX_TURN_SPEED and x=320 on the right would be cMAX_TURN_SPEED).
7. Use average y to set walking speed (scaled so y=0 would be no movement and y=240 would be cMAX_WALK_SPEED).
8. If any lowest points are too close, override and back away from that point, turning away from it.
There are limits, as before, where it will only turn on the spot if the furthest point is too close. The great thing about how this works is that once it starts off, the robot will continue to pan its head while it walks around; it doesn't actually need to stop to resample. Due to the restraints set on movement, and the speed the hexapod is set at, the information it gathers is enough for the 4 seconds it takes to re-gather the info.
I will take lots of pictures for you tomorrow Matt, as well as a video of the new process.
Hi all, a quick update. I made further improvements to the object avoidance system this morning and it's now working extremely well! There are a few tweaks required in how the system translates x coordinates into turning (the images actually overlap slightly, causing over-turning), but other than that it's extremely good at avoiding things and getting out of corners, helped by the fact that the move/turn/crab commands sent to the hexapod only change once every 4 seconds. I am working on using a temporal mean to further stabilise the image and the returned height values, and to help remove errors.
roaming in a larger space
Getting out of a dead end
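The temporal mean mentioned above is just a running average of each column height over the last few frames. A small Python sketch of the idea (window size and structure are my guesses at a reasonable setup, not the final implementation):

```python
from collections import deque

class TemporalMean:
    """Running mean over the last `n` frames of column heights,
    smoothing single-frame noise before the avoidance logic sees it."""
    def __init__(self, n=3):
        self.history = deque(maxlen=n)  # old frames drop off automatically

    def update(self, heights):
        self.history.append(heights)
        return [sum(frame[i] for frame in self.history) / len(self.history)
                for i in range(len(heights))]

tm = TemporalMean(n=3)
tm.update([10, 20])
tm.update([20, 20])
print(tm.update([30, 20]))  # [20.0, 20.0]
```

A spurious one-frame spike in a column gets diluted across the window instead of yanking the turn command around, at the cost of a small lag in reacting to genuinely new obstacles.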
Here is a picture of my camera "mount". Just 4 zip ties! (and a 5th to hold the wire in place)
I have added an array which contains the last 3 turn directions. When backing up or standing still, if a repeated pattern occurs (left, right, left, then wanting to go right, for example) it will override and go in the opposite direction. This stops it getting caught in loops in tight corners.
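The oscillation breaker can be sketched as follows in Python. This is my illustration of the logic described above, with made-up names; the real version sits in the VBS script:

```python
from collections import deque

turn_history = deque(maxlen=3)  # last three turn directions: 'L' or 'R'

def next_turn(wanted):
    """If the last three turns alternate and the wanted turn would
    continue the pattern (e.g. L, R, L then wanting R), override it
    with the opposite direction to break out of the loop."""
    if (len(turn_history) == 3
            and turn_history[0] != turn_history[1]
            and turn_history[1] != turn_history[2]
            and wanted != turn_history[2]):
        wanted = 'L' if wanted == 'R' else 'R'
    turn_history.append(wanted)
    return wanted

for w in ['L', 'R', 'L', 'R']:
    print(next_turn(w))  # prints L, R, L, then L -- the final R is overridden
```

The override itself is recorded in the history, so one forced turn is enough to break the alternating pattern and let normal steering resume.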
I also tweaked the turning/crabbing/walking speeds to further reduce over-compensation. Crabbing was bugged (missing cMAX_CRAB_SPEED lol) and I've fixed that up too.
Next up is to make the sequence in which it looks around depend on the current turn direction: the robot will look in the opposite direction first, giving the turn more time to influence the heading before data is captured on the turning side.