I haven't seen a homemade one, mainly because it's so hard to find an effective vacuum cleaner unit that runs off a battery, complete with the mechanical wipers and everything.
Just use sonar or something for object detection and have it sort of randomize its motion... The expense and complexity of image processing hardware and software are beyond the reach and knowledge of most people. Coding for path planning is also out of reach for most people.
If you don't have even a grasp of the basic concepts of image processing or path planning, then it's way out of your league. Even if you do, it's really challenging. It's not the kind of thing you learn as a subproject; it's the kind of thing that is a mega-project.
The best way right now is to randomize the motion of the robot intelligently for full coverage and use sonar for object detection.
Let me try to demonstrate why:
1. Suppose I took the easiest method and provided an overhead photo of the room to the robot. How would I even start to pick out what's an object and what is not? It's a cluttered, far-from-perfect photo that I have to process. Overhead photos are also kind of hard to get.
2. Okay, so providing an overhead photo is already pretty hard in the first place. Suppose I was able to achieve step 1. How would I then plan a path? How would I even begin to figure out the best way to walk about? Remember, you have to think about how to do this at an extremely low level.
3. Next, the robot has to know exactly where it is in the room in order to know where it is on the map. How did you plan to do this? Odometry accumulates too much error. And if you provided an overhead video feed, the robot would have to be able to pick itself out in the video, which is not the easiest thing in the world to do. You could also provide an indoor GPS system with high accuracy, but that gets limited by walls and by how you may have to set up the room and calibrate it before you start the system, plus you need several "satellites". Using radio is beyond your means, so the only option is to use light and sound time-of-flight differences, which require line of sight. Not useful for a low-level robot. Of course, you also have something like this:
**broken link removed**
which projects a light map onto the ceiling from a base station, and the robot reads the light patterns to find out where it is in the room. That's not something you can build yourself either, as it involves some vision processing of its own.
4. Now what you seem to want is to use a webcam to provide a single isometric view or a first-person video of the room. Both of these require the ability to recognize when the floor is the floor, when an object is an object, and when the floor is the floor behind an object. With first-person video, you would have to be able to recognize that an object in one frame is in fact the same object as in a bunch of other frames, each at a different angle, then discern its depth and place it onto a map. This is leading-edge research. It's already hard enough for a robot to know when something is in front of it, let alone higher processing functions with video.
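To make step 1 concrete, here's about the crudest sketch imaginable (the function name, parameters, and the uniform-floor assumption are all mine, not from any real product): threshold a grayscale overhead photo against an assumed floor brightness and downsample the result into an occupancy grid. Notice how many assumptions it takes just to get this far, and it still falls apart the moment you have shadows, patterned carpet, or uneven lighting:

```python
import numpy as np

def photo_to_occupancy_grid(gray, floor_brightness, tolerance=30, cell=8):
    """Crude occupancy grid from a grayscale overhead photo.

    Anything whose brightness differs from the (assumed uniform!)
    floor by more than `tolerance` is treated as an obstacle, then
    the pixel mask is downsampled: a grid cell counts as occupied
    if any pixel inside it does.
    """
    obstacle = np.abs(gray.astype(int) - floor_brightness) > tolerance
    h, w = obstacle.shape
    obstacle = obstacle[:h - h % cell, :w - w % cell]  # trim to cell multiples
    return obstacle.reshape(h // cell, cell, w // cell, cell).max(axis=(1, 3))
```

And that's only for a perfectly lit, perfectly uniform floor; the real version of this problem is what the expensive vision research is about.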
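To see how fast the odometry error in step 3 blows up, here's a toy dead-reckoning simulation (the function name and the noise numbers are made up for illustration): the robot believes it is driving a perfectly straight line, but a tiny random heading error gets added every step:

```python
import numpy as np

def dead_reckon(steps, step_len=0.05, heading_noise=0.02, seed=0):
    """Dead reckoning with a small random heading error each step.

    heading_noise is the std-dev in radians of the error added per
    step (wheel slip, encoder quantization, ...).  Returns where the
    robot *believes* it ended up (a straight line) and where it
    actually ended up.
    """
    rng = np.random.default_rng(seed)
    heading = np.cumsum(rng.normal(0.0, heading_noise, steps))  # error accumulates
    believed = np.array([steps * step_len, 0.0])
    actual = np.array([np.sum(step_len * np.cos(heading)),
                       np.sum(step_len * np.sin(heading))])
    return believed, actual

for n in (100, 1000, 10000):
    believed, actual = dead_reckon(n)
    err = float(np.hypot(*(believed - actual)))
    print(f"{n:6d} steps: position error {err:7.2f} m")
```

Even with roughly a degree of heading noise per step, the gap between where the robot thinks it is and where it actually is keeps growing, which is why dead reckoning alone can't anchor a map.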
Use an intelligently randomized path with regular obstacle-avoidance sonar and IR to do the job.
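As a sketch of what that actually looks like (the grid world, the 5% turn probability, and the function name are arbitrary stand-ins; a real robot reacts to sonar/IR readings rather than grid walls), here's a toy simulation of bump-and-turn coverage:

```python
import random

def random_cover(width=20, height=20, steps=4000, turn_prob=0.05, seed=1):
    """Toy bump-and-turn coverage of an empty rectangular room.

    The robot drives straight, occasionally randomizing its heading,
    and turns whenever the cell ahead is blocked (a stand-in for a
    sonar/IR "obstacle ahead" reading).  Returns the fraction of
    cells visited.
    """
    random.seed(seed)
    headings = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = width // 2, height // 2
    dx, dy = random.choice(headings)
    visited = {(x, y)}
    for _ in range(steps):
        if random.random() < turn_prob:       # occasional random turn
            dx, dy = random.choice(headings)
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny
            visited.add((x, y))
        else:
            dx, dy = random.choice(headings)  # "sonar" fired: turn away
    return len(visited) / (width * height)

print(f"covered {random_cover():.0%} of the room")
```

Even this dumb strategy covers most of an open room given enough time, with no camera, no map, and no localization needed.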