Autonomous Underwater Vehicle
A robotic submarine that combines input from several kinds of sensors, including cameras, accelerometers, gyroscopes, a depth sensor, and a compass. Vision code in OpenCV will test the limits of the BeagleBoard's processing power.
Our goal is to implement the control and vision for our Autonomous Underwater Vehicle, a robotic submarine that navigates without any human input. The vehicle must navigate a course containing visual cues to help guide the robot.
Success will involve two separate sub-projects. First, we will implement high-level control routines: combining physical readings from all of the different sensors and determining the robot's course of action. The robot carries a variety of sensors, including accelerometers, gyroscopes, a depth sensor, a compass, and cameras. Interpreting all of this information and making it useful will require Kalman filtering, PID control, and a state machine (sketched below). The state machine will contain high-level instructions like "maintain heading" and "pursue object."

The second sub-project is a set of computer vision routines. These will be implemented in OpenCV and include segmenting objects from their surroundings, determining their orientation and distance, and distinguishing patterns. We have determined that 3 processed images per second is sufficient for our objectives, so we expect a BeagleBoard to have adequate computational power. Its small size and low power consumption make it ideal for use in a closed system like a submarine: battery power is limited, and a BeagleBoard will consume less than one tenth the power of our current computer.
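As a rough illustration of the sensor-fusion step named above (not the project's actual filter), the following minimal one-dimensional Kalman filter in Python fuses a noisy depth-sensor reading with a predicted depth. The class name, noise constants, and the idea of feeding it an accelerometer-derived depth rate are all assumptions made for the sketch.

```python
# Minimal 1-D Kalman filter sketch for depth estimation.
# All constants (process/measurement noise) are illustrative guesses,
# not values measured on the actual vehicle.

class DepthKalmanFilter:
    def __init__(self, initial_depth=0.0, initial_variance=1.0,
                 process_noise=0.01, measurement_noise=0.25):
        self.x = initial_depth      # estimated depth (metres)
        self.p = initial_variance   # estimate variance
        self.q = process_noise      # expected drift of depth per step
        self.r = measurement_noise  # depth-sensor noise variance

    def predict(self, depth_rate, dt):
        """Propagate the estimate using a depth rate (e.g. integrated accelerometer)."""
        self.x += depth_rate * dt
        self.p += self.q

    def update(self, measured_depth):
        """Correct the estimate with a raw depth-sensor reading."""
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (measured_depth - self.x)
        self.p *= (1.0 - k)
        return self.x


# Example: fuse a short stream of noisy readings at 10 Hz.
kf = DepthKalmanFilter()
for raw in [0.4, 0.5, 0.45, 0.6, 0.55]:
    kf.predict(depth_rate=0.1, dt=0.1)
    print(round(kf.update(raw), 3))
```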
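The description also names PID control and a state machine with instructions like "maintain heading" and "pursue object." The sketch below is purely illustrative: a textbook PID loop producing a yaw command, driven by a two-state machine. The gains, state names, and the way the vision code reports an object bearing are assumptions, not the project's interfaces.

```python
class PID:
    """Textbook PID controller; the gains used below are placeholders, not tuned values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


class MissionStateMachine:
    """Two illustrative high-level states: MAINTAIN_HEADING and PURSUE_OBJECT."""
    def __init__(self, heading_pid):
        self.state = "MAINTAIN_HEADING"
        self.heading_pid = heading_pid
        self.target_heading = 90.0   # degrees; hypothetical course heading

    def step(self, compass_heading, object_bearing, dt):
        # Switch to pursuit whenever the vision code reports an object bearing.
        if object_bearing is not None:
            self.state = "PURSUE_OBJECT"
            error = object_bearing                      # steer toward the object
        else:
            self.state = "MAINTAIN_HEADING"
            error = self.target_heading - compass_heading
        return self.heading_pid.step(error, dt)         # yaw command for the thrusters


sm = MissionStateMachine(PID(kp=0.8, ki=0.05, kd=0.2))
yaw_cmd = sm.step(compass_heading=85.0, object_bearing=None, dt=0.1)
print(sm.state, round(yaw_cmd, 2))
```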
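For the vision sub-project, one common OpenCV starting point for segmenting an object and estimating its orientation is colour thresholding followed by contour analysis. The snippet below is only a sketch under that assumption: the HSV range, the placeholder image file, the minimum-area rectangle used for orientation, and the idea of mapping apparent size to distance are all illustrative choices, not the project's actual pipeline.

```python
import cv2
import numpy as np

# Illustrative HSV range; real thresholds depend on the course markers and lighting.
LOWER = np.array([5, 100, 100])
UPPER = np.array([25, 255, 255])

def segment_and_orient(frame):
    """Return the centre, angle and apparent size of the largest matching blob."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Unpacking assumes OpenCV 4.x, where findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    blob = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(blob)
    # Apparent size could later be mapped to distance once the target's real
    # dimensions and the camera's focal length are known.
    return (cx, cy), angle, w * h

frame = cv2.imread("frame.png")   # placeholder input image
if frame is not None:
    print(segment_and_orient(frame))
```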
Homepage: http://mart.cs.mcgill.ca
Registrar: alan-schoen.myopenid.com
Tags:
Project created on: Sun Aug 01 2010 15:44:08 GMT-0000 (UTC)
Submitted by: alan-schoen.myopenid.com
Last updated on: Sun Aug 01 2010 15:44:08 GMT-0000 (UTC)