For the final lab in this class, we were tasked with programming a maze solver using computer vision. Wall segments are marked with left, right, and reverse arrows, and a red star marks the goal. To receive full credit, the robot must drive up to a wall, correctly identify which of the five possible signs it is facing, and then drive in the appropriate direction.
To implement this program, we used a K-Nearest-Neighbors (KNN) approach to classify the signs, together with custom navigation code built on waypoints and the nav stack. We achieved full marks and over 90% consistency in our testing.
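As an illustration of the KNN idea (not our exact pipeline, which used real camera crops and OpenCV preprocessing), the sketch below reduces each grayscale sign image to a coarse feature vector and classifies a query by majority vote among its nearest training examples. The block-averaging feature extractor and the gains are stand-ins chosen for the example.

```python
import numpy as np

def extract_features(image, size=16):
    """Flatten a grayscale sign crop into a fixed-size feature vector
    by coarse block averaging (a stand-in for a real image resize)."""
    h, w = image.shape
    bh, bw = h // size, w // size
    blocks = image[:bh * size, :bw * size].reshape(size, bh, size, bw)
    return blocks.mean(axis=(1, 3)).flatten() / 255.0

def knn_predict(query, train_X, train_y, k=3):
    """Classify by majority vote among the k nearest training vectors
    (Euclidean distance in feature space)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.argmax(np.bincount(nearest)))
```

In practice the training set would be labeled crops of the five sign types, and the robot would call `knn_predict` on each wall image before choosing a turn direction.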
This video shows the final demonstration of our TurtleBot3 solving a maze using a KNN vision model and the navigation stack. Credit to Aditya Rao for putting together this video.
This video shows the TurtleBot3 driving through a given set of waypoints using my custom path planning algorithm.
Previously, I had implemented a two-state controller that processes LiDAR and odometry data to complete a given set of waypoints while avoiding dynamic obstacles placed in its environment. The algorithm runs locally on the TurtleBot3 using ROS2 and Python, built from custom proportional controllers and state estimators. This implementation passed all given unit tests with over 95% consistency.
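A minimal sketch of the proportional waypoint-following idea is below. The gains and the 0.22 m/s speed cap are illustrative values (the cap matches a typical TurtleBot3 limit), not the tuned constants from the actual controller, and the function would feed a ROS2 `Twist` message in the real node.

```python
import math

def heading_control(pose, waypoint, k_lin=0.5, k_ang=1.5, max_lin=0.22):
    """Proportional controller: turn toward the waypoint and scale forward
    speed down as the heading error grows. pose = (x, y, theta)."""
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # Wrap the error to [-pi, pi] so the robot turns the short way around.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    linear = min(k_lin * distance, max_lin) * max(0.0, math.cos(heading_error))
    angular = k_ang * heading_error
    return linear, angular
```

Suppressing forward motion when the waypoint is behind the robot (the `max(0, cos(...))` term) lets the same two commands handle both turning in place and driving forward.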
In earlier assignments in this class, I implemented custom image-based object tracking using OpenCV and embedded it in a ROS2 environment, enabling a TurtleBot3 to safely track and follow an object of a given color.
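The core of color-based tracking is thresholding the image to the target color and steering toward the centroid of the matching region. The actual implementation used OpenCV (typically an HSV mask), but the same idea can be sketched with a plain RGB threshold in NumPy; the color bounds and helper names here are hypothetical.

```python
import numpy as np

def track_colored_object(image_rgb, lower, upper):
    """Return the (x, y) pixel centroid of the region whose RGB values
    fall inside [lower, upper], or None if no pixel matches."""
    mask = np.all((image_rgb >= lower) & (image_rgb <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def follow_error(centroid_x, image_width):
    """Normalized horizontal offset in [-1, 1], suitable as the input
    to a proportional steering controller."""
    return (centroid_x - image_width / 2) / (image_width / 2)
```

Feeding `follow_error` into an angular proportional controller keeps the object centered in the frame while a separate term regulates following distance.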
For future work in this class, I plan on implementing more complex path planning algorithms to traverse mazes and other obstacles. I also plan on experimenting with Simultaneous Localization and Mapping (SLAM) to improve the algorithm's performance and further challenge my abilities.
This video features the object tracking algorithm.