Autonomous navigation using image processing
Abstract
In recent years, the pursuit of intelligent, self-operated machines has intensified. The goal is to eliminate the human operator entirely, or at least to minimize their involvement as much as possible. It has also become common to see machines that can perform a variety of tasks rather than just one.
Autonomous navigation is one branch of intelligent robotics. Google, Tesla, Honda, and many other large corporations are trying to master this field, since the pursuit of a self-driving vehicle is as profitable as it is difficult. The research presented in this thesis addresses autonomous navigation for mobile robots, primarily in indoor environments, though it is not limited to them.
Some of the explored algorithms focus on navigating in an environment that has already been explored and mapped. Algorithms such as a modified A-Star and a goal-based navigational vector field were tested for how effectively they plan a path from one point to another. The algorithms were compared and analyzed in terms of how well the robot avoided obstacles and the length of the path taken. Other algorithms were also developed and tested for navigation without a map.
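As a point of reference for the map-based planners, a minimal grid-based A-Star sketch is given below. The occupancy-grid encoding, the 4-connected neighbourhood, and the Manhattan heuristic are illustrative assumptions; this is not the modified variant evaluated in the thesis.

```python
import heapq

def a_star(grid, start, goal):
    """Plan a path on a 2-D occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if no path exists.
    """
    def h(cell):
        # Manhattan-distance heuristic (admissible on a 4-connected grid).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]     # entries are (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}

    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                            # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:             # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None
```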
The navigational algorithms are simulated in a variety of artificial environments as well as in real environments. Machine learning is used to learn and adapt to the robot's motion behaviours, which enables the robot to perform movements as intended by the implemented navigational algorithms. A feed-forward artificial neural network, referred to in this thesis as the intelligence engine, was created to predict the power delivered to the motors. The back-propagation algorithm is used to train the network, enabling supervised learning.
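A minimal sketch of the idea behind the intelligence engine follows: a single-hidden-layer feed-forward network trained by back-propagation (gradient descent on a squared error). The layer sizes, activation function, learning rate, and the example input/target pair are placeholders, not the topology or training data used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions: e.g. a few command/sensor features in, one output per motor.
n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Forward pass: input features -> hidden activations -> motor-power outputs."""
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return h, y

def train_step(x, target, lr=0.1):
    """One supervised back-propagation update on a single (input, target) pair."""
    global W1, b1, W2, b2
    h, y = forward(x)
    # Output-layer error term for squared error with a sigmoid output.
    delta_out = (y - target) * y * (1.0 - y)
    # Hidden-layer error term, propagated back through W2.
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * np.outer(h, delta_out)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(x, delta_hid)
    b1 -= lr * delta_hid

# Example: fit a hypothetical mapping from a command vector to two motor powers.
x, target = np.array([0.2, -0.1, 0.5, 0.0]), np.array([0.6, 0.4])
for _ in range(1000):
    train_step(x, target)
print("predicted motor powers:", forward(x)[1])
```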
Similar to human vision, the navigation algorithm relies mainly on image processing to obtain data about the surrounding environment. Just as the data provided by human eyes helps one perceive and understand one's surroundings, a Kinect sensor is used in this thesis to obtain 2-dimensional colour data as well as depth data.
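Purely as an illustration, the snippet below grabs one colour frame and one depth frame through the open-source libfreenect Python bindings; the thesis itself may use a different Kinect interface, so the library choice and frame format here are assumptions.

```python
import freenect   # open-source libfreenect Python bindings (assumed for this sketch)

# Grab one synchronous frame pair from the first Kinect device.
depth, _ = freenect.sync_get_depth()   # 480x640 array of raw 11-bit depth readings
rgb, _ = freenect.sync_get_video()     # 480x640x3 array of 8-bit RGB values

print("depth range (raw units):", depth.min(), depth.max())
print("colour frame shape:", rgb.shape)
```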
A program was implemented to process this arbitrary sequential array of numbers and interpret it in terms of quantifiable values. In turn, the robot is able to identify the target, recognize obstructions, and navigate. All external data are gathered from a single optical sensor. Many different algorithms were implemented and tested to efficiently detect and track a target. The overall idea is to make an artificial robot perceive its surroundings using 3-dimensional image data and intelligently navigate the local environment.
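To make the "arbitrary sequential array of numbers" point concrete, the sketch below shows one simple way a depth frame can be reduced to a per-column obstacle signal. The thresholds and the column-wise reduction are illustrative assumptions, not the detection and tracking algorithms developed in the thesis.

```python
import numpy as np

def nearest_obstacle_per_column(depth_mm, max_range_mm=4000, obstacle_mm=800):
    """Reduce a depth frame (H x W, millimetres) to a per-column obstacle flag.

    Returns (nearest, blocked): the closest valid depth in each image column and
    a boolean mask marking columns whose nearest reading is within obstacle_mm.
    """
    d = depth_mm.astype(float)
    d[(d <= 0) | (d > max_range_mm)] = np.inf   # ignore invalid / out-of-range readings
    nearest = d.min(axis=0)                     # closest point seen in each column
    blocked = nearest < obstacle_mm
    return nearest, blocked

# Example with a synthetic frame: a near object in the right half of the image.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[200:300, 400:500] = 600                   # 0.6 m away -> should be flagged
nearest, blocked = nearest_obstacle_per_column(frame)
print("columns blocked:", blocked.sum())        # 100 columns flagged as obstacles
```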