Show simple item record

dc.contributor.advisor	Liu, Xiaoping
dc.contributor.author	Sivanesu, Ajantharojan
dc.date.accessioned	2018-09-04T17:56:21Z
dc.date.available	2018-09-04T17:56:21Z
dc.date.created	2015
dc.date.issued	2016
dc.identifier.uri	http://knowledgecommons.lakeheadu.ca/handle/2453/4256
dc.description.abstract	In recent years, the pursuit of intelligent, self-operated machines has intensified. The aim is to eliminate the human operator entirely, or to minimize human involvement as much as possible. Machines capable of performing a variety of tasks, rather than just one, are also increasingly common. One division of intelligent robotics is autonomous navigation. Google, Tesla, Honda, and many other large corporations are trying to master this field, since the development of a self-driving vehicle is as profitable as it is difficult. The research presented in this thesis addresses autonomous navigation for mobile robots, primarily but not exclusively in indoor environments. Some of the explored algorithms focus on navigating environments that have already been explored and mapped. A modified A-Star algorithm and a goal-based navigational vector field were tested for how effectively they plan a path from one point to another, and were compared and analyzed in terms of how well the robot avoided obstacles and the length of the path taken. Other algorithms were also developed and tested for navigation without a map. The navigational algorithms were simulated in artificial environments as well as tested in real ones. Machine learning is used to learn and adapt to the robot's motion behaviours, which enables the robot to perform movements as intended by the implemented navigational algorithms. Referred to in this thesis as the intelligence engine, a feed-forward artificial neural network was created to predict power delivery to the motors. The back-propagation algorithm is used alongside the neural network to enable supervised learning. Similar to human vision, the approach relies mainly on image processing to obtain data about the surrounding environment. Just as the data provided by human eyes helps one perceive and understand one's surroundings, a Kinect sensor is used in this thesis to obtain 2-dimensional colour data as well as depth data. A program was implemented to process and interpret this arbitrary sequential array of numbers in terms of quantifiable values. The robot in turn is capable of recognizing targets and obstructions, and of navigating accordingly. All external data are gathered from a single optical sensor. Many different algorithms were implemented and tested to efficiently detect and track a target. The idea is to make an artificial robot perceive its surroundings using 3-dimensional image data and intelligently navigate the local environment.	en_US
dc.language.iso	en_US	en_US
dc.subject	Autonomous navigation	en_US
dc.subject	Mobile robots	en_US
dc.subject	Artificial neural network	en_US
dc.subject	Navigational vector field	en_US
dc.title	Autonomous navigation using image processing	en_US
dc.type	Thesis	en_US
etd.degree.name	Master of Science	en_US
etd.degree.level	Master	en_US
etd.degree.discipline	Engineering : Electrical & Computer	en_US
etd.degree.grantor	Lakehead University	en_US

