Show simple item record

dc.contributor.advisor     Liu, Xiao
dc.contributor.advisor     Liu, Kefu
dc.contributor.author      Pan, Allen Chih Lun
dc.date.accessioned        2014-01-22T16:24:33Z
dc.date.available          2014-01-22T16:24:33Z
dc.date.created            2013
dc.date.issued             2014-01-22
dc.identifier.uri          http://knowledgecommons.lakeheadu.ca/handle/2453/467
dc.description.abstract    Robotic research is costly and tremendously time consuming. This is due to the cost of sensors, motors, computational units, physical construction, and fabrication, as well as the many man-hours spent on algorithm design, software development, debugging, and optimization.

Robotic vision input usually consists of two or more colour cameras used to construct a 3D virtual space. More cameras can be added to enrich the detail of that virtual environment, but the computational complexity grows with each one: additional processing power is needed to run the software, and stitching multiple cameras together into a single sensor becomes more complex. Another way to create the 3D virtual space is to use range-finder sensors, which are usually relatively expensive and still require complex algorithms for calibration and for correlating readings to real-world distances. Sensing of the robot's position can be further aided by accelerometers and gyroscopes.

A significant aspect of robotic design is robot interaction. One type of interaction is verbal exchange, which requires an audio receiver and transmitter on the robot. To achieve acceptable recognition, different types of audio receivers may be implemented and several receivers may need to be deployed; depending on the environment, noise-cancellation hardware and software may also be needed to improve verbal interaction performance. In this thesis, different audio sensors are evaluated and implemented with the Microsoft Speech Platform.

Any robotic research requires a proper control algorithm and logic process, and most of these control algorithms rely heavily on mathematics. High-performance processing power is needed to handle all the raw data in real time, but high-performance computation consumes proportionally more energy. To conserve the robot's battery, many robotics researchers therefore process the raw data remotely. This approach has one major drawback: transmitting the raw data requires a robust communication infrastructure, and without it the robot will suffer failures because of the sheer volume of data being communicated.

I propose a solution to this computation problem: harvesting the computational power of the General Purpose Graphics Processing Unit (GPU) for the complex mathematical processing of raw data. This approach uses many-core parallelism for multithreaded real-time computation, offering a minimum of 10x the floating-point throughput of a traditional Central Processing Unit (CPU). By shifting the computation onto the GPU, everything is computed locally on the robot itself, eliminating the need for any external communication system for remote data processing. While the GPU performs image processing for the robot, the CPU is free to dedicate all of its processing power to the robot's other functions.

Computer vision has been an interesting topic for some time; it applies complex mathematical techniques and algorithms in an attempt to achieve image processing in real time. The Open Source Computer Vision Library (OpenCV), developed by Intel, provides pre-programmed image processing functions for several languages and greatly reduces development time.
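As an illustration of this GPU off-loading idea (not code from the thesis itself), the sketch below keeps the per-pixel work on the GPU with OpenCV's CUDA modules while the CPU stays free for control logic and other tasks. It assumes OpenCV built with CUDA support; the input file name "frame.png" and the particular operations (grayscale conversion and thresholding) are arbitrary placeholders.

    // Minimal sketch: offloading an image-processing step to the GPU with
    // OpenCV's CUDA modules (assumes OpenCV was built with CUDA support).
    // The frame would normally come from the robot's camera; it is loaded
    // from disk here purely for illustration.
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/cudaarithm.hpp>
    #include <iostream>

    int main() {
        cv::Mat frame = cv::imread("frame.png");   // hypothetical input frame
        if (frame.empty()) {
            std::cerr << "no input frame\n";
            return 1;
        }

        cv::cuda::GpuMat d_frame, d_gray, d_mask;
        d_frame.upload(frame);                     // copy the frame into GPU memory

        // The heavy per-pixel work below runs on the GPU, leaving the CPU
        // free for the robot's other functions.
        cv::cuda::cvtColor(d_frame, d_gray, cv::COLOR_BGR2GRAY);
        cv::cuda::threshold(d_gray, d_mask, 128.0, 255.0, cv::THRESH_BINARY);

        cv::Mat mask;
        d_mask.download(mask);                     // copy the result back to the CPU
        std::cout << "processed " << mask.cols << "x" << mask.rows
                  << " frame on the GPU\n";
        return 0;
    }

A real deployment would replace the file read with frames streamed from the sensor and chain more of OpenCV's CUDA primitives, but the upload / process / download pattern stays the same.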
The Microsoft Kinect for XBOX packages all of the sensors mentioned above in a single convenient unit, together with a first-party SDK for software development. The robotic implementation in this thesis uses the Microsoft Kinect for XBOX as the primary sensor, OpenCV for image processing functions, and an NVIDIA GPU to off-load the complex mathematical processing of raw data. The objective of this thesis is to develop and implement a low-cost autonomous robot tank with the Microsoft Kinect for XBOX as the primary sensor, a custom onboard Small Form Factor (SFF) High Performance Computer (HPC) with NVIDIA GPU assistance for the primary computation, and the OpenCV library for image processing functions.   en_US
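To make the intended sensing pipeline concrete, here is a hypothetical sketch of the onboard capture loop, using OpenCV's OpenNI capture backend as a stand-in for the first-party Kinect SDK the thesis actually targets. It assumes OpenCV built with OpenNI support and OpenNI-compatible Kinect drivers; the frame count and console output are placeholders.

    // Hypothetical sketch: grabbing paired colour and depth frames from the
    // Kinect through OpenCV's OpenNI capture backend. The thesis uses the
    // first-party Kinect SDK; this stand-in only illustrates the sensing loop
    // running on the onboard SFF computer.
    #include <opencv2/core.hpp>
    #include <opencv2/videoio.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture kinect(cv::CAP_OPENNI);   // open the depth sensor
        if (!kinect.isOpened()) {
            std::cerr << "Kinect/OpenNI device not found\n";
            return 1;
        }

        cv::Mat depthMap, bgrImage;
        for (int frame = 0; frame < 100; ++frame) {
            if (!kinect.grab()) break;                               // capture one frame pair
            kinect.retrieve(depthMap, cv::CAP_OPENNI_DEPTH_MAP);     // 16-bit depth in mm
            kinect.retrieve(bgrImage, cv::CAP_OPENNI_BGR_IMAGE);     // 8-bit colour image

            // ... hand depthMap/bgrImage to the GPU pipeline for processing ...
            std::cout << "frame " << frame << ": depth "
                      << depthMap.cols << "x" << depthMap.rows << "\n";
        }
        return 0;
    }

Keeping capture, GPU processing, and control logic in one onboard process is what removes the dependence on an off-board computer and its communication link.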
dc.language.iso            en_US                                            en_US
dc.subject                 Robotic research                                 en_US
dc.subject                 Robotic vision input                             en_US
dc.subject                 Robot interaction                                en_US
dc.subject                 Open source computer vision library (OpenCV)     en_US
dc.subject                 Robot tank software design                       en_US
dc.title                   Mobile robot tank with GPU assistance            en_US
dc.type                    Thesis                                           en_US
etd.degree.name            M.Sc.                                            en_US
etd.degree.level           Master                                           en_US
etd.degree.discipline      Engineering : Electrical & Computer              en_US
etd.degree.grantor         Lakehead University                              en_US

