
dc.contributor.advisor: Zhang, Hao
dc.contributor.author: Zhang, Ziling
dc.contributor.committeemember: Petruska, Andrew J.
dc.contributor.committeemember: Williams, Thomas
dc.date.accessioned: 2020-02-03T11:28:59Z
dc.date.available: 2020-02-03T11:28:59Z
dc.date.submitted: 2019
dc.description: Includes bibliographical references.
dc.description: 2019 Fall
dc.description.abstract: In recent years, advances in autonomous robotics have begun to transform how we work and live. Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) are helping us deliver goods, conduct surveys of construction sites, and perform search and rescue alongside first responders. However, designing robots with this level of autonomy is often challenging due to the complexity of real-world environments. Multi-sensory perception is a critical component for addressing this challenge and developing robust autonomous robotic systems. By combining inputs from multiple sensors, a system can eliminate single points of failure caused by sensor degradation and generate new insights, making better decisions by integrating information from different sensor modalities. Recent breakthroughs in machine learning, especially Deep Neural Network (DNN) based deep learning perception pipelines, have proven effective in a number of robot perception tasks. However, the significant computational cost of Deep Neural Networks prohibits their deployment on robot systems with limited power budgets and real-time performance requirements. It is important to bridge this gap through optimization so that state-of-the-art machine learning models can be deployed on real-world robot systems. This work investigates the viability of developing robust multi-sensory robot perception systems enhanced by machine learning models across three chapters. First, I explore the effectiveness of DNN perception pipelines in object detection and semantic segmentation tasks, then experiment with various model optimization techniques to improve the efficiency of these perception models, achieving real-time performance on a robot system with a limited power budget. Next, I present the design and implementation of a thermal sensing robot system that fuses a thermal camera with an RGB-Depth camera to automatically track occupants in a building and measure their forehead temperatures, providing fine-grained information for better decision making in intelligent Air Conditioning (AC) systems. Finally, I explore camera pose estimation using rectangular-to-spherical image matching, enabling a robot to quickly capture a scene with a spherical camera and allowing other robots to localize themselves within the scene by matching rectangular sensor images to the spherical image.
dc.identifier: Zhang_mines_0052N_11880.pdf
dc.identifier: T 8873
dc.identifier.uri: https://hdl.handle.net/11124/174011
dc.language: English
dc.publisher: Colorado School of Mines. Arthur Lakes Library
dc.rights: Copyright of the original work is retained by the author.
dc.subject: Machine learning
dc.subject: Sensor fusion
dc.subject: Multi-sensory perception
dc.subject: Edge computing
dc.title: Applied machine learning for multi-sensory robot perception
dc.type: Thesis
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado School of Mines
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.S.)

