Title:
Scene flow for autonomous navigation

dc.contributor.advisor Vela, Patricio A.
dc.contributor.advisor AlRegib, Ghassan
dc.contributor.advisor Davenport, Mark A.
dc.contributor.author Dedhia, Vaibhav
dc.contributor.department Electrical and Computer Engineering
dc.date.accessioned 2018-05-31T18:17:25Z
dc.date.available 2018-05-31T18:17:25Z
dc.date.created 2018-05
dc.date.issued 2018-04-30
dc.date.submitted May 2018
dc.date.updated 2018-05-31T18:17:25Z
dc.description.abstract Today, there are several paradigms for vision-based autonomous navigation: mediated perception approaches that parse an entire scene to make driving decisions, and direct perception approaches that map an input image to a small number of key perceptual indicators directly relating to the affordance of the road/traffic state for driving. In addition, deep learning models trained for specific tasks, such as obstacle classification and drivable-space detection, have been used as modules for autonomous vehicle navigation. Recent applications of deep learning to navigation have generated end-to-end navigation solutions whereby visual sensor input is mapped to control signals or to motion primitives. It is accepted that these solutions cannot provide the same level of performance as a global planner. However, it is less clear how such end-to-end systems should be integrated into a full navigation pipeline. We evaluate the typical end-to-end solution within a full navigation pipeline in order to expose its weaknesses. Doing so illuminates how to better integrate deep learning methods into the navigation pipeline. For this thesis, we evaluate global path planning using sampling-based path planning algorithms. Global planners assume that the world is static and that the locations of obstacles are known. In autonomous navigation scenarios, however, this assumption does not hold. A need arises to detect the obstacles in the scene, localize them, and then make appropriate changes to the navigation decisions. We train convolutional neural network (CNN) based deep networks for object recognition that are very effective at detecting objects in the scene, such as vehicles and pedestrians. We also propose methods to track the objects in the scene in three dimensions, thereby localizing them.
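The abstract refers to sampling-based global planners that assume a static world with known obstacle locations. As a purely illustrative sketch (not code from the thesis), a minimal 2D rapidly-exploring random tree (RRT) under that static-world assumption might look like the following; all names, bounds, and parameters here are hypothetical:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, max_iters=5000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT in a [0, 10] x [0, 10] workspace.

    `obstacles` is a list of (cx, cy, radius) circles, assumed static and
    fully known -- the core assumption a global planner makes.
    Returns a list of waypoints from start to goal, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(max_iters):
        # Sample a random point, occasionally biasing toward the goal.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        # Find the nearest node already in the tree.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Extend a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue  # static obstacle in the way; reject this extension
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Reached the goal region: walk parents back to the root.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

path = rrt(start=(0.5, 0.5), goal=(9.0, 9.0), obstacles=[(5.0, 5.0, 1.5)])
```

The sketch makes the thesis's point concrete: every collision check consults a fixed obstacle list, so once obstacles move or were never mapped, the planner needs the perception modules (detection, 3D tracking) the abstract describes to update that list before replanning.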
dc.description.degree M.S.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/59948
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Autonomous navigation
dc.subject CNNs
dc.subject Computer vision
dc.title Scene flow for autonomous navigation
dc.type Text
dc.type.genre Thesis
dspace.entity.type Publication
local.contributor.advisor Davenport, Mark A.
local.contributor.advisor AlRegib, Ghassan
local.contributor.corporatename School of Electrical and Computer Engineering
local.contributor.corporatename College of Engineering
relation.isAdvisorOfPublication 1162b098-768c-4269-839c-db771101c01b
relation.isAdvisorOfPublication 7942fed2-1bb6-41b8-80fd-4134f6c15d8f
relation.isOrgUnitOfPublication 5b7adef2-447c-4270-b9fc-846bd76f80f2
relation.isOrgUnitOfPublication 7c022d60-21d5-497c-b552-95e489a06569
thesis.degree.level Masters
Files
Original bundle
Name: DEDHIA-THESIS-2018.pdf
Size: 9.38 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.87 KB
Format: Plain Text