ROBOT VISION FOR THE VISUALLY IMPAIRED by Vivek Pradeep A Dissertation Presented to the FACULTY OF USC GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (BIOMEDICAL ENGINEERING) May 2011 Copyright 2011 Vivek Pradeep
Object Description
Title | Robot vision for the visually impaired |
Author | Pradeep, Vivek |
Author email | vivek.pradeep@gmail.com; vpradeep@microsoft.com |
Degree | Doctor of Philosophy |
Document type | Dissertation |
Degree program | Biomedical Engineering |
School | Viterbi School of Engineering |
Date defended/completed | 2010-11-23 |
Date submitted | 2011 |
Restricted until | 11 Jan. 2012 |
Date published | 2012-01-11 |
Advisor (committee chair) | Weiland, James D. |
Advisor (committee member) | Medioni, Gerard G.; Humayun, Mark S.; Ragusa, Gisele; Tjan, Bosco S. |
Abstract | Vision is one of the primary sensory modalities for humans, assisting in several life-sustaining and life-enhancing tasks, including actions such as obstacle avoidance and path planning necessary for independent locomotion. Visual impairment has a debilitating impact on such independence, and the visually impaired are often forced to restrict their movements to familiar locations or to employ assistive devices such as the white cane. More recently, various electronic travel aids have been proposed that combine electronic sensor configurations with the mechanism of sensory substitution to provide relevant information - such as obstacle locations and body position - via audio or tactile cues. By providing higher information bandwidth than the white cane, and at a greater range, these aids are hypothesized to improve the independent mobility performance of the visually impaired. The challenge is to extract and deliver information in a manner that keeps cognitive load at a level a human user can interpret in real time.
This dissertation presents a novel mobility aid for the visually impaired consisting of only a pair of cameras as input sensors and a tactile vest to deliver navigation cues. By adopting a head-mounted camera design, the system creates an implicit interaction scheme in which scene interpretation is done in a context-driven manner, based on the head rotations and body movements of the user. Novel computer vision algorithms are designed and implemented to build a rich 3D map of the environment, estimate the current position and motion of the user, and detect obstacles in the vicinity. A multi-threaded, factored simultaneous localization and mapping framework ties the different software modules together to interpret the scene accurately and in real time. The system always maintains a safe path for traversal through the current map, and tactile cues are generated to keep the person on this path, delivered only when deviations are detected. With this strategy, the end user need only focus on making incremental adjustments to the direction of travel.
This dissertation also presents one of the few computer-vision-based mobility aids that have been tested with visually impaired subjects. Standard techniques employed in the assessment of mobility for people with vision loss were used to quantify performance through an obstacle course. Experimental evidence demonstrates that the number of contacts with objects in the path is reduced with the proposed system. Qualitatively, subjects with the device also follow safer paths than white cane users in terms of proximity to obstacles. However, the former group takes longer to complete the course, primarily because of certain inadequacies in the system design that had not been anticipated. Solutions for overcoming these problems are discussed in depth toward the end of this thesis.
The work presented here makes several novel contributions to the computer vision community and provides insight into the mobility performance of the visually impaired with and without the assistance of a sophisticated travel aid. This research can potentially improve the quality of life of those with severe visual defects and also translate to the development of autonomous robotic systems. |
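The deviation-triggered cueing strategy described in the abstract - stay silent while the user is on the safe path, and issue a left/right tactile cue only when the heading deviates past a tolerance - can be illustrated with a minimal sketch. The function name, the heading-angle interface, and the 15-degree threshold are illustrative assumptions, not details taken from the dissertation:

```python
def deviation_cue(heading_deg, path_heading_deg, threshold_deg=15.0):
    """Return a tactile cue ('left', 'right', or None) only when the user's
    heading deviates from the planned safe path by more than the threshold.

    All names and the threshold value are illustrative, not from the thesis.
    """
    # Signed angular difference, wrapped into [-180, 180) degrees
    diff = (path_heading_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= threshold_deg:
        return None  # on path: no cue, keeping cognitive load low
    # Positive difference means the path lies to the user's left
    return "left" if diff > 0 else "right"
```

Suppressing cues inside the tolerance band mirrors the design goal stated in the abstract: the user receives feedback only for incremental course corrections, not a continuous stream of information.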
Keyword | simultaneous localization and mapping; SLAM; stereo; structure from motion; tracking; visual prosthetics; visually impaired; mobility aid; obstacle avoidance; path planning |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m3611 |
Contributing entity | University of Southern California |
Rights | Pradeep, Vivek |
Repository name | Libraries, University of Southern California |
Repository address | Los Angeles, California |
Repository email | cisadmin@lib.usc.edu |
Filename | etd-Pradeep-4239 |
Archival file | uscthesesreloadpub_Volume17/etd-Pradeep-4239.pdf |