USER-INTERFACE CONSIDERATIONS FOR MOBILITY FEEDBACK IN A WEARABLE VISUAL AID

by

Aminat A. Adebiyi

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (BIOMEDICAL ENGINEERING)

December 2016

Copyright 2016 Aminat A. Adebiyi

Epigraph

To get lost is to learn the way.
~African proverb

Acknowledgments

The journey to my Ph.D. has undoubtedly been a marathon, not a sprint. And like any tremendous undertaking, I could not have successfully completed this exposition without the help and support of my 'village'.

I would like to thank my adviser, Dr. James D. Weiland, whose dignity and humility were a great example in my quest to become an academic researcher. I shall be eternally grateful to him for teaching me balance and helping me discover my intellectual voice, and ultimately my place in the world. I would like to thank my committee - Drs. Humayun, D'Argenio, Powers and Ragusa - for their guidance and insight over the years. I would also like to thank Dr. Greg Goodrich for his kind tutelage throughout our time working together.

I would like to thank my family, the Lukman Adebiyis, which includes Alhaji Lukman Adeyemi himself, Alhaja Mobolanle Ayisat, Alhaja Omololami Ajani, Olawale Adebiyi, Bilikiss Adebiyi-Abiola and my MVP Ibrahim Adebiyi. They have been my backbone during this journey, and I am lucky to count myself among their clan.

I would not have a thesis without my human subjects, to whom I owe many thanks. They have made the journey fun and worthwhile, always giving me fuel to press on. I would also like to thank my helpers and friends, among them Dr. Olabisi Olatokunbo, Nicole Murray-Bruce, Nkechi Ekwunife, Esq., Paige Sorrentino and my summer student interns. Many thanks to Yvette Araujo, Judy Hill, and the staff of the Braille Institute for their help, and for allowing us generous use of their facilities.

I would like to thank my colleagues past and present: Dr. Younghoon Lee, the 'Fab Five' (Karthik, Boshuo, Steven and Nii), Dr. Andrew Weitz, Yao-Chaun Chang, and Dr. Kiran Nimmagadda. I would also like to thank Doris Lee, Ellis Troy, Diana Sabogal, Mischal Diasanta, Chris Noll, Dr. Joe Coccoza, as well as the staff of the USC Biomedical Engineering Department and the USC Roski Eye Institute. I owe much gratitude to Mort Arditti, for being a great technical mentor and nurturing my love of electronics.

Above all, I would be remiss if I did not thank Almighty Allah, for His mercies and guidance throughout this time.

Table of Contents

Epigraph
Acknowledgments
List of Tables
List of Figures
Abstract

Chapter 1: Introduction
1.1 The Neurophysiology of Visual Perception
1.2 Perceptual Input and Self-Motion
1.2.1 Path Integration
1.2.2 Configural Encoding
1.2.3 Spatial Orientation
1.2.4 Perceptual and Motor Learning
1.3 Cognitive Load
1.4 Visual Impairment Interventions
1.4.1 Background and Motivation
1.4.2 Diseases that Cause Vision Loss
1.4.3 Classification of Low Vision and Effect on Quality of Life
1.4.4 Desired Goals of the Low-Vision Population
1.4.5 Mobility Aids
1.4.6 Multimodal Sensory Feedback for the Visually Impaired
1.5 Thesis Overview

Chapter 2: Assessing Optimal Modalities for Mobility Feedback
2.1 Introduction
2.2 Audible Mobility Feedback System
2.3 Vibrotactile Mobility Feedback System
2.4 Test Subject Demographics
2.5 Subject Training
2.6 Subject Testing
2.7 System Usability Assessment
2.8 Results
2.8.1 Audible Mobility Feedback System
2.8.2 Vibrotactile Mobility Feedback System and Comparison
2.8.3 Discussion
2.9 Outdoor Testing

Chapter 3: The Effect of Mobility Feedback on Cognitive Load
3.1 The Blind Office Clerk
3.2 Background and Motivation
3.3 Navigation Skill Profile
3.3.1 Experimental Design
3.3.2 Results
3.3.3 Relative Access Measure (RAM)
3.4 Effect of Mobility Feedback on Cognitive Load
3.4.1 Dual-Task Methodology
3.4.2 NASA Task Load Index (NASA-TLX)
3.4.3 Experimental Design
3.4.4 Results

Chapter 4: An Adaptive Real-Time Control Algorithm for the WVA
4.1 The Issue
4.2 The Experimental Setup - A Heuristic Approach
4.2.1 The System Diagram
4.2.2 The Wireless Tactile Cuing Vest - The "SmartVest"
4.2.3 The Algorithm Scheme
4.2.4 Lookup-Table Results
4.3 The Control System - An Analysis
4.3.1 System Identification
4.3.2 Results

Chapter 5: Conclusions and Future Work
5.1 Recommendations for the Wearable Visual Aid
5.2 Future Experiments

Appendix A: The Miniguide Study
A.1 Introduction
A.2 Methods
A.3 Results

Appendix B: Selected Wechsler Intelligence Achievement Test Questions

References

List of Tables

1.1 Classifications of Methods of Measuring Cognitive Load Based on Objectivity and Causal Relationship
2.1 Test Subject Demographics for Mobility Feedback Study
2.2 Mobility Feedback Results for the aMFS Prototype
2.3 Summary of SUS Scores of Mobility Prototypes by Subject
2.4 Compliance and Reaction Time Grouped by Command Type
3.1 Test Subject Demographics for Navigation Baseline Study
3.2 Subject Designation by O&M Skill and Cognitive Baseline
3.3 Subject Grouping into Skill Level
3.4 Subject Pairing for Dual-Task Experiments
3.5 Navigation Speed Summary by Group
3.6 Cognitive Load Summary by Group
3.7 Reaction Time Summary by Group
3.8 NASA-TLX Results by Group
4.1 Test Subject Demographics for SmartVest Study
A.1 Mobility Results for the Miniguide Study

List of Figures

1.1 Pathway of visual information in the brain
1.2 Eye and cones
1.3 BrainPort device
1.4 vOICe device
1.5 Miniguide Ultrasound Mobility Aid
1.6 OrCam Pointing System
1.7 Sonic Pathfinder
1.8 The NavBelt Navigation System
1.9 The Wearable Visual Aid
2.1 Front view of custom Android application for the aMFS prototype
2.2 vMFS system prototype
2.3 Researcher testing aMFS during an indoor testing session
2.4 Floorplan for indoor testing session
2.5 aMFS prototype used in an outdoor testing session
2.6 Within-groups comparison by modality type
2.7 Heatmap showing subjects navigating with their cane only
2.8 Heatmap representing subject travel with the aMFS prototype
2.9 Heatmap representing subject travel with the vMFS prototype
2.10 Outdoor route navigated in long-distance trials
2.11 Subject crossing street during outdoor aMFS trial
3.1 Background questions to establish baseline of subject habits
3.2 Baseline tasks for Navigation Skill Profile
3.3 Mean time to complete baseline tasks by group classification
3.4 Relative Access Measure by group classification
3.5 NASA-TLX weightings of elements affecting workload perception
3.6 Layout of testing space for treasure hunt
3.7 Testing protocol for dual-task phase
4.1 Path deviation as a function of speed
4.2 System diagram of control algorithm
4.3 Wireless Tactile Cuing Vest (SmartVest)
4.4 Complex left turn navigated using the SmartVest
4.5 Algorithm scheme for the SmartVest
4.6 Control system diagram representing the algorithm scheme
4.7 Motion trajectories of a young woman navigating with the SmartVest
4.8 Motion trajectories of a young man navigating with the SmartVest
4.9 Success rate of the younger male group as a function of tolerance setting
4.10 Navigation success rate of the younger female group as a function of tolerance setting
4.11 Navigation success rate of the older male group as a function of tolerance setting
4.12 Navigation success rate of the older female group as a function of tolerance setting
4.13 Navigation success rate as a function of group walking speed at a 10° tolerance
4.14 Navigation success rate as a function of group walking speed at a 15° tolerance
4.15 Navigation success rate as a function of group walking speed at a 20° tolerance
4.16 Navigation success rate as a function of group walking speed at a 25° tolerance
4.17 Navigation success rate as a function of group walking speed at a 30° tolerance
4.18 Navigation success rate as a function of group walking speed at a 35° tolerance
4.19 Navigation success rate as a function of group walking speed at a 40° tolerance
4.20 Navigation success rate as a function of group walking speed at a 50° tolerance
4.21 Hammerstein-Wiener model
4.22 Normative flow of a successful subject trial
4.23 Finite impulse response
4.24 FIR model - step response
4.25 Bode plot of FIR model
4.26 Nyquist plot of FIR model
4.27 State-space model - step response
4.28 SS model - impulse response
4.29 SS model - Bode plot
4.30 SS model - Nyquist plot
4.31 HW model - input nonlinearity block
4.32 HW model - linear block impulse response
4.33 HW model - linear block step response
4.34 HW model - output nonlinearity block
4.35 Comparison of model fit to validation data
A.1 Miniguide Ultrasound Mobility Aid
A.2 The Wearable Visual Aid
A.3 Layout of obstacle course
A.4 PPWS by trial (Miniguide vs Cane)
A.5 PPWS by trial (Miniguide vs WVA)
A.6 PPWS by trial (WVA vs Cane)

Abstract

Mobility-related deficits triggered by vision loss make it difficult for the blind to accomplish everyday tasks. This in turn affects quality of life, which can be alleviated by mobility aids. Enter the Wearable Visual Aid, a device our group is developing to improve blind mobility. Currently, the design of mobility aids leads to device abandonment due to a lack of user-experience research. Because the device is a safety-critical, closed-loop navigation system, it is also important that subjects understand and execute output cues effectively at the time they are given; thus, successful implementation of a wearable visual aid will require a better understanding of how blind people respond to non-visual cues for mobility.

Through the assessment of feedback modalities using prototypes, we determined that the optimal presentation of mobility cues was a combination of audible and vibrotactile modalities, which presented quantitative and qualitative advantages over navigation with the cane alone. An adaptive real-time control algorithm was also built to give commands that enable users to replicate normative pedestrian flow. Finally, the cognitive load related to responding to electronically delivered mobility cues was investigated, to promote the automaticity of using the device, and therefore its usability for blind navigators.

Chapter 1: Introduction

Man's survival has long depended on his ability to react appropriately to external stimuli - be it the evasion of predators or the pursuit of prey - through locomotion. This goal-directed behavior is based mainly on the input from four of the five senses, with vision being arguably the most important sensory input for voluntary movement. Although vision is valuable, successful mobility is achieved through a combination of visual and nonvisual strategies that include path integration and cognitive mapping. In the absence of vision, compensatory mechanisms affect nonvisual navigation procedures, which in turn place additional attentional demands on visually impaired individuals. In this chapter (Sections 1.1-1.4), we focus on visual perception and its role in movement, as well as the resulting effects of its loss on human mobility.
1.1 The Neurophysiology of Visual Perception

Vision, or the act of seeing, occurs through the transduction of visible light contained in an image of one's environment into neuronal signals, the brain's representation of said image. Light is focused by the lens onto the photoreceptors housed in the retina. Neural pathways that lead from the ganglion cells relay visual information through the Lateral Geniculate Nucleus to the Primary Visual Cortex (Kolb et al., 2012). The primary visual cortex (V1) is located in the occipital lobe and is responsible for processing the input from the ganglion cells into the image a person sees. Major transformations of visual information are thought to take place in V1. The spatiotemporal patterns encoded by the ganglion cells are sorted through a combination of on/off pathways that facilitates edge detection. Topographic mapping and cortical magnification are also thought to take place in this area. The information from this cortex is then passed into other areas of the visual cortex through pathways that are not yet fully understood (Fig. 1.1).

Figure 1.1: Pathway of visual information in the brain upon seeing an object.

Vision is a combination of peripheral and central vision. Peripheral vision allows us to see objects in the perimeter of the visual field, while central vision allows us to focus on objects in the center. Photoreceptors are unevenly distributed along the retina, with rods concentrated in the periphery and cones in the center (the fovea). This uneven distribution contributes to the dynamic capture of images (Kolb et al., 2012). Rods outnumber cones 20 to 1 and are responsible for peripheral vision. They are more sensitive to light, but provide lower-acuity vision in dim light. Cones are responsible for central vision and are located in the fovea. They are less sensitive to light but provide higher acuity and color vision. Due to dark and light adaptation, humans are able to see at different levels of light and darkness. The human visual field spans 110° and 200° in the vertical and horizontal planes, respectively (Fig. 1.2).

Figure 1.2: Cross section of the eye.

The act of gazing and directing the eyes has also been studied as it impacts mobility and navigation. Patla and Vickers (1997) studied spatio-temporal gaze behavior patterns in normal participants wearing a mobility eye tracker as they approached and stepped over obstacles of varying height in the travel path. They found that while approaching an obstacle, participants fixated on the obstacle for 20% of the travel time. They also found that participants did not fixate on obstacles as they were stepping over them, but did in the planning steps before. This strongly suggests that peripheral vision is essential to intuitive and efficient navigation, as this obstacle information is contained in the periphery while subjects are in the planning phase, allowing subjects to maintain constant eye contact with their goal. This finding was further confirmed by Turano et al. (2001) when studying direction of gaze in normal and RP subjects while navigating a simple route. They found that people with RP fixated over a larger area in the environment, and at different features, than people with normal vision. People with normal vision directed their gaze primarily at the goal, whereas people with RP directed their gaze at edge-lines or boundaries between walls. They also found a significant negative correlation between the horizontal visual field extent of the RP subjects and the proportion of downward-directed fixations.
This suggests that peripheral vision aids in planning routes, while central vision maintains undisrupted contact with the goal.

1.2 Perceptual Input and Self-Motion

In order to understand the role of perception in mobility, one must first understand the terminology associated with it in the field of orientation and mobility (O&M). Perception is one's interpretation of one's environment based on a sensory input. Perception influences action and vice versa (Gibson, 1969), and this relationship is foundational in the field of orientation and mobility. Orientation, as a term, is defined as the knowledge of one's distance and direction relative to objects observed or remembered in the surroundings, and keeping track of these self-to-object spatial relationships as they change during locomotion (Hill and Ponder, 1976). Mobility is defined as the act of safely and effectively moving from one's present position to a desired position in another part of the environment (Wiener et al., 2010). Safe and effective self-motion is derived from the intersection of visual, audible, olfactory and tactile inputs, also known as intersensory or multisensory integration. This results in the concept of environmental flow, which enables an individual to keep track of self-to-object and object-to-object spatial relationships during locomotion.

By itself, the visual sense is only sufficient for guided locomotion when the desired stop point is within one's visual reach (a single frame of view). As this is often not the case, other navigation strategies are adopted by humans to move around in large-scale spatial areas. They are described in the following sections (1.2.1-1.2.4).

1.2.1 Path Integration

Path integration - a Darwinian theory also known as dead reckoning - is a semi-intuitive strategy of nonvisual navigation whereby one updates one's current position based on idiothetic information about velocity or acceleration. It is differentiated from piloting, which uses allothetic landmarks and references as waypoints to orient oneself along the way. According to Loomis et al. (1993), this mode of nonvisual navigation is vulnerable to an inaccurate representation of the environment, fallible updating, and faulty execution of the desired response to a known target direction and location. The errors derived from this method of nonvisual navigation accumulate, steadily growing with each step along the execution pathway. The information-processing demands of this mode weigh less than those of configural encoding (described in Section 1.2.2), as decision making occurs on the spot, rather than the path being computed from memory.
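The claim that path-integration errors accumulate with each step can be made concrete with a minimal dead-reckoning model. The notation below is ours, introduced only for illustration; it is not drawn from Loomis et al.:

```latex
% Illustrative dead-reckoning update: the position estimate advances by a
% noisy idiothetic velocity sample (notation ours, not from the cited works).
\hat{\mathbf{x}}_{t+1} = \hat{\mathbf{x}}_t + \left(\mathbf{v}_t + \boldsymbol{\epsilon}_t\right)\Delta t
% After T steps, the position error is the accumulated per-step noise:
\hat{\mathbf{x}}_T - \mathbf{x}_T = \sum_{t=0}^{T-1} \boldsymbol{\epsilon}_t\,\Delta t
```

If the per-step errors are independent with zero mean and finite variance, the variance of the position error grows with the number of steps taken, which is exactly the drift described above and why dead reckoning benefits from occasional landmark (piloting) corrections.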
1.2.2 Configural Encoding

Configural encoding is another method of nonvisual navigation, detailed by Klatzky et al. (2002), in which one's path of travel is constructed from an encoded representation of the segments (legs and turns) of the outbound path. From this, computational processes decode one's orientation and subsequent path of travel. There are higher cognitive demands associated with this method, in addition to more computation, faulty encoding, and all the errors of path integration. It stands to reason that, as a result, more errors will be integrated along the way. Due to the higher attentional demands associated with this method, it would be interesting to study whether the visually impaired use it less.

1.2.3 Spatial Orientation

Spatial orientation is described by experts in the field of orientation and mobility as one's knowledge of one's surroundings and of the distance and direction of objects from memory. It is also the ability to track object-to-object and self-to-object relationships and how they change during movement. In the absence of vision, one's path of travel can be constructed and executed using this strategy (Golledge, 1999).

1.2.4 Perceptual and Motor Learning

In the instance of vision loss, the need for nonvisual navigation strategies is enhanced. In addition to those outlined above, strategies like perceptual and motor learning are either self-taught or provided by Orientation and Mobility (O&M) specialists to extend the field of perception limited by the absence of ocular sensory input. They are routinely employed in teaching the visually impaired how to use their white cane, as well as in improving their orientation and mobility skills.

Perceptual learning is often described as the education of attention that enables perceivers to notice the features of a situation which are relevant to their goals while discarding irrelevant features (Gibson, 1969; Goldstone, 1998). In orientation and mobility, where successful, one is considered a skillful perceiver, and reaps the benefits of fewer demands on one's attention and increased efficiency. In addition, a skillful perceiver is taught to narrow their focus to relevant features and situations they specify, and to pay attention to distant stimuli. This is different from an unskillful perceiver, who may notice both relevant and irrelevant features of sensory stimulation without understanding their meaning, and may focus their attention on proximal stimuli, which greatly limits their field of perception and ability to navigate.

Motor learning refers to the acquisition of specific and complex patterns of movement through practice and experience. In particular, the concepts of degrees of freedom and the development of automaticity are relevant to orientation and mobility. A degree of freedom is a dimension in which movement is free to vary, whereas automaticity refers to performing a motor skill automatically - as if it were second nature. When instructing students to use the white cane effectively, O&M specialists teach techniques that help students travel consistently and minimize the variability of their limbs' movements.

These methods are used by the visually impaired to obtain and decode information about walking surfaces and objects, which includes detecting changes in elevation, as well as identifying and understanding stationary and moving objects. With these tools, the subject is able to travel consistently and effectively.

1.3 Cognitive Load

Cognitive load is a term in cognitive psychology which describes the load related to the control and use of working memory. Working memory comprises the structures and processes for temporarily manipulating and storing information for immediate tasks. This concept is particularly apparent in learning and instruction. Due to the demands on the attention of the visually impaired in learning new skills, the concepts of cognitive load and working memory are important in optimally imparting the strategies utilized in orientation and mobility.

According to theorists like Sweller, there are three types of cognitive load, namely intrinsic, germane and extraneous cognitive load. Intrinsic cognitive load refers to the amount of attention needed to process the material; it is directly related to the inherent structure and complexity of the material and cannot be manipulated or affected by the instructor (Sweller, 1994). Extraneous cognitive load refers to the load due to the mode in which the material is presented to the user.
Germane cognitive load is the amount of working memory needed through the efforts exerted by the individual to understand the instructional material. Therefore, the goal is to minimize extraneous cognitive load while optimizing germane cognitive load.

The types of cognitive load are additive in nature, such that the sum of these elements cannot surpass the total working memory available. Also, the relationship between the three types of load is asymmetric, in the sense that intrinsic cognitive load is a base attribute, which is assigned before resources are allocated to germane and extraneous load. Germane and extraneous cognitive load affect each other inversely, such that a reduction in extraneous cognitive load frees capacity for an increase in germane cognitive load, and vice versa (Brunken et al., 2003). Germane cognitive load is thought to result in the automation of schema, which improves learning and results in the reduction of intrinsic load. According to Paas et al. (2003), the overall learning process is cyclical, where a reduction in intrinsic load frees more resources for learning more complex schema, reducing intrinsic load even more, and so on.
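These budget-style relationships can be summarized compactly. The inequality below is a restatement of the additivity constraint in our own notation; it is not a formula taken from Sweller or Brunken et al.:

```latex
% Additivity of the three load types against one working-memory budget
% (illustrative notation, ours):
L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}} \le C_{\text{WM}}
% Intrinsic load is fixed by the material; at a fixed intrinsic load,
% capacity freed by lowering extraneous load can be reallocated to germane load.
```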
Mayer (2001) introduces the concept of the modality effect, which denotes how the principles of cognitive load can influence instructional learning in a multimedia setting. The principle states that, in designing materials, formats with visual and auditory modalities presented simultaneously produce the best learning outcomes. This is an interesting phenomenon considering that the visually impaired lack the visual sensory modality, and therefore cannot experience optimal learning conditions according to Mayer. Given the suboptimal learning conditions the visually impaired are faced with, a format of presentation of the material must be designed to adjust for these limitations and produce similarly favorable outcomes.

As working memory is an abstract property, measuring it has taken many forms that, so far, center on the intersection of causal relation and objectivity (Table 1.1). The dimension of causal relation refers to the relationship of the measure to the attribute of interest, whereas objectivity refers to the perspective from which the data is collected. This has resulted in four classifications, namely:

Indirect, subjective measures: Developed by Paas et al. (2003) from earlier works of Van Merriënboer et al. (2002). The measure is from the perspective of the user, where the attribute quantified is indirect to that of interest, e.g. self-rating of invested mental effort.

Indirect, objective measures (most common): Independent of user bias; the property measured is indirectly related to the relevant property, e.g. knowledge acquisition scores.

Direct, subjective measures: Used by Kalyuga et al. (1999); measured from the perspective of the user, where the attribute is directly related to cognitive load, e.g. rating of the difficulty of materials. This method tests the extent of learning accomplished; however, it should be noted that outcomes may be affected by the presentation of the material, which may itself contribute to cognitive load and bias the measurements made.

Direct, objective measures: A promising new mode of measurement. Not self-reported, this is a direct measurement of the cognitive load imposed by a task, e.g. fMRI techniques and dual-task methodology.

Table 1.1: Methods of Measuring Cognitive Load (adapted from Paas et al., 2003).

              Indirect Measures                        Direct Measures
Subjective    Self-reported invested mental effort;    Self-reported difficulty of materials
              self-reported stress level
Objective     Physiological measures; behavioral       Brain activity measures (e.g. fMRI);
              measures; learning outcome measures      dual-task performance

1.4 Visual Impairment Interventions

1.4.1 Background and Motivation

Visual impairment is a phenomenon that grows increasingly prevalent among the worldwide population. According to the World Health Organization (2014), there are over 285 million people who are visually impaired worldwide, and of this population, 39 million are blind. Congdon et al. (2004) projected that there will be a 70% increase in cases of blindness in the United States by the year 2020, owing to the growth of the aging population.

In the United States, blindness or low vision affects approximately 1 in 28 Americans older than 40 years, as found by Congdon et al. (2004); however, the specific causes of visual impairment and blindness vary greatly by race/ethnicity. Specifically, degenerative diseases like retinitis pigmentosa (RP) and age-related macular degeneration (AMD) are of importance due to their commonality in the elderly. "RP represents one of the most common causes of blindness or severe low-vision in people from 20 to 60 years old." AMD is recognized as the leading cause of blindness in the elderly (Congdon et al., 2004). Non-degenerative causes of visual impairment, like visual dysfunction due to Traumatic Brain Injury, are becoming increasingly important due to the large number of cases seen every year, particularly with the increase in the veteran population. These diseases are described in the next section (1.4.2).

1.4.2 Diseases that Cause Vision Loss

Retinitis pigmentosa is characterized as the loss of peripheral vision stemming from the degeneration of photoreceptors. Current interventions include the retinal prosthesis, which consists of an implant with artificial electrodes providing electrical signals to replace those generated by photoreceptors. These have been tried both epiretinally and subretinally.

AMD is a progressive eye condition characterized by loss of central vision due to the degeneration of the macula of the eye. There are two types of AMD: "wet"/neovascular and "dry"/atrophic AMD. There are no treatments for dry AMD, but treatments for wet AMD are focused on either sealing off leaking blood vessels (laser and light-sensitive drugs) and/or preventing the blood vessels from growing back (anti-angiogenic therapies).

Traumatic Brain Injury (TBI) is incurred by approximately 70,000-90,000 individuals, resulting in long-term substantial loss of physical and mental functioning. This occurs when a violent blow or jolt is experienced to the head or body. It can also be caused by an object penetrating the skull, such as a bullet or a shattered piece of skull. It results from vehicular incidents, falls, acts of violence, and sports injuries (Hoofien et al., 2001). McKenna et al. (2006) found that visual perceptual changes are evident in patients with severe TBI when compared to a normative sample. Currently, treatment consists of a continuum of rehabilitation, which patients exit and reenter at any given point in time, suggesting visual aids and mobility assistive devices could be vital to improving compliance and quality of life.

1.4.3 Classification of Low Vision and Effect on Quality of Life

In the United States, partial sight is classified according to best-corrected visual acuity in the better eye. A designation of legally blind is defined as best-corrected visual acuity of 20/200 or less, or a visual field of no more than 20° diameter, in the better eye. Functionally blind is defined as no usable vision; however, only 10% of the visually impaired population are given this designation, as 90% have some usable vision. A designation of low vision or visually impaired is defined as vision that cannot be corrected optically, medically or surgically, and is insufficient for a patient to do what he/she wants to do.
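The U.S. legal-blindness definition above is essentially a two-condition rule, and can be restated as a toy classifier. This sketch is ours, for illustration only; acuity is encoded as the denominator of the Snellen fraction for the better eye (so 20/200 becomes 200):

```python
# Toy restatement of the U.S. "legally blind" definition quoted above.
# snellen_denominator: best-corrected acuity in the better eye (20/200 -> 200)
# field_deg: visual-field diameter of the better eye, in degrees
def is_legally_blind(snellen_denominator: int, field_deg: float) -> bool:
    # 20/200 or worse, OR a field of no more than 20 degrees diameter
    return snellen_denominator >= 200 or field_deg <= 20
```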
The most common types of functional loss occur in spatial resolution, contrast and visual field. With a loss of spatial resolution, there is reduced acuity, with images appearing blurry. A loss in contrast makes images appear cloudy. With a loss of visual field, there is usually the loss of the central or peripheral field.

In classifying low vision, clinicians use a variety of measures and tests. The most important measures are visual acuity, visual field and contrast sensitivity. Other measures include color vision and stereopsis, among others. Primarily, clinicians measure how a disease will affect measures of visual function. For example, with reading, clinicians classify the measures necessary to accomplish reading, and technology is developed to help with these tasks. Examples of such technology include optical magnifiers, head-mounted displays, etc. Recent developments include computer vision and prosthetic vision (retinal and cortical).

Loss of vision capabilities has been negatively correlated with a decrease in quality of life, as found by Nutheti et al. (2006). This was determined through the administration of an adapted World Health Quality of Life Instrument to 3702 participants aged 40 years or older participating in the Andhra Pradesh Eye Disease Study, and the analysis of the psychometric properties of a health-related quality of life (HRQOL) instrument. It was concluded that there was a decreased quality of life associated with the presence of glaucoma or corneal disease independent of visual acuity, and with cataract or retinal disease as a function of visual acuity. These results were not specific to the Indian population: Polack et al. (2007) also studied the effect of cataract visual impairment on quality of life in a Kenyan population and found a similar result. Through the use of the World Health Organization Prevention of Blindness and Deafness 20-item Visual Functioning Questionnaire (WHO/PBD VF20) in a low-income population of 322 participants, they found a significant association between poorer visual acuity and problems with mobility, self-care, usual activities and pain or discomfort. They did not, however, find any significant association between poor visual acuity and depression.

1.4.4 Desired Goals of the Low-Vision Population

In low vision research, the primary goal is to help low-vision patients accomplish what they want to do, which is tied into improving quality of life for these patients. As a result, the visual requirements for desired tasks must be understood. The goals of the low-vision population depend on the disease or disorder the patient is afflicted with. Primary goals include reading, recognizing faces, driving and mobility (Mangione et al., 1998; Massof, 2006). These tasks are complicated because they include a multitude of sub-tasks. Reading, as an example, includes sub-tasks such as the detection and recognition of letters.
Spatial navigation is a field of research that includes maintaining posture and gait, wayfinding, mobility and orientation, obstacle avoidance, walking and driving. Research specifically focuses on the effect of simulated vision loss on mobility and orientation, wayfinding and obstacle avoidance. Devices supporting these tasks are described further in the Mobility Aids section (1.4.5).

1.4.5 Mobility Aids

An electronic travel aid (ETA) has been defined by La Grow (1999) as a device that emits energy waves to detect the environment within a certain range or distance, processes reflected information, and furnishes the user with certain information in an intelligible and useful manner. Mobility aids can also be seen as sensory substitution devices, which transform the characteristics of one sensory modality into stimuli of another sensory modality. In the case of vision, mobility aids feed the stimuli of visual perception into another sensory modality for the brain to process.

Conventionally, ETAs may be classified as either a) an object detector or an environmental sensor, and b) a primary or secondary mobility device, as described by La Grow (1999). Object detectors provide for obstacle preview in and to the side of one's path of travel, while environmental sensors provide for both object preview and supplemental information about the quality and characteristics of the environment in which one is traveling (e.g., the texture and density of objects detected). Primary mobility aids provide for both object and surface preview, while secondary aids provide for one but not both of these (LaGrow and Weessies, 1994).

Alternately, ETAs are classified into types, with each increasing level denoting the growing information and aid available to the user. As studied by Farmer and Smith (1997), Type I devices have a single output for object preview, while Type II devices have multiple modes of output. Type III devices provide for both object preview and environmental information (i.e., environmental sensors), while Type IV devices feature artificial intelligence as a component.

Type I devices include the BrainPort (Bach-y-Rita et al., 2005) (Fig. 1.3) and the vOICe (Meijer, 1992) (Fig. 1.4). Both devices convert images to information that can be processed by another sensory modality. In the case of the BrainPort, an image is mapped onto sensors on the tongue, whereas the vOICe maps images into a complex pattern of sounds fed to the user through headphones.

Figure 1.3: BrainPort device.

Figure 1.4: vOICe device.

Type II devices include the Sendero Group's Miniguide (Hill and Black, 2003), an ultrasonic mobility aid that uses echolocation to detect objects (Fig. 1.5). The aid vibrates to indicate the distance to an object - the faster the vibration rate, the nearer the object. There is also an earphone socket which can be used to provide sound feedback.

Figure 1.5: Miniguide Ultrasound Mobility Aid.

Type III devices include the OrCam (Shalev-Shwartz et al.), which consists of a small camera that can be clipped to the user's glasses, connected to a small computer in the user's pocket (Fig. 1.6). The sensor sees signs, objects, and people in front of the user and informs the user, or reads text aloud, via a bone-conduction earpiece. Its designation as a Type III device comes mainly from the environmental information it provides.

Figure 1.6: OrCam Pointing System.

Finally, Type IV devices include the Sonic Pathfinder (La Grow, 1999) (Fig. 1.7), the NavBelt (Borenstein, 1990) (Fig. 1.8), and the Wearable Visual Aid (Pradeep et al.) (Fig. 1.9). The Sonic Pathfinder is an ultrasonic sonar device designed to provide object preview in and to the side of one's path of travel for blind or visually impaired persons. It has two transmitting, and three receiving, transducers mounted in a headband. The transmitting transducers flood the field in front of the traveler with ultrasonic energy. The receiving transducers detect the signals bounced back from objects in the traveler's path. These are processed by a microcomputer and translated into musical notes played over miniature speakers for the traveler. It is classified as a Type IV device due to its selectivity in alerting the user to certain obstacles: it only alerts the traveler to objects closing in range, while ignoring those at a constant or increasing range.

Figure 1.7: Sonic Pathfinder.
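The Sonic Pathfinder's selectivity rule - alert only on objects whose range is closing - is simple enough to restate as a one-line filter. The sketch below is our illustration, not the device's firmware, and the noise margin is a hypothetical parameter:

```python
# Toy restatement of the Sonic Pathfinder's selectivity rule (ours, not the
# device's actual firmware): alert only when an object's range is decreasing.
def should_alert(prev_range_m: float, curr_range_m: float, margin_m: float = 0.05) -> bool:
    # margin_m is a hypothetical noise threshold so constant-range objects,
    # whose measured range jitters slightly, are ignored
    return (prev_range_m - curr_range_m) > margin_m
```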
The NavBelt is a device that consists of a belt with a small computer, ultrasonic and other sensors, and support electronics. Signals from these sensors are processed by a unique algorithm and relayed to the user via headphones. The device has two main modes: guidance mode and image mode. The guidance mode enables users to navigate to a destination while avoiding obstacles, whereas the image mode provides the user with an acoustic or tactile image of the environment.

Figure 1.8: The NavBelt Navigation System.

Pradeep's Wearable Visual Aid (WVA) is a mobility aid that computes the best traversable path in real time using machine-vision principles. In this scenario, the WVA detects obstacles in the environment and plans a path around them, but raw information about obstacles is not passed to the user. The main concern of the user is to adhere to the computed path, which requires only simple cues.

Figure 1.9: The Wearable Visual Aid.

Despite the variety of mobility aids that have been developed throughout the years, not many are commercially successful, with the white cane and guide dog remaining staples. This is due to the need for a balanced and intuitive connection between the device and the user. Many researchers create devices without input from visually impaired users in the design process, which many are finding essential for successful device compliance.

1.4.6 Multimodal Sensory Feedback for the Visually Impaired

Without visual perception, there is an enhancement in the perceptive ability of the remaining sensory organs through cross-modal plasticity and multisensory integration by the brain (Collignon et al., 2009; Garcia et al., 2015; Röder et al., 1999). As the remaining senses - especially audition - carry a higher weighting in perception, navigational cues must be delivered in a form that is usable by these sensory modalities.

All ETAs employ a mode of communication with their users, and the choice of feedback mode in devices has often gone without scientific justification. Increasingly, however, researchers are finding that the mode of sensory feedback employed by ETAs for the visually impaired is vital to device success, as it serves as the perpetual link between the device output and the user (Jacko et al., 2002; Vitense et al., 2002; Xiao et al., 2003). So far, the modalities used include haptic, vibrotactile and auditory feedback.

Haptic feedback is a tactile feedback technology which takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. Examples include raised Braille, and ETAs that use this type of technology include the BrainPort.
Vibrotactile feedback is a subset of haptic feedback that refers to vibration stimulation delivered by a mechanical instrument (typically vibration motors) placed on the skin. ETAs that use this mode of sensory feedback include the NavBelt, the WVA and the Miniguide.

Finally, auditory feedback refers to feedback delivered through sound. Devices using it include the vOICe system, the Sonic Pathfinder and the OrCam.

1.5 Thesis Overview

This thesis aims to address some gaps in the field of assistive technology for the visually impaired. It is part of an overall effort to develop a wearable visual aid (WVA) for blind mobility. The WVA will address the problem of independent mobility by enabling the visually impaired to navigate effectively in unknown environments. So far, no wearable device designed for the visually impaired user for mobility has succeeded in being counted among staples like the white cane and guide dog (Blasch and Stuckey, 1995; Fok et al., 2011; Hocking, 1999). This reliance on basic technology is due largely to the lack of devices that employ a user-centered system design (Giudice and Legge, 2008). Most devices are developed as "accessible"; however, usability is overlooked as a subsection of accessibility, and neglect of this area plays into the low compliance of users (Riemer-Reiss and Wacker, 2000). As feedback modality is a cornerstone of human-computer interaction and the usability of devices - particularly in a closed-loop device like the WVA - the first specific aim of this thesis is to investigate an optimal feedback mode to interface a wearable mobility device with a visually impaired user.

Although the WVA has been shown to produce a statistically significant reduction in the number of collisions the visually impaired endure when navigating, it does not allow them to achieve the normative pedestrian flow behavior that seeing individuals employ. Therefore, the second specific aim of this thesis is to present a closed-loop, real-time control algorithm developed to assist subjects in adhering to a predetermined path in a manner consistent with seeing controls. The algorithm enables the visually impaired to maximize the expected utility of their efforts in navigating without overstimulation. This will help ensure the efficacy and efficiency of subject movements while using the wearable mobility device.

Using assistive devices places cognitive load on the working memory of their users; however, this concept has not been applied to studying the visually impaired in closed-loop navigation. As this will affect the efficacy of the system, the third specific aim is to study the types of load the user is subject to while using the device, and to determine how to optimize the load in a way that will contribute to the automaticity of using the device.

Chapter 2: Assessing Optimal Modalities for Mobility Feedback

2.1 Introduction

Early ETAs have been criticized as placing a burden on the user to interpret raw information. Basically, this criticism suggests that the user is over-tasked by having to respond to environmental stimuli (e.g. ambient sounds) as well as needing to decode the output of the ETA. An alternative approach is to reduce the information to simple commands, by having algorithms resident on the ETA process the sensor input. For example, an ETA with a camera could have an algorithm that processes camera data to locate a door; the user can then be guided towards the door with simple directional cues.
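As an illustration of this division of labor, the loop below sketches how an ETA might reduce raw sensor data to one of a small set of directional cues. It is a hypothetical outline, not code from the WVA: all names (choose_cue, plan_clear_path, heading_error_deg, and so on) and the angular thresholds are invented for the example.

```python
# Illustrative sense-plan-cue loop for an ETA that reduces raw sensor data
# to simple directional commands. All names and thresholds are hypothetical.

COMMANDS = ["forward", "veer left", "veer right", "turn left", "turn right", "stop"]

def choose_cue(heading_error_deg: float, path_blocked: bool) -> str:
    """Map the planner's output to one simple command for the user."""
    if path_blocked:
        return "stop"
    if abs(heading_error_deg) < 10:     # within tolerance: keep going
        return "forward"
    if abs(heading_error_deg) < 45:     # small correction needed
        return "veer left" if heading_error_deg < 0 else "veer right"
    return "turn left" if heading_error_deg < 0 else "turn right"

def guidance_loop(camera, planner, output_device):
    """The ETA, not the user, interprets the raw sensor data."""
    while True:
        frame = camera.read()                   # raw environment data
        path = planner.plan_clear_path(frame)   # e.g. locate a door, plan a route
        cue = choose_cue(path.heading_error_deg, path.blocked)
        output_device.deliver(cue)              # speech or vibration, never raw data
```

The point of the sketch is the last line: whatever the sensing and planning stages do internally, the user only ever receives one of a handful of intuitive commands.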
Research has been conducted on how information gathered by an ETA should be provided to the user, as described in Section 1.4.6. The question of speech versus auditory cues as output from an ETA has been examined (Arditi and Tian, 2013). Speech output was the preferred output medium, based on a questionnaire survey of ten well-educated and employed individuals with visual impairment. Speech output is also supported by a study of a "Wizard of Oz" mobility device (Poláček et al., 2012). ETA output was also studied by comparing speech to virtual sound (Klatzky et al., 2006). Virtual sound was shown to produce better performance when subjects performed a vibrotactile N-back task while guided along virtual paths without vision. However, producing virtual sounds is more difficult computationally (compared to generating words) and, to maintain the fidelity of the stereo sound, requires users to wear headphones which block some or all ambient sound, thus limiting its practicality. Speech can be delivered via bone-conduction headphones that do not occlude ambient sounds. Currently, it appears that determining "the best" (if there is a "best") output medium will require additional studies which consider a variety of output mediums, ETA characteristics, environmental conditions, and personal preferences and characteristics of the user, among other variables.

Based on the prior work in our lab and others, reviewed above, we hypothesize that ETAs can be learned quickly and used effectively if simple, intuitive commands are provided to the user as guiding cues. The purpose of this chapter is to report on a study comparing two types of ETA outputs (speech or tactile) in a group of blind test subjects. Most other studies of this type used blindfolded, sighted individuals and did not directly compare speech and tactile outputs. Prior research has specifically cited the lack of user-centered design as a barrier to the successful implementation of these devices by the visually impaired population (Blasch and Stuckey, 1995; Fok et al., 2011; Giudice and Legge, 2008; Klatzky et al., 2014; Loomis et al., 2012; Nicolau et al., 2009; Phillips and Zhao, 1993). For that reason, our subject population included individuals who were blind. In this study, we specifically focus on the critical aspect of maintaining the user on the preferred path via non-visual methods of feedback; specifically, the interface between the device and the user.

The preliminary WVA study cited above involved only three subjects, traversing a single course multiple times, and utilized only tactile cues for guidance (Adebiyi et al.; Thakoor et al.). Orientation and mobility specialists often use verbal commands to direct their subjects in training, so in this study we explored the use of electronically delivered speech, like the speech that would be generated by a wearable visual aid. We also compared speech to "equivalent" vibrotactile commands. We explored vibrotactile commands because they will not interfere with the hearing of the visually impaired, as hearing is heavily relied upon for self-navigation.

As previously noted, this research is part of a larger WVA project that seeks to use computer vision algorithms to predict clear paths and plan routes. The WVA software is still under development and is not yet sufficiently robust to reliably predict paths. Indeed, route planning and obstacle avoidance remain an active area of research in robotics and computer vision. Since the WVA is not yet robust enough to be used, we instead used a "human-in-the-loop" paradigm, similar to that used by Poláček et al. (2012) and others, to ensure that reliable directional cues were provided to the subject and to simulate the expected functionality of the WVA.
Since the WVA is not yet robust enough to be used, we instead used a “human-in-the-loop” paradigm, similar to that used by (Poláček et al., 2012)and others, to ensure that reliable directional cues were provided to the subject and to simulate the expected functionality of the WVA. 2.2 Audible Mobility Feedback System The audible mobility feedback system (aMFS) is a tool developed by our group to assess speech cues for mobility through synthesized speech. Our design rationale was rooted in communicating navigational cues in as direct a ‘language’ as possible to minimize the 25 amount of decoding our users will face. Although the use of virtual sounds in providing simple guiding cues has been demonstrated as superior to synthesized speech in minimiz- ing cognitive load(Klatzky et al., 2006; Loomis et al., 1998), the infancy of its deployment to bone conduction headphones deemed it impractical for our purposes (Parseihian and Katz, 2012). In addition, synthesized speech provides an expressiveness (Nicolau et al., 2009) that blind subjects familiar with common mobile platforms are already comfortable with. Figure 2.1: Front view of the custom android application showing commands implemented as buttons on a touchscreen. The outputs were audible commands delivered to the subject via bone conduction headphones. The aMFS consists of bone-conduction headphones (GameChanger Innovations LLC) worn by the user and a custom android application to generate verbal commands under experimenter control. Bone-conduction headphones allow users to hear ambient sounds, which is important because visually impaired individuals are trained through perceptual 26 learning to rely upon their hearing and other senses to enhance their mobility perfor- mance (Guth and Rieser, 1997; LaGrow and Weessies, 1994). The aMFS delivers speech commands to the user when an operator touches a virtual button on a touch screen (Fig. 2.1). Eight commands included “forward”, “veer left”, “approaching left turn”, “turn left”, “veer right”, “approaching right turn”, “turn right” and “stop” were used. The duration of the commands was as follows: stop - 0.75 seconds; approaching right/left turn – 2.53 seconds; forward - 0.93 seconds; veer right/left - 1.08 seconds; turn right/left - 1.24 seconds. For testing reported here, the app was run on a Motorola XOOM MZ601 tablet and a dual-core Android 3.1 Operating System. 2.3 Vibrotactile Mobility Feedback System The vibrotactile mobility feedback system (vMFS) is a collection of six vibration motors attached on individual points on a subject’s upper torso through a vest and activated by a push-button system. This system was intended to serve as a vibrotactile analog to the aMFS, providing the same eight commands through an array of six coin-shaped vibration motors which are eccentric rotating masses commonly used in cellphones and pagers and are also referred to as pancake motors. The placement of the vibrotactile array was informed by other studies intersecting with our design constraints regarding portability and subject preference. Stimulation sites used by other studies include the tongue, hands and fingers, the waist and upper torso(Shull and Damian, 2015). 
Although the hands and fingers are highly sensitive, it was ruled impractical due to the high usage of those areas during navigation with 27 Figure 2.2: (a) Arditti outfitted with the Vibrotactile Mobility Feedback System (vMFS) with an activated left turn command displayed on LED Array (centre) (b) Back view of vMFS showing placement of vibration motors on upper torso. the white cane. The tongue was also similarly avoided for practical purposes, as our training protocol required our subjects to use their speech while navigating. Between the waist and the upper torso, the upper torso was chosen due to the number of distinct commands to be communicated and to provide a wide enough area so that each command could be discriminated clearly (Cholewiak and Collins, 2003). Our ultimate selection of a torso-based array is supported by studies showing not only its utility in a variety of mobile and strenuous environments (Elliott et al., 2010), but also by the superiority of the back (upper torso) in pattern identification as compared to the forearm (Jones et al., 2006); this study also showed that the type of vibration motor used did not affect pattern identification. 28 The coin vibration motors used in this experiment were manufactured by Yuesui (https://cdn.sparkfun.com/datasheets/Robotics/B1034.FL45-00-015.pdf). The motors are connected to a push-button microcontroller system that delivers commands to the subject when the researcher presses a button that activates the corresponding motor(s). The system was programmed using an ArduinoTM ATMega development board and IDE environment. Eightnavigationalcommands(correspondingtotheeightspeechcommands of the aMFS) were encoded into the six-motor array as follows: forward – center back motor; stop – center front motor; veer left/right –upper shoulder; approaching left/right turn and turn left/right – lower back area (Fig. 2.2). The duration of each vibrational pulse was 0.38 seconds. LEDs were connected in parallel to the motors and arrayed on the shoulder, to allow synchronization of the command and the subjects’ reaction, extracted from recorded video. 2.4 Test Subject Demographics Testing was conducted under a protocol approved by the University of Southern Cali- fornia Institutional Review Board. Subjects were read the informed consent form prior to enrolling in the study. Once enrolled, background medical information was obtained on their eye condition both from their ophthalmologist and from a questionnaire, under HIPAA regulations. All subjects had light perception or less, and therefore classified as totally blind with regards to functional vision. Subject code, age, gender and visual diagnosis are shown in Table 2.1. 29 Eleven persons with severe visual impairment were enrolled in our aMFS experiment (mean age = 53.8 years). After a period of six months, ten out of our eleven former participants returned for our vMFS experiment (mean age = 53.5 years). The cohort of subjects were trained and tested identically for both systems. 
2.4 Test Subject Demographics

Testing was conducted under a protocol approved by the University of Southern California Institutional Review Board. Subjects were read the informed consent form prior to enrolling in the study. Once enrolled, background medical information was obtained on their eye condition, both from their ophthalmologist and from a questionnaire, under HIPAA regulations. All subjects had light perception or less, and were therefore classified as totally blind with regards to functional vision. Subject code, age, gender and visual diagnosis are shown in Table 2.1.

Eleven persons with severe visual impairment were enrolled in our aMFS experiment (mean age = 53.8 years). After a period of six months, ten of our eleven former participants returned for our vMFS experiment (mean age = 53.5 years). The cohort of subjects was trained and tested identically for both systems.

Table 2.1: Subject Demographics

Subject   Age   Gender   Diagnosis of Vision Loss
RP        50    M        Cytomegalovirus Retinitis
RA        41    M        Advanced Glaucoma
GB        55    F        Microphthalmia (Left) / Anophthalmia (Right)
ON        47    F        Retinitis Pigmentosa
JV        63    M        Cataracts
RC        50    F        Diabetic Retinopathy and Glaucoma
TT        69    M        Retinitis Pigmentosa
NM        40    F        Detached Optic Nerve (Congenital)
HF        64    F        Retinopathy of Prematurity
RT-2      40    F        Optic Nerve Hypoplasia
EB        69    F        Retinitis Pigmentosa

2.5 Subject Training

Subjects were trained on the meaning of each command, as well as on the expected response, before they were tested. Training usually occurred on the same day as testing; however, two subjects had multiple sessions of training or participated in pilot experiments with the aMFS on separate days prior to the testing reported here. The pilot experiments consisted of an operator guiding the visually impaired subject around a course for 3 minutes, giving commands that were randomized for each trial. The course in the pilot experiment measured 5.18 m x 5.18 m and was interspersed with 0.3 m cones every 1.5 meters. The pilot experiment had no measurable effect on performance (see Section 2.8.3).

Typical training for the main experiments included theory-based and practical segments. In the theory-based segment, the researcher explained how the prototype worked and the meaning of each command. Subjects were given the opportunity to ask questions and told to repeat the commands as they heard them. The command set was given three times in random order, and once the researcher was satisfied that the subject understood what each of the commands meant, the practical training segment commenced. Subjects practiced in an environment different from that in which they were tested, until the researcher observed they were comfortable executing each command correctly at least three times consecutively, which usually took about three to five minutes. Once practice was complete, testing on the actual routes commenced.

2.6 Subject Testing

After training, subjects were guided through the indoor and outdoor mobility courses. Depending on the modality being tested, subjects used that MFS with their cane. The indoor setting was a classroom at the Braille Institute with tables, chairs and other obstacles (Fig. 2.3). A top-view drawing of the room is shown in Fig. 2.4. Only substantial obstacles like tables and countertops are represented; chairs were present during trials but were not substantial obstacles, as they were pushed in towards the table.

Figure 2.3: Researcher guiding subject using the Audible Mobility Feedback System (aMFS) during an indoor testing session.

Figure 2.4: Schematic top view of the indoor mobility course used during experimentation. The numbered corners represent start and/or stop points for each trial. Each start point had a stop point at the diagonal corner of the room (direction of travel represented by arrows).

Starting points for the indoor setting were the four corners of the room. Subjects were asked to navigate diagonally across the room from one corner to the other, resulting in four different routes. As a control, subjects were asked to navigate these routes independently with their cane and their wayfinding skills. The outdoor setting consisted of an 8.53 m by 6.40 m course interspersed with 0.35 m traffic cones, and subjects were guided around this course for a single three-minute trial (Fig. 2.5).

Figure 2.5: Subject being guided by the researcher using the aMFS during an outdoor testing session.
For each modality, a subject was trained and tested entirely in one session. A testing session consisted of sixteen trials for the indoor setting (four control and twelve MFS) and one trial for the outdoor setting as described above, except for two subjects, who completed the MFS trials on one occasion and the control trials on another.

The order of testing was alternated between the MFS with the cane and the cane only. For example, if one subject used the MFS and cane first and then their cane only, the next subject would test with their cane only first and then the MFS and cane. Additionally, the indoor routes were alternated such that subjects were guided from point one to three, and then back. After this path was complete, subjects were guided from point two to four and back. The order of path organization differed between subject sessions. The outdoor route was always completed at the end of a test session. After all testing was completed, subjects completed a survey of their experience with the device. This was administered by someone other than the MFS operator, so that subjects could freely express their perspectives on the experience.

Appropriate responses to commands, path tracking data, and the subjects' reaction times were measured for both indoor and outdoor settings. Time to completion was measured only for the indoor setting, since visible reaction was difficult to determine through video of the outdoor setting. The Android application of the aMFS recorded a time stamp for each command, and each trial of the experiments was recorded by video camera. For the aMFS, reaction time was measured by syncing the start of the experiment from the video (in which the operator made a clear "start" motion to aid in video content analysis) with the time stamp of the first command in the log file. Subsequent commands were also time stamped in the log file, so commands could be aligned with the video (based on the video time stamp) and reaction time calculated. Reaction time was determined as the time difference from when the researcher gave a command to when a subject visibly executed it. If a subject did not execute the command at all, no reaction time was calculated. The expected compliance for the "approaching turn" commands was no reaction; therefore, no reaction time was determined for those commands. For vibrotactile feedback, the onset of a command was visible to the experimenter via the LED display, which allowed measurement of reaction time directly from the video time stamp. The log file or LED display also indicated the type of command, which allowed determination of compliance to that command, based on the video of the subject's response. Path travel was estimated from video and used to generate heatmaps that showed the amount of time a subject spent in a given space.
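The alignment of log and video timestamps reduces to a simple offset computation. The sketch below uses assumed data structures (the actual analysis was manual video content annotation) to show how reaction times follow from the logged command times once the video time of the first command is known:

```python
def reaction_times(command_log, video_reactions, video_offset_s):
    """command_log: [(log_time_s, command)] from the app's log file.
    video_reactions: {command_index: video_time_s of visible execution};
    a missing index means the command was not executed (no reaction time).
    video_offset_s: video time of the first command (the 'start' motion)."""
    t0 = command_log[0][0]
    out = []
    for i, (t_log, cmd) in enumerate(command_log):
        if cmd.startswith("approaching"):
            continue  # expected compliance is no visible reaction
        t_cmd_on_video = video_offset_s + (t_log - t0)
        if i in video_reactions:
            out.append((cmd, video_reactions[i] - t_cmd_on_video))
    return out

log = [(0.0, "forward"), (4.2, "approaching left turn"), (6.8, "turn left")]
print(reaction_times(log, {0: 13.9, 2: 20.3}, video_offset_s=12.5))
# -> [('forward', 1.4), ('turn left', 1.0)]
```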
Percentage Preferred Walking Speed (PPWS) was calculated by taking the ratio of the speed of subjects using the mobility feedback system to navigate the obstacle course to their Preferred Walking Speed (PWS) (Soong et al., 2000), as shown in Equation (2.1):

\mathrm{PPWS} = \frac{\text{Trial Speed}}{\text{Preferred Walking Speed}} \times 100 \qquad (2.1)

PWS was established by measuring subjects' average speed when navigating three unique routes at their own pace, assisted by a sighted guide. Subjects were also given an exit survey that quantified their impression of the usability of the MFS (as described next in Section 2.7) (Brooke, 1996).

Statistically, our quantitative data were analysed using a paired t-test to compare the PPWS of subjects during their use of the aMFS and the vMFS in the indoor obstacle course. A Pearson product-moment correlation coefficient of the average subject performance was also calculated to determine whether there was a learning effect across iterative trials. Given the small sample size, an analysis of the within-subjects effect was computed using time-to-complete data to determine effect size.

2.7 System Usability Assessment

The system usability score quantifies how usable a system is. Ten questions are given to the user. Each question is rated by the user on a scale from 1 to 5, in which 1 corresponds to strongly disagreeing with the statement and 5 corresponds to strongly agreeing with it (see supplementary material for the questionnaire). The questions are structured in such a manner that Equation (2.2) can be used to calculate the system usability score based on the System Usability Scale (SUS) (Brooke, 1996):

\mathrm{SUS} = \left\{ \sum (\text{Odd-Item Scores} - 1) + \sum (5 - \text{Even-Item Scores}) \right\} \times 2.5 \qquad (2.2)

The output of the SUS equation ranges from 0 to 100, which has a tendency to be misread as a percentage (Brooke, 2013). Rather, the score has been shown to correlate strongly with descriptive scales, similar to letter grades used in school (A, B, C, etc.) (Bangor et al., 2009; Sauro, 2011). Based on multiple studies, an SUS score of 68 would be considered above average, and anything below 68 is considered below average (Sauro, 2011).
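Both scores reduce to a few lines of code. The helper functions below implement Equations 2.1 and 2.2 directly (the example inputs are made up, not study data):

```python
def ppws(trial_speed, preferred_walking_speed):
    """Percentage Preferred Walking Speed, Equation 2.1."""
    return trial_speed / preferred_walking_speed * 100.0

def sus_score(responses):
    """System Usability Scale score, Equation 2.2, from the ten 1-5
    ratings in questionnaire order (item 1 first)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                   # 0..100, not a percentage

print(ppws(0.52, 1.30))                           # -> 40.0
print(sus_score([5, 1, 5, 1, 5, 2, 4, 1, 5, 1]))  # -> 95.0
```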
2.8 Results

2.8.1 Audible Mobility Feedback System

The percent compliance is shown in Table 2.2, and includes subject compliance to all commands. In the indoor setting, subjects complied with commands on average 92.25% of the time and reacted to commands in an average of 1.47 seconds. They also navigated at 40.45% of their preferred walking speed using the aMFS, compared to 31.12% with their cane alone. Subjects performed comparably in the outdoor setting, with an average compliance of 95.28% and an average reaction time of 1.66 seconds. In addition, the 'approaching' commands given before a 'turn' command statistically reduced the reaction time of subjects (p < 0.05).

Using the aMFS and cane, subjects completed an indoor route in an average of 41.05 s, in comparison to 62.86 s using only their cane. This improvement in time to complete was statistically significant between the control (M = 62.86 s, SD = 40.46 s) and aMFS (M = 41.18 s, SD = 10.50 s) conditions; t(43) = 3.975, p < 0.001 (paired t-test). Participation in the pilot experiment did not appear to affect performance. Subjects EB and RT-2 were included in the pilot experiment, and their performance (PPWS, average compliance, and reaction time) was within the range of the other study subjects (Table 2.2). The effect size of the within-subjects comparison of navigating with the aMFS versus the cane alone was computed to be r = 0.366 (paired-sample correlation), with Cohen's ds of 0.536 and 0.733 using the control and pooled variances, respectively. The interpretation of these values (Cohen, 1988) suggests a non-trivial, medium effect. To rule out a potential learning effect with the MFS, a Pearson product-moment test was also performed for both the average compliance and reaction times across all subjects as a function of trial number, and no statistically significant correlation was found (p > 0.1).

Based on the SUS, the aMFS was scored at an average of 90.9 in its current condition, which can be interpreted as an "A", or excellent, according to descriptive scales (Bangor et al., 2009); summary results for each subject are presented in Table 2.3. Subjects preferred regular commands to reassure them the system was online, even if the command did not result in changing direction (for example, repeating the command "Forward" during an extended straight section of a route). Subjects also expressed interest in the future availability of the device and commented on how much the device could benefit them in everyday life.

Table 2.2: Audible Mobility Feedback System Results

Subject   Average Indoor    Average Reaction   PPWS          PPWS      SUS
          Compliance (%)    Time (s)           Control (%)   MFS (%)   Score
RP        84.42             1.79               35.4          39.4      95
RA        93.92             2.02               31.2          39.8      100
GB        90.64             1.46               41.2          43.1      55
ON        85.89             1.58               42.1          43.0      95
JV        95.79             1.73               25.1          36.7      85
RC        95.88             1.46               25.7          37.8      100
TT        98.53             1.12               45.6          48.9      100
NM        82.02             1.19               32.5          50.6      95
HF        95.74             1.32               15.1          24.1      80
RT-2      96.05             1.35               23.1          39.3      97.5
EB        100.00            1.17               25.3          42.3      97.5
SUMMARY   92.25             1.47               31.12         40.45     90.9

Table 2.3: Summary of SUS Scores for the aMFS and vMFS prototypes by Subject. * Subject RC did not participate in the vMFS trial, and therefore did not take the SUS survey.

Subject   aMFS   vMFS
RP        95     87.5
RA        100    80
GB        55     55
ON        95     73
JV        85     75
TT        100    100
NM        95     67.5
HF        80     47.5
RT-2      97.5   87.5
EB        97.5   90
RC        100    *
Average   90.9   76.3

Table 2.4: Command Compliance and Reaction Time by Command Type. * The expected compliance for approaching turns was no reaction; therefore, no reaction time was determined for a positive compliance.

Command Type             Command Compliance   Reaction Time (s)
Forward                  93.74%               1.49
Veer Right               93.64%               1.66
Veer Left                96.95%               1.56
Turn Right               97.11%               1.52
Turn Left                97.54%               1.53
Stop                     91.26%               1.16
Approaching Right Turn   86.32%               *
Approaching Left Turn    77.55%               *

2.8.2 Vibrotactile Mobility Feedback System and Comparison

On average, subjects complied with 82.46% of commands and reacted to commands within 1.46 s using the vMFS. Using a paired t-test, there was a statistically significant difference in time to complete between the control (M = 60.80 s, SD = 38.79 s) and vMFS (M = 41.45 s, SD = 8.85 s) conditions; t(39) = 3.477, p = .001. There is a medium-sized effect (r = 0.501) based on this within-groups comparison. Subjects also navigated at 39.21% of their preferred walking speed, compared to 40.45% with the audio MFS (Table 2.4). They rated the vMFS with an average system usability score of 76.3 (Table 2.3), which was less than the 90.9 score of the aMFS, although still above the average score of 68. Even though subjects preferred using the audio MFS, based on their comments and the results of the SUS, there was no statistically significant difference in course completion times between the aMFS (M = 40.74 s, SD = 10.84 s) and vMFS (M = 41.45 s, SD = 8.85 s) conditions; t(39) = -0.419, p = .677. The paired-sample correlation indicates a medium to large effect (r = 0.425) using the time-to-complete data. Figure 2.6 summarizes the mean time to complete within groups by modality type.
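For reference, the within-subjects comparisons reported above can be reproduced with standard tools; the sketch below uses placeholder per-trial times, not the study data:

```python
import numpy as np
from scipy import stats

# Placeholder times to complete (s) for one condition pair; not study data
cane = np.array([70.1, 55.3, 61.0, 88.2, 47.5, 63.9, 52.4, 71.8, 58.0, 60.6])
mfs  = np.array([42.0, 39.5, 44.1, 55.2, 36.8, 41.0, 38.7, 45.9, 40.2, 39.1])

t, p = stats.ttest_rel(cane, mfs)   # paired t-test on time to complete
r, _ = stats.pearsonr(cane, mfs)    # paired-sample correlation

diff = cane - mfs
d_control = diff.mean() / cane.std(ddof=1)                      # control SD
d_pooled = diff.mean() / np.sqrt((cane.var(ddof=1) + mfs.var(ddof=1)) / 2)

print(f"t = {t:.3f}, p = {p:.4f}, r = {r:.2f}, "
      f"d (control) = {d_control:.2f}, d (pooled) = {d_pooled:.2f}")
```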
Figure 2.6: Mean time to complete grouped by modality type. The sample comprises the ten subjects (n = 10) that participated in experiments with both the aMFS and the vMFS.

Figure 2.7: Heatmaps showing trajectories plotted across all subjects in one of the navigated routes of the indoor mobility course. The concentration of red dots represents the amount of time spent in a particular space. Green and red circles connote start and stop points, respectively. (a) Control trial with subjects navigating with their cane alone.

Figure 2.8: (b) Heatmap representation of subjects navigating with the aMFS prototype.

Figure 2.9: (c) Heatmap representation of subjects navigating with the vMFS prototype.

Subjects' travel routes as a function of time were represented using heatmaps for each indoor mobility task. The heatmaps depict one randomly selected trial of the three options for the same route (1 → 3) for each of the eleven subjects for the control and aMFS (Figs. 2.7 & 2.8), and for the ten subjects for the vMFS (Fig. 2.9). The efficiency of subject travel was improved using both mobility feedback systems (compared to the cane-alone condition), with a limited amount of time spent in corners and in areas not essential to route completion.

2.8.3 Discussion

Overall, our hypothesis that blind subjects would easily adapt to simple guiding cues for mobility was confirmed. The major findings of our study were that subjects traveled at a higher walking speed using either speech or vibrotactile feedback (compared to cane alone), adapted to both types of commands quickly, and completed routes more quickly using either mobility feedback system. They traveled at a statistically significantly higher PPWS in the indoor experiment (p < 0.05). Not only did their speed increase; the efficiency of their travel also drastically improved, as shown by the heatmaps (Figs. 2.7-2.9). It is important to note that command compliance and reaction time were not statistically biased toward a specific type of command (Table 2.4), but some trends in the data warrant further investigation. It would appear that subjects complied less with the approaching commands by preemptively executing the upcoming turns, probably due to their anticipation of the turns to come. Ideally, a positive compliance to an approaching command should not elicit a visible reaction from the subject, since its sole purpose is to warn them of an upcoming turn signal rather than prompt them to take action. This suggests that protocols should reinforce the meaning of commands. Also, based on anecdotal comments from subjects, some users may prefer not to have a warning, and the final system should have the option to disable the warning commands. It also appears that subjects responded more quickly to the stop command than to other commands. This may be because of the simple nature of this command, or because they may associate "stop" with avoiding an imminent collision. Once trained, subjects showed no statistically significant changes in reaction time or compliance with subsequent trials. This also demonstrates that subjects' increasing familiarity with their environment did not positively affect their compliance or reaction time to commands. The training provided in this study was very rudimentary and usually done on the same day as testing, which suggests that this cohort of subjects quickly learned how to respond to these commands and use them effectively. As such, the system can be expected to be useful in a variety of unfamiliar settings. When comparing compliance to commands and reaction time between the indoor and outdoor settings, we noticed no statistically significant differences (p > 0.05). This suggests that either feedback modality will be useful both indoors and outdoors. However, the outdoor environment was an empty parking lot with a low level of ambient noise.
Feedback modalities should be tested in noisier environments, where the user may need to rely on their hearing more, for example at a street crossing.

In testing subjects with the vMFS, we found that subjects reacted at about the same speed to commands as with the aMFS. However, they complied with commands at a lower rate than with the aMFS. It should be noted that reaction time was measured from the start of the command. Since verbal commands necessarily took more time to deliver, the actual reaction time to a command is difficult to know. We can speculate that speech commands were understood as soon as they finished but took longer to deliver, whereas vibrotactile commands were sensed almost immediately but required some time to interpret. The added task of interpretation may have led to the lower compliance. Despite the lower compliance rate, subjects navigated at a PPWS comparable to that with the aMFS (p > 0.1). In the exit survey, seven subjects expressed an interest in using the vMFS for street-crossing applications. These results indicate that a selection of feedback modes could be used in the WVA for different tasks. Alternative positions for vibrotactile motors, such as on a glasses frame, should also be studied, since wearing a vest is not always practical. Subjects rated both systems highly usable (SUS scores above 50); however, they overwhelmingly preferred speech feedback over vibrotactile feedback. When probed about this difference in usability, subjects explained that they preferred the direct language over decoding the meaning of a vibration in a given region of the upper torso. This extra layer of mental processing quite possibly places extraneous mental load on the user, as they not only have to remember what the placement of each motor means, that is, a new language of sorts, but also how each command is meant to be executed. Further testing should determine whether training could minimize this mental load, so that decoding vibrotactile cues becomes as intuitive as speech. When technology permits consistent and successful virtual sound use with bone-conduction headphones, it should also be explored in these environments to see whether mental load could be reduced further. In general comments, several subjects stated a need for better electronic travel aids to assist in mobility.

Other researchers have studied the human interface of a navigation system for blind people. Poláček et al. used a similar 'Wizard of Oz' approach to validate a set of speech-based navigation commands in a field study context (Poláček et al., 2012). Their goal was to conduct a pilot study to evaluate a generic Wizard of Oz system they had designed for mobile and ubiquitous studies. They employed eight human 'wizards' to guide two blindfolded actors through a predefined route. While they were convinced that their setup was fully mobile and that their set of voice commands could be used for the follow-up study, they identified several usability flaws with their system. Comparatively, our study used one wizard to minimize variability in giving directions, and test subjects who were visually impaired.

The findings of Arditi and Tian (2013) are consistent with our SUS results indicating that subjects prefer audible feedback for directional information. Their study surveyed user preferences and needs from a sample of ten well-educated and employed subjects with light perception or less.
They found that subjects would prefer speech as a means of communicating with their environment, interfaces that provide control, and the capability of verbally querying the system in lieu of interacting with a menu. While subject preference is important, and positively correlates with patient compliance, our goal was to quantitatively assess subject performance with these modalities. From this perspective, although our subjects may not prefer vibrotactile feedback, its utility for other applications where speech might not be an option (street crossings, noisy environments) was validated. It could also be useful for deaf-blind subjects, for whom speech may not be an option.

Our findings that speech cues are effective in providing mobility feedback to the visually impaired are also consistent with the work of Havik et al. (2011). That study compared the efficiency of different types of verbal information (route and environmental) provided by the Groningen Indoor Route Information System (GIRIS), an electronic navigation system designed to assist visually impaired (low vision and blind) travelers with wayfinding. Havik et al. found that participants with low vision were most comfortable and showed the highest walking efficiency (PPWS) when walking routes with the GIRIS system, which provided verbal instructions en route. In contrast, the walking efficiency of subjects whose vision classified as blind was highest when using verbal guiding cues provided prior to embarking on the route. In comparison, our study specifically compared navigation across a room using either a cane with verbal guiding cues or a cane alone. However, both our study and Havik's show the potential benefit of verbal feedback during mobility-related tasks. This is also consistent with our discussions with orientation and mobility instructors, who use verbal cues to guide their students.

2.9 Outdoor Testing

In addition to the above testing, which shows subject comfort with short-term feedback, extended system feedback was tested with outdoor trials of 1.5 km in distance. So far, one visually impaired subject has tested verbal feedback guiding their mobility on a long-distance outdoor course. The objective was for the subject to navigate with their white cane and the person-in-the-loop feedback system. The course spanned walkways and street crossings at the University Park Campus at a time of limited pedestrian and vehicular traffic. The test lasted twenty-two minutes and measured 1.71 kilometers (Figure 2.10).

Figure 2.10: Route navigated by the subject. Start and end points designated by red markers.

Overall, the subject responded to all audible cues as trained, especially the "STOP" command. The subject navigated at a seemingly normal walking speed and suffered no collisions with objects. At times they did not seem to hear the commands, in one instance almost colliding with a rock in a narrow walkway. This was despite numerous commands to "VEER" away from the obstacle before finally responding to a "STOP" command. In reference to this particular instance, the subject commented that the verbal commands were muffled, so they were unable to hear them. This may be due to excessive ambient noise, common in such walkways, overwhelming the commands delivered through the bone-conduction headphones.

The subject mentioned that they were biased toward reacting to traffic noise over the commands of the feedback system. This hesitation happened particularly at street crossings, where the "FORWARD" command was issued but the subject remained still.
After reassurance by the researcher, the subject resumed navigation (Figure 2.11).

Figure 2.11: Subject crossing the street with the aMFS.

The subject was given an SUS survey at the completion of the course and rated the audible mobility feedback system with an SUS score of 100. They were not against having vibrotactile feedback incorporated into the system to supplement the audible mobility feedback, particularly for outdoor applications. Due to the dynamic nature of the outdoor setting, measuring reaction time and compliance to commands through manual video content analysis is difficult. Very often, navigation had to be paused to accommodate incoming pedestrian and motor traffic. It may be possible to calculate PPWS with strict time measurements made at distinctive start and stop points of travel.

Chapter 3
The Effect of Mobility Feedback on Cognitive Load

To promote the seamless integration of the WVA into the lives of its intended users, the attentional demands of responding to mobility feedback were investigated. Our hypothesis was that simple directional cues like those provided by the WVA would reduce the cognitive load on the user, as the burden of path planning and nonvisual navigation is lifted, compared to cane-only navigation. In that scenario, the user only has to focus on complying with the simple cues given by the WVA. That said, there is a mental load associated with responding to these cues, and it should be measured in the context of everyday activities that an individual is reasonably expected to face. Such a study would quantify the potential benefit over cane-only navigation, if any. As such, the focus of this study was to quantify the cognitive load of navigating with the mobility feedback from the WVA in comparison to cane-only navigation.

In order to accurately measure the cognitive load of mobility feedback on navigation tasks, a performance baseline was established. This measurement of subjects' navigation skill was determined in relation to navigation proficiency and cognitive ability, factors that affect one's skill in navigating. This stratification of subjects into profiles resulted in personas that identified the individual differences between users of the WVA, and their diverse needs.

3.1 The Blind Office Clerk

Consider a scenario where a blind office clerk obtains the WVA in the hopes of improving their navigation. Among their duties as a clerk is to run errands for their employer, which involves navigating some distance. In the course of executing their errands, they utilize the commands given by the WVA to get to their physical target. A positive outcome is that they are able to seamlessly obey the WVA commands and, upon reaching their target, complete their errand, which may involve recalling a list of items they had recently memorized. A negative outcome is that they have so much difficulty understanding and obeying the commands of the WVA that, upon reaching their target, they are unable to complete their errand; they become so confused they cannot recall their list of items. One can imagine that no one would continue using a device that impedes their ability to complete everyday tasks, and would probably abandon use of the device. The issue of confusion from using aids contributes to device abandonment among the visually impaired; it is therefore important to understand the attentional demands of navigating with travel aids like the WVA, so that their use can be facilitated to the point where it becomes 'automatic'.
This is to ensure that the attentional demands of the WVA do not impede the user from performing their daily tasks, much like the blind clerk.

3.2 Background and Motivation

In neuroscience, the subject of attention and its demands refers to cognitive load and working memory. As described in Section 1.3, cognitive load describes the control and use of working memory. Working memory refers to the structures and processes for temporarily manipulating and storing information for immediate tasks. This concept is particularly apparent in learning and instruction. According to Sweller (1994), there are three types of cognitive load: intrinsic, germane and extraneous. Intrinsic cognitive load refers to the amount of working memory needed to process the material itself; it is determined by the inherent structure and complexity of the material and cannot be manipulated by the instructor. Extraneous cognitive load refers to the load imposed by the manner in which the material is presented to the user. Germane cognitive load is the amount of working memory devoted, through efforts exerted by the individual, to understanding the instructional material. Therefore, the goal is to minimize extraneous cognitive load while optimizing germane cognitive load. The types of cognitive load are additive, such that their sum cannot surpass the total working memory available. Germane cognitive load is thought to result in the automation of schema, which improves learning and reduces intrinsic load. According to Paas et al. (2003), the overall learning process is cyclical: a reduction in intrinsic load frees more resources for learning more complex schema, which reduces intrinsic load even further, and so on.

Other researchers have shown that cognitive load is an applicable attribute that affects navigation in those with impaired vision. Turano et al. (1998) found that mental effort is exerted by Retinitis Pigmentosa (RP) patients while walking, in comparison to seeing individuals. Subjects were asked to complete a secondary task in addition to the primary one of walking, and reaction time was measured. RP subjects exerted higher mental effort than their seeing counterparts while navigating.

To facilitate the reduction of intrinsic load in responding to feedback cues from the WVA, our aim was to take exploratory steps in investigating its effect on cognitive load. This would lead to a training protocol that may result in the automation of the schema necessary to safely and effectively navigate with the device. However, to account for individual differences, and therefore the needs of our diverse users, it is important to classify the types of users, so that the effect of training protocols can be gauged and tweaked accordingly. As such, our study's purpose was two-fold:

1. Navigation Skill Profile. A skills assessment of subjects in a myriad of everyday tasks involving mobility, to establish a spectrum of subject performance in relationship to cognitive baseline and/or extent of Orientation and Mobility (O&M) proficiency.

2. Mobility Feedback Effects on Cognitive Load. To determine whether familiarity of visually impaired subjects with mobility feedback can statistically reduce cognitive load and enhance performance at extended navigation tasks, relative to visually impaired subjects employing cane-only navigation.
3.3 Navigation Skill Profile

Assistive devices which provide path planning for blind mobility could improve the efficiency and efficacy of travel, especially with technological advances and the integration of Global Positioning and Geographic Information Systems. While many prototypes have been developed, none have achieved widespread adoption by the visually impaired community. A possible issue is the lack of user-centered design and the use of a "one size fits all" approach in interfacing assistive devices with their intended audience (Guerreiro et al., 2011; Nicolau et al., 2009).

The purpose of the navigation skill profile is to identify a baseline of subject performance at navigational tasks relative to Orientation and Mobility (O&M) proficiency and cognitive ability. It will help researchers developing devices understand the navigational needs of visually impaired users and adjust for subject variation accordingly in their designs. The navigation skill profile results from an intersection of navigation proficiency and cognitive ability, to properly contextualize subject performance at baseline tasks. This baseline helps identify the tasks subjects have already automated, building a profile of each type of user. Factors like mobility proficiency and cognitive ability are pertinent in classifying users because navigation proficiency identifies the skill of blind individuals when navigating a typical environment, and is a basis for how well they will navigate with the WVA. Cognition affects the control and use of working memory, and therefore the strategies and schema employed in understanding and responding to mobility cues as trained.

3.3.1 Experimental Design

The experimental design for the navigation skill profile, or 'the baseline phase', consisted of a cohort of subjects (n = 14) with varying O&M proficiency, recruited in compliance with USC IRB approval and evaluated according to the following criteria:

Background Questions

The background consists of ten intake questions regarding subjects' visual impairment and habits (Fig. 3.1). This information is intended to gauge subjects' familiarity with landmarks, classrooms and areas within the Braille Institute. Subjects' responses provide an understanding of the navigational tasks within the Braille Institute they had already automated, thus contextualizing their performance on the baseline tasks. Some of these factors are incorporated into the tasks. Questions ranged from the frequency of independent navigation to classrooms at the Braille Institute to simple mathematical problems. The amount of formal orientation and mobility training subjects had received helped determine O&M proficiency, while the length of time they had attended the Braille Institute determined their familiarity with it. Their proficiency with the English language, or Spanish as a first language, was also probed to determine linguistic ability.

Figure 3.1: Background questions to establish baseline habits.

Cognitive Baseline: The Modified Wechsler Intelligence Test

The pre-intervention cognitive baseline was determined using selected questions from a modified Wechsler Adult Intelligence Scale (WAIS-III). This scale is ordinarily an individually administered measure of oral language, reading, written language and mathematics. As blind subjects would be biased in the areas of reading and written language, the other areas were the focus, to eliminate such bias.
Five questions spanning the areas of oral language and mathematics were selected to measure cognitive ability (see Appendix B). The questions were chosen to determine subjects' command of simple reasoning and memory. Each question was worth two points, and the resulting total was scored out of ten points.

Baseline Tasks

Subjects were asked to perform a list of fourteen tasks in three separate experimental sessions scheduled a week apart (Fig. 3.2). The tasks ranged from relatively typical navigation tasks at landmarks very familiar to all students (the Access Bench) to navigation tasks in unfamiliar locations (the outdoor parking lot). There were also search tasks within a kitchen, to test subjects' confidence in a closed environment, as well as search tasks outdoors (finding a given locker number).

Figure 3.2: List of baseline tasks subjects performed to build the navigation skill profile.

Some tasks involved navigating to a landmark coupled with solving a math problem, or remembering a phone number they had recently memorized. This increasing difficulty is meant to 'overload' the subject, to determine their limit of sorts and tease out their range of capability. Some tasks were paired to serve as a control for navigating with the mobility feedback prototype. An example is Task #3 and Task #13. Tasks 3 and 13 are equidistant; however, Task 13 involves navigating this distance with the Audible Mobility Feedback System (Chapter 2, Section 2.2). A comparison in performance would determine the effect of mobility feedback among subjects while navigating, and how the addition positively or negatively impacts cognitive load. There was also a disorienting task, which was to navigate around a pole three times while solving a math problem. This task removed the reliance on rote memory of landmarks and instead forced subjects to employ their schema of navigation strategies to track their spatial progress and localize their target. The measurements made were the time taken to complete the task, as well as success on the secondary task where applicable. At the end of each visit, subjects had an exit interview to share their experience performing the tasks, and how these tasks relate to their typical everyday navigation experience.

Subject Background and Grouping

Testing was conducted under a protocol approved by the University of Southern California Institutional Review Board. Subjects were read the informed consent form prior to enrolling in the study. Once enrolled, background medical information was obtained on their eye condition, both from their ophthalmologist and from a questionnaire, under HIPAA regulations. All subjects had light perception or less, and were therefore classified as totally blind with regards to functional vision.

Fourteen subjects participated in this study. One subject who had participated in some previous experiments had to be excluded due to their development of hearing loss, an exclusion criterion of the study. Their initial results have consequently been removed from the data in this study. Subject code, age, gender and visual diagnosis of the remaining subjects are shown in Table 3.1.
Table 3.1: Subject Demographics

Subject   Age   Gender   Diagnosis of Vision Loss
RP        54    M        Cytomegalovirus Retinitis
CG        38    M        Congenital Glaucoma
GB        59    F        Microphthalmia (Left) / Anophthalmia (Right)
ON        51    F        Retinitis Pigmentosa
JS        35    M        Diabetic Retinopathy
CVB       63    F        Retinopathy of Prematurity
MD        70    F        Eales Disease
GD        54    M        Glaucoma
TT        73    M        Retinitis Pigmentosa
JV-2      70    M        Retinopathy of Prematurity
NM        44    F        Detached Optic Nerve (Congenital)
HF        69    F        Retinopathy of Prematurity
RT-2      44    F        Optic Nerve Hypoplasia

Subjects were designated into three groups by certified Orientation and Mobility instructors at the Braille Institute, based on their level of expertise in navigation. These groups were the Indoor, Residential and Industrial area navigators. Subjects were classified as Indoor Navigators if their proficiency was limited to navigating indoors, whereas Residential Navigators were subjects who were comfortable and effective at navigating indoors and within the confines of their residential community. Subjects who could comfortably navigate in all these places, as well as in busy public areas, and could comfortably take public transportation were classified as Industrial Navigators.

Subjects were also grouped according to their scores on the cognitive baseline questionnaire discussed in Section 3.3.1. A K-Means sort was applied to the performance scores, designating subjects into "Below Average", "Average" and "Above Average" groups. A summary of these designations is shown in Table 3.2.

Table 3.2: Subject Ratings

Subject   O&M Designation   Cognitive Baseline Designation
RP        Residential       Above Average
CG        Residential       Below Average
GB        Industrial        Above Average
ON        Industrial        Average
JS        Indoor            Below Average
CVB       Industrial        Above Average
MD        Residential       Below Average
GD        Industrial        Average
TT        Industrial        Above Average
JV-2      Industrial        Below Average
NM        Indoor            Below Average
HF        Indoor            Above Average
RT-2      Indoor            Below Average

Subjects were then grouped into Amateur and Expert groups based on a two-way K-Means sort on the interactions of the O&M proficiency and Cognitive Baseline designations. This resulted in six subjects in the Amateur category and seven subjects in the Expert category (shown in Table 3.3).

Table 3.3: Subject Groupings into Skill Level

Amateur Skill (n = 6)   Expert Skill (n = 7)
RT-2                    RP
NM                      GB
JS                      TT
HF                      ON
MD                      CVB
CG                      JV-2
                        GD
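A minimal sketch of this grouping step is shown below, using scikit-learn and an assumed ordinal encoding of the two designations (Indoor = 0, Residential = 1, Industrial = 2; Below = 0, Average = 1, Above = 2). With k = 2, the clusters approximate, but will not necessarily reproduce, the Amateur/Expert split of Table 3.3:

```python
import numpy as np
from sklearn.cluster import KMeans

subjects = ["RP", "CG", "GB", "ON", "JS", "CVB", "MD",
            "GD", "TT", "JV-2", "NM", "HF", "RT-2"]
# Columns: [O&M designation, cognitive baseline designation], per Table 3.2
features = np.array([[1, 2], [1, 0], [2, 2], [2, 1], [0, 0], [2, 2], [1, 0],
                     [2, 1], [2, 2], [2, 0], [0, 0], [0, 2], [0, 0]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for subj, lab in zip(subjects, labels):
    print(f"{subj}: cluster {lab}")  # cluster ids are arbitrary (0 or 1)
```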
These results demonstrate that although subjects may share difficulty navigating in these areas, subjects with Amateur and Expert skills differ in the strategies they employ. 60 Incomparison, subjectshadtheleasttroublewithTasks12through14, whichinvolved the use of mobility feedback via the aMFS prototype. Within these tasks, the disparity in the between-group performance was eliminated, with no statistical significance between groups (p>0.2). It appears that the use of mobility feedback via the prototype system equalized sub- ject performance between groups. This equality could be attributed to a reduction in attentional demands, an effect of the prototype taking the burden of decision-making off the user (from Chapter 2). This is opposed to the amount of mental resources required to complete the task unaided. Ordinarily, a blind subject would take in perceptual input from their environment and mentally process that into tangible mobility feedback, which they would then act upon. The introduction of the mobility prototype takes the burden of contemplation off of the user, and that translates to a quicker travel time because all mental resources can be devoted into acting. This phenomenon is also observed when comparing subject performance in completing the equidistant tasks (Tasks 3 & 13). The introduction of mobility feedback in Task 13 greatly improved the time to complete by subjects compared to their experience navigat- ing unaided in Task 3. The difference between subject performance in these tasks was statistically significant (p<0.01). 3.3.3 Relative Access Measure (RAM) Another measurement that was made to further contextualize the experience of the visu- ally impaired in reference to seeing individuals was the Relative Access Measure (RAM). The Relative Access Measure was developed by (Church and Marston, 2003) to quantify 61 the extra amount of effort the visually impaired have to make to accomplish the same tasks as to their sighted counterparts. Given the extent and variation of tasks in un- dertaken in this experiment, the assessment was also made to normalize the data as the distance and difficulty of tasks were varied. The RAM quantity is calculated as follows (Equation. 3.1): RAM = Time To Complete (Participant) Time To Complete (Sighted) ×100 (3.1) A RAM of 1.0 or less means there is no extra penalty, whereas a RAM > 1 indicates more work or difficulty for a visually impaired subject to accomplish a task compared to a sighted individual. Two sighted subjects in their twenties, with graduate level-education performed all the baseline tasks. In place of simple mathematical problems, simultaneous equations were used in tasks that required them. Results Figure 3.4: RAM measure by group classication. 62 Fig. 3.4 shows the data from Fig. 3.3 in relation to the seeing controls. Overall, search tasks in the kitchen (Tasks 1&2) had the highest RAM, compared to navigation that involved typical routes for students at the Braille Institute (Task 4 & 9). On these typical routes, the RAM was as low as 2. In the disorienting task (Task 11), the RAM of 1 or less shows that disorientation impacts seeing and visually impaired individuals equally. In tasks that involved the mobility prototype, the RAM was low - with a statistically insignificant difference between group classifications. A low value suggests that mobility feedback ensures a somewhat comparable performance in navigation between seeing and blind individual; equalizing the navigation experience. 
These trends are somewhat supported by the pilot data of one novice and one expert recruited much earlier in the experiment design process. Those data signaled that there is a higher penalty (up to 27x) in search tasks for blind individuals compared to their seeing counterparts, whereas there was a lower penalty in the disorienting task (Task 11).

3.4 Effect of Mobility Feedback on Cognitive Load

Upon completion of the baseline phase of the experiments, we tested the effect of mobility feedback on the cognitive load of subjects. We wanted to determine the effect of mobility feedback on reducing the mental load of navigating, compared to cane-only navigation. This will go a long way in developing a training protocol to promote the intuitive integration of the device with the user. It would also help quantify the effect of mobility feedback on improving the performance of visually impaired subjects compared to their sighted counterparts. In order to test this hypothesis, we employed the Dual-Task Methodology to measure cognitive load and the NASA-TLX instrument to measure workload perception.

3.4.1 Dual-Task Methodology

The Dual-Task Methodology is a paradigm that assumes mental operations draw from a limited-capacity central mechanism. This principle states that as the capacity demanded by a primary task increases, the capacity available for other tasks decreases. It posits that the capacity demanded by a primary task can be estimated by measuring performance at a secondary task performed in combination with the primary task. A secondary vibrotactile stimulus was introduced to measure automaticity of the primary task (navigation). If subjects have automated the primary task (navigation), then they will have no problem responding to the secondary task (button press) almost instantaneously.

3.4.2 NASA Task Load Index (NASA-TLX)

The NASA-TLX is a widely accepted assessment that measures the subjective perception of workload. It is a human factors tool that spans six areas, measuring on a scale the mental, physical and temporal demand of a task, in addition to the performance, effort and frustration a worker feels while performing said task. These values are weighted to give an overall index according to the formula in Fig. 3.5.

Figure 3.5: NASA-TLX weightings of elements affecting workload perception. PD - Physical Demand, MD - Mental Demand, TD - Temporal Demand, OP - Performance, EF - Effort, FR - Frustration. Ratings by the subject are weighted to give the overall workload.
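Assuming the standard NASA-TLX procedure (which Fig. 3.5 summarizes), the overall workload is the weighted average of the six ratings, where each weight is the number of times a dimension was chosen across the 15 pairwise comparisons. A sketch with made-up ratings:

```python
def nasa_tlx(ratings, weights):
    """Overall workload from six 0-100 ratings and pairwise-comparison
    weights; the six weights must total 15 (one per comparison)."""
    assert ratings.keys() == weights.keys() and sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

ratings = {"MD": 55, "PD": 30, "TD": 35, "OP": 25, "EF": 45, "FR": 30}
weights = {"MD": 4, "PD": 1, "TD": 2, "OP": 3, "EF": 4, "FR": 1}
print(nasa_tlx(ratings, weights))  # -> 40.33...
```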
3.4.3 Experimental Design

Figure 3.6: Layout of the Treasure Hunt space.

In the Dual-Task Phase, subjects were asked to navigate an extended and complicated course at the Braille Institute, spanning a mile, with tasks at marked waypoints (Fig. 3.6) - the elevator, a specific desk in the cafeteria, etc. Secondary stimuli were delivered at a variable rate during the course of the experiments.

Figure 3.7: Testing protocol for the Dual-Task Methodology Phase.

Table 3.4: Subject Pairings for Dual-Task Experiments

Control Group   Mobility Feedback Group
NM              RT-2
JS              CG
CVB             ON
MD              HF
GD              JV-2
TT              RP

The protocol for the Dual-Task Phase is outlined in Fig. 3.7. Before subjects were tested, they were trained to respond to secondary tactile stimuli until they reached a performance asymptote. The stimuli were presented at a programmed rate, and subjects were required to press a button at each occurrence. The stimuli were generated by a vibration application on an Android platform, which vibrates at 1-5 second intervals. The app allows a person to click the volume button to respond to a vibration; it registers a click only if it occurs within 5 seconds of the vibration, and it saves the times of clicks and cycle ends to a text file.

Subjects were trained to respond to vibrotactile stimuli in static (sitting down) and active (navigating) settings. The active settings included navigating to landmarks like the Access Bench, the Library, the second floor (using the elevator), the Volunteer Center, the Weingart Center, and the Cafeteria. No added task (search or otherwise) was included in the training session. After asymptote had been reached, subjects were invited back for the treasure hunt. Asymptote was defined as subjects responding to the vibrotactile stimuli satisfactorily (in less than 5 seconds) for at least ten sequential trials. The treasure hunt is a single, long-distance navigation task that includes mini-tasks subjects are asked to complete at each waypoint, e.g., 'find table 12 in the cafeteria.' The full list of tasks is as follows:

1. Start Point - Kitchen.
2. Task 1 - Navigate to the cafeteria and go to table 12. Ask for the message waiting for you (a phone number).
3. Task 2 - Navigate to the Weingart Center and sit on the second bench. Recite the phone number.
4. Task 3 - Navigate to Volunteer Services and ask for the "Nu-Eyes" business card.
5. Task 4 - Navigate to the Librarian's desk and ask for the USC Commencement Catalog.
6. Task 5 - Navigate to the Access Bench, and then to the second floor using the elevator.
7. Stop Point - Second Floor.

Subjects were distributed into two groups (cane-only and mobility feedback). Amateur and Expert navigators were placed into these groups on a paired basis (as shown in Table 3.4). Automaticity relies heavily on repeating a task, so this design was incorporated to guard against repeated measures within the two groups; otherwise it would be difficult to determine whether a reduced cognitive load (shorter response times to the button press) is a result of repeating the task or of the mobility prototype. The control group performed the treasure hunt using their cane only while responding to the vibrotactile stimulus, and the mobility feedback group performed it using the prototype system. A sighted group was also recruited to complete the treasure hunt while responding to the vibrotactile stimulus. The results are detailed in the following section.

The pitfall of measuring cognitive load in subjects is that it is very hard to isolate that variable, as differing subject behavior toward the testing methodology introduces variability into the data. There is an added layer of complexity given the small sample size of the study. The drawback of using a dual-task methodology is managing subject behavior and its effect on the variability of the data. It is hard to gauge how a subject will respond, whether they will follow instructions, or whether they will stop responding to the secondary stimuli. In order to measure the degree to which the secondary task is affected, subjects were asked to attend to the primary task of walking safely while accomplishing the secondary task to the best of their ability. Some subjects may attend fully to the secondary task while allowing the primary task to suffer, and this introduces variability into the data. Some subjects may stop responding as the difficulty of the task increases, which may create data analysis issues.
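The secondary-task logic described above (a stimulus every 1-5 s, a 5 s response window, logged response times) can be simulated in a few lines; respond() below is a hypothetical stand-in for the subject pressing the volume button:

```python
import random

def run_trials(n_trials, respond):
    """Simulate n_trials stimuli. respond() returns a reaction time in
    seconds, or None for a miss; responses over 5 s are scored as misses."""
    t, log = 0.0, []
    for _ in range(n_trials):
        t += random.uniform(1.0, 5.0)  # stimulus fires 1-5 s after the last
        rt = respond()
        ok = rt is not None and rt < 5.0  # the app's 5 s response window
        log.append((round(t, 2), round(rt, 2) if ok else None))
    return log  # (stimulus time, reaction time or None), as written to file

random.seed(0)
print(run_trials(5, lambda: random.gauss(1.5, 0.5)))
```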
3.4.4 Results

The following tables (3.5, 3.6 & 3.7) summarize navigation speed, button-press success and reaction time for the Cane-Only, Mobility Feedback and Sighted groups.

Table 3.5: Navigation Speed Summary (m/s)

Task #   Cane-Only     Mobility Feedback   Sighted
1*       0.33 ± 0.16   0.58 ± 0.11         1.06 ± 0.03
2*       0.28 ± 0.13   0.56 ± 0.20         1.12 ± 0.12
3        0.54 ± 0.18   0.51 ± 0.08         0.88 ± 0.01
4        0.58 ± 0.27   0.57 ± 0.08         0.93 ± 0.05
5        0.60 ± 0.32   0.51 ± 0.07         0.88 ± 0.08

Table 3.5 shows the navigation speed summary by group. The Mobility Feedback group navigated significantly faster (p < 0.05) in Tasks 1 & 2 than the Cane-Only group.

Table 3.6: Cognitive Load Summary (%)

Task #   Cane-Only     Mobility Feedback   Sighted
1        83.6 ± 10.9   84.0 ± 13.2         95 ± 3.0
2*       73.0 ± 7.3    85.9 ± 7.5          85.0 ± 7.7
3        83.0 ± 18.0   89.0 ± 8.2          100 ± 0.0
4        78.0 ± 18.0   88.0 ± 9.6          90.5 ± 5.0
5        75.0 ± 20.6   82.0 ± 11.2         92.0 ± 3.3

Table 3.6 shows the cognitive load summary by group, as represented by the success at responding to the vibrotactile stimuli. The Mobility Feedback group performed significantly better in Task 2 than the Cane-Only group, although their performance was consistently better across all the tasks. This is probably because Task 2 involved a location that subjects were unfamiliar with, and therefore had difficulty locating. This trend was also evident in Table 3.7, where the Mobility Feedback group responded significantly faster than the Cane-Only group in Task 2.

Table 3.7: Reaction Time Summary (s)

Task #   Cane-Only       Mobility Feedback   Sighted
1        1.387 ± 0.718   1.378 ± 0.714       0.690 ± 0.250
2*       2.221 ± 0.073   1.300 ± 0.522       1.203 ± 0.290
3        1.448 ± 0.760   1.205 ± 0.862       0.535 ± 0.105
4        1.850 ± 0.926   1.290 ± 0.625       0.978 ± 0.382
5        1.953 ± 0.909   1.479 ± 0.552       1.100 ± 0.000

When the categories of the Task Load Index are considered (Table 3.8), mobility feedback appears to pose a lower mental cost than the cane-only intervention. There is also a lower physical and temporal demand, although the self-assessment of performance is also lower. Both interventions brought about equal frustration, but more effort was required of the Cane-Only group than of those using Mobility Feedback.

Table 3.8: NASA-TLX Results by Group

Category          Cane-Only   Mobility Feedback   Sighted
Mental Demand     51.7        31.2                62.5
Physical Demand   34.2        27.5                5
Temporal Demand   33.3        24.2                27.5
Performance       26.7        12.5                22.5
Effort            41.7        21.7                45
Frustration       27.5        27.5                5.5

Chapter 4
An Adaptive Real-Time Control Algorithm for the WVA

4.1 The Issue

As described earlier, the wearable visual aid provides path planning and trajectory generation of the best traversable path for a user to navigate without obstruction by obstacles in the environment. This is achieved through computer-vision algorithms that employ motion tracking and path estimation. Upon the generation of this path, another algorithm is essential to steer the subject along it to their end goal. This area is less sophisticated, but equally crucial. In some ways, this control algorithm is the culmination of the previous chapters, in that to test its validity, output modalities and signaling cues must be balanced to safely guide subjects to the desired goal.

Currently, the WVA employs a simple control algorithm to aid the user in following the generated path. Although this system has proven satisfactory in statistically reducing the number of collisions a subject endures, it produces unwanted oscillations in subject movement. The threshold for outputting cues affects the frequency with which cues are given to the user. If cues are given infrequently, then the user travels in an oscillatory manner. If cues are provided too frequently, then the user will move slowly. Both of these are undesirable.
Our hypothesis was that this phenomenon could be mitigated by adapting the threshold to the individual's walking style, in direct correlation to their walking speed. The threshold is the point at which a command is issued; it is reached when the user deviates from the planned route by a threshold value. This can be measured as a distance from the path, or as the angle formed by the planned direction of travel and the actual direction of travel. Pilot studies suggested that controlling by measuring this angle worked best. We expect that this threshold value, which we have termed the optimal angle tolerance, will vary with subject gait patterns, such that subjects with a higher preferred walking speed will need a smaller tolerance to be guided effectively, and vice versa.

4.2 The Experimental Setup - A Heuristic Approach

To test a possible relationship between walking speed and the degree of deviation from a path, a pilot study was performed on a sighted subject navigating a path at differing speeds. We found that with increasing speed, the level of deviation from the path increased, as seen in Fig. 4.1. This correlation strongly indicates that speed may be a suitable parameter for tuning the optimal angle tolerance for subjects, such that a higher sensitivity (lower angle tolerance) would be needed for subjects who walk faster, and vice versa.

Figure 4.1: Plot showing point-by-point trajectory as a function of average speed. With higher speeds, deviation increases as a function of point, which is indicative of adjustments being made for the turn to come.

Knoblauch et al. (1996) found that walking speed can be segmented by age and gender. After observing pedestrians at crosswalks over an eight-hour period, they found that younger men (< 65 years old) navigated fastest at an average speed of 1.51 m/s, followed by younger women at 1.44 m/s and older men at 1.37 m/s; older women navigated slowest, at 1.26 m/s. We posit that tuning the optimal angular tolerance for these individual groups will generate a lookup table by which the WVA can adapt the threshold value upon learning the age and gender of the user.
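The proposed lookup could take a form like the sketch below, which maps age and gender to the Knoblauch et al. (1996) average speeds and then to an angle tolerance. The speed-to-tolerance cut-offs here are placeholders pending the tuning experiments described in the following sections:

```python
# Average crosswalk speeds (m/s) reported by Knoblauch et al. (1996)
SPEED_MS = {("M", "young"): 1.51, ("F", "young"): 1.44,
            ("M", "older"): 1.37, ("F", "older"): 1.26}

def angle_tolerance_deg(gender, age):
    group = "older" if age >= 65 else "young"
    speed = SPEED_MS[(gender, group)]
    # Placeholder mapping: faster walkers get a tighter tolerance
    if speed >= 1.45:
        return 15.0
    if speed >= 1.35:
        return 25.0
    return 30.0

print(angle_tolerance_deg("M", 30), angle_tolerance_deg("F", 70))  # 15.0 30.0
```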
These components were connected to an Arduino 328 Mainboard, a LilyPad XBee module and an XBee radio by hand-sewn conductive thread. The transmitter that communicated cues to the SmartVest consisted of an Arduino Duemilanove and an XBee radio connected to the controller through a COM port on the main computer.

Figure 4.3: Wireless tactile cuing vest - L) Front view. R) Back view.

The test path for navigation was a left turn, as shown in Fig. 4.4. Given that sharp turns are the hardest to make, this path would provide a good indication of how viable the system would be in successfully guiding subjects in typical environments.

Figure 4.4: Complex left turn path denoted by the space between cones. Start point is between yellow cones behind chair. Direction of travel indicated by black arrows.

Pilot experiments on this complex turn path were conducted using the vest and algorithm described above, and preliminary results show that blindfolded subjects are able to maintain the optimal path with minimal deviations and no collisions. Optimal path and trajectory changes were determined by seeing controls.

4.2.3 The Algorithm Scheme

Figure 4.5: Algorithm scheme - Red arrows indicate direction vectors extracted from the ideal trajectory; the green arrow is the direction vector of the test subject trajectory. Black lines signify physical obstacles or tolerance boundaries.

The scheme of the algorithm is shown above in Fig. 4.5. The red arrows are linear approximations of the control trajectory as direction vectors, indicating the direction of travel of the control subject at a given point. The green arrows are possible direction vectors of the test subject trajectory. There are three main conditions that determine the output of a navigational cue to a test user:

Condition 1 - The heading vector of the subject is outside a set angle tolerance of the control trajectory's heading vector at said point in the path.

Condition 2 - Subject position is outside a translational tolerance boundary of either axis (as given by i).

Condition 3 - Subject is within a set distance of a sharp left turn.

Due to the conditional nature of its decision-making, this algorithm scheme is considered a fuzzy logic controller. Fuzzy logic is an alternative approach (Klir and Yuan, 1995; Mamdani and Assilian, 1975; Zadeh, 1965) for controlling processes, especially those that are too complex for analysis by conventional techniques. It is based on the observation that the effective, real control strategies experts learn through experience can often be expressed as a set of condition-action, IF-THEN rules, which describe the process and recommend actions using linguistic, 'fuzzy' terms instead of classical, crisp 0/1 rules (Aranibar, 1994). This type of controller is also very common in real-time systems that use pulse-width modulation for controlling actuators (Lee, 1990; Shen et al., 2006; Sun, 2012), much like the servomechanism used by the SmartVest to keep subjects on the desired path. The control system representation of this servomechanism is shown in Fig. 4.6.

Figure 4.6: Control System Diagram showing the Real-Time Control Algorithm Scheme.
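As a rough illustration, the three condition-action rules above might be sketched as follows. This is a hypothetical sketch: the tolerance values, the rule ordering, and the side chosen for the pre-turn cue are illustrative assumptions rather than the controller's actual parameters.

```python
def decide_cue(heading_err_deg, lateral_offset_m, dist_to_turn_m, angle_tol_deg):
    """IF-THEN rules mirroring Conditions 1-3; returns (actuators, side) or None."""
    LATERAL_TOL_M = 0.5   # translational tolerance boundary (illustrative value)
    TURN_WARN_M = 1.0     # pre-turn warning distance (illustrative value)

    # Condition 2: outside a translational tolerance boundary ->
    # combined shoulder + waist cue, prompting a side-step.
    if abs(lateral_offset_m) > LATERAL_TOL_M:
        return ("shoulder+waist", "left" if lateral_offset_m > 0 else "right")

    # Condition 3: within a set distance of a sharp left turn ->
    # assumed here to pre-cue the right side so the subject rotates left.
    if dist_to_turn_m < TURN_WARN_M:
        return ("shoulder", "right")

    # Condition 1: heading vector outside the angle tolerance ->
    # shoulder cue, prompting a rotation back toward the planned heading.
    if abs(heading_err_deg) > angle_tol_deg:
        return ("shoulder", "left" if heading_err_deg > 0 else "right")

    return None  # within all tolerances; no cue
```

In the experiments below, the angle tolerance passed to such a rule set is the per-subject value being tuned.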
A vibrotactile motor situated on each shoulder and at each side of the waist outputs directional cues to test subjects. The cues are output on the side a subject should move away from, to simulate the avoidance motion human beings adopt to avoid hitting obstacles. Shoulder cues are output for the heading-vector condition, and the subject is trained to rotate to change the direction of their trajectory. Combined shoulder and waist cues are output when a subject steps outside a tolerance boundary, and they are trained to side-step in order to remain within the defined perimeter.

Unstable oscillation of a subject trajectory is introduced when multiple opposing cues are output in quick succession. On the other hand, timely cues are needed to ensure efficacy of subject travel. The balance of providing the minimal number of rotational cues that will still ensure accurate travel was achieved by adjusting the threshold value for each individual subject. We observed that a value that is too small will often introduce unstable oscillation, where a subject cannot navigate comfortably or complete the route in the time they would at their optimal tolerance setting. On the other hand, a value that is too large will reduce the efficacy of subject travel, as it does not alert them when they are outside the cone boundary.

Initial tests on blindfolded subjects showed minimal oscillation and an improved flow of subject movement trajectories at the setting of their optimal angle tolerance. In tuning the optimal tolerance value that would result in ideal subject travel (minimal oscillation and a smooth movement trajectory), it was also observed that this value differed between subjects, particularly those with different walking speeds (Figs. 4.7 & 4.8).

Figure 4.7: (L-R) Motion trajectories of a female, slower-paced blindfolded subject executing a left-turn obstacle course at a fixed time at tolerance angles of ±15, 25, 30 and 40° respectively. The direction of the black arrows indicates the direction the subject is meant to rotate towards. The subject is unable to complete the obstacle course at tolerance angles of 15 and 25°, successfully completes the obstacle course at a tolerance value of 30°, and goes off course at a tolerance value of 40°. Axes are measured in mm.

Figure 4.8: (L-R) Motion trajectories of a male, faster-paced blindfolded subject steered on the left-turn course at a fixed time at tolerance angles of ±15, 25, 30 and 40° respectively. The direction of the black arrows indicates the direction the subject is meant to rotate towards. The subject is able to fluidly complete the obstacle course at tolerance angles of 15 and 25°, but is off course at tolerance values of 30 and 40°. Axes are measured in mm.

Faster walkers experienced ideal subject travel with smaller angle tolerances than slower walkers. These findings are in line with our hypothesis on the correlation between speed and optimal angle tolerance. In order to achieve an adaptive control system, we must first isolate the variable that determines the optimal angle tolerance for each subject type. Based on our observation of a possible correlation between walking speed and optimal angle tolerance, our approach is to classify based on preferred walking speed. As found by Knoblauch et al. (1996), this can be further segmented by age and gender. Therefore, we can create a lookup table that adjusts optimal angle tolerance based on walking speed as determined by age and gender.
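In code, such a table could be as simple as a keyed dictionary. This is a sketch: the tolerance values shown are the group optima reported in Section 4.2.4 below, and the 65-year age boundary follows the segmentation of Knoblauch et al. (1996).

```python
# (gender, age group) -> optimal angle tolerance in degrees,
# taken from the group optima found in Section 4.2.4.
OPTIMAL_TOLERANCE_DEG = {
    ("male", "younger"): 25,
    ("female", "younger"): 15,
    ("male", "older"): 40,
    ("female", "older"): 35,
}

def angle_tolerance(gender: str, age_years: int) -> int:
    """Look up the tolerance once the user's age and gender are known."""
    group = "older" if age_years >= 65 else "younger"
    return OPTIMAL_TOLERANCE_DEG[(gender, group)]
```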
A total of twelve subjects were recruited for the study (three per group). Subjects navigated at angle tolerance values of 10, 15, 20, 25, 30, 35, 40 and 50°. Subjects participated for two sessions, and navigated at each value in a random order for a total of five times. The makeup of each group is shown in Table 4.1.

Table 4.1: Subject Demographics

            Younger Males   Younger Females   Older Males   Older Females
            MW              JMB               LA            BS
            SW              NK                MA            MA-2
            YL              TN                VV            PH
Mean Age:   26              29                74            72

MATLAB-generated motion trajectories of the test subjects were compared with those of seeing controls to determine success rate. The success rate of each trial was determined by quantitatively analyzing the fluidity of the subjects' motion patterns according to objective criteria. The criteria include the subjects' ability to reach their goal within 1 m and their ability to stay within the cones. These results were collated to create a lookup table for each group to determine their optimal angle tolerance.

4.2.4 Lookup-Table Results

The results of analyzing the subject trials are shown below for each subject group. The figures show the average rate of success of that group at each tolerance (Figs. 4.9-4.12).

Figure 4.9: Success Rate of Younger Males navigating with the SmartVest as a function of Tolerance Setting

Figure 4.10: Success Rate of Younger Females navigating with the SmartVest as a function of Tolerance Setting

Figure 4.11: Success Rate of Older Males navigating with the SmartVest as a function of Tolerance Setting

Figure 4.12: Success Rate of Older Females navigating with the SmartVest as a function of Tolerance Setting

The Younger Male group (M = 26 years, Sp = 0.25 m/s) had the most success at a tolerance setting of 25°, followed closely by 15°. They had the least success at 40°. The Younger Female group (M = 29 years, Sp = 0.38 m/s) had the best success at 15° and the least success at 50°. The Older Male group (M = 74 years, Sp = 0.20 m/s) navigated best at 40° and worst at 30°, followed closely by 10°. The Older Female group (M = 72 years, Sp = 0.20 m/s) navigated most successfully at 35° and least successfully at 25°.

It may be interesting to note that the younger females navigated faster than the younger males, and the older females kept pace with the older males. There may be other factors that could explain this trend (activity level, physical injuries, or weight). The correlations between age and average speed were stronger (younger navigating faster than older) than those between gender and speed (women navigating faster than men). This may have to do with differences in fitness and stride length: the women in the study were more athletic than the men (one male subject walked with a support cane), and the younger men in the study were tall. It is possible that these subjects walked slower under guidance, even though in sighted navigation they would cover the same ground.

Correlations

The graphs (Figs. 4.13-4.20) show correlations between success rate and walking speed at fixed tolerance values. As suspected, there is some correlation. However, these correlations are evident only at the extreme tolerances (10, 15, 40, and 50°). At tolerances that are highly sensitive (10 and 15°), the success rate is positively correlated with walking speed, supporting our initial hypothesis and the pilot data from Section 4.2.3 that someone with a higher walking speed would be successfully guided by a smaller-valued tolerance. For larger tolerance angles (40 and 50°), the success rate is negatively correlated with walking speed, again supporting the notion that slower walkers will benefit from high tolerance values.

Figure 4.13: Success Rate by group as a function of Group Walking Speed at a 10° tolerance.

Figure 4.14: Success Rate by group as a function of Group Walking Speed at a 15° tolerance.
Figure 4.15: Success Rate by group as a function of Group Walking Speed at a 20° tolerance.

Figure 4.16: Success Rate by group as a function of Group Walking Speed at a 25° tolerance.

Figure 4.17: Success Rate by group as a function of Group Walking Speed at a 30° tolerance.

Figure 4.18: Success Rate by group as a function of Group Walking Speed at a 35° tolerance.

Figure 4.19: Success Rate by group as a function of Group Walking Speed at a 40° tolerance.

Figure 4.20: Success Rate by group as a function of Group Walking Speed at a 50° tolerance.

The middle values, particularly 20, 25, 35 and 40°, show weak correlations (R² < 0.5). This suggests a larger sample size would be needed to establish tolerance thresholds for a given walking speed. The wide range in subject age (26-74 years) gave us a wide range in subject gait speeds, and this helped establish strong correlations at high and low guidance sensitivities; however, it also explains the large deviations in between, and why the correlations change slope somewhat discontinuously. Having more subjects, including subjects with more gradual changes in speed, would likely yield stronger correlations for the middle values.

The Discussion

The reliance on seeing controls to provide normative behavior may fail in a dynamic environment, as moving obstacles are hard to predict in an uncontrolled outdoor setting. This control algorithm is also limited to a 2D environment, considering only translational motion of subjects. It cannot be used in scenarios where there is also a rotational component to be controlled.

4.3 The Control System - An Analysis

In addition to creating a lookup table that adapts to walking speed as determined by age and gender, we sought to characterize the system response of the plant using the movement dynamics and angular tolerance values that created stability in the plant. A representative example of that condition is identified using black-box approaches to build a model of plant behavior.

4.3.1 System Identification

As we would like to characterize the system response of subjects responding to the direction cues of the SmartVest, a black-box modeling approach was used to identify the system model. The input of the system is defined as the direction pulses, while the output is the resulting deviation from the goal path. These features were extracted from the motion trajectories of the subjects, for trials in which subjects achieved "normative flow", defined as the subject stopping within a distance of 5% of the goal point. Several methods exist for identifying systems based on input-output data; we utilized the following parametric and nonparametric approaches:

Finite Impulse Response (FIR) Model

The Finite Impulse Response (FIR) model is a conventional system identification model that assumes the impulse response of a linear system can be effectively modeled with a finite number of samples. Correlation analysis is used to estimate the coefficients relating a measured input signal u(t) to a measured output signal y(t).
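As an illustration of the idea, FIR coefficients can be estimated from recorded input-output data roughly as follows. This sketch uses ordinary least squares over lagged inputs rather than the correlation analysis named above; the function and variable names are hypothetical.

```python
import numpy as np

def estimate_fir(u, y, n_taps=70):
    """Estimate FIR coefficients h so that y(t) ~ sum_k h[k] * u(t - k).

    Here u would be the cue signal and y the resulting path deviation;
    n_taps=70 mirrors the 70 impulse response coefficients reported in
    Section 4.3.2.
    """
    u, y = np.asarray(u, float), np.asarray(y, float)
    rows = len(u) - n_taps + 1
    # Regressor matrix of lagged inputs: U[i, k] = u(t_i - k)
    U = np.column_stack([u[n_taps - 1 - k : n_taps - 1 - k + rows]
                         for k in range(n_taps)])
    h, *_ = np.linalg.lstsq(U, y[n_taps - 1:], rcond=None)
    return h  # h[k] is the estimated impulse response at lag k
```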
State-Space (SS) Model

State-space models use state variables to describe a system by a set of first-order differential equations, rather than by one or more nth-order differential equations. As in a Kalman filter, coefficients are estimated using a prediction-error minimization algorithm: model parameters are initialized, and then updated using an iterative search to minimize the prediction errors.

Hammerstein-Wiener (HW) Model

The Hammerstein-Wiener model is a nonparametric, sequential, block-structured technique used to model nonlinear systems (Fig. 4.21), in particular physical processes. In these processes, the input nonlinearity might represent typical physical transformations in actuators, and the output nonlinearity might describe common sensor characteristics.

Figure 4.21: Hammerstein-Wiener Block Model

4.3.2 Results

Applying these methods through the System Identification Toolbox available in MATLAB results in the following representations of a system describing the normative flow of one subject during a successful trial (Fig. 4.22).

Figure 4.22: Example of Normative Flow during a successful trial

Finite Impulse Response (FIR)

The impulse response (Fig. 4.23) shows a spike at time t = 0 and then a half sine wave (0 to π) that converges to zero amplitude. The step response (Fig. 4.24) similarly asymptotes, in this case to an amplitude of 0.015. Seventy impulse response coefficients were calculated for this data set.

Figure 4.23: Finite Impulse Response of Subject during Normative Flow

Figure 4.24: FIR Results to an Input Step Function

Figure 4.25: Bode Plot of FIR Model Showing Gain and Phase

Figure 4.26: Nyquist Plot of FIR Model Showing Zeros and Poles

Figure 4.27: State-Space Model Results to an Input Step Function

State-Space Model

Eighteen free coefficients were calculated using the state-space model. The step and impulse responses (Figs. 4.27 & 4.28) show the system is critically damped, with a peak time of 1.5 seconds in the transient phase and a settling time of 6.5 seconds, where it enters steady state. The Bode plot shows closed-loop stability, whereas the Nyquist plot shows a move toward instability before it converges to a stable zero center (Figs. 4.29 & 4.30).

Figure 4.28: SS Model Results to an Input Impulse

Figure 4.29: Bode Plot of SS Model Showing Gain and Phase

Figure 4.30: Nyquist Plot of SS Model Showing Poles and Zeros

Hammerstein-Wiener Model

Figure 4.31: Input Nonlinearity of Subject Normative Flow

Figure 4.32: Linear Block of HW Model Showing Impulse Response

The Hammerstein-Wiener model indicates that there is a zero input nonlinearity (Fig. 4.31) from the actuator. In the impulse and step responses (Figs. 4.32 & 4.33) there appears to be a stable harmonic oscillation, indicating the system is in resonance. The output nonlinearity (Fig. 4.34) is calculated to be a dead zone, introduced by the sensor characteristics.

Figure 4.33: Linear Block of HW Model Showing Response to a Unit Step Function

Figure 4.34: Output Nonlinearity Block of HW Model

Model Fit Comparison

The FIR model gives the best fit to independent validation data (∼75%), followed by the state-space model (∼65%). The Hammerstein-Wiener model does not appear to fit the data at all (Fig. 4.35).

Figure 4.35: Comparison of Model Fit to Validation Data

Chapter 5
Conclusions and Future Work

My work has demonstrated that the HCI approach is important when designing safety-critical devices for the blind. Given the rate of mobility aid abandonment in this demographic, and the safety-critical nature of closed-loop systems like the WVA, it is essential that an iterative, bottom-up approach is applied in device development. Going forward in designing mobility aids, it is important to weigh objective and subjective criteria equally, to design an effective device that subjects will consistently use in their everyday lives.
5.1 Recommendations for the Wearable Visual Aid

In assessing speech and vibration as feedback modalities for communicating mobility feedback to visually impaired users, we found that both provided a statistically significant benefit over cane-only navigation. Although subjects performed equally well with both modalities, speech was rated more usable. Therefore, I would recommend that both modalities be incorporated into the Wearable Visual Aid, allowing the end user the option to select the modality they find more suitable.

We were able to quantify the unique challenges of amateur and expert navigators in relation to daily navigation tasks, and to compare them with sighted individuals. We identified that both groups had difficulty navigating in unfamiliar surroundings and locating objects in confined spaces compared to their sighted counterparts. Amateur navigators had particular difficulty navigating when performing a secondary task, whereas expert navigators had issues in the event that spatial orientation was lost. It is therefore recommended that the Wearable Visual Aid incorporate specific functions that alleviate the unique difficulties of the personas that would utilize this device.

The equalizing effect of mobility feedback between these groups was quantified, demonstrating the potential benefit of the Wearable Visual Aid. In quantifying the cognitive load effects of the mobility prototype, it was determined that mobility feedback statistically reduced attentional demands when navigating in unfamiliar surroundings. It is recommended that the Wearable Visual Aid only provide feedback when necessary, or at a rate predetermined by the user, such as when it is prompted by the user for directions.

In building a fuzzy logic control algorithm and identifying parameters that introduce unwanted oscillation in subject movement, we were able to classify tolerance values among groups intersected by age and gender. The resulting lookup table contains the parameter values that give the highest likelihood of normative pedestrian flow and subjective comfort for each group. It is therefore recommended that these values be incorporated into the Wearable Visual Aid, such that when identifying information is provided by the user, the tolerance parameters are adapted accordingly. We were also able to quantify system behavior under normative flow conditions, and identified the system response to impulse and step inputs. I would recommend that the Wearable Visual Aid incorporate these components in future iterations of the control system to enable it to attain a zero-error condition.

5.2 Future Experiments

Feedback Modalities: Future experiments in assessing feedback modalities should involve dynamic, real-world scenarios with complex tasks. These experiments should include subtasks (for example, crossing the street to a grocery store to purchase Splenda). Vibrotactile mobility feedback and multimodal feedback should also be tested in conjunction with audible mobility feedback.

Cognitive Load Effects: The effects of training in minimizing the cognitive load of mobility feedback within subjects should be explored (via an engagement learning model) to inform a training protocol that would minimize the attentional demands of using the Wearable Visual Aid.

Adaptive Control Algorithm Design: The optimal angular tolerance should be adapted dynamically to an individual in changing walking conditions, based on markers of their asymptotic walking speed when using the device (see the sketch below).
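One possible shape for that dynamic adaptation is sketched here, under the assumption that a smoothed estimate of recent speed is a usable proxy for asymptotic walking speed. The anchor points loosely echo Chapter 4's trend of faster walkers needing smaller tolerances; they are illustrative values only, not validated parameters.

```python
class AdaptiveTolerance:
    """Dynamically map a running speed estimate to an angle tolerance."""

    # (speed in m/s, tolerance in degrees), sorted by speed; illustrative anchors
    ANCHORS = [(0.20, 40.0), (0.25, 25.0), (0.38, 15.0)]

    def __init__(self, alpha=0.1):
        self.alpha = alpha      # smoothing factor for the speed estimate
        self.avg_speed = None   # exponential moving average of observed speed

    def update(self, speed_mps):
        """Fold in a new speed sample and return the current tolerance."""
        if self.avg_speed is None:
            self.avg_speed = speed_mps
        else:
            self.avg_speed += self.alpha * (speed_mps - self.avg_speed)
        return self.tolerance()

    def tolerance(self):
        """Piecewise-linear interpolation over the anchor points."""
        s, pts = self.avg_speed, self.ANCHORS
        if s <= pts[0][0]:
            return pts[0][1]
        if s >= pts[-1][0]:
            return pts[-1][1]
        for (s0, t0), (s1, t1) in zip(pts, pts[1:]):
            if s <= s1:
                return t0 + (t1 - t0) * (s - s0) / (s1 - s0)
```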
Appendix A
The Miniguide Study

A.1 Introduction

Figure A.1: Miniguide Ultrasound Mobility Aid.

The Miniguide is a commercially available secondary mobility aid, or electronic travel aid (ETA), meant to supplement primary mobility aids such as the white cane, guide dog and personal helpers. The Miniguide localizes objects by detecting the reflection of transmitted ultrasound waves, which are communicated to the user through vibrations. These cues increase in frequency when the object is closer to the ultrasound source. Given that our group is developing a device that is considered an ETA, it seemed only sensible to conduct market research that represents the product baseline of the commercially available products in this sphere. The intention is to measure product effectiveness, efficiency and satisfaction compared to the WVA prototype that had been tested previously.

Figure A.2: The Wearable Visual Aid.

The Miniguide (Fig. A.1) is a handheld ultrasonic system providing audible and vibrotactile feedback, which communicates the distance from a person to an obstacle by modulating the frequency of pulses. It is a compact device, measuring 2.75" x 1" x 0.75" and encased in durable plastic. The feedback is programmed such that the closer the object is, the faster the system vibrates when in tactile mode. When in audible mode, the system chirps faster when one is closer to an object. There are five ranges that govern the sensitivity of detecting obstacles: 8 m, 4 m (the default setting), 2 m, 1 m, and 0.5 m. These settings can be adjusted by the user to their liking.

The objectives of the Miniguide study were to compare it directly to the white cane using tracking and speed data. We also wanted to establish a baseline comparison to other ETAs, including the wearable visual aid (Fig. A.2) currently under development in our lab.

A.2 Methods

The study protocol was approved by the University of Southern California Institutional Review Board. Five subjects with low vision (corrected visual acuity of less than 20/60 or visual field less than 90 degrees) were recruited from the Braille Institute of Los Angeles and asked to navigate an obstacle course solely using the Miniguide instead of their white cane. Of the five subjects enrolled, two had no measurable visual acuity. The obstacle course was 13.37 meters by 3.66 meters and had three randomly placed, low-sitting obstacles of 0.5-1 meter in height (Fig. A.3). The subjects walked through the course 20 times (10 times back and forth). The Miniguide was set in its purely tactile mode with a gap-finding range of four meters. Percentage Preferred Walking Speed (PPWS) and tracking data were measured, and the number of collisions was recorded.

Figure A.3: Layout of Obstacle Course for Testing

Three white cane users with no light perception served as controls. The control subjects used their white canes to walk through the same obstacle course as part of a different study. PPWS was calculated by taking the ratio of the speed of subjects using the Miniguide to navigate the obstacle course to their Preferred Walking Speed (PWS), as shown in Eq. A.1. PWS was established by measuring subjects' normalized speed when they navigated the hallway ten times each way with no obstacles.

PPWS = (Trial Speed / Preferred Walking Speed) × 100    (A.1)
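Expressed as code, Eq. A.1 amounts to the following trivial sketch; the example numbers in the comment are illustrative, not measured values.

```python
def ppws(trial_distance_m, trial_time_s, preferred_speed_mps):
    """Percentage Preferred Walking Speed (Eq. A.1)."""
    trial_speed = trial_distance_m / trial_time_s
    return 100.0 * trial_speed / preferred_speed_mps

# e.g., the 13.37 m course walked in 40 s by a subject whose PWS is
# 0.84 m/s gives ppws(13.37, 40, 0.84) ~= 39.8
```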
A.3 Results

Table A.1: Mobility Results

Mobility Aid   PPWS   Average Collisions
Miniguide      56%    11
White Cane     65%    6
WVA            29%    2.33

Table A.1 shows the summary of mobility results for the baseline comparison. Subjects using the Miniguide had more collisions and completed the course in more time compared to the control subjects. When all Miniguide users were included, their average PPWS was 56% with an average of 11 collisions (summed over all trials). Two Miniguide subjects had no measurable visual acuity; these subjects had an average of 24 collisions with a PPWS of 35%. In comparison, the white cane group had an average of six collisions with a PPWS of 65%. In all cases, subjects increased their speed with training. Subjects attributed their high collision rate using the Miniguide to its inability to detect low-sitting obstacles; however, they did like its compact size. Unlike the white cane, the Miniguide does not provide intuitive alignment; therefore, some subjects had difficulty maintaining proper orientation from the ground.

In conclusion, the five visually impaired users who used the Miniguide had a lower PPWS and more collisions than the control white-cane group. Although the study size is small, the results suggest that Miniguide usage should be accompanied by supervised training in order to maximize any benefit from this device. When comparing the wearable visual aid to the white cane, subjects navigated on average with a lower PPWS but fewer collisions (Table A.1). The same trade-off appears when comparing the WVA with the Miniguide: the WVA group walked more slowly but suffered fewer collisions.

Figure A.4: PPWS as a function of trial

In conclusion, we see that the role of mobility aids will grow in importance with the steady increase of the aging population and reduced vision. In evaluating the Miniguide against the white cane, the intent was to establish a baseline comparison of secondary mobility aids. Subjects using the Miniguide walked at a lower PPWS and had more collisions compared to the white cane, and walked at a higher PPWS but also suffered more collisions compared to the WVA.

Figure A.5: PPWS as a function of trial

Figure A.6: PPWS as a function of trial

Appendix B
Selected Wechsler Intelligence Achievement Test Questions

The pre-intervention cognitive baseline was determined using selected questions from a modified Wechsler Adult Intelligence Scale (WAIT-III). This scale is ordinarily an individually administered measure of oral language, reading, written language and mathematics. As blind subjects would be biased in the areas of reading and written language, the focus was placed on the other areas to eliminate such bias. Five questions spanning the areas of oral language and mathematics were selected to measure cognitive ability. The questions were chosen to determine subjects' command of simple reasoning and memory. Each question was worth two points, and the resulting total was scored out of ten points. They are as follows:

Q1. In 1990, 76.9% of adults could expect to live to the age of 65 years or older; at the turn of the previous century, however, only 40.9% of adults enjoyed such longevity. What is the meaning of the word longevity in this sentence?

Q2. When peaches first appear in grocery stores, their prices are quite high. At the height of the growing season, their prices actually decrease. As the season nears its conclusion, their prices increase again. What is the most likely reason for the changes in the peach prices during the year?

Q3. Which one of the five makes the best comparison? Brother is to sister as niece is to: (one correct answer)
a. Mother b. Daughter c. Aunt d. Uncle e. Nephew

Q4. Which one of the five is least like (or similar to) the other four? (one correct answer)
a. Touch b. Taste c. Hear d. Smile e. See

Q5. The price of an article was cut 20% for a sale. By what percent must the item be increased to sell the article at the original price? (one correct answer)
a. 15% b. 20% c. 25% d. 30% e. 40%

References

Adebiyi, Aminat, Mante, Nii, Zhang, Chenghui, Sahin, Furkan E, Medioni, Gerard G, Tanguay, Armand R, and Weiland, James D. Evaluation of feedback mechanisms for wearable visual aids. In Multimedia and Expo Workshops (ICMEW), 2013 IEEE International Conference on, pages 1–6. IEEE.

Aranibar, Luis Alfonso Quiroga. Learning fuzzy logic from examples. Ohio University, 1994.

Arditi, Aries and Tian, YingLi. User interface preferences in the design of a camera-based navigation and wayfinding aid. Journal of Visual Impairment and Blindness (Online), 107(2):118, 2013. ISSN 1559-1476.

Bach-y-Rita, Paul, Danilov, Yuri, Tyler, Mitchell E, and Grimm, Robert J. Late human brain plasticity: vestibular substitution with a tongue BrainPort human-machine interface. Intellectica, 1(40):115–22, 2005.

Bangor, Aaron, Kortum, Philip, and Miller, James. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3):114–123, 2009. ISSN 1931-3357.

Blasch, Bruce and Stuckey, Kenneth. Accessibility and mobility of persons who are visually impaired: A historical analysis. Journal of Visual Impairment & Blindness (JVIB), 89(05), 1995. ISSN 0145-482X.

Borenstein, J. The NavBelt - a computerized multi-sensor travel aid for active guidance of the blind. Ann Arbor, 1001:48109, 1990.

Brooke, John. SUS - A quick and dirty usability scale. Usability Evaluation in Industry, 189(194):4–7, 1996.

Brooke, John. SUS: a retrospective. Journal of Usability Studies, 8(2):29–40, 2013. ISSN 1931-3357.

Brunken, Roland, Plass, Jan L, and Leutner, Detlev. Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1):53–61, 2003. ISSN 0046-1520.

Cholewiak, Roger W and Collins, Amy A. Vibrotactile localization on the arm: Effects of place, space, and age. Perception & Psychophysics, 65(7):1058–1077, 2003. ISSN 0031-5117.

Church, Richard L and Marston, James R. Measuring accessibility for people with a disability. Geographical Analysis, 35(1):83–96, 2003. ISSN 1538-4632.

Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences. 2nd edn. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1988.

Collignon, Olivier, Voss, Patrice, Lassonde, Maryse, and Lepore, Franco. Cross-modal plasticity for the spatial processing of sounds in visually deprived subjects. Experimental Brain Research, 192(3):343–358, 2009. ISSN 0014-4819.

Congdon, N, O'Colmain, B, Klaver, CC, and Klein, R. Causes and prevalence of visual impairment among adults in the United States. Archives of Ophthalmology, 122(4):477, 2004.

Elliott, Linda R, Van Erp, Jan BF, Redden, Elizabeth S, and Duistermaat, Maaike. Field-based validation of a tactile navigation device. Haptics, IEEE Transactions on, 3(2):78–87, 2010. ISSN 1939-1412.

Farmer, LW and Smith, DL. Adaptive technology. Foundations of Orientation and Mobility, 2:231–259, 1997.

Fok, Daniel, Polgar, Janice Miller, Shaw, Lynn, and Jutai, Jeffrey W. Low vision assistive technology device usage and importance in daily occupations. Work, 39(1):37–48, 2011. ISSN 1051-9815.

Garcia, Sara, Petrini, Karin, Rubin, Gary S, Da Cruz, Lyndon, and Nardini, Marko. Visual and non-visual navigation in blind patients with a retinal prosthesis. PLoS ONE, 10(7):e0134369, 2015. ISSN 1932-6203.
Gibson, Eleanor Jack. Principles of Perceptual Learning and Development. 1969.

Giudice, Nicholas A and Legge, Gordon E. Blind navigation and the role of technology. Engineering Handbook of Smart Technology for Aging, Disability, and Independence, pages 479–500, 2008.

Goldstone, Robert L. Perceptual learning. Annual Review of Psychology, 49(1):585–612, 1998. ISSN 0066-4308.

Golledge, Reginald G. Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes. JHU Press, 1999. ISBN 080185993X.

Guerreiro, Tiago, Oliveira, João, Benedito, João, Nicolau, Hugo, Jorge, Joaquim, and Gonçalves, Daniel. Blind people and mobile keypads: accounting for individual differences, pages 65–82. Springer, 2011. ISBN 3642237738.

Guth, DA and Rieser, JJ. Perception and the control of locomotion by blind and visually impaired pedestrians. Foundations of Orientation and Mobility, 2:9–38, 1997.

Havik, Else M, Kooijman, Aart C, and Steyvers, FJ. The effectiveness of verbal information provided by electronic travel aids for visually impaired persons. J Vis Impair Blind, 105:624–637, 2011.

Hill, Everett W and Ponder, Purvis. Orientation and Mobility Techniques: A Guide for the Practitioner. American Foundation for the Blind, 1976. ISBN 0891280014.

Hill, Jeremy and Black, John. The Miniguide: A New Electronic Travel Device. Journal of Visual Impairment & Blindness, 97(10):1–6, 2003.

Hocking, Clare. Function or feelings: factors in abandonment of assistive devices. Technology and Disability, 11(1,2):3–11, 1999. ISSN 1055-4181.

Hoofien, Dan, Gilboa, Assaf, Vakil, Eli, and Donovick, Peter J. Traumatic brain injury (TBI) 10-20 years later: a comprehensive outcome study of psychiatric symptomatology, cognitive abilities and psychosocial functioning. Brain Injury, 15(3):189–209, 2001. ISSN 0269-9052.

Jacko, Julie A, Scott, Ingrid U, Sainfort, François, Moloney, Kevin P, Kongnakorn, Thitima, Zorich, Brynley S, and Emery, V Kathlene. Effects of multimodal feedback on the performance of older adults with normal and impaired vision, pages 3–22. Springer, 2002. ISBN 3540008551.

Jones, Lynette A, Lockyer, Brett, and Piateski, Erin. Tactile display and vibrotactile pattern recognition on the torso. Advanced Robotics, 20(12):1359–1374, 2006. ISSN 0169-1864.

Kalyuga, Slava, Chandler, Paul, and Sweller, John. Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13(4):351–371, 1999. ISSN 0888-4080.

Klatzky, Roberta L, Loomis, J, and Golledge, R. Nonvisual navigation based on information about self-motion. Psychology at the Turn of the Millennium, 1:245–260, 2002.

Klatzky, Roberta L, Marston, James R, Giudice, Nicholas A, Golledge, Reginald G, and Loomis, Jack M. Cognitive load of navigating without vision when guided by virtual sound versus spatial language. Journal of Experimental Psychology: Applied, 12(4):223, 2006. ISSN 1939-2192.

Klatzky, Roberta L, Giudice, Nicholas A, Bennett, Christopher R, and Loomis, Jack M. Touch-screen technology for the dynamic display of 2D spatial information without vision: Promise and progress. Multisensory Research, 27(5-6):359–378, 2014. ISSN 2213-4808.

Klir, George and Yuan, Bo. Fuzzy Sets and Fuzzy Logic, volume 4. Prentice Hall, New Jersey, 1995.

Knoblauch, Richard, Pietrucha, Martin, and Nitzburg, Marsha. Field studies of pedestrian walking speed and start-up time. Transportation Research Record: Journal of the Transportation Research Board, (1538):27–38, 1996. ISSN 0361-1981.
Kolb, H., Nelson, R., Fernando, E., and Jones, J. Webvision: The Organization of the Retina and Visual System, 2012. URL webvision.med.utah.edu.

La Grow, Steven. The use of the Sonic Pathfinder as a secondary mobility aid for travel in business environments: a single-subject design. Journal of Rehabilitation Research and Development, 36(4):333–340, 1999. ISSN 0742-3241.

LaGrow, SJ and Weessies, MJ. Orientation and Mobility: Techniques for Independence. Dunmore Press, Palmerston North, New Zealand, 1994.

Lee, Chuen Chien. Fuzzy logic in control systems: fuzzy logic controller. II. Systems, Man and Cybernetics, IEEE Transactions on, 20(2):419–435, 1990. ISSN 0018-9472.

Loomis, Jack M, Klatzky, Roberta L, Golledge, Reginald G, Cicinelli, Joseph G, Pellegrino, James W, and Fry, Phyllis A. Nonvisual navigation by blind and sighted: assessment of path integration ability. Journal of Experimental Psychology: General, 122(1):73, 1993. ISSN 1939-2222.

Loomis, Jack M, Golledge, Reginald G, and Klatzky, Roberta L. Navigation system for the blind: Auditory display modes and guidance. Presence: Teleoperators and Virtual Environments, 7(2):193–203, 1998.

Loomis, Jack M, Klatzky, Roberta L, and Giudice, Nicholas A. Sensory substitution of vision: importance of perceptual and cognitive processing. Boca Raton, FL, USA: CRC Press, 2012.

Mamdani, Ebrahim H and Assilian, Sedrak. An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies, 7(1):1–13, 1975. ISSN 0020-7373.

Mangione, Carol M, Berry, Sandra, Spritzer, Karen, Janz, Nancy K, Klein, Ronald, Owsley, Cynthia, and Lee, Paul P. Identifying the content area for the 51-item National Eye Institute Visual Function Questionnaire: results from focus groups with visually impaired persons. Archives of Ophthalmology, 116(2):227–233, 1998. ISSN 0003-9950.

Massof, R. Low vision and blindness: changing perspective and increasing success. National Federation of the Blind, 2006.

Mayer, R.E. Multimedia Learning. Cambridge University Press, New York, 2001.

McKenna, Kryss, Cooke, Deirdre M, Fleming, Jennifer, Jefferson, Alanna, and Ogden, Sarah. The incidence of visual perceptual impairment in patients with severe traumatic brain injury. Brain Injury, 20(5):507–518, 2006. ISSN 0269-9052.

Meijer, Peter BL. An experimental system for auditory image representations. Biomedical Engineering, IEEE Transactions on, 39(2):112–121, 1992. ISSN 0018-9294.

Nicolau, Hugo, Guerreiro, Tiago, and Jorge, Joaquim. Designing guides for blind people. Departamento de Engenharia Informatica, Instituto Superior Tecnico, Lisboa, 2009.

Nutheti, Rishita, Shamanna, Bindiganavale R, Nirmalan, Praveen K, Keeffe, Jill E, Krishnaiah, Sannapaneni, Rao, Gullapalli N, and Thomas, Ravi. Impact of impaired vision and eye disease on quality of life in Andhra Pradesh. Investigative Ophthalmology & Visual Science, 47(11):4742–4748, 2006. ISSN 1552-5783.

Paas, Fred, Renkl, Alexander, and Sweller, John. Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1):1–4, 2003. ISSN 0046-1520.

Parseihian, Gaëtan and Katz, Brian FG. Morphocons: A new sonification concept based on morphological earcons. Journal of the Audio Engineering Society, 60(6):409–418, 2012.

Patla, Aftab E and Vickers, Joan N. Where and when do we look as we approach and step over an obstacle in the travel path? Neuroreport, 8(17):3661–3665, 1997. ISSN 0959-4965.

Phillips, Betsy and Zhao, Hongxin. Predictors of assistive technology abandonment. Assistive Technology, 5(1):36–45, 1993. ISSN 1040-0435.
Polack, Sarah, Kuper, Hannah, Mathenge, Wanjiku, Fletcher, Astrid, and Foster, Allen. Cataract visual impairment and quality of life in a Kenyan population. British Journal of Ophthalmology, 91(7):927–932, 2007. ISSN 1468-2079.

Poláček, Ondřej, Grill, Thomas, and Tscheligi, Manfred. Towards a navigation system for blind people: a wizard of oz study. ACM SIGACCESS Accessibility and Computing, (104):12–29, 2012. ISSN 1558-2337.

Pradeep, Vivek, Medioni, Gerard, and Weiland, James. A wearable system for the visually impaired. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pages 6233–6236. IEEE. ISBN 1424441234.

Riemer-Reiss, Marti L and Wacker, Robbyn R. Factors associated with assistive technology discontinuance among individuals with disabilities. Journal of Rehabilitation, 66(3):44, 2000. ISSN 0022-4154.

Röder, Brigitte, Teder-Sälejärvi, Wolfgang, Sterr, Anette, Rösler, Frank, Hillyard, Steven A, and Neville, Helen J. Improved auditory spatial tuning in blind humans. Nature, 400(6740):162–166, 1999. ISSN 0028-0836.

Sauro, Jeff. A Practical Guide to the System Usability Scale: Background, Benchmarks & Best Practices. Measuring Usability LLC, 2011. ISBN 1461062705.

Shalev-Shwartz, Shai, Wexler, Yonatan, and Shashua, Amnon. ShareBoost: Efficient multiclass learning with feature sharing. In Advances in Neural Information Processing Systems, pages 1179–1187.

Shen, Xiangrong, Zhang, Jianlong, Barth, Eric J, and Goldfarb, Michael. Nonlinear model-based control of pulse width modulated pneumatic servo systems. Journal of Dynamic Systems, Measurement, and Control, 128(3):663–669, 2006. ISSN 0022-0434.

Shull, Peter B and Damian, Dana D. Haptic wearables as sensory replacement, sensory augmentation and trainer - a review. Journal of NeuroEngineering and Rehabilitation, 12(1):1, 2015. ISSN 1743-0003.

Soong, Grace P, Lovie-Kitchin, Jan E, and Brown, Brian. Preferred walking speed for assessment of mobility performance: sighted guide versus nonsighted guide techniques. Clinical and Experimental Optometry, 83(5):279–282, 2000. ISSN 1444-0938.

Sun, Jian. Pulse-width modulation, pages 25–61. Springer, 2012. ISBN 1447128842.

Sweller, John. Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4):295–312, 1994. ISSN 0959-4752.

Thakoor, Kaveri, Marat, Sophie, Nasiatka, Patrick J, McIntosh, Ben P, Sahin, Furkan E, Tanguay, Armand R, Weiland, James D, and Itti, Laurent. Attention Biased Speeded Up Robust FeatureS (AB-SURF): A neurally-inspired object recognition algorithm for a wearable aid for the visually-impaired. In Multimedia and Expo Workshops (ICMEW), 2013 IEEE International Conference on, pages 1–6. IEEE.

Turano, Kathleen A, Geruschat, Duane R, and Stahl, Julie W. Mental effort required for walking: effects of retinitis pigmentosa. Optometry & Vision Science, 75(12):879–886, 1998. ISSN 1040-5488.

Turano, Kathleen A, Geruschat, Duane R, Baker, Frank H, Stahl, Julie W, and Shapiro, Marc D. Direction of gaze while walking a simple route: persons with normal vision and persons with retinitis pigmentosa. Optometry & Vision Science, 78(9):667–675, 2001. ISSN 1040-5488.

Van Merriënboer, JJG, Schuurman, JG, De Croock, MBM, and Paas, FGWC. Redirecting learners' attention during training: Effects on cognitive load, transfer test performance and training efficiency. Learning and Instruction, 12(1):11–37, 2002. ISSN 0959-4752.
Vitense, Holly S, Jacko, Julie A, and Emery, V Kathlene. Foundation for improved interaction by individuals with visual impairments through multimodal feedback. Universal Access in the Information Society, 2(1):76–87, 2002. ISSN 1615-5289.

Wiener, William R, Welsh, Richard L, and Blasch, Bruce B. Foundations of Orientation and Mobility, volume 1. American Foundation for the Blind, 2010. ISBN 0891284486.

World Health Organization. Visual Impairment and Blindness. 2014. URL http://www.who.int/mediacentre/factsheets/fs282/en.

Xiao, Benfang, Lunsford, Rebecca, Coulston, Rachel, Wesson, Matt, and Oviatt, Sharon. Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences. In Proceedings of the 5th International Conference on Multimodal Interfaces, pages 265–272. ACM, 2003. ISBN 1581136218.

Zadeh, Lotfi A. Fuzzy sets. Information and Control, 8(3):338–353, 1965. ISSN 0019-9958.
Abstract
Mobility-related deficits triggered by vision loss make it difficult for the blind to accomplish everyday tasks. This in turn affects quality of life, which can be alleviated by mobility aids. Enter the Wearable Visual Aid, a device our group is developing to improve blind mobility. Currently, the design of mobility aids leads to device abandonment due to a lack of user-experience research. Because the WVA is a safety-critical, closed-loop navigation system, it is also important that subjects understand and execute output cues effectively at the time they are given.