ASSESSMENT OF AUDIO INTERFACES FOR USE IN SMARTPHONE BASED SPATIAL LEARNING SYSTEMS FOR THE BLIND
By Shreyans Jain
Thesis Advisor: Dr. Nicholas A. Giudice
A Lay Abstract of the Thesis Presented
in Partial Fulfillment of the Requirements for the
Degree of Master of Science
(in Spatial Information Science and Engineering)
December, 2012
Keywords: Blind, Accessible, Smartphone, Indoor, Spatial Layouts
Recent advancements in indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Current preliminary implementations of such systems rely solely on visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This highlights the need to develop and evaluate non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior.
This thesis empirically evaluated several approaches for conveying spatial information about the environment through audio. In the first experiment, blindfolded participants standing at an origin point in a lab learned the distance and azimuth of target objects specified by four audio modes. The first three modes were perceptual interfaces that did not require cognitive mediation on the part of the user. The fourth was a non-perceptual mode in which object descriptions were given via spatial language using clockface angles. The results indicate that the hand motion-triggered mode was better than the head motion-triggered mode and comparable to the auditory snapshot mode.
In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. The first mode used head tracking, the second used hand tracking, and the third, which served as a control, allowed participants to learn the targets visually. Spatial updating performance was again compared across these modes, and no significant differences were found.
Finally, a third study evaluated room layout learning by blindfolded participants using an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode on an allocentric pointing judgment task and in overall subjective ratings.
In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
