
VISION

 

From the outside, it is a normal knit hat, pretty uninteresting aside from the object on its front. That object is an ultrasonic sensor, which sends out sound waves and measures the time it takes for the echoes to return. Those intervals serve as variables regulating the pitch of a tone played through an earbud inside the hat, allowing the wearer to sense the proximity of objects through audio signals. It's like echolocation.
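The arithmetic behind that pitch mapping is simple. Here is a minimal sketch of it in plain C++ (the actual sketch is not shown on this page, so the specific ranges and frequencies below are illustrative assumptions, not the device's real values):

```cpp
// Sound travels roughly 343 m/s, i.e. about 0.0343 cm per microsecond,
// at room temperature. The echo covers the round trip, so halve the result.
long microsecondsToCentimeters(long microseconds) {
    return microseconds * 343 / 10000 / 2;
}

// Linear interpolation, in the style of Arduino's map() function.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Closer objects produce a higher-pitched tone. The 2 cm - 3 m span matches
// the rangefinder's claimed range; the frequency band is an assumption.
long distanceToPitch(long cm) {
    if (cm < 2)   cm = 2;
    if (cm > 300) cm = 300;
    return mapRange(cm, 2, 300, 2000, 200); // 2 cm -> 2000 Hz, 3 m -> 200 Hz
}
```

On an actual Arduino, the echo time would come from pulseIn() on the sensor's echo pin, and the resulting frequency would be handed to tone() to drive the earbud.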

 

This is a proof-of-concept project: sonic waves can be used to "map" one's environment. Theoretically, this principle could be refined to help a recently blinded individual adjust to life without sight. I would not suggest that anyone depend on this prototype, but it shows the concept is viable.

Initial Proposal

I am currently interested in developing an Arduino device that could provide spatial and locational assistance to the blind. In particular, I am looking at ways of helping those who are in the process of going blind: people who are struggling to adapt to losing their most vital sense and who have not yet learned to rely on their other senses to fill in for vision. The concepts I am exploring are probably best suited to elderly individuals who are losing their sight. Although a child going blind could benefit as much if not more from what I will describe, it may be better for that child to hone his or her other senses, and an assisting device would probably hinder that process. On the other hand, the technology of ultrasonic sensing and imaging offers potential benefits that I believe have only been sampled, and using completely natural phenomena to supplement our human powers is an organic and sustainable way of augmenting human capability.

Pre-set Household Guidance System

Even in their own homes, individuals who are losing their sight can have difficulty navigating environments that are very familiar to them. I have witnessed this firsthand. To relearn locational recognition in any space, the ultrasonic rangefinder could be used in a method that reverses the one described above. A number of sensors would be placed at problematic spots (kitchen counter corners, for example), and when a body comes within a certain distance (defined in the code), a signal would be sent to a servo motor, which would turn and strike a metal object (like a coffee can). This warning sound would be easily audible and recognizable but not overly abrasive; the tapping noise would probably be reminiscent of the ticking of a grandfather clock. To tweak the design a bit and mimic the constant, calming cadence of a clock, the servo motors could also be set to run continuously in response to a potentiometer, providing a continuous “sound map” of where objects are situated. Again, a user would adopt this system while endeavoring to relearn the layout of things in terms of sound rather than sight, and I would argue that continuous immersion in an environment outfitted with these spatial beacons would eventually give rise to a sort of mental model of the environment (especially for someone who previously could see and conceptualizes things visually). Potentially, the user’s new mental model could eliminate further need for the system after a time.
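The trigger logic for one of those beacons is essentially a distance threshold. A small sketch of the decision, in plain C++ (the hysteresis band is my own addition, included so a person hovering right at the threshold would not make the servo chatter):

```cpp
// Decide whether the servo should be striking the can, given the measured
// distance and the trigger threshold (both in cm). Once striking, keep
// striking until the person is clearly back out of range.
bool shouldStrike(long distanceCm, long thresholdCm, bool currentlyStriking) {
    const long hysteresisCm = 5; // assumed margin, tunable per location
    if (currentlyStriking)
        return distanceCm < thresholdCm + hysteresisCm;
    return distanceCm < thresholdCm;
}
```

Each loop iteration would read the rangefinder, call this function, and sweep the servo against the can whenever it returns true.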

Wearable Echo-Locator

The user wears an ultrasonic rangefinder on the front of his or her body. When a return signal indicates that an object or person is in close proximity (the rangefinders I have found claim a range of 2 cm to 3 m), a soft tone (sine wave) is played through an inconspicuous earbud, which could communicate with the console through Bluetooth. Because the user is carrying the console, wired earphones would also work. The cautionary tones should be soft and pleasing to the ear (again, sine wave), and different tones could indicate different proximities as well as details about the size, shape, and speed (if moving) of an encountered object or person. The idea is that someone losing their sight would train with this device, and the quality and nature of the tones could actually be customizable. Music and sound are closely tied to memory, and I believe that for many people, “tone-based learning” would be extremely effective. It is also conceivable that someone with a high level of natural musical ability could really begin to “see” again through this device, even if completely untrained in music theory and practice. The code required for a device like this would not be terribly complicated; it would mostly involve defining ranges within the ultrasonic response signals the device relies on.
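Defining those ranges amounts to bucketing the measured distance into a few bands and assigning each band a tone. A minimal C++ sketch, assuming three bands within the sensor's 2 cm - 3 m range (the cutoffs and frequencies are placeholders; the real device would let users customize them):

```cpp
// Map a measured distance (cm) to a caution-tone frequency (Hz).
// Returns 0 when nothing is detected within the sensor's 3 m range,
// which the caller would treat as silence.
int cautionTone(long cm) {
    if (cm < 50)  return 880; // very close: urgent, higher tone
    if (cm < 150) return 440; // mid range
    if (cm < 300) return 220; // far edge of the sensor's range
    return 0;                 // out of range: no tone
}
```

More bands, or a continuous mapping like the one on the hat, would be a matter of taste during training.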

 

A device like this could also be useful as a supplementary safety device for individuals who have been blind for some time and depend on the standard "white cane" for navigation. 

Long-term socio-technical consequences

Using ultrasonic technology in this way would ultimately serve the purpose of relearning spatial orientation in terms of sound rather than vision. These devices would really function as training tools for individuals adjusting to life without sight (or with steadily decreasing vision). There are obvious issues to work through, chief among them that someone who cannot see must rely heavily on their other senses, especially hearing, and an earbud or headphones would partially occupy that very sense. Conceptually, however, ultrasonic technology has already allowed us to build complete images from sound, and the only connection still missing (as it is from almost all HCI) is between the digitized information and the neurons of the human brain.

 

© 2015 by Sam Redd.
