
Congenitally blind patients reading “through sound” show activation of the visual cortex, including responses to colour.

A research team from the Hebrew University of Jerusalem recently demonstrated that the same part of the visual cortex activated in sighted individuals when reading is also activated in blind patients who use sounds to “read”. The specific area of the brain in question is a patch of left ventral visual cortex located lateral to the mid-portion of the left fusiform gyrus, referred to as the “visual word form area” (VWFA). Substantial prior research has shown the VWFA to be specialized for the visual representation of letters, with a selective preference for letters over other visual stimuli. The Israel-based research team showed that eight subjects, blind from birth, specifically and selectively activated the VWFA while processing letter “soundscapes” generated by a visual-to-auditory sensory substitution device (SSD) (see www.seeingwithsound.com for a description of the device).


Visual-to-auditory sensory substitution devices are designed to convert visual images into auditory “soundscapes” using a pre-determined algorithm, essentially allowing subjects to learn to read with sounds. Images from a remote camera are translated into complex soundscapes that are transmitted to the user via earphones. The software builds each soundscape from a left-to-right scan of the camera image: pitch indicates the elevation of an object in the visual field, and loudness indicates its brightness. Training with the device further embeds the association of scenes with sounds, and multi-language training manuals are now readily available to users on the web.
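To make that mapping concrete, the sketch below converts a grayscale image into a soundscape under the conventions just described: columns are played left to right, row height sets the pitch of a tone, and pixel brightness sets its loudness. The function name, frequency range, and sine-tone synthesis are illustrative assumptions, not the actual vOICe implementation.

```python
import numpy as np

def image_to_soundscape(image, scan_time=1.0, sample_rate=22050,
                        f_min=200.0, f_max=4000.0):
    """Hypothetical vOICe-style mapping (not the real algorithm):
    columns are played left to right over `scan_time` seconds; each
    row is a sine tone whose pitch rises with elevation and whose
    loudness tracks pixel brightness (0-255 grayscale input)."""
    n_rows, n_cols = image.shape
    samples_per_col = int(scan_time * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    # Top rows (small row index) get high frequencies, spaced
    # exponentially across the chosen band.
    freqs = f_max * (f_min / f_max) ** (np.arange(n_rows) / max(n_rows - 1, 1))
    columns = []
    for col in range(n_cols):                        # left-to-right scan
        loudness = image[:, col] / 255.0             # brightness -> volume
        tones = loudness[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        columns.append(tones.sum(axis=0))            # mix the column's tones
    audio = np.concatenate(columns)
    return audio / (np.abs(audio).max() + 1e-9)      # normalize to [-1, 1]
```

Playing the returned array at `sample_rate` yields one sweep per image: a bright shape high in the frame is heard as a loud, high-pitched event whose timing within the sweep encodes its horizontal position.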


While visual-to-auditory sensory substitution devices (SSDs) have been in development for some time, Dr Amir Amedi and his colleagues were interested in learning whether the part of the brain active in reading text could be developed in individuals without prior vision and, if so, how dependent such an ability is on age and training. Testing on eight congenitally blind individuals showed activation of the VWFA after as little as 2 hours of training in reading soundscapes. Both sighted and blind individuals showed activation of the same left ventral occipito-temporal cortex in response to letters. The research suggests that the VWFA can mediate task-specific reading operations independent of the sensory input, i.e., the same region functions whether the letters are “delivered” by vision or sound. The data further suggest that training with SSDs may permit reading by soundscapes in congenitally blind individuals. Critically, and as commented upon by the research group in a 2012 Neuron publication, the study “shows same category selectivity for a specific visual category (letters), as seen in the sighted, in the absence of visual experience”. Such data appear to point clearly to the existence of a “visual” category in the congenitally blind individual, results which may have clinical relevance for the rehabilitation of the visually impaired.


Expanding on this work, Dr. Amedi’s group has developed a further visual-to-auditory SSD, named the “EyeMusic”, which transforms digital images into soundscapes in a manner similar to the vOICe SSD (www.seeingwithsound.com/). The EyeMusic’s soundscapes are composed of musical notes rather than pure tones, with each note corresponding to a pixel in the original image. In a more recent publication (Front. Neurosci., 11 Nov 2014; doi: 10.3389/fnins.2014.00358), Dr. Amedi’s team described how the device functions, explaining that “the image is scanned from left to right and columns of pixels are played out sequentially. The time elapsed since the beginning of the scan indicates the x-axis location of the pixel: pixels that are further on the left are sounded out earlier than pixels which are on the right of the image. The height of the pixel along the y-axis determines the pitch of the musical note representing it: the higher is the pixel, the higher is the pitch of the note representing it. The brightness of the pixel determines the sound volume of the musical note: the brighter the pixel, the higher the volume”. According to the researchers, another novelty of the EyeMusic is the use of a different musical instrument to represent each of five colors (red, green, blue, yellow, and white), while black is represented by silence. The EyeMusic’s algorithm uses a clustering routine to reduce the image to these six colors.

Using the device, the research team tested the “visual” acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, in which participants report the orientation of the letter “E”. No significant differences in performance were found between the blind and sighted groups. However, the research did uncover a significant effect of the added color on “visual” acuity. According to the researchers, “the highest acuity participants reached in the monochromatic test was 20/800, whereas with the added color, acuity doubled to 20/400.” As a consequence, the authors concluded that “color improves “visual” acuity via sound”.
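The device description above pins down the EyeMusic mapping well enough to sketch it in code. In the sketch below, each pixel is quantized to the nearest of six reference colors and emitted as a timed note event; the palette values, instrument names, and nearest-color quantization are simplifying assumptions, not the EyeMusic’s actual timbres or clustering routine.

```python
import numpy as np

# Hypothetical palette and instrument assignment (illustrative only).
PALETTE = np.array([
    (255, 255, 255),   # white
    (255,   0,   0),   # red
    (  0, 255,   0),   # green
    (  0,   0, 255),   # blue
    (255, 255,   0),   # yellow
    (  0,   0,   0),   # black -> silence
], dtype=float)
NAMES = ["white", "red", "green", "blue", "yellow", "black"]
INSTRUMENTS = {"white": "choir", "red": "organ", "green": "reed",
               "blue": "brass", "yellow": "strings"}

def eyemusic_events(image_rgb, scan_time=2.0):
    """Map an H x W x 3 uint8 image to note events of the form
    (onset_seconds, instrument, pitch_index, volume)."""
    h, w, _ = image_rgb.shape
    col_duration = scan_time / w                 # x position -> onset time
    events = []
    for col in range(w):
        for row in range(h):
            px = image_rgb[row, col].astype(float)
            # Stand-in for the clustering step: snap the pixel to the
            # nearest of the six reference colors.
            idx = int(np.argmin(((PALETTE - px) ** 2).sum(axis=1)))
            name = NAMES[idx]
            if name == "black":                  # black is silence
                continue
            pitch = h - 1 - row                  # higher pixel -> higher note
            volume = px.max() / 255.0            # brighter -> louder
            events.append((col * col_duration, INSTRUMENTS[name],
                           pitch, volume))
    return events
```

Each event could then be rendered with a MIDI-style synthesizer, one instrument per color, so that a red shape and a blue shape at the same location are distinguishable by timbre alone.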


Although this work is still in its early stages, the researchers have noted that such SSDs may be used as a complement to visual prostheses. For example, they explain that, before a retinal prosthesis is implanted, the device could be used to “train the visual cortex to “see” again after years or life-long blindness”. In addition, the developers of the colour device state that the devices may “be used post-operatively, to provide an explanatory signal—or a “sensory interpreter”—in parallel to the visual signal arriving from the prosthesis, as early-onset blind individuals may otherwise find it difficult to interpret direct visual signals. It can also add details which are not otherwise provided by the prosthesis.”

