
    Could A Futuristic Vest Give Us a Sixth Sense?
    (For starters, the new technology, appearing on ‘Westworld’ before hitting the market, could help the deaf parse speech and ambient noise.)
    27 Apr 2018

    David Eagleman thinks there should be more to human sensory perception than sight, sound, touch, smell and taste. The Stanford neuroscientist foresees a future in which humans could develop new “senses” for all sorts of information, using wearable technology to feed data to the brain.

    Eagleman has dedicated his career to studying how the brain takes in signals and constructs consciousness. He took a special interest in synesthesia, a neurological condition in which stimulating one of the five senses creates the simultaneous perception of another – such as individuals who can “hear” color. If his study of synesthesia clarified one thing, it was that human sensory perceptions are not an objective reproduction of reality, but instead an inference that the brain draws from the signals it receives.

    “The heart of the challenge is that the brain is locked in silence and darkness inside the skull. All it ever gets are these electrical signals, and it has to put together its version of the world from that,” Eagleman explains. “I got very interested in the idea that perhaps you can feed information into the brain through unusual channels, and it would figure out what to do with it.”

    Seven years ago, this research led Eagleman to conceive his groundbreaking sensory augmentation device, the Versatile Extra-Sensory Transducer, which he spent the next year developing and prototyping in his lab. His patented invention, aptly abbreviated VEST, consists of 32 vibrating motors that users wear around the torso like an ordinary vest. The VEST can take in diverse types of real-time data, from sound waves for the deaf to flight statuses and even stock market trends, and translate them into dynamic patterns of vibration across the motors. Eagleman says that with just a few weeks of training, users can learn to associate the patterns with specific inputs: the sound of a letter, say, or news of a particular stock appreciating.
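
    To make the encoding idea concrete, here is a minimal sketch in Python of how a scalar data stream (a stock price, say) might be resampled into a single 32-motor vibration pattern. NeoSensory's actual signal chain is not public; the function name, value range, and NumPy-based resampling below are all illustrative assumptions.

```python
import numpy as np

NUM_MOTORS = 32  # motor count of the VEST, per the article

def encode_scalar_stream(values, lo, hi):
    """Map a window of scalar readings (e.g., recent stock prices)
    onto per-motor vibration intensities in [0, 1].
    Hypothetical sketch; not NeoSensory's implementation."""
    values = np.asarray(values, dtype=float)
    # Normalize readings into [0, 1] against the expected value range.
    norm = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
    # Resample the window so each motor carries one slice of history.
    xs = np.linspace(0, len(norm) - 1, NUM_MOTORS)
    return np.interp(xs, np.arange(len(norm)), norm)

# Example: a minute of per-second prices becomes one 32-motor pattern.
prices = 100 + np.cumsum(np.random.randn(60))
print(encode_scalar_stream(prices, lo=90, hi=110).round(2))
```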

    Eagleman predicts that over time, perceiving data through the VEST will become second nature. “It’s an unconscious thing, just the same way that you hear,” he says. “We don’t know for sure what it will actually feel like, but what we can say is it’s not an effortful, cognitive translation.”

    The neuroscientist believes that the versatility and plasticity of the brain make it fundamentally receptive to forming new pathways of sensory input. “The brain gets this information from the world, but the brain doesn’t actually have any way of knowing: were these photons, were these sound compression waves, was this pressure?” Eagleman says. As he explains it, the brain simply transforms these diverse stimuli into electrochemical spikes and uses these signals to create a mental representation of the world. The VEST would do this same work for all sorts of data by translating it into interpretable vibrations—giving its wearer a veritable “sixth sense.”

    Eagleman is developing the VEST with an open API, so that others can experiment with the types of data it can convert into vibrations. “We’ve thought of 20 really cool things to feed in, which we’ve been experimenting with, but the community will think of 20,000 streams of data to feed in,” he says.

    If this all sounds a bit like science fiction, well, the writers of the hugely popular sci-fi series “Westworld” agree. The smash hit HBO melodrama about artificial intelligence (AI) brought Eagleman on as its scientific advisor in May 2017, and it seems his technology has had an impact on the show. In fact, a prototype of the VEST is set to appear in episode seven of the long-awaited upcoming season, which premieres this Sunday.

    Though Eagleman could not divulge specific details about the forthcoming episodes, he expressed excitement about the more optimistic view of AI that his technology embodies and brings to the show’s sophomore season.

    “I do not share the kind of fears that people like Stephen Hawking or Elon Musk have about AI taking over and destroying us,” says Eagleman, in a nod to the more macabre, doomsday-style themes present in the first season of “Westworld.” He instead theorizes that the future will hold an “ongoing merger” between humans and the machines we create.

    Thanks in part to his 2015 TED Talk, where he presented his theory of sensory substitution and augmentation, Eagleman’s academic theory and research lab project quickly turned into a venture-backed company called NeoSensory. He says his foray into the Silicon Valley startup economy has been a “steep learning curve,” but the transition, along with input from financiers, helped the team pinpoint a clear starting place for bringing this technology to market: the deaf community.

    If all goes well, NeoSensory has the near-term potential to seriously disrupt the current market for medical devices that assist the deaf. Since the mid-1980s, the cochlear implant has been the main device that the deaf and severely hard of hearing use to connect with the auditory world. However, cochlear implants must be surgically embedded in the ear, a procedure that can cost up to $100,000 and requires a few weeks of recovery time. The VEST offers a nonsurgical alternative to the implants for around $2,000, with what Eagleman predicts will be better results for users, especially those with early-onset deafness (for whom cochlear implants often don’t work well).

    According to the neuroscientist, the VEST can be used to help the deaf parse auditory data, in a sense “transferring the inner ear to the skin.” The inner ear captures sound from the eardrum and splits up this data based on its frequency, passing it via electrical impulse to the brain. The VEST, says Eagleman, would employ the same principle—translating spoken word and ambient noise into specific patterns of vibration in different locations on the torso.
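
    In code, this “inner ear on the skin” idea amounts to splitting incoming audio into frequency bands and driving one motor per band. Below is a minimal Python sketch of that principle, assuming 32 motors and log-spaced bands; the band edges, sample rate, and normalization are illustrative guesses, not the VEST's published signal chain.

```python
import numpy as np

NUM_MOTORS = 32
SAMPLE_RATE = 16_000  # Hz; a common rate for speech audio

def audio_frame_to_motors(frame):
    """Turn one short audio frame into 32 motor drive levels in [0, 1],
    one per frequency band, mimicking the cochlea's split by frequency."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    # Log-spaced band edges from 100 Hz up to Nyquist, echoing the
    # ear's roughly logarithmic frequency resolution.
    edges = np.geomspace(100, SAMPLE_RATE / 2, NUM_MOTORS + 1)
    energies = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    levels = np.log1p(energies)  # compress dynamic range
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# Example: a 25 ms frame of a 440 Hz tone activates low-frequency motors.
t = np.arange(int(0.025 * SAMPLE_RATE)) / SAMPLE_RATE
print(audio_frame_to_motors(np.sin(2 * np.pi * 440 * t)).round(2))
```

    A real device would presumably run something like this on overlapping frames many times per second, so that speech unfolds as a moving pattern across the torso rather than a single static snapshot.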

    Justin Gardner, a neuroscience professor at Stanford who is not involved with the project, lauds the sleek and noninvasive design of the VEST, calling it a “simple, elegant way of helping people out.” But he is hesitant about the device’s potential to truly supplant cochlear implants in efficacy. “Whether you can understand speech with this kind of sensory substitution in a way that would be natural for people is not well proven,” he says. “Can you really make a remapping between very complex speech sounds that people want to do in an everyday environment?”

    The reality of most environments, as Gardner points out, is that we don’t get perfect auditory information—we constantly have to tune out background noise and fill in the gaps when we miss a word. “When you think about these technologies, they may work in principle, in a laboratory or in a very confined space. But can you use that in an actual conversation?” he says. “That makes a big difference in terms of how effective it is going to be for people.”

    Kai Kunze, a professor at Keio University in Tokyo who specializes in wearable technology for sensory augmentation, also has doubts. He believes that vibration patterns alone might not be enough for the deaf to parse the intricacies of speech and sound. “We did a lot of work with vibrotactile [devices], and I feel that it’s just very limited,” he says. He recommends adding other somatosensory feedback to the VEST, such as changes in the device’s temperature and tightness, to accompany the vibrations for added precision. “Then, you could actually encode [the data] in different channels, and it would be easier for your brain to pick up what that signal actually means,” he says.
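
    Kunze’s multi-channel suggestion might look something like the following sketch, in which separate features of a sound are routed to separate somatosensory channels. The channel set, feature choices, and value ranges here are purely hypothetical, offered only to illustrate the idea rather than any published design.

```python
from dataclasses import dataclass

@dataclass
class HapticFrame:
    """One multi-channel haptic frame: each channel carries a
    different aspect of the sound, in the spirit of Kunze's idea."""
    vibration: float    # 0..1, e.g., overall loudness
    temperature: float  # 0..1, e.g., pitch mapped cool-to-warm
    tightness: float    # 0..1, e.g., how speech-like the sound is

def clamp(x):
    return min(max(x, 0.0), 1.0)

def encode_sound(loudness, pitch_hz, speechiness):
    # Routing each feature to its own channel keeps the signals
    # separable, rather than packing everything into vibration alone.
    return HapticFrame(
        vibration=clamp(loudness),
        temperature=clamp((pitch_hz - 80) / (400 - 80)),
        tightness=clamp(speechiness),
    )

print(encode_sound(loudness=0.7, pitch_hz=220, speechiness=0.9))
```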

    To address early concerns, Eagleman’s team is currently in the process of testing VEST prototypes on deaf individuals. Their results, while preliminary, have been heartening: Eagleman reports that his volunteers have been able to learn to interpret audio from the vibrations in just a few weeks.

    Greg Oxley, who has almost complete hearing loss, volunteered to test the device. “It’s actually much easier to understand people now with the VEST. The vibrating is very accurate—more accurate than a hearing aid,” Oxley said in a recent video (https://vimeo.com/232078083). “The tone of the [voices] vary from person to person.”

    Though the VEST won’t be commercially available for at least another year, NeoSensory plans to come out with a miniature version of the technology in eight or nine months. This version, called the Buzz, will contain just eight vibratory motors and can be worn around the user’s wrist. Though the Buzz has a lower resolution than the very precise VEST, NeoSensory believes that it will be a revolutionary product for people with very severe hearing loss. In fact, Eagleman recalls that the first deaf person to try the Buzz, Phillip Smith, was moved to tears when he first put it on.

    “[Smith] could hear things like a door shutting, a dog barking, his wife entering the room,” Eagleman remembers. “He could tell that things were happening that had always been cut off for him.”

    Eagleman is excited about the near-term plans for his technology, but he’s always thinking toward the next steps after that, in terms of creating new senses.

    “There’s really no end to the possibilities on the horizon of human expansion,” Eagleman said in his TED Talk, urging the audience to imagine having the ability to sense their own blood pressure, possess 360-degree vision or see light waves throughout the electromagnetic spectrum. “As we move into the future, we’re going to increasingly be able to choose our own peripheral devices. We no longer have to wait for Mother Nature’s sensory gifts on her timescales, but instead, like any good parent, she’s given us the tools that we need to go out and define our own trajectory.”

     

    Source: Kate Keller, Smithsonian.com

    Image credit: NeoSensory