
    From translating hearing aids to sign-language gloves, amazing new assistive technology is on the rise!

    10 Jul 2018


    Think that assistive technology for the deaf and hard of hearing community is all about your run-of-the-mill hearing aids here in 2018? Think again!

    From signing robot arms to mind-reading hearing aids, the next few years are going to be pretty darn amazing for accessibility technology if this list is anything to go by. Here are some of the most impressive tech projects we’ve come across in this area.

    PROJECT ASLAN

    Signing is all well and good, but like any language it’s not much use if one side of the conversation doesn’t speak it. That’s where a multi-year robotics project from researchers at Belgium’s University of Antwerp comes into play.

    They have developed a 3D-printed robotic hand capable of translating spoken and written words into sign language gestures. The device recognizes these words using a webcam, and then communicates them to the user through “fingerspelling,” a mode of sign language which spells out words letter by letter, with one hand gesture per letter.
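
    For a sense of how simple the fingerspelling step can be in software, here’s a minimal, hypothetical Python sketch that turns a recognized word into a sequence of per-letter hand poses. The pose table and the send_pose callback are illustrative stand-ins, not anything from Project Aslan’s actual code.

    # Hypothetical sketch (not Project Aslan's software): turn a recognized word
    # into a sequence of fingerspelling poses for a robotic hand.

    # A "pose" here is an illustrative tuple of joint angles, one per finger.
    FINGERSPELLING_POSES = {
        "a": (0, 90, 90, 90, 90),   # thumb extended, other fingers curled
        "b": (90, 0, 0, 0, 0),      # fingers extended, thumb folded across the palm
        # ... remaining letters of the manual alphabet would be defined here
    }

    def spell_word(word, send_pose):
        """Send one hand pose per letter, skipping characters the hand can't sign."""
        for letter in word.lower():
            pose = FINGERSPELLING_POSES.get(letter)
            if pose is None:
                continue  # ignore punctuation and unsupported characters
            send_pose(pose)  # e.g. write servo angles to the 3D-printed hand

    # Example: print the poses instead of driving real servos.
    spell_word("ab", lambda pose: print(pose))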

    Though not currently available, Project Aslan’s robot hand can be 3D printed for just a few hundred bucks, and promises to be small enough to fit in a rucksack.

    CAMERAS THAT CAN TRANSLATE SIGNING

    A robot which can turn speech into sign language is good, but for a full conversation to take place you need technology that’s able to turn sign language into voice or text, and voice into text or sign language — and to do all of this in real-time.

    That’s what a Dallas-based startup called KinTrans has developed, with a 3D camera and accompanying microphone that’s able to determine what is being said or signed, and then translate it quickly and accurately for the other party.

    According to its creators, the system is already able to recognize thousands of signed words with an accuracy of around 98 percent. It can handle multiple languages, too.

    SMART GLOVES

    Don’t like the idea of being watched by a camera? No problem. At the University of California San Diego, researchers have developed low-cost smart gloves able to automatically translate American Sign Language (ASL) into digital text which appears on a corresponding computer or smartphone. No camera required.

    To use the glove, the wearer simply signs out letters of the ASL alphabet, which are recognized from the variances in electrical resistance produced as the fingers move. Like Project Aslan, one of the best features of this approach is its low price point, which makes it a potentially affordable answer to a challenging problem: the components in the smart glove add up to less than $100.
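
    As a rough illustration of the recognition step, the hypothetical Python sketch below matches a set of per-finger resistance readings against stored reference patterns using simple nearest-neighbor matching. The sensor values and letter templates are made up, and the real glove’s recognition method may well differ.

    # Hypothetical sketch (not the UC San Diego team's code): classify a
    # fingerspelled letter from per-finger resistance readings.
    import math

    # Made-up reference readings (ohms) for a few letters; one value per finger.
    TEMPLATES = {
        "a": [120, 480, 470, 465, 455],
        "b": [430, 130, 125, 120, 118],
        "c": [300, 310, 305, 300, 295],
    }

    def classify(reading):
        """Return the letter whose stored template is closest to the reading."""
        return min(TEMPLATES, key=lambda letter: math.dist(reading, TEMPLATES[letter]))

    print(classify([125, 475, 468, 460, 450]))  # -> "a"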

    While it’s still a research project for now, we can imagine that price coming down further with mass production.

    NEXT-GEN TRANSCRIPTION TECHNOLOGY

    Thanks to advances in speech recognition, speech-to-text technology has been getting better for years. But it’s not always useful for situations in which there are multiple people speaking, such as in a group conversation setting.

    A new Indiegogo campaign called SpeakSee, created by Netherlands-based entrepreneurs, uses individual clip-on microphones and beamforming technology to isolate specific people’s speech and filter out any background noise.

    As a result, conversations are transformed into script-like transcripts, in which different speakers are highlighted in unique colors. These can then be read on an accompanying tablet or smartphone.
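
    Here’s a small, hypothetical Python sketch of what such a speaker-colored transcript could look like as data; the speaker names, colors, and output format are illustrative and aren’t taken from SpeakSee itself.

    # Hypothetical sketch: assign each speaker a stable color and print a
    # script-like transcript. Not SpeakSee's actual data format.
    SPEAKER_COLORS = {}
    PALETTE = ["blue", "red", "green", "purple"]

    def color_for(speaker):
        """Give each speaker a color the first time they appear, then reuse it."""
        if speaker not in SPEAKER_COLORS:
            SPEAKER_COLORS[speaker] = PALETTE[len(SPEAKER_COLORS) % len(PALETTE)]
        return SPEAKER_COLORS[speaker]

    segments = [
        {"speaker": "Anna", "text": "Shall we start the meeting?"},
        {"speaker": "Ben", "text": "Yes, I'm ready."},
        {"speaker": "Anna", "text": "Great, first item."},
    ]

    for seg in segments:
        print(f"[{color_for(seg['speaker'])}] {seg['speaker']}: {seg['text']}")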

    BRAINWAVE-READING HEARING AIDS

    Imagine if a hearing aid were able to work out which sound you’re trying to focus on and magnify just that audio, while quieting down everything else.

    That’s what researchers from Columbia University School of Engineering and Applied Science have been working on with a new “cognitive hearing aid,” designed to help wearers in situations like crowded rooms.

    The device works by monitoring the brain activity of users, and then using a deep neural network to figure out which speaker’s voice the listener is concentrating on. Right now, it’s still a research project, but this could be transformative if and when it eventually arrives on the market.
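
    To make the general idea more concrete, here’s a hypothetical Python sketch that boosts whichever speaker’s speech envelope best matches an envelope decoded from the listener’s brain activity. It swaps the researchers’ deep neural network for a simple correlation, purely for illustration.

    # Hypothetical sketch of attention-based mixing (not Columbia's system):
    # boost the speaker whose envelope best matches the brain-decoded envelope.
    import numpy as np

    def correlation(a, b):
        return float(np.corrcoef(a, b)[0, 1])

    def boost_attended(separated, brain_envelope, boost=4.0):
        """separated: dict mapping speaker name -> speech-envelope array."""
        attended = max(separated,
                       key=lambda name: correlation(separated[name], brain_envelope))
        remixed = {name: env * (boost if name == attended else 0.25)
                   for name, env in separated.items()}
        return remixed, attended

    # Toy example: pretend the decoded envelope resembles speaker A's speech.
    rng = np.random.default_rng(0)
    speakers = {"A": rng.random(200), "B": rng.random(200)}
    _, attended = boost_attended(speakers, speakers["A"] + 0.1 * rng.random(200))
    print(attended)  # -> "A"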

    SOLAR EARS

    There are close to 400 million people worldwide with a hearing loss-related disability. Of these, more than half live in countries with lower income levels than places like the United States. That’s a problem when it comes to hearing aids, since the cost of the devices and their batteries can put them out of reach for many of the people they would otherwise be able to help.

    As its name implies, Solar Ear is a solar-powered hearing aid, whose batteries are designed to last 2-3 years, compared with the 7-10 days of a common battery. It’s considerably cheaper than regular hearing aids, too.

    TRANSLATING HEARING AIDS

    Hearing aids allow users who are deaf or hard of hearing to have other people’s voices piped directly into their ear. But what if they could perform a bit of computational magic along the way, and use machine translation to, say, transform a native German speaker’s words into English? That’s technology that will certainly find its way into high-end hearing aids over the next several years.

    Already there have been some pretty impressive demos in the form of Waverly Labs’ Pilot Translating Earpiece and the IBM Watson-powered TranslateOne2One. Suddenly, hearing aids are about to get a whole lot more useful.
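
    For illustration only, here’s a hypothetical Python sketch of the speech-to-speech translation pipeline such a hearing aid would need; all three stages are stand-in stubs rather than real hearing-aid or translation APIs.

    # Hypothetical pipeline sketch: recognize speech, translate it, re-synthesize it.
    def speech_to_text(audio, language):
        # Stand-in: a real device would run speech recognition here.
        return "Guten Morgen"

    def translate(text, source, target):
        # Stand-in: a real device would call a machine-translation model here.
        return {"Guten Morgen": "Good morning"}.get(text, text)

    def text_to_speech(text):
        # Stand-in: a real device would synthesize audio; we just return the text.
        return text

    def translating_hearing_aid(audio_frame):
        german = speech_to_text(audio_frame, language="de")
        english = translate(german, source="de", target="en")
        return text_to_speech(english)

    print(translating_hearing_aid(audio_frame=None))  # -> "Good morning"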

    HEARING THROUGH YOUR… SKIN?

    Most hearing aids are, in our experience, overwhelmingly ear-based. A research project from neuroscientists at Baylor College of Medicine in Houston, Texas, wants to change that with the help of a vest which allows deaf people to “hear” through their skin.

    The device collects incoming audio via the user’s smartphone and then translates it into specific vibration patterns the wearer can feel on their skin. The hope is that, over time, users will learn to process these vibrations as information in much the same way that a blind person can learn to read via the tactile bumps of braille.
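
    As a rough sketch of the general approach, the hypothetical Python snippet below splits a frame of audio into frequency bands and maps each band’s energy to the intensity of one vibration motor. The motor count and the mapping are assumptions for illustration, not details from the Baylor project.

    # Hypothetical sketch: map a frame of audio to per-motor vibration intensities.
    import numpy as np

    N_MOTORS = 16  # assumed number of vibration motors on the vest

    def frame_to_vibrations(frame):
        """Return one intensity (0..1) per motor for a frame of audio samples."""
        spectrum = np.abs(np.fft.rfft(frame))
        bands = np.array_split(spectrum, N_MOTORS)         # one frequency band per motor
        energy = np.array([band.mean() for band in bands])
        return energy / (energy.max() + 1e-9)              # normalize to 0..1

    # Example with a synthetic 440 Hz tone sampled at 16 kHz.
    t = np.arange(1024) / 16000
    print(frame_to_vibrations(np.sin(2 * np.pi * 440 * t)).round(2))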

    As far as we’re aware, this is an ongoing research project, but a Kickstarter campaign a few years ago suggests that the goal is to develop a commercial product at the end of it.

    Source: Digital Trends, Luke Dormehl 
    Image credit: University of California San Diego