How Israelis Are Going to Help Us Hear Better
Technology and academic research come together in Israel to find solutions for people with hearing impairment.
By Abigail Klein Leichman DECEMBER 2, 2019
An air-raid siren awakened Erez Lugashi one night in 2014. As he ran for shelter, he wondered: What would a deaf person do in this situation?
That question led the experienced Tel Aviv entrepreneur to start Abilisense. The idea was to develop software for IoT devices, such as smart watches, that sends vibrating or visual alerts to hearing-impaired individuals about anything from an air-raid siren to a crying baby.
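The core idea can be pictured as a small piece of event-routing logic: a classified sound event is turned into a vibration or visual alert on a paired device. The event names, priority levels and payload fields below are purely illustrative assumptions, not Abilisense's actual software.

```python
# Hypothetical sketch: map a classified sound event to an alert payload
# for a wearable. Priority determines how insistent the alert is.
EVENT_PRIORITY = {"air_raid_siren": 3, "smoke_alarm": 3,
                  "crying_baby": 2, "doorbell": 1}

def build_alert(event, confidence, min_confidence=0.8):
    """Turn a classified sound event into an alert payload, or None."""
    if event not in EVENT_PRIORITY or confidence < min_confidence:
        return None  # unknown or low-confidence events are not forwarded
    priority = EVENT_PRIORITY[event]
    return {"event": event,
            "priority": priority,
            "vibrate": True,
            "flash": priority >= 3}  # visual flash only for emergencies
```

A doorbell at 90% confidence would vibrate the watch without flashing it, while an air-raid siren would trigger both.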
Abilisense was incubated at the A3i Israeli accelerator for assistive technologies. The startup won Israeli government and Microsoft grants for developing products for deaf people.
And while Herzliya-based Abilisense is expanding into general security and safety applications as it gets closer to commercialization, the original goal of helping the deaf remains important to Lugashi.
Hearing loss is estimated to affect more than 400 million people around the world.
“Accessibility has extended beyond the physical to how to give services to persons with disabilities. For someone hearing impaired, a wider doorway that fits a wheelchair is irrelevant. What’s important is making communication accessible,” says Yuval Wagner, founder and president of the nonprofit organization Access Israel.
“The future is all about making sure that all technology is fully accessible so that people with vision or hearing disabilities can accomplish daily tasks independently,” he says.
Wagner also notes that technologies originally developed for people with disabilities often find their way into the general market – as happened with Abilisense.
Let’s look at several other Israeli solutions for people with hearing impairment.
Tunefork
Tomer Shor and Yoav Blau, veterans of 8200, the famed IDF signal-intelligence and code-decryption unit, founded Tunefork two years ago. There's a personal connection: Shor's father and Blau's wife both have severe hearing loss.
Tunefork’s personalized audio profiles can be integrated into smart devices to improve each user’s digital audio experience – phone calls, music, movies, audio books, GPS directions and more. The technology can be used with or without hearing aids.
“Each of us has a unique ‘earprint,’ like a fingerprint,” explains Shor, “so assistive technology needs to be personalized.”
Tunefork users create their audio profile via a quick smartphone-based hearing test. The profile can then be matched precisely to technical data held on any registered sound equipment, headphones, earbuds and mobile devices to best compensate for the user’s hearing loss.
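One common way such compensation works, sketched below under stated assumptions, is to turn per-frequency hearing thresholds from the test into equalizer gains. The band names, the 20 dB normal-hearing cutoff and the half-gain rule (a standard audiology heuristic) are illustrative; this is not Tunefork's proprietary algorithm.

```python
# Hearing thresholds (dB HL) per frequency band, as a smartphone hearing
# test might measure them. Higher numbers mean worse hearing in that band.
hearing_thresholds = {"250Hz": 10, "1kHz": 25, "4kHz": 55, "8kHz": 70}

NORMAL_HEARING_DB = 20  # thresholds at or below this need no compensation

def build_profile(thresholds, max_gain_db=30):
    """Map each band's hearing loss to an equalizer gain (half-gain rule)."""
    profile = {}
    for band, threshold in thresholds.items():
        loss = max(0, threshold - NORMAL_HEARING_DB)
        profile[band] = min(loss / 2, max_gain_db)  # half the loss, capped
    return profile

def apply_profile(band_levels_db, profile):
    """Boost each band of an audio frame according to the user's profile."""
    return {band: level + profile.get(band, 0.0)
            for band, level in band_levels_db.items()}

profile = build_profile(hearing_thresholds)
frame = {"250Hz": -20.0, "1kHz": -20.0, "4kHz": -20.0, "8kHz": -20.0}
boosted = apply_profile(frame, profile)
```

With these numbers, the 250 Hz band (normal hearing) passes through unchanged, while the 8 kHz band (severe loss) gets the largest boost.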
“Our demo application has 10,000 users so far in Israel and the US, mostly music apps,” says Shor. “We’re starting proofs of concept with big manufacturers all over the world.”
Tunefork has won dozens of prizes in international and local startup competitions and attracted investors in Israel, Europe and the United States. The startup has seven employees at offices in Tel Aviv and Jerusalem.
GalaPro
The GalaPro app for iOS and Android makes live entertainment accessible and inclusive by delivering automated multilingual subtitles, closed captioning, dubbing, amplification and audio description (for people with visual disabilities) to the user’s own mobile device.
Founded in 2015 with offices in Tel Aviv and New York, GalaPro has partnered with Broadway theaters, concert halls, opera houses, film festivals, exhibitions, museums and more. The app also provides content on demand.
The app works in real time for every performance at every partner venue, in any seat, and doesn’t disturb surrounding audience members.
Many people without hearing or vision disabilities use GalaPro as a simultaneous translation solution (think Kabuki theater in Japan or opera in Italy) or to better follow dialogue via closed captions.
Hearing at your fingertips
Sensory substitution devices (SSDs) are the specialty of Hebrew University medical neurobiologist Amir Amedi.
His world-renowned Lab for Brain and Multisensory Research mainly focuses on enabling people with vision impairment to “see” their environment through sound and touch.
Recently, Amedi’s lab collaborated with the World Hearing Center in Warsaw on a novel inexpensive and noninvasive speech-to-touch SSD that could improve hearing comprehension for people with cochlear implants.
Their proof-of-concept study, published in Restorative Neurology and Neuroscience, explains that people with cochlear implants “still encounter significant practical and social challenges,” especially understanding speech in noisy environments.
Amedi and colleagues designed a minimalistic SSD that transforms low-frequency speech signals into tactile vibrations delivered on two fingertips. The vibration conveys a set of “fundamental frequencies” that characterize speech signals.
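A toy version of this signal path: estimate the fundamental frequency (F0) of a voiced speech frame and map it into a vibration frequency an actuator can comfortably deliver. The autocorrelation method, frame size, sample rate and mapping range below are illustrative assumptions, not the study's actual design.

```python
import math

SAMPLE_RATE = 8000  # Hz; enough for the low-frequency speech band

def estimate_f0(frame, fmin=60, fmax=400):
    """Autocorrelation-based pitch estimate for a voiced frame."""
    n = len(frame)
    best_lag, best_corr = 0, 0.0
    for lag in range(SAMPLE_RATE // fmax, SAMPLE_RATE // fmin + 1):
        corr = sum(frame[i] * frame[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return SAMPLE_RATE / best_lag if best_lag else 0.0

def f0_to_vibration(f0):
    """Clamp F0 into a fingertip actuator's comfortable range (toy rule)."""
    return max(30.0, min(f0, 300.0))

# 100 ms synthetic "voiced" frame at 120 Hz, standing in for real speech.
frame = [math.sin(2 * math.pi * 120 * t / SAMPLE_RATE) for t in range(800)]
f0 = estimate_f0(frame)
vibration_hz = f0_to_vibration(f0)
```

A real device would run this continuously on short frames and drive one vibrator per fingertip; the point is only that a speech signal's fundamental frequency is cheap to extract and simple to render as touch.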
In the study, participants as a group demonstrated a significant 6-decibel improvement in understanding speech in noise, and did not need special training to use the SSD.
The ability to “hear” through one’s fingers has “important implications for further research, as well as possible clinical and practical solutions,” said co-author Tomasz Wolak, head of the Bioimaging Research Center at the World Hearing Center.
The team aims to further improve the device to reach the goal of 10-decibel enhancement. They also plan to study human brain mechanisms using an MRI-compatible version of the device in both hearing and hearing-impaired subjects.
Map of the inner ear
A recently published paper from the lab of Prof. Karen Avraham at Tel Aviv University’s medical school says that more than 100 genes have been found to be linked to genetic deafness.
This new understanding, based on the Human Genome Project, could help scientists find biological treatments for genetic hearing loss.
“Current treatments rely on amplification or prosthetics,” according to Avraham. “Gene therapy would intuitively be ideal for these conditions since it is directed at the very source of the problem.”
Last year, Avraham led an Israeli, American and Italian study that mapped, for the first time, genetic signals in the mammalian inner ear (cochlea).
Inside the inner ear, tiny hair cells turn sound waves into electrical pulses that are transmitted via the auditory nerve to the brain. Nonworking hair cells can’t be “turned on.”
However, other cells in the inner ear perhaps could be coaxed into becoming functional hair cells.
The map of genetic signals in the inner ear is essential to such an approach.
One of these signals is methylation, a chemical process that gives genes “orders” for differentiating cell types. Discovered by Hebrew University researchers Howard (Chaim) Cedar and Aharon Razin, methylation explains why, for example, one cell turns into a nerve cell and another grows into a skin cell.
Manipulating methylation and other signals “would allow us to transform cells in the inner ear to become or create new ones to allow for proper hearing,” said Avraham.
“Our analysis of the DNA methylation dynamics revealed a large number of new genes that are critical for the development of the inner ear and the onset of hearing itself,” she said. “We hope that our epigenetic maps of the inner ear will provide entry points into the development of therapeutics for hearing loss.”
Lipifai
Speech-to-text technology is not a perfect solution, especially when there’s ambient noise.
Julie Dai of Haifa and Waseem Ghrayeb of Nazareth designed their artificially intelligent online lip-reading technology, Lipifai, to overcome that problem.
Not yet commercialized, Lipifai not only “listens” to the speaker via the phone’s microphone. It also “watches” the speaker’s lips via the phone’s camera.
In low-noise environments, both inputs are combined to produce the text displayed on the screen.
If there’s a lot of noise – in a restaurant, for example – the app switches to the lip-reading component alone. And whereas human lip-readers average up to 40% accuracy, Lipifai boosts accuracy to more than 85%, Ghrayeb tells ISRAEL21c.
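The switching behavior described above can be sketched as a simple rule over the microphone's estimated signal-to-noise ratio: below a threshold, trust the visual (lip-reading) model alone; otherwise fuse both modalities. The threshold, weighting scheme and confidence numbers are illustrative guesses, not Lipifai's actual design.

```python
SNR_THRESHOLD_DB = 5.0  # below this, the audio is considered too noisy

def fuse_transcripts(audio_conf, visual_conf, snr_db,
                     threshold=SNR_THRESHOLD_DB):
    """Decide which modality drives the transcript and score the result."""
    if snr_db < threshold:
        return "visual_only", visual_conf
    # Confidence-weighted fusion: trust the audio more as SNR rises.
    weight = min(snr_db / 30.0, 1.0)
    combined = weight * audio_conf + (1 - weight) * visual_conf
    return "audio+visual", combined

# Quiet office vs. noisy restaurant, same per-model confidences.
office = fuse_transcripts(audio_conf=0.9, visual_conf=0.85, snr_db=15.0)
restaurant = fuse_transcripts(audio_conf=0.9, visual_conf=0.85, snr_db=2.0)
```

In the quiet office both streams contribute; in the restaurant the audio is dropped entirely and only the lip-reading confidence remains.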
Julie Dai and Waseem Ghrayeb, developers of online lip-reading technology Lipifai. Photo: courtesy
Dai and Ghrayeb both worked for nine years in the high-tech industry; Dai has a master’s degree in computer science and Ghrayeb has a master’s in artificial intelligence.
They began developing their solution as fellows in the 2019 cohort of Our Generation Speaks, a program at Brandeis University in Massachusetts that pairs budding entrepreneurs from Israel’s Jewish and Arab sectors. OGS also invested in Lipifai.
Last summer, Dai and Ghrayeb took Lipifai to the Massachusetts Institute of Technology for further development at entrepreneurship accelerator MIT:designX and MISTI, MIT’s international education program. Several MIT interns will come to Haifa in January to continue working with them on development.