Last month, I contributed an op-ed to MIT Technology Review arguing that artificial intelligence helps disabled people. As I wrote, the lion’s share of the conversation around AI involves handwringing over ethics, safety, and disinformation, while scant attention is paid to the genuine good the technology can do. However important it is to discuss potential issues and ponder governmental regulation, it’s equally important to shine a light on the real ways AI empowers the disability community. The implications aren’t trivial.
For the purposes of this story, Exhibit A is Sign Speak.
On its website, Sign Speak describes itself as a “forward-thinking AI research and product startup with a core goal: to create AI solutions that empowers Deaf and Hard of Hearing individuals to interact freely, with our community autonomy at the core of all of what Sign-Speak does.” The company goes on to say its product was built largely because of “the historical exclusion of Deaf and hard-of-hearing individuals from mainstream tech advancements. Every day without accessible AI tools widens the existing equity gap.” Sign Speak, which has been in operation for the last eight years, says its overarching goal is to “make sign language recognition as widespread as voice recognition,” adding the team is dedicated to “ensuring that our community gains more skills and has increased access to opportunities.” Sign Speak is based in Rochester, New York, and its three-person founding team comprises Yamillet Payano, Nikolas Kelly, and Nicholas Wilkins.
Payano, Sign Speak’s chief executive officer, explained in a recent interview that the company grew out of Rochester’s National Technical Institute for the Deaf. Payano grew up in the Dominican Republic and has a family member who’s Deaf. She said that in her home country, Deaf people don’t have access to support services such as interpreters; although she and her family member had the same opportunity to immigrate to the United States, their respective outcomes were “very different.” Payano left her so-called “fancy job” at Fannie Mae in 2017 to move to Rochester and work on Sign Speak. It was there she met Kelly and Wilkins, and the trio embarked on building the company as it is today.
At a high level, Sign Speak is designed to give the Deaf and hard-of-hearing community access to communication technology that is “functionally equivalent” to what’s available to hearing people. The company currently provides three services: sign language recognition, which converts ASL into text or voice; a sign language avatar, which translates voice into ASL; and captioning, which translates voice communication into text.
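To make those three services concrete, here is a minimal sketch of how such a pipeline might be organized. Everything in it is hypothetical: the type and function names are my own illustration of each service’s inputs and outputs, not Sign Speak’s actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names are illustrative stand-ins
# for the three services described above, not Sign Speak's real API.

@dataclass
class VideoClip:
    """Video of a person signing, or of an AI-generated avatar."""
    frames: list

@dataclass
class AudioClip:
    """Speech audio from a hearing participant."""
    samples: list

def recognize_sign(video: VideoClip) -> str:
    """Sign language recognition: ASL video in, text out (which could
    then be voiced by a text-to-speech engine)."""
    return "<recognized text>"  # placeholder for model inference

def render_avatar(speech_text: str) -> VideoClip:
    """Sign language avatar: spoken or written language in,
    AI-generated ASL video out."""
    return VideoClip(frames=[])  # placeholder for avatar generation

def caption(audio: AudioClip) -> str:
    """Captioning: voice in, text out."""
    return "<caption text>"  # placeholder for speech-to-text
```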
“I see a true value in this technology,” said Kelly, Sign Speak’s chief product officer. “This technology can open doors which were previously locked, where other people could easily go through the doors. But Deaf people, we try to go through those doors [and] try to communicate, and we can’t. We’re met with a door to something. I think this provides opportunities to open the road and open those doors. The application can change lives—not just in the United States, but also globally.”
Kelly told me many technology companies don’t involve Deaf people in their research and development efforts. Sign Speak, he said, is different because the startup engages the Deaf community from all over the world. The Sign Speak team wants the community to try its technology and give feedback, whether positive or negative. Notably, Sign Speak is not merely building technology for Deaf people; it’s building it with them.
A substantial portion of my interviews with the Sign Speak team included demonstrations of the company’s products, most notably the ASL avatar and ASL GPT. The “person” doing the signing looks like a real human but is in actuality AI-generated. Wilkins explained Kelly has worked for years on building an avatar that looks and acts real, adding that many in the Deaf community have specifically asked for realistic depictions. It’s a big deal representationally, with Wilkins telling me the overarching goal is, again, to create tools for the Deaf and hard-of-hearing community that offer functional equivalency. The Sign Speak team is excited by the prospects of this technology, especially generative AI, and is currently investigating ways to incorporate it into hearing-oriented devices such as smart televisions. Likewise, the team wants ASL-based tools that summarize meetings, akin to what something like Otter AI (which I use to transcribe interviews) does today on its website and in its mobile app. As for training, the team told me its models use data supplied by sign language providers, as well as speech-language pathologists.
When asked about feedback on Sign Speak, the team told me it’s been warmly received by the Deaf and hard-of-hearing community. I was sent an Instagram Reel in which a person asks the ASL GPT avatar how worms become butterflies, to which the avatar replies that worms don’t become butterflies; caterpillars do. The interaction is impressive not only in function and accuracy; it’s also a literal illustration of how AI can be made more accessible to disabled people. The team is passionate about the possibilities its technology gives the community at large, the overarching message being that Sign Speak is meant to enable empowerment and independence in a hearing-dominated world.
As to the future, Sign Speak hopes to continue building out its technology and making it even more capable over time. The team wants this technology to be on every Deaf person’s smartphone and other devices. It realizes not everyone will choose Sign Speak, but the salient point is that the tool exists for anyone who chooses to pick it up.