Speech recognition technology company Voiceitt announced a partnership with video conferencing platform Webex by Cisco to make virtual meetings more accessible for individuals with speech impairments.
Voiceitt offers an AI-based speech recognition app that helps individuals with non-standard speech communicate by translating atypical speech in real time.
The partnership will provide real-time, AI-enabled captioning and transcription so that people with speech impairments can be understood during Webex virtual meetings.
Voiceitt’s API is available for download via Webex’s App Hub, and the technology will be fully embedded in Webex’s platform later this year.
“Our partnership with Cisco facilitates a significant application of Voiceitt’s core tech, enabling people with speech disabilities to express themselves and be understood by voice in both professional and social contexts. We are grateful for Cisco’s commitment through this integration with Webex Meetings to facilitate more inclusive work environments, leveraging cutting edge voice AI to create opportunities and empower individuals with disabilities in a diverse and inclusive world,” Sara Smolley, Voiceitt cofounder and vice president of strategic partnerships, told MobiHealthNews in an email.
THE LARGER TREND
Voiceitt is an Amazon Alexa Fund portfolio company that participated in the Alexa Accelerator, powered by Techstars in Seattle in 2018.
In December, Voiceitt garnered $4.7 million in funding, with Cisco Investments participating in the round. At the time, the Israeli company said it had raised $20 million since its formation in 2012, including $5 million in non-dilutive funding from grants and competitions. In 2020, Voiceitt received $10 million in Series A funding.
Other organizations working in the speech recognition space include tech giants Amazon, Microsoft, Meta, Apple, and Google. In October, the five companies, alongside nonprofit partners, announced a collaboration with the University of Illinois Urbana-Champaign to expand speech capabilities for people with disabilities via the Speech Accessibility Project.
The project will collect speech samples from individuals with diverse speech patterns; the university will recruit paid volunteers to contribute the recordings. The samples will help train machine learning models to recognize varied speech patterns, focusing first on American English.
This story originally appeared on MobiHealthNews