Apple, Microsoft, Google, and Amazon team up with Meta to create Speech Accessibility Project
Companies in the industry, including Apple and Google, have previously invested in research on different speech patterns, tuning their systems for varied accents and ways of speaking so that voice assistants can accurately recognize what a user is asking. However, because each company pursues this research independently, their training models differ, and the recognition experience can vary noticeably across services and devices.
The Speech Accessibility Project, run by these technology companies in partnership with the University of Illinois, aims to let all companies providing speech recognition technology contribute to a shared recognition model. The goal is to enable more people with disabilities and conditions that affect their speech to interact smoothly with a range of devices through speech recognition, closing the experience gap between different services and devices.
Speech recognition has gradually become a common mode of interaction between people and devices, and many people with disabilities and medical conditions rely on it to operate phones and other hardware. As a result, manufacturers are paying increasing attention to its application and development.
The project will collect speech samples from a range of contributors and use them to build training datasets for machine learning models. The initial effort focuses on English, but more languages are expected to be added in the future.