Apple is training Siri to understand stuttering and other atypical voice users

Voice recognition has long been a hurdle for voice assistants. Compared with clearly enunciated standard English, some dialects and atypical speech patterns (such as stuttering) are much harder for speech recognition systems to handle.

A Wall Street Journal report published today notes that some companies are training their voice assistants to better understand users with atypical speech. Among them is Apple, the maker of Siri, which holds a large share of the voice assistant market.

The company is now researching how to automatically detect whether someone speaks with a stutter, and has built a bank of 28,000 audio clips from podcasts featuring stuttering to help do so, according to a research paper by Apple employees, due to be published this week, that was seen by the Wall Street Journal.


“iPad Siri”by plynoi is licensed under CC BY-NC 2.0

According to the report, the roughly 28,000 audio clips Apple has collected come from various podcasts, and the clips featuring speakers who stutter will be used to train Siri. An Apple spokesperson confirmed that the collected data will be used to improve the speech recognition system's handling of atypical speech patterns.

In addition to training Siri to understand users with atypical speech, Apple has also added a feature that lets users control how long Siri listens, preventing Siri from cutting them off before they finish speaking.

Previously, Apple introduced the Type to Siri feature in iOS 11, which lets users communicate with Siri by typing instead of speaking.

Apple plans to publish related papers this week on how it improved Siri's functionality. Another source said that Google and Amazon are also training their respective voice assistants to better recognize the speech of users with speech impairments.