What if voice assistants like Siri became more human and natural?

When users sit at a computer, they are aware that they are operating a machine. When using a voice assistant like Siri, they talk to it differently than they would to a real person. A user might say “Timer, 15 minutes” or “Call Mom.” Talking to a friend, the same user would instead say “Can you please set a timer for 15 minutes?” or “Can you please call my mom?”

Users also tend to drop polite phrases such as “please” and “thank you” when addressing a voice assistant. By making voice assistants more human, users begin to forget that they are talking to a computer and treat it more like a friend than an object. This behavior engages regions of the brain that are normally associated with feelings such as love and friendship.

Since iOS 11, Siri has sounded noticeably more natural, and Apple put a lot of work into this. More than 20 hours of speech were recorded, and the recordings were split into so-called “half-phones.” There are many ways to say “thank you”: faster, slower, louder, and with different expression. Drawing on these individual recordings, Siri can now, with the support of machine learning, choose the pronunciation that best fits the user's question. When generating the spoken answer, not only the question itself is considered but also context such as the place and time of the request.
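
The TechCrunch article describes this as a unit-selection approach: the synthesizer keeps many recorded variants of each half-phone and picks, slot by slot, the one whose sound best matches what a machine-learning front end predicts, while keeping the seams between neighboring units smooth. Below is a minimal sketch of that general idea, not Apple's actual pipeline; all names, features, cost formulas, and values are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical illustration: each half-phone in the recorded inventory
# carries a few acoustic features (pitch, duration, loudness).
@dataclass
class Unit:
    halfphone: str   # linguistic label, e.g. "th_left"
    pitch: float     # Hz
    duration: float  # seconds
    energy: float    # relative loudness

def target_cost(unit: Unit, want_pitch: float, want_duration: float) -> float:
    """How far a candidate unit is from the prosody the model asked for."""
    return abs(unit.pitch - want_pitch) / 100 + abs(unit.duration - want_duration) * 10

def join_cost(a: Unit, b: Unit) -> float:
    """How audible the seam between two consecutive units would be."""
    return abs(a.pitch - b.pitch) / 100 + abs(a.energy - b.energy)

def select_units(candidates: List[List[Unit]],
                 targets: List[Tuple[float, float]]) -> List[Unit]:
    """Greedy left-to-right selection: for each half-phone slot, pick the
    candidate minimizing target cost plus the join cost to the previously
    chosen unit. (Production systems search over whole paths, e.g. with
    a Viterbi search; greedy keeps the sketch short.)"""
    chosen: List[Unit] = []
    for slot, (want_pitch, want_dur) in zip(candidates, targets):
        best = min(
            slot,
            key=lambda u: target_cost(u, want_pitch, want_dur)
            + (join_cost(chosen[-1], u) if chosen else 0.0),
        )
        chosen.append(best)
    return chosen

# Toy inventory: two recorded variants for each of two half-phone slots.
inventory = [
    [Unit("th_left", 180, 0.06, 0.7), Unit("th_left", 120, 0.09, 0.5)],
    [Unit("th_right", 175, 0.05, 0.7), Unit("th_right", 110, 0.10, 0.4)],
]
# Prosody targets a learned front end might predict for a quick, upbeat reply.
wanted = [(180, 0.06), (178, 0.05)]
print([u.pitch for u in select_units(inventory, wanted)])  # -> [180, 175]
```

The split into target and join costs is what lets context like place and time matter: the front end can predict different prosody targets for the same words, and the selection then finds the recorded half-phones that realize that prosody without audible seams.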

Source: https://techcrunch.com/2017/09/19/does-siri-sound-different-today-heres-why/?ncid=rss