It'll be interesting to see how a human knowing they're talking to a bot changes their behaviour - in the demos people thought they were talking to another person, and were polite and professional. I wonder if, knowing they're talking to a machine, people may change their tone - become more abrupt, speak more slowly, or become aggressive. Maybe it would lead to unconscious (or, more likely, conscious) discrimination against bot calls, similar to stories of people with 'ethnic' accents calling restaurants and being told there are no reservations when in fact there are.
Prior to this warning feature, I wonder what would have happened if during the phone call the hairdresser had asked "are you a real person?". Would the Google assistant reply "Ummm... I'm not real" or would it lie?
From reading the headline, I assumed Google was providing some useful service where Chrome or Google Voice or some other Google medium would warn a hapless human when they ended up in conversation with an AI pretending to be human.<p>But no! Google itself IS said evil AI. But hey, it's ok, don't worry, it will come with a built-in warning!<p>Things like this make me think that big tech has really lost the plot. You'd think in the current climate that Google would be keeping their heads down, staying away from things that are creepy, unsettling and potentially providing evildoers with another way to maliciously influence people.<p>But no... because ads.
The problem is that the technique is known, and it's going to be duplicated. Honourable people will not defy patents or reason to use it maliciously, but dishonourable people will! The community (and Google) needs to develop a better solution to this and to deep fake videos.
Frankly I'd love a bot framework to turn the tables and call into my ISP's IVR to log a complaint.<p>A bot that would do all the waiting, trudge through the options, deal with the transfers, tell them I did the standard debugging steps, and get back to me with the complaint number.<p>That would be just incredible.
Google may be the first to release a system like this, but it won't take long until there are equivalent services, which may not warn that they're AI. How long until those fun calls to automated systems start with a captcha?<p>Sidenote: how long until the Butlerian Jihad?
What difference does it make if a human, dog or robot tries to book a table at a restaurant? As long as it speaks in English it doesn't matter. It's the same outcome.
This tech might work well for handling Emergency Service calls - filtering out inappropriate calls and passing on only genuine emergencies to the human operator for the required service.
And we should rely on Google's pinky swear?
We need authentication for phone calls and a set of laws requiring disclosure when this type of service is used by legal entities. And we need these laws now.