I can't help feeling there's a "walking before you run" thing here. Unsurprisingly, no, you couldn't make something to do this yet, because we haven't yet built anything that passes the Turing test.

But the fact that we can't yet do that doesn't mean bots are going to be useless. That'd be like looking at the first ocean-going freight vessels and saying "well, it can't move 300,000 tonnes of freight, so it's useless", or at the Wright Flyer and saying that because it can't cross the Atlantic, it's a bit rubbish. (I'm aware this is close to straw-manning, but bear with me...)

Sure, early proofs of concept are often both limited in scope and fairly dire. But that doesn't mean there's no potential utility in them and in what they do. I suspect bots are similar: initial, narrow use-case versions will be very useful at providing value in specific circumstances, and eventually they'll become more general in nature. Decrying them at this stage seems a bit like throwing the baby out with the bathwater.
I realize this is not related to the actual article, but having looked at their website (http://www.workgroup.im/), this really struck me.

Does anyone else find it a little bit gross when companies take stock photographs, give them names, and write stories about how they use the product? It's clearly meant to work as social proof and to look almost like an endorsement from a peer.

It's just another warmer, fuzzier dark pattern, in my opinion.
Seems like we haven't progressed much beyond Clippy, the old MS Office "bot" from 1997. So perhaps we shouldn't be holding our breath. (Though AlphaGo came out of nowhere, so...)
I guess something like what WeChat does is more appropriate for use in business products for the time being... if at all.

http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html
I have a hard time giving much credence to the examples they raise, since they say users were aware that humans were present behind the 'bot'. That would explain to me why people used emojis, GIFs, and images to talk with the 'bots'.
Can a bot be made that understands emotions, sarcasm, etc.? Can something be built that does more than just parse commands and give out predefined replies? If so, how far away are we from that?
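To make the contrast in that question concrete, here is a minimal, hypothetical sketch (keywords and replies invented purely for illustration, not taken from the article or any real product) of the "parse commands, return predefined replies" pattern the comment describes; anything that needs emotion or sarcasm detection falls straight through to a generic fallback:

    # Hypothetical sketch of a purely rule-based bot: match a keyword,
    # return a canned reply. All names/replies here are made up.
    CANNED_REPLIES = {
        "hours": "We're open 9am-5pm, Monday to Friday.",
        "pricing": "Plans start at $10/month.",
    }

    def rule_based_bot(message: str) -> str:
        """Return a predefined reply if a known keyword appears, else punt."""
        text = message.lower()
        for keyword, reply in CANNED_REPLIES.items():
            if keyword in text:
                return reply
        # Sarcasm, emotion, or anything off-script lands here -- the gap
        # the question above is asking about.
        return "Sorry, I didn't understand that."

    print(rule_based_bot("What are your hours?"))    # canned reply
    print(rule_based_bot("Oh great, ANOTHER bot."))  # sarcasm -> fallback

Going beyond this would mean replacing the keyword lookup with some kind of learned model of intent and tone, which is exactly the part that remains hard.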
Would have loved to see some data on what requests were easily handled by the 'bot' and/or what questions could truly be answered by a fully automated bot.