I've been spending a good part of the last year working on a chatbot with a very domain-specific scope, with the goal of arriving at a product that delivers a modest but effective user experience. It's hard, and it takes an unbelievable amount of work to handle the myriad of edge cases you get into when a human on the other end tries to treat your bot like a real, albeit limited, interlocutor in the conversation.

The real problem I see is that the user experience improvements of the last decade that work well on the web don't really translate into chat. As an industry we're hopelessly bereft of best practices at this point, and our users notice, experiencing it as the frustration of having no idea what to expect from a bot. The linked article and comments cite a lot of good examples.

NLP and other applications of machine learning will make bots better at delivering correct answers, but making bots feel not-horrible around the edges is about user experience design. Here are some suggestions that have helped me a lot.

* Design for failure first

Just like mobile-first design gets the brain out of the pattern of tacking on mobile interactions as second-class citizens, failure-first design focuses on the primary experience users have of your bot: it not working. Don't delude yourself into thinking that your NLP intent parsing is going to result in more hits than misses down your happy path to user delight. A human will always sidestep your intended flow by accident, and that human will form judgements about your product based on its ability to gracefully recover. Luckily, the bar is incredibly low here.

* Be careful with conversational niceties and over-humanization of tone

It's easy to think that friendly banter and emojis can help personify a bot and smooth over the above-mentioned failure paths, but the novelty wears off quickly for a user, and the user is likely to grow more frustrated if the conversational tone doesn't match their frustration. It is also extremely easy to end up in the uncanny valley when using friendly conversational copy in bot messages. Repetition of a robotic message feels benign-if-annoying, but repetition of a cute emoji-laden phrase can feel very off-putting.

* Fall back to being a CLI with visibility and helpers

If you've ever been stuck working with a bot, you know that all you want is to know what it can do, and how you can get it to do that thing. If you notice the user is in a failure state, through keyword matching or repeated failed routing attempts, fall back to a high-visibility list of actions. Having quick-action buttons can make this even smoother.

* Train the user on consistent hooks and keywords

In human conversation, utterances of 'stop' or 'wait' are almost always respected as context-independent keywords that escape or pause the conversation. If I asked you what you wanted for dinner and you responded 'stop', I wouldn't try to figure out what kind of food that was. In my project, 'help', 'quit', and 'back' are all respected as keywords, and every context of the conversation implements callbacks to respond in context to each of these (a code sketch combining this with the fallback idea above follows the next suggestion).

* Ask a lot of questions that are easy to answer

Handling raw language is super hard. Routing language into a finite set of options is a lot easier, and humans feel listened to when asked for clarification or asked whether they have been understood correctly.
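Tying the last three suggestions together, here is a minimal sketch of a top-level message handler in that spirit. Everything in it is hypothetical: `parse_intent`, the `context` object, and the thresholds are stand-ins for whatever NLP layer and session state you already have.

    # Global keywords first, then intent routing, then the CLI-style
    # fallback once the user has missed too many times in a row.

    GLOBAL_KEYWORDS = ("help", "quit", "back")
    MAX_MISSES = 2  # consecutive failed routings before falling back

    def handle_message(context, text):
        word = text.strip().lower()

        # Context-independent hooks: every context implements a
        # callback for each of these (on_help, on_quit, on_back).
        if word in GLOBAL_KEYWORDS:
            context.misses = 0
            return getattr(context, "on_" + word)()

        # Route into the finite set of intents this context accepts;
        # assume parse_intent returns (intent, confidence in 0..1).
        intent, score = parse_intent(text, context.intents)
        if score > 0.5:
            context.misses = 0
            return context.dispatch(intent)

        # Failure first: count the miss and fall back to a visible
        # menu of actions instead of another vague apology.
        context.misses += 1
        if context.misses >= MAX_MISSES:
            return context.show_action_menu()  # quick-action buttons
        return context.ask_to_rephrase()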
When taking user input and routing to an action, ask for yes/no confirmation, and provide options like "This is totally wrong". Moments like this are great opportunities to collect data about where users get stuck so you can improve the experience, and it's validating for the user to be able to say outright that they were not heard correctly.
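And here's roughly what that confirmation step might look like, again with hypothetical helpers (`send`, `log_miss`, and the same `context` object as above) standing in for your messaging layer:

    # Confirm before acting, and treat "This is totally wrong" as
    # training data rather than just an error path.

    def confirm_intent(context, text, intent):
        send(context.user,
             "Just to check: you want to {}?".format(intent.label),
             quick_replies=["Yes", "No", "This is totally wrong"])
        context.pending = (text, intent)

    def on_confirmation_reply(context, reply):
        text, intent = context.pending
        if reply == "Yes":
            return context.dispatch(intent)
        if reply == "This is totally wrong":
            # Store the raw utterance and the bad guess for human
            # review; these misses are where the experience breaks.
            log_miss(user=context.user.id, utterance=text,
                     guessed=intent.name)
        # Either flavor of "no" lands back in the visible action menu.
        return context.show_action_menu()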