According to a separate article (<a href="https://www.vice.com/en/article/n7ezkm/eating-disorder-helpline-fires-staff-transitions-to-chatbot-after-unionization" rel="nofollow">https://www.vice.com/en/article/n7ezkm/eating-disorder-helpl...</a>), the chatbot wasn't supposed to be ChatGPT-based and is instead a "rule-based, guided conversation". Whether that phrase means a standard scripted chatbot or an LLM under the hood is unclear (see the sketch after the bullets).<p>- If the former, the problems point to a systemic issue: management had a fundamental misunderstanding of what the helpline was actually doing, and implemented a chatbot based on what they <i>thought</i> it did, which was wholly unhelpful.<p>- If the latter, then the managers are idiots.
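<p>For context on the distinction the bullets turn on: a "rule-based, guided conversation" usually means a fixed script or decision tree with canned responses, with no generative model involved. Below is a minimal, purely hypothetical sketch of that pattern; none of the states or wording reflect NEDA's actual system.<p><pre><code># Hypothetical sketch of a rule-based, guided conversation: a hard-coded
# decision tree of prompts and canned transitions. Nothing is generated;
# unrecognized input simply re-prompts the same node.
SCRIPT = {
    "start": {
        "prompt": "Hi, I can share information on body image. Ready to begin? (yes/no)",
        "options": {"yes": "topic", "no": "goodbye"},
    },
    "topic": {
        "prompt": "Which topic would you like: 'self-esteem' or 'coping'?",
        "options": {"self-esteem": "goodbye", "coping": "goodbye"},
    },
    "goodbye": {
        "prompt": "Thanks for chatting. Take care!",
        "options": {},
    },
}

def run():
    state = "start"
    while True:
        node = SCRIPT[state]
        print(node["prompt"])
        if not node["options"]:
            break  # terminal node, end of the scripted flow
        reply = input("> ").strip().lower()
        # No NLU here: anything outside the expected answers keeps us on the same node.
        state = node["options"].get(reply, state)

if __name__ == "__main__":
    run()
</code></pre><p>An LLM-based bot, by contrast, would pass the user's message to a language model and return whatever it generates, which is exactly the open-ended behavior the quoted description appears intended to rule out.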