I'm curious to hear how different organizations are integrating large language models (LLMs) into their workflows. Are you using them for customer support, content generation, data analysis, or something else? How have LLMs impacted your operations, and what challenges or benefits have you encountered?
Heavily.<p>Call summarisation at scale, mining diarized, channel-fused call data for insights into key topics and sentiment<p>Improving lead-mining quality by asking questions rather than merely classifying<p>A simple internal RAG chatbot across ~1000 PDFs; we haven't gone down the GraphRAG route yet<p>A better chatbot experience than RASA gave us<p>Contract mining to speed up mundane governance tasks
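For the "simple RAG chatbot" case above, a minimal sketch of the retrieval half: chunk the documents, score chunks against the query (term-overlap cosine here as a stand-in for embedding similarity), and build a grounded prompt. All names are illustrative, not any specific library's API.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query_tokens, chunk_tokens):
    # Cosine similarity over raw term counts (embeddings would go here).
    q, c = Counter(query_tokens), Counter(chunk_tokens)
    dot = sum(q[t] * c[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    # Return the k chunks most similar to the query.
    qt = tokenize(query)
    ranked = sorted(chunks, key=lambda ch: score(qt, tokenize(ch)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    # Prepend retrieved context so the LLM answers from the documents.
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy corpus standing in for chunks extracted from the PDFs.
chunks = [
    "Invoices must be approved within 30 days of receipt.",
    "The cafeteria opens at 8am on weekdays.",
    "Approved invoices are paid on the next payment run.",
]
print(build_prompt("When are invoices paid?", chunks))
```

In a real pipeline the overlap score would be replaced by vector similarity over embeddings, but the shape (chunk, rank, stuff context into the prompt) is the same.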
PMs and execs are pushing devs to do document classification and extract data from text ("see, I can do it in ChatGPT!"), so they're using Taylor (trytaylor.ai) to build production-grade text pipelines ;)<p>In all seriousness, customer support was the first, but least impactful, area for LLMs. Currently, LLMs are used mainly for developer efficiency and information retrieval.
I run a website alone. I often use it to find better ways to explain things in plain English. It's useful to see how ChatGPT explains something, and then use that to improve my own explanation.<p>I'm now testing automated translations of my Markdown-based content, but the unpredictable nature of LLMs means a lot of potentially costly errors could slip through. However, offering a website in multiple languages without additional labour is very appealing. I could reach a far greater audience, and help a lot more people.<p>I also use the far better modern text-to-speech APIs to show my readers how to pronounce complex German words.
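One cheap guard against the "costly errors" in machine-translated Markdown mentioned above is a structural diff: count links, code fences, and headings in source and translation, and flag any mismatch for manual review. This is a hedged sketch; the translation step itself is assumed to happen elsewhere.

```python
import re

def md_fingerprint(text):
    # Count structural Markdown features that a translation must preserve.
    return {
        "links": len(re.findall(r"\[[^\]]*\]\([^)]*\)", text)),
        "fences": text.count("```"),
        "headings": len(re.findall(r"^#{1,6} ", text, flags=re.M)),
    }

def check_translation(source, translated):
    # Return the names of features whose counts differ between the two texts.
    src, dst = md_fingerprint(source), md_fingerprint(translated)
    return [k for k in src if src[k] != dst[k]]

source = "# Guide\nSee [docs](https://example.com).\n"
good = "# Anleitung\nSiehe [Doku](https://example.com).\n"
bad = "# Anleitung\nSiehe Doku.\n"  # translation dropped the link
print(check_translation(source, good))  # no mismatches
print(check_translation(source, bad))   # 'links' flagged
```

It won't catch mistranslated prose, but it catches the errors that silently break a page, such as dropped links or mangled code blocks, before they're published.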