Basically the title. In your professional/hobby-filled life, how have you successfully introduced LLMs into your workflow?

My Experience:

I have tried using Cursor to help with development, and find it can generate really basic code okay, but it struggles with any kind of complex needs, or code that has to work across files. Going back and forth with it gets stuck at a local minimum and doesn't fix things.

At the end of the day I end up spending more time reading mediocre code to understand it than I would have spent just writing the thing.

Writing-wise: I have tried asking it to generate a blog post about a topic, or even just bullet points for me to write about, but the output is too generic and lacks depth.

For video games, I asked ChatGPT to explain the different advantages of the different countries you can play as in AOE FE, but it hallucinated units and advantages, making it worse than useless.

I'm clearly doing something wrong. What have you successfully done with LLMs?
When I need to make a decision (especially about software development), I ask it to ask me questions to inform the decision. I then ask it to present options and the trade-offs between them. I don't always go with one of its ideas, but it helps me see what I really want my code to do and how I want my design to feel. It makes a decent sounding board when you're working solo.

It's also pretty good at proposing test cases and implementing tests for unit testing of simple functions and methods, and it helps me be better about TDD.

It's also pretty good at coming up with answers to common boilerplate tasks and setup. It is NOT good at implementing business logic; most interns I've worked with are better.
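For instance, here is a minimal sketch of the kind of thing it will happily propose for a simple function. The slugify helper and the pytest cases below are hypothetical, just to illustrate the shape of the output, not code from a real project:

    # Hypothetical example: a small helper and the kind of pytest
    # cases an LLM will readily propose for it.
    import pytest

    def slugify(text: str) -> str:
        """Lowercase, trim, and replace runs of whitespace with hyphens."""
        return "-".join(text.strip().lower().split())

    def test_slugify_basic():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_extra_whitespace():
        assert slugify("  Hello   World  ") == "hello-world"

    def test_slugify_empty_string():
        assert slugify("") == ""

    @pytest.mark.parametrize("raw,expected", [
        ("Already-Hyphenated", "already-hyphenated"),
        ("Tabs\tand\nnewlines", "tabs-and-newlines"),
    ])
    def test_slugify_parametrized(raw, expected):
        assert slugify(raw) == expected

It won't catch every edge case you care about, but it gets the boring scaffolding out of the way so you can focus on the tests that actually matter.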
I use GitHub Copilot and it's mostly replaced Stack Overflow for me. Normally if I run into a problem I'd do a web search and try to find an answer, but I've found that Copilot gets me to a good answer much faster the majority of the time.

I've also found it extremely useful at explaining a specification to me without having to dig through large amounts of documentation. I've used this information to successfully implement FastCGI communication and zip file parsing (existing libraries did not meet my needs).

The downside is that VS Code seems to defer autocompletion to Copilot, so a decent proportion of the time when I type "object." it will hallucinate methods that do not exist on the object.

I use ChatGPT to generate dummy content for mockups and demos (I believe that lorem ipsum is scary/confusing for non-designers). I also use it to brainstorm ideas and critique my writing.
I usually use LLMs to help brainstorm ideas or find issues/concerns in my code. Sometimes, when I start a new project, it can also give me a starting point for the architecture [0] (spoiler: the link below is a blog post about my product; I built it to solve my own need, but I thought it might be useful to share).

[0]: https://docs.chatuml.com/blog/build-your-own-startup-with-chatuml
I use ChatGPT as a glorified search engine for Django documentation and examples. There is tons of material out there for it to have trained on, and so it does an outstanding job of returning immediately useful results.

Everything else I do is obscure enough for it to return only plausible-looking garbage.
To get a good laugh.

If I'm bored, I'll ask ChatGPT something like: "Including the steering wheel, how many wheels does a car have?" It's really easy to get most LLMs tripped up, and the rantings they generate can be pretty hilarious.

On a more serious note, Infinite Craft has made a pretty entertaining game out of those non-sequitur outputs.