The problem is that people like this author are trying to literally treat it like a person instead of an LLM. Honestly, if you look at the linked chat convo early in the article, this person kind of just sucks at prompt engineering, imo.

"At times I was able to get the chat agent to give me what I wanted, but I had to be very specific and I often had to scold it."

You can't just half-ass a paragraph of disjointed system instructions into the user input and expect clean results, in my experience. You need to leverage the custom system instructions, give example responses if possible, and be very, very specific and direct with instructions. You need to explain the type of response you want, and you also need to describe any applicable constraints (or lack thereof) on the response content. (There's a rough sketch of what I mean at the end of this comment.)

"When you are asked something, it is crucial that you cite your sources, and always use the most authoritative sources (government agencies for example) rather that sites like Wikipedia"

This is not sufficient to achieve what the author intends. It's written in a speech-like, roundabout style (e.g. "it is crucial that"), and there's a typo right in the middle on an important word ("that" should be "than"). LLMs can work around typos in most cases, but it's vaguely possible, imo, that this is what's causing it to keep citing Wikipedia in responses.

"At times, the tool was too eager to please, so I asked it to tone it down a little: “You can skip the chit chat and pleasantries.”"

In my experience playing with ChatGPT, this is just the wrong mental model for getting what you want out of the tool. You have to treat it more like a prose-language programming tool, not like a person with emotions that you are conversing with...
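
Something like the following is roughly what I mean. It's a minimal sketch using the OpenAI Python client (the ChatGPT UI's Custom Instructions box plays the role of the system message here); the model name, the instruction wording, and the example exchange are all illustrative, not taken from the article:

    # Sketch: put the behavioral rules in the system message, demonstrate the
    # desired response shape with a short example exchange, then ask the real
    # question as the final user message.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    system_instructions = (
        "Cite a source for every factual claim. Prefer primary or official "
        "sources (e.g. government agencies, standards bodies) over Wikipedia. "
        "Format every answer as a one-sentence summary followed by a bulleted "
        "list of facts, each with its source URL. "
        "Do not include greetings, apologies, or filler."
    )

    # A short example exchange ("few-shot") showing the exact shape you want back.
    example_question = "What is the current US federal minimum wage?"
    example_answer = (
        "Summary: The federal minimum wage is $7.25/hour.\n"
        "- $7.25/hour since July 24, 2009 "
        "(Source: https://www.dol.gov/agencies/whd/minimum-wage)"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works the same way
        messages=[
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": example_question},
            {"role": "assistant", "content": example_answer},
            {"role": "user", "content": "Your actual question goes here."},
        ],
    )
    print(response.choices[0].message.content)

The point is that the constraints live in the system instructions and the desired output format is demonstrated up front, not described in passing mid-conversation.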