I asked ChatGPT 4 to convert some code using a rather simple pattern, only to get this message:<p>> Sorry, but I can't convert this as per your requirement.<p>No explanation, no attempt to even do something. Not sure if this is a recent development; I thought v4 was supposed to be even more powerful than v3 (which at least gives me something).<p>I also constantly have to stop myself from asking it to perform even relatively simple programming tasks. For example, no matter what I did or what hints I provided (including sample code), it could not write anything resembling a correct implementation of the following function signature:<p><pre><code> const tokenizeWords = (
str: string,
args?: {
separatorCaseChanges?: | "upper" | "lower" | "all" | "none", // default is "upper"
separatorChars?: Array<string>, // default is white-space
}
): string[] ...
</code></pre>
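For reference, here is one plausible sketch of what such a function might do. This is my own reading of the signature, not a spec from the comment: I'm assuming `separatorCaseChanges` controls whether case transitions (e.g. camelCase boundaries) also start a new token, in addition to splitting on `separatorChars` (defaulting to whitespace):

```typescript
// Sketch under assumed semantics: split `str` into tokens on separator
// characters, and optionally also at case-change boundaries.
const tokenizeWords = (
  str: string,
  args?: {
    separatorCaseChanges?: "upper" | "lower" | "all" | "none"; // default "upper"
    separatorChars?: Array<string>; // default is whitespace
  }
): string[] => {
  const caseMode = args?.separatorCaseChanges ?? "upper";
  const seps = args?.separatorChars; // undefined means "any whitespace"

  // Is this character a separator?
  const isSep = (ch: string): boolean =>
    seps ? seps.includes(ch) : /\s/.test(ch);

  // Does a case change between `prev` and `next` start a new token?
  const isBoundary = (prev: string, next: string): boolean => {
    const lowerToUpper = /[a-z]/.test(prev) && /[A-Z]/.test(next);
    const upperToLower = /[A-Z]/.test(prev) && /[a-z]/.test(next);
    switch (caseMode) {
      case "upper": return lowerToUpper;
      case "lower": return upperToLower;
      case "all":   return lowerToUpper || upperToLower;
      default:      return false; // "none"
    }
  };

  const tokens: string[] = [];
  let current = "";
  for (const ch of str) {
    if (isSep(ch)) {
      // Separator ends the current token (empty runs produce nothing).
      if (current) tokens.push(current);
      current = "";
    } else {
      if (current && isBoundary(current[current.length - 1], ch)) {
        tokens.push(current);
        current = "";
      }
      current += ch;
    }
  }
  if (current) tokens.push(current);
  return tokens;
};
```

With the defaults, `tokenizeWords("fooBar baz")` would split both on the space and at the lower-to-upper transition, yielding `["foo", "Bar", "baz"]`.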
It could not even write unit tests for the function without making errors all over the test cases.<p>At this point it is quite clear to me that it is more of a time waster than a useful tool. I think the hype surrounding LLMs and AI coding assistants is far beyond the reality. Developers, you can relax knowing that your jobs are safe for the foreseeable future.
No, the model has probably diverged over time from what you're used to. Imagine if Google search suddenly had new quirks you had to learn to get the results you want (actually, come to think of it, maybe I need to delete my account and start fresh, because it never gives me the results I want anymore).<p>It's the way you're prompting the model. Additionally, I've been training my own toy models to understand the inner workings, so I can also state that they're sensitive to WHERE in the conversation you're asking the question. Some have even pointed out that they're more belligerent depending on the day of the week.<p>It's a fuzzy computer program that changes its abilities based on instructions and prompts. Just because it's communicating doesn't mean it truly understands; it's more like: surface the most relevant features by mentioning the most appropriate keywords, and suddenly it can perform the task.
For one, try saying “rewrite” instead of “convert”. It's frustrating that it's being pedantic, but you can write a custom prompt that says something like “Whenever I ask you to do something, even if my words suggest otherwise, I just want you to complete the task via a written response. For example, if I ask you to build a program, you just need to write the code” (etc.)<p>For what it's worth, I spent a bunch of time prompt-engineering this GPT for writing code: <a href="https://chat.openai.com/g/g-7k9sZvoD7-the-full-imp" rel="nofollow">https://chat.openai.com/g/g-7k9sZvoD7-the-full-imp</a><p>If you ask it for its prompt, it should output it.
It's hit and miss, but the responses I get are all super detailed and usually quite helpful. I've never had interactions like yours. I'm mostly coding in Python with some Rust, fwiw.
Submitted a few minutes after your question: <a href="https://news.ycombinator.com/item?id=39156643">https://news.ycombinator.com/item?id=39156643</a>
This raises the question: what was it useful for to begin with? I am not a coder, so Copilot has never been for me (so I cannot speak for that use case)... but for creative/writing/content-generation, I feel like good content always needed to be heavily edited and selected anyway. I wouldn't say that has gotten worse, but it was never quite good to begin with unless your bar was quite low.