I noticed a significant decrease in the model's reasoning abilities over the last few days, especially when it comes to writing code. Has anyone else experienced this?<p>Given that these models are used in all sorts of applications all over the world, I find it concerning that OpenAI can quietly lobotomize a model and no one can verify it or do anything about it.
As an EU user, I feel like it has been watered down a lot, both 3.5 and 4. Many queries now come with disclaimers, which is pretty annoying since you can't set your own system instructions in the ChatGPT interface. The API is fine, though, since you can explicitly say that you want a specific format.
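For anyone who hasn't tried it: a minimal sketch of what that looks like through the Chat Completions API. The system message does the format-pinning; the prompt text here is just an invented example, and you'd send this body to the API with your own key.

```python
def build_chat_request(user_prompt: str) -> dict:
    """Build a Chat Completions request body whose system message
    pins the response format (e.g. suppresses disclaimers)."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message is honored by the API but can't be
            # set in the ChatGPT web interface.
            {"role": "system",
             "content": "Answer with code only. No disclaimers or caveats."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request("Write a Python function that reverses a string.")
print(request["messages"][0]["role"])  # system
```

The same trick works for forcing JSON output, a fixed language, or any other format the UI won't let you lock in.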
The more they force the model into politically fashionable responses, the further they drift from a rational thought process. CodeT5+ looks promising, though.
> <i>concerning</i><p>...And in fact some at Google are already projecting a future of open-source dominance in the subfield.<p>(Edit: the system must have missed a post update:)<p>...Coincidentally, a neighbouring post is: <i>OpenAI readies new open-source AI model</i> - <a href="https://news.ycombinator.com/item?id=35958789" rel="nofollow">https://news.ycombinator.com/item?id=35958789</a>