Like everyone else I was impressed by the capabilities of ChatGPT. As a dev, I also tested it on code-related tasks.<p>Initially I used it to teach myself some Lua and the love2d game framework. I found that it was really good in that role. Most of the time there was no need for Google or Stack Overflow, as I could ask clarifying questions whenever I didn't understand something. Sometimes I still checked the documentation, though, when that was faster than formulating a follow-up question. (That was once I'd gained enough familiarity with the framework to know where to look for things.)<p>I then tried to use it to complete full coding tasks. My first attempt was to tell it to build me Pong in love2d. It did generate code that ran, but the ball wouldn't behave properly, and even after many clarifying questions it was hard to get it to correct itself.<p>My second attempt was to tell it to build a Snake game in love2d. Initially it only implemented player movement. I then pushed it to implement food spawning, eating, and snake growth. It did implement food spawning, but somehow it never included the logic for making the snake grow, despite my asking again.<p>My third attempt was a small Flask web app that fetches data from an API and displays an HTML page with some of the data received. It generated code that seemed correct at first glance but needed some tweaks to run properly.<p>In conclusion, I've found ChatGPT really good for the things I would otherwise use Google for, and for learning new frameworks/libraries: areas where I don't know what I don't know. You do need to be careful, though, as it can spit out answers that look correct but don't actually work when you try them. (This is somewhat rare. In fact it's impressive that most of the time the code provided at least runs.)<p>When it came to asking ChatGPT to build things for me, I found it less successful.
I had to be really careful and do a lot of verification, plus gluing work stitching together the model's different code answers.<p>It is somewhat analogous to semi-self-driving cars, where the human driver is on standby but has to take over immediately when things go wrong. That is arguably harder than just driving manually from the start, because you need to context-switch very rapidly.<p>I found that the mental energy spent verifying and correcting the provided code was greater than if I had simply written the code myself from the start. (This holds mostly for tasks where I already know the library/framework/programming language involved.) I also think that using the model as a Google replacement while building the thing myself, rather than telling it to build what I want, would have been less mentally taxing.<p>Anyway, thought I'd share my thoughts on this. Good job to OpenAI for making this much progress with their model.<p>I'd be curious to know: what was your experience with ChatGPT?
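<p>For anyone curious, the snake-growth logic ChatGPT kept leaving out is genuinely tiny, which made the omission surprising. Here's a framework-agnostic sketch of it in Python rather than Lua (the function and names are mine, not from the generated code):

```python
# Minimal snake-movement step for a grid-based Snake game.
# The trick: move by prepending a new head cell, and only drop the
# tail cell when no food was eaten -- so eating grows the snake by one.

def step(snake, direction, food):
    """snake: list of (x, y) cells, head first; direction: (dx, dy).
    Returns (new_snake, ate_food)."""
    head = (snake[0][0] + direction[0], snake[0][1] + direction[1])
    new_snake = [head] + snake
    if head == food:
        return new_snake, True    # grew: keep the old tail
    return new_snake[:-1], False  # normal move: drop the tail

snake = [(2, 0), (1, 0), (0, 0)]          # head at (2, 0), length 3
snake, ate = step(snake, (1, 0), food=(3, 0))
assert ate and len(snake) == 4            # ate the food, grew to 4
```

In love2d this would just live inside love.update, with the same "skip removing the tail on the eating frame" idea.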