Ha, I knew I wouldn't be the only person adding ChatGPT to emacs (<a href="https://github.com/CarlQLange/chatgpt-arcana.el">https://github.com/CarlQLange/chatgpt-arcana.el</a>).<p>I am beginning to think that ChatGPT is the singularity. I built that emacs package, but really, ChatGPT built it using me as a conduit - there isn't a single function in there that ChatGPT didn't write the majority of (although the poor code quality is all my own).<p>It could go further. Using something like langchain, I'm pretty sure you could get ChatGPT to instruct itself in building a python library or something:<p><pre><code> User request: build a python library to print out ascii charts in the terminal
ChatGPT2ChatGPT: What are the steps I need to take to build a python library to print out charts?
For step in steps:
ChatGPT2ChatGPT: Carry out this instruction
</code></pre>
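A minimal sketch of what that ChatGPT2ChatGPT loop might look like, assuming the pre-1.0 openai Python client with an OPENAI_API_KEY in the environment (the ask helper and the prompts are purely illustrative, not anything ChatGPT or langchain actually ships):<p><pre><code>  # Illustrative only: a naive "ChatGPT2ChatGPT" loop.
  # Assumes the pre-1.0 `openai` package and OPENAI_API_KEY set in the environment.
  import openai

  def ask(prompt, history=None):
      """Send one user prompt (plus any prior messages) and return the reply text."""
      messages = (history or []) + [{"role": "user", "content": prompt}]
      response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
      return response.choices[0].message.content

  goal = "build a python library to print out ascii charts in the terminal"

  # First call: have the model write its own plan, one step per line.
  plan = ask(f"List, one step per line, what is needed to: {goal}")
  steps = [line.strip() for line in plan.splitlines() if line.strip()]

  # Then feed each of its own steps back to it as an instruction.
  history = [{"role": "user", "content": goal}, {"role": "assistant", "content": plan}]
  for step in steps:
      result = ask(f"Carry out this step, showing any code: {step}", history)
      history += [{"role": "user", "content": step}, {"role": "assistant", "content": result}]
      print(result)
</code></pre>
Carrying the running history forward is what lets each step build on the previous step's output.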
And so forth. I'm sure some human interaction is required, particularly given the context limit. But even so.<p>Another thought I had was that using ChatGPT to help me build things is so convenient that I am unlikely to pick technology that it doesn't know about. Perhaps this kind of thing will cause a chilling effect on the creation of new tech. Who knows. Truly exciting times.
I wonder if we may have crossed the threshold into the singularity. LLMs will almost inevitably be in pretty much everything in 5 years. Looking at how well current GPTs are able to write calls to external APIs, their integration into systems will be surprisingly deep. No apparent end in sight to their advancement.<p>It may well be that this is an unexpected vector from which to enter the singularity.
ChatGPT is a real technological breakthrough: for the first time ever, we crossed a line that made having conversations with AI chatbots useful. But it was also inevitable. Year by year we crawled forward, improving bit by bit. I think we just underestimated the tipping point; at least, I would never have thought something on the level of ChatGPT possible. It will only improve from here, competition will increase, and AI NLP will become that much more common.
I can't see an end in sight for the capabilities of these systems anymore. I've lost a LOT of sleep over this in the last week as I work on a prototype to demo to my org's leadership. In our sector (B2B consulting), this sort of technology is essentially "adapt or die". Additionally, I see this technology as a way to level the playing field for a brief moment. The F500 will still be there, but I think its constituents are going to change dramatically.<p>ChatGPT is a warning shot for me. The technology had been there for a while, but this arrangement is particularly compelling (concerning). Even if the current iteration doesn't achieve "criticality", the next one almost certainly will.<p>I see two paths for a developer: embrace these tools or become the next generation of Luddites. I've got no clue if this is going to <i>eliminate</i> jobs, but I would certainly prefer to be on the vendor side of this software than on the receiving end. The roles inside a typical org chart are going to change very quickly - that much is certain to me.
>This week, OpenAI announced that they have created a public facing API for ChatGPT. At this point, I think it's over. We are going to have to learn to live with large language models and all of the other types of models that people will inevitably cook up.<p>What were they hoping for exactly? That OpenAI would just close down and stop serving requests?
Bing Chat is better than Google was at its best. That's what convinced me this thing could be worthwhile. There are so many questions Google used to handle okay that I just gave up on exploring, and that Bing Chat now handles exceptionally well.<p>For example: <a href="https://news.ycombinator.com/item?id=34986361" rel="nofollow">https://news.ycombinator.com/item?id=34986361</a><p>All Google provided was endless near-identical articles explaining the core types. No amount of query refinement helped. Meanwhile, Bing Chat asked follow-up questions and helped get to the right questions to ask.
I'm still so confused by this attitude. ChatGPT is not accurate. It cannot innovate; it can only rehash content it has been provided. Sure, it does so mind-bogglingly well. It appears to be masterful at implementing requests... but it does not know when it did something wrong. It cannot fact-check itself. It cannot test its own work. And while it can reshuffle words into something new, it cannot actually invent anything new -- that depends on a person guiding it, judging the results, and driving it through additional iterations.<p>It is a great tool, but it is just a tool.
It seems inevitable that the majority of posts and comments will eventually be AI-created, intended to nudge our thoughts in a particular direction. I, at least, will probably stop reading comments online at that point.
You can’t go back from technological breakthroughs! Fear the repercussions! None of it works right and the boilers explode! Good men die! Think of all the farriers who will lose their jobs! The cowboys who won’t be able to do their work running cattle anymore. It’s a travesty. I don’t see that society will be ready for this change. I can see some benefits, but the downside will destroy our way of life.<p>— preachers in 1860
> At this point, I think it's over. We are going to have to learn to live with large language models and all of the other types of models that people will inevitably cook up.<p>Was there any doubt? Useful technology proliferates regardless of how people feel about it.
Well yeah. You could also be afraid of screwdrivers. After all, someone could stab you with one. So on the one hand, you could fear screwdrivers and try to eliminate them. But really, it's the stabby person you should be afraid of, not the tool. If not one tool, they'd just use another.