> I find it harder to wrap my head around the position that GPT doesn’t work, is an unimpressive hyped-up defective product that lacks intelligence and common sense, yet it’s also terrifying and needs to be shut down immediately.

Immediately lost me as a reader. There are at least five reasonable, internally consistent ways to frame the various opposing or concerned arguments, but you deliberately picked two separate critiques and mashed them into an imagined hypocritical opponent designed to look foolish.
At this point asking whether GPT should exist is a bit like asking whether money should exist: lots of opinions, but it really doesn't matter. The people who use it are going to have such a huge advantage that it will crush everything in its path.

One of the interesting things about AlphaGo back in 2016 was that it demonstrated the algorithms for all this are simple. Once Google showed the outcome was possible, other superhuman Go-playing AIs appeared within a few years. ChatGPT is similar: now that the world knows this tech can be built, there isn't a regulatory framework big enough to shut it down everywhere. And whoever deploys AI as a tool will have an advantage over those who don't.

"AI safety" would now involve banning general-purpose computing. Nothing less can stop the systems that are now in motion, and even that probably wouldn't be enough. The future is here.
Idk about you, but I use it every day to help me write framework code.

"Hey, can you use this x library with this URL to call this API and make an HTML table," etc., and it works wonderfully.

Sure, there are errors now and then, but usually telling it about them gets it to fix them. It has saved a fuckton of my time that I can spend doing something else now. Mostly boilerplate stuff, but it works.
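For a sense of the boilerplate I mean, here's a minimal sketch of the kind of code it typically produces, assuming Python with the requests library; the endpoint URL and field handling are made-up placeholders, not any specific API:

    import requests

    API_URL = "https://api.example.com/items"  # hypothetical JSON endpoint

    def fetch_items(url: str) -> list[dict]:
        """Call the API and return the parsed JSON payload."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()

    def to_html_table(rows: list[dict]) -> str:
        """Render a list of dicts as a simple HTML table."""
        if not rows:
            return "<table></table>"
        headers = list(rows[0].keys())
        head = "".join(f"<th>{h}</th>" for h in headers)
        body = "".join(
            "<tr>" + "".join(f"<td>{row.get(h, '')}</td>" for h in headers) + "</tr>"
            for row in rows
        )
        return f"<table><tr>{head}</tr>{body}</table>"

    if __name__ == "__main__":
        print(to_html_table(fetch_items(API_URL)))

Nothing clever in there, which is the point: it's exactly the glue code you'd rather not type by hand.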
I am not very familiar with the terrain; asking as a noob. What's to stop GPT6 (which has read all of the world's research papers) from being used by terrorists to make deadly concoctions or devices? It seems being able to correlate information faster than our brightest minds (and hence maybe make discoveries) is now just over the horizon.

This seems eerily like the '80s/'90s, when chess engines were getting stronger but most people at the time believed they were incapable of truly novel strategies.
> The only safety measures that would actually matter are stopping the relentless progress in generative AI models, or removing them from public use

The first is impossible.

The second places any dangers outside public oversight, likely increasing them.

We've survived nukes for almost 80 years; we've proved we can survive such things. The best response is education.
imho it's a completely moot question at this point.

as eigenrobot said on twitter:

"there is almost surely nothing anyone can do to change this general course. immense wheels are in motion.

all that's left is to tend your garden and to trust in god. stay strapped."

source: https://twitter.com/eigenrobot/status/1627981829805334528
Progress is imperative. We will build more and more impressive AI _because it's there_, because we can do it, and because it looks cool. And you don't need a large organization to do it; the large models of six months ago are already open source and being optimized for reproducibility. Banning them is useless.

And if somehow we create an AI that is genuinely smarter than humans, that's great! We are all mortal anyway, and not too good at many things. If something smarter and better than humans inherits the Earth, why not? The particular species is not relevant. The sum of all knowledge, and discovering new things, is what matters.

Ultra-accelerationism is the only way to fly.
So, when we've engineered ourselves out of the knowledge economy and all the high-paying jobs are done by computers that need large amounts of capital to train ... what then for the rest of us? Are we stuck with the menial jobs that robots don't want?

Personally, I welcome our new Transformer overlords.
It won't happen. Either this is a new advanced technology that can be refined into the engine of the future of IT globally, or it's strong AI / AGI (or close to it); either way, nobody is going to step back from this. There is simply way too much money on the table to walk away and say "let's shut this down."

Anyway, the technology is out of Pandora's box. It wouldn't matter much if ChatGPT or chatBING somehow got shut down, or even if these models never end up working in search engines. Everybody has seen the potential, so way too many geopolitical actors are now moving to get their hands on models like these, the expertise, even the datasets.
The dangerous question nobody seems to be contemplating is: if large language models are this good, what does that say about us humans? We are barely beyond a large language model; or, more pessimistically, many of us are on par with or below one.
Wait, I'm not following this article. Here is an admittedly biased view from an enterprise usage perspective. The value of GPT, IMO, is that its architecture lets it analyze and understand large amounts of text data in near real time, making it an incredibly useful tool for data analysis and decision-making.

Where are we coming from? Current AI and data analytics platforms have many faults, but their biggest problem is performance. These systems are often slow and cumbersome, which makes it difficult to analyze large data sets in real time. GPT tech overcomes these challenges by leveraging deep learning that lets it process large volumes of data quickly and efficiently. (Most of the inherent cost comes at the training stage.)

Currently, at the enterprise level, organizations still face many challenges when it comes to data management. For example, many struggle with data silos, where different departments have their own data sets that aren't easily shared or integrated. This can lead to inefficiencies and make it difficult to get a complete view of the business. Not to mention the data-confidence issues that arise when you cross-correlate some of that data.

However, I feel like GPT can help organizations better understand customer behavior, identify trends, and make more informed business decisions. Tech like GPT can help automate many repetitive tasks and improve data quality, since the data treatment can apply AI-based data quality standards. (A single source of truth.)

One key area of benefit is NLP: tools like sentiment analysis and entity recognition can be used in conjunction with GPT to provide even deeper insights into customer behavior and preferences. Similarly, machine learning tools can be used to train and optimize GPT models for specific use cases.

In practice, if GPT is adopted at scale, some technologies will become redundant or obsolete. For example, traditional rule-based systems may no longer be needed if GPT can provide more accurate and nuanced insights. Why run structured databases except to capture transactions? IMO, many solutions are in reality just a data-schema play: they create the schema and the BI to capture and transform the data to make sense of it. Assuming wide adoption of GPT tech, these technologies are at the birth of obsolescence.
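To make the sentiment-analysis point concrete, here's a minimal sketch of GPT-backed sentiment labeling over customer feedback, assuming access to OpenAI's public chat-completions endpoint and an API key in the environment; the prompt, labels, and sample data are my own illustration, not anything from the article:

    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # public chat endpoint

    def classify_sentiment(text: str) -> str:
        """Ask the model to label one piece of feedback as positive/negative/neutral."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",
                "messages": [{
                    "role": "user",
                    "content": "Label the sentiment of this customer feedback as "
                               f"exactly one word (positive, negative, or neutral): {text}",
                }],
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"].strip().lower()

    feedback = ["Checkout was fast and painless.", "Support never answered my ticket."]
    for item in feedback:
        print(item, "->", classify_sentiment(item))

A rule-based system would need a hand-built lexicon for this; here the "rules" are just a prompt, which is why the redundancy argument above has some teeth.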
I read one of the transcripts [0] and it left me with an uneasy feeling. I think the potential for misuse and abuse is insane. But the genie is out of the bottle, so it is likely too late to do anything about it.

[0] https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
People be like: ChatGPT is horribly bad and does not work. It's a fancy autocomplete.

Anyways... I am using it every day. It's a great first step to take when you are blocked creatively or dunno where to start looking for things.

And this is chatbot version 1.0, so to speak. Maybe it will improve drastically in little time, or maybe it will stagnate for 5-10 years. Either way, it's already very usable.
Banning AI research isn't going to accomplish anything other than ensuring that <your country here> is the only one that doesn't have it.

The better solution is to educate the public on what this technology is good at and what it isn't, so it doesn't end up in places it doesn't belong. Right now it's being advertised as something it's not, and that's how we end up with ridiculous clusterfucks like Bing.

People need to learn its limitations before it ends up in situations where its ineptitude has real consequences.
I understand his warning on ChatGPT. We opened Pandora's box. At some point there will be a sentient AI, and humanity as we know it will be destroyed. As we see now with the current climate issues, we simply ignore the warnings and continue. I see no reason to believe it will be any different with AI. Please do not think of me as a pessimist; I am an optimistic realist.