The article states the following:<p>> I picked the best responses, but everything after the bolded prompt is by GPT-3.<p>Based on this, I am pretty sure that the order of paragraphs and the general structure (introduction, arguments, conclusion, PS) are entirely the product of the editor, not of GPT-3. I'm assuming that the curation happened at the level of paragraphs and not individual sentences, which does leave some pretty good paragraphs.<p>Another question that I don't know how to answer is how different these paragraphs are from text in the training corpus. I would love to see the closest bit of text in the whole corpus to each output paragraph.<p>And finally, human communication and thought are not organized neatly in a uniform ladder of difficulty from letters to words to sentences to paragraphs to chapters to novels, and an AI that can sometimes produce nice-sounding paragraphs is not necessarily any part of the way to communicating a single real fact about the world.<p>I still believe that there is never going to be meaningful NLP without a model/knowledge base of the real physical world. I don't think human-written text has enough information to deduce a model of the world from it without assuming some model ahead of time.
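<p>A minimal sketch of what I mean by "closest bit of text": score each output paragraph against corpus chunks with a similarity metric. (The corpus and chunking here are toy stand-ins, not the actual GPT-3 training data, and a web-scale version would need an approximate index, not a linear scan.)<p><pre><code>
```python
import difflib

def closest_corpus_match(paragraph, corpus_chunks):
    """Return the corpus chunk most similar to a generated paragraph.

    Uses difflib's ratio (based on longest matching subsequences);
    a real study over a web-scale corpus would need an approximate
    method such as n-gram/MinHash indexing instead of this O(n) scan.
    """
    best_chunk, best_score = None, 0.0
    for chunk in corpus_chunks:
        score = difflib.SequenceMatcher(None, paragraph, chunk).ratio()
        if score > best_score:
            best_chunk, best_score = chunk, score
    return best_chunk, best_score

# Toy corpus standing in for the training data (hypothetical):
corpus = [
    "The cat sat on the mat in the warm afternoon sun.",
    "Neural networks approximate functions from data.",
    "Stock prices fell sharply on Tuesday morning.",
]
match, score = closest_corpus_match(
    "Neural nets approximate functions from data.", corpus)
```
</code></pre><p>A high score for many output paragraphs would suggest memorization rather than composition; consistently low scores would be the more interesting result.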
I wish I could see the original of this. The quote “As with previous posts, I picked the best responses, but everything after the bolded prompt is by GPT-3” could mean anything from minor improvements to the text being essentially human-written, and there’s no way to tell.
With GPT-3, I expect lots of upcoming stories of discussion forums getting the "Sokal Affair"[1] treatment. We'll keep amusing each other by trolling everybody with more fake GPT-3 stories.<p>I think GPT-3 is very convincing for "soft" topics like the other HN thread <i>"Feeling Unproductive?"</i>[2], and philosophical questions like <i>"What is intelligence?"</i> where debaters can just toss word salad at each other.<p>It's less convincing for "hard" concrete science topics, e.g. Rust/Go articles about programming for performance.<p>An interesting question is what happens when the input to a future GPT-4 is inadvertently fed lots of generated GPT-3 output. And in turn, GPT-5 is fed by GPT-4 (which already ingested GPT-3). A lot of the corpus feeding GPT-3 came from web scraping, and now that source <i>is tainted</i> for future GPT-x models.<p>[1] <a href="https://en.wikipedia.org/wiki/Sokal_affair" rel="nofollow">https://en.wikipedia.org/wiki/Sokal_affair</a><p>[2] <a href="https://news.ycombinator.com/item?id=24062702" rel="nofollow">https://news.ycombinator.com/item?id=24062702</a>
I’m continually amazed - flabbergasted - by GPT-3. I’ve read stories, articles and HTML written by it, and each time I am shocked at how good the output is. This essay made me laugh!<p>It’s practically indistinguishable from a human. Not a creative, insightful and unique human. But an average human? Yes, I cannot tell the difference.<p>I must repeat that - I cannot tell the difference!<p>This could probably replace or supplement most online content that I see, including news, certainly on the vacuous side of things, of which I think there is a lot.<p>Those online recipes with irrelevant life stories before them? Replaced. Those opinion pieces in the news? Replaced. Basic guides to tasks? Probably replaceable.<p>I know I probably only see the best output, and it would be nice to have more context, but the peak performance is amazing.<p>The Twitter video showing GPT-3 generate HTML based on your request? I think there’s a lot of potential. I don’t know whether it can, in general, live up to these specific examples though.
<i>In AI research, the territory is not the map.</i><p>A half-joking prediction:<p>at some point we'll solve all arbitrarily hard milestones for AIs and will still find ourselves 'nowhere near having real general intelligence'.<p>At that point we might start questioning our assumptions about intelligence.
Impressive? Absolutely. Monetizable? Unclear to me, but probably somewhere within the vast ad/chatbot/garbage text generation service-scape.<p>Scary? Not GPT-3, but when GPT-6 or 7 gets involved in the political realm, that’s when people will take notice. This essay has a glimmer of “humans can’t be trusted to govern themselves” - and it’s not entirely unconvincing.
This seems to me like a very technologically sophisticated version of the ancient myth of Narcissus and Echo.<p>As brilliant as it is, I think this speaks more to how we as humanity think about ourselves than it does about AI.
From the same site, this brilliant attempt at comedy writing - with some passages better than many human comedy writers:<p><a href="https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-reddit-eats-larry-page-alive/" rel="nofollow">https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-re...</a>
That's impressive.<p>A few weeks ago, GPT-3-generated content looked like nonsensical content-farm filler to me. Today, this article makes points and follows a line of argument.<p>There are still a few oddities, but this time it looks like thinking, not just putting related words next to one another with proper grammar.
Whenever people talk about how GPT-3 can’t do a lot of things that humans can do, I always think back to the “Bitter Lesson”. I don’t want to believe that general AI will just come from a stupid amount of compute, but it might.
Here's a paragraph in which I replaced "brain" with "scientist" and "mouth" with "experimental data":<p>> The point of this is to form a hypothesis. If the scientist and the experimental data say the same thing, then the scientist will think it has a hypothesis that is correct. But if the experimental data and the scientist say different things, then the scientist will think it has a hypothesis that is wrong. The scientist will think the experimental data is right, and it will change its hypothesis.
One potential area for the future is "augmented writing", where writers aren't authors as we think of them today but editors who feed in prompts and then rearrange and tweak to get better results than their meat brains alone could come up with. There would be a diversity of styles and approaches, of course.<p>Imagine, say, someone maintaining training sets per individual character and finding that those characters would not only deliver better lines but choose different actions.
A related question that I don't know how to quickly answer via the Internet: imagine that IQ is a good measure of intelligence. I read that Ainan Celeste Cawley[1][2] has an IQ of 263 (again, take this number as accurate for a moment). How do you measure an IQ of 500, 1000, or 5000? I mean, not the actual test items, but how would the test's structure change from measuring normal to outlier IQs?<p>Disclaimer: I am not an avid science-fiction reader, but I am interested in sources on superintelligence[3]. Is superintelligence more of the same, or is it more about having different interconnected layers?<p>[1] <a href="https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley" rel="nofollow">https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley</a><p>[2] <a href="https://www.rd.com/list/highest-iq-in-the-world/" rel="nofollow">https://www.rd.com/list/highest-iq-in-the-world/</a><p>[3] <a href="https://en.wikipedia.org/wiki/Superintelligence" rel="nofollow">https://en.wikipedia.org/wiki/Superintelligence</a>
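<p>For context on why such scores are problematic: deviation IQ is defined on a normal distribution with mean 100 and standard deviation 15, so extreme scores stop being measurable simply because no norming population is large enough. A quick sketch of the implied rarity (a back-of-the-envelope model, not a claim about any particular test):<p><pre><code>
```python
from math import erfc, sqrt

def rarity(iq, mean=100.0, sd=15.0):
    """Fraction of the population expected at or above a given
    deviation IQ, assuming the standard normal model (mean 100, sd 15)."""
    z = (iq - mean) / sd
    return erfc(z / sqrt(2)) / 2

# IQ 160 is about 1 in 31,500 people. IQ 263 (z ~ 10.9) or IQ 500
# (z ~ 26.7) implies far fewer than one person in the history of
# humanity, which is why such scores can't come from a normed test.
```
</code></pre><p>So a score like 263 has to come from an extrapolated ratio-style estimate rather than from norming, and "IQ 5000" isn't defined at all under the deviation model.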
Another annoying GPT piece that a regular member of the public has no way to verify. I guess in applying to the beta I should have said, under 'what do you plan to do with this': "post cherry-picked examples on social media that hype up GPT-3".
Did they train it on Stephen Fry novels? If we deepfaked this text onto his voice and image, I think we might have something better than how Martin Amis turned out.
I wonder if one day a forum like Hacker News will appear where GPT-x bots post comments on cool articles and blogs created by those very same bots, discussing very deep, complicated topics no human could ever understand. If our progress in these fields doesn't come to a halt, then this must happen one day.
There is pretty much one thing advances in AI tell us: most of humanity is nothing more than a statistical approximation algorithm. But that doesn't mean human intelligence is one. What is fundamentally lacking from modern AI is the ability to "invent". It can (or very soon will) perfectly approximate the behavior of "Joe", but it gets nowhere close to even touching anything like the Einsteins of humanity.<p>The main problem I see with AI is that it is very easy to approximate "general human intelligence", which is essentially equal to "being indistinguishable from the Joe next to you". But it is a completely different league to actually advance the human race. For that, statistical approximation will never work.<p>The next step is to create AI that innovates. As long as that isn't done, all we have is a demonstration of how "unintelligent" most human beings really are (i.e. nothing more than statistical approximation + pattern matching... Instagram and social media essentially act as an AI forcing function on human beings, pushing them to become average).<p>And yes, we can couple AI with things like a Go engine, SAT solvers, theorem provers, etc. to give it abilities beyond what humans can do in those categories, but who builds that? Humans... As long as an AI can't build an AI for a category it knows nothing about and has had no training for, that AI remains "as unintelligent as a brick". All it can do is reproduce what its creator taught it.<p>That isn't necessarily a bad thing at all. This could still be extremely useful for society and put a new evolutionary pressure on the human race to rise "above" average, something that has been utterly lacking in the past century. With general yet stupid AI becoming a reality soon, >90% of humanity is rendered obsolete.
This will create significant pressure to improve on an unforeseen scale, which is probably a good thing overall.<p>Truly intelligent AI, on the other hand, might well lead to our immediate extinction, since it would render the entirety of the human race irrelevant.