Ask HN: How does ChatGPT work?

267 points · by funerr · over 2 years ago
I'd love a recap of the tech for someone who remembers how ANNs work but not transformers (ELI5?). Why is ChatGPT so much better, too? And how big a network of weights are we talking about, that it retains such diverse knowledge of things?

26 comments

akelly · over 2 years ago
The way they went from GPT-3 to ChatGPT is really quite genius. My understanding is that it's something like this:

1. Start with GPT-3, which predicts the next word in some text and is trained on all the text on the internet.

2. Take thousands of prompts, generate several responses for each of them, and have human reviewers rank the responses for each prompt from best to worst.

3. The GPT model needs a massive amount of training data; it would be cost-prohibitive to get enough human feedback to fine-tune GPT manually. So you train another model, called the reward model, to predict how the humans would rate each response. Then you train the GPT model against the reward model millions of times.

4. Feed a small percentage of the output from that training process back to the human reviewers to continue training the reward model, based on heuristics like reward-model uncertainty, which predict how helpful the human feedback will be towards improving the reward model.

5. Release ChatGPT to the public, and use user feedback like response upvotes/downvotes to further optimize the reward model, while continuing to train ChatGPT against the reward model.

https://openai.com/blog/chatgpt/

https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/
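(A minimal sketch of step 3, the reward model, assuming the pairwise ranking loss described in OpenAI's InstructGPT work. The tiny MLP here is an illustrative stand-in for the transformer backbone, not the actual architecture.)

```python
# Sketch: train a reward model to score responses, using pairs where a
# human preferred one response over another for the same prompt.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Stand-in encoder: a real reward model reuses the LM's transformer.
        self.encoder = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU())
        self.score_head = nn.Linear(64, 1)  # scalar "how good is this response"

    def forward(self, response_embedding):
        return self.score_head(self.encoder(response_embedding)).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake embeddings for a (preferred, rejected) response pair per prompt.
preferred = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Pairwise loss: push the preferred response's score above the rejected one's.
loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
loss.backward()
opt.step()
```

Once trained, this scalar score is what the policy (the GPT model) is optimized against, which is far cheaper than asking humans to rate every sample.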
chopete3 · over 2 years ago
This blog post explains some of the key innovations they added on top of GPT-3, especially natural-language instruction following:

https://openai.com/blog/instruction-following/

In the first few paragraphs they show that GPT-3 was just as limited as all the other language models that came before it, and explain why they built InstructGPT.

Here is the summary (as summarized by ChatGPT): "They present their approach to the problem, which involves using a recurrent neural network to encode both the instruction and the environment, and then using a reinforcement learning algorithm to learn how to execute the instruction. They demonstrate that their method can learn to solve a variety of instruction following tasks."

Some snippets from the blog:

> InstructGPT is then further fine-tuned on a dataset labeled by human labelers. The labelers comprise a team of about 40 contractors whom we hired through Upwork and ScaleAI.

> We hired about 40 contractors, guided by their performance on a screening test meant to judge how well they could identify and respond to sensitive prompts, and their agreement rate with researchers on a labeling task with detailed instructions. We kept our team of contractors small because it's easier to have high-bandwidth communication with a smaller set of contractors who are doing the task full-time.
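(A hedged sketch of what "further fine-tuned on a dataset labeled by human labelers" might look like mechanically. It assumes standard supervised fine-tuning with next-token cross-entropy, masked so only the demonstration tokens contribute; OpenAI has not published their exact pipeline, and the model and data below are stand-ins.)

```python
# Hypothetical sketch: supervised instruction fine-tuning on labeler
# demonstrations. The loss is ordinary next-token cross-entropy, but
# masked so the model is only graded on the response, not the prompt.
import torch
import torch.nn.functional as F

vocab_size = 1000
model = torch.nn.Sequential(  # stand-in for a pretrained transformer LM
    torch.nn.Embedding(vocab_size, 128),
    torch.nn.Linear(128, vocab_size),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)

# One training example: prompt tokens followed by demonstration tokens.
prompt = torch.randint(0, vocab_size, (10,))
demo = torch.randint(0, vocab_size, (20,))
tokens = torch.cat([prompt, demo])

logits = model(tokens[:-1])          # predict token t+1 from tokens up to t
targets = tokens[1:].clone()
targets[: len(prompt) - 1] = -100    # ignore loss on the prompt itself

loss = F.cross_entropy(logits, targets, ignore_index=-100)
loss.backward()
opt.step()
```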
bryan0 · over 2 years ago
I found this description of the GPT-3 transformer architecture useful: https://dugas.ch/artificial_curiosity/GPT_architecture.html

Not ELI5, but close enough.
typon · over 2 years ago
There were several key insights that made something like ChatGPT possible relative to traditional neural networks.

* A fixed (but large) vocabulary of sub-word-like tokens as inputs.

* An attention mechanism for learning the correlation of words in a fixed sequence window.

* Implementing this attention mechanism in the form of matrix multiplies rather than some other complex math, which allows it to be parallelized and run fast on GPUs.

* Having enough layers of these encoders to provide a huge parameter space: ~175B parameters in the case of ChatGPT.

* Feeding the model a lot of data: in this case, pretty much the entire internet as text.

* Self-supervised learning: take sentences from the internet, mask out some words, and force the network to predict the missing word. It turns out this works extremely well. We don't use the traditional supervised-learning inputs -> (prediction, label) paradigm that was the standard 10 years ago.

* RLHF (reinforcement learning from human feedback): take generated text from GPT-3 (the underlying generative model), ask humans to rate different completions, and retrain the model from those ratings.

* A massive compute infrastructure that is able to train this model in a reasonable amount of time, allowing for iteration on hyperparameters. For example: what's the optimal attention head size? How many encoder layers are good? What should the sequence length be? What should the embedding dimension be? In OpenAI's case, they used thousands of GPUs and thousands of CPUs provided by Microsoft/Azure.

In summary: a relatively simple model, parallelizable on GPUs, trained on a lot of data.
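(To make the attention bullet concrete, here is a minimal sketch of scaled dot-product attention as plain matrix multiplies in NumPy. The random Q, K, V stand in for learned projections of the token embeddings; real models add multiple heads, causal masking, and learned weights.)

```python
# Scaled dot-product attention: everything is a matrix multiply plus a
# softmax, which is exactly what makes it GPU-friendly.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V               # weighted mix of the value vectors

seq_len, d_k = 5, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8): one output vector per token
```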
subroutine · over 2 years ago
How does it know when to stop when asked for a description or summary? Sometimes it outputs a few sentences, sometimes a few paragraphs.

Does it know how much output it has already provided when deciding on the next token? How does it decide to start a new sentence or paragraph, or whether it's 'satisfied' with its current response?
swayson · over 2 years ago
Yannic Kilcher did an explainer recently on his YT channel: https://www.youtube.com/watch?v=0A8ljAkdFtg

Yannic explains these models pretty well.
ribit · over 2 years ago
I understand the basic idea of predicting the words in a sequence, but what totally eludes me is how this relates to the prompt. After all, you don't give it a sequence to continue, you give it a direct request. Is there some special processing going on here, or do they really just take the prompt as-is and encode it?
discordance · over 2 years ago
Has anyone found an architecture diagram that includes the MLOps parts? I'm very interested in how the train/retrain loops work at a system level, but haven't found much info on that.
chronolitus · over 2 years ago
Back when GPT-3 came out, I wanted to understand how it works, so I read the papers and made this post:

https://dugas.ch/artificial_curiosity/GPT_architecture.html

I hoped it would be simple enough for anyone who knows a bit of math/algebra to understand. But note that it doesn't go into the difference between GPT-3 and ChatGPT (which adds an RL training objective, among other things).
k__ · over 2 years ago
Half off-topic: people are always talking about ChatGPT being AI, but is this actually the case?

It frequently told me that it doesn't learn from my input, and I had the impression that the unique selling point of AI was being able to modify its own code in response to input.
adjusted · over 2 years ago
It's still a transformer underneath, but OpenAI researchers have figured out how to improve it through engineering effort and improved training data. I believe it's not easy for outsiders without large-model pretraining experience, which is most of us, to understand the tuning details.
osigurdson · over 2 years ago
ChatGPT is great. I use it a lot. But it is still necessary to use Google for many things. ChatGPT is somewhat out of date, and the strangest thing is that it will almost always provide an answer (right or wrong). So, for the most part, everything has to be manually checked.
timonoko · over 2 years ago
How does the non-English-languages part work?

I thought maybe they use Google Translate, but then I remembered that Russians have trained it not to understand "russophobic" sentences.

-- Mitä tarkoittaa ryssänvastainen ("What does ryssänvastainen mean?"), explain in English.

-- Ryssänvastainen means "anti-Russian" or "anti-Russian sentiment." It refers to an attitude or behavior that is hostile or opposed to Russia or Russian interests.
oars · over 2 years ago
High-quality answers like the ones in this thread are why I come to HN.

Although I hope these high-quality answers don't all just come from ChatGPT one day.
Trampoflix · over 2 years ago
A pretty interesting talk about the foundation model used in ChatGPT:

https://m.youtube.com/watch?v=D3sfOQzRDGM
wizofaus · over 2 years ago
Rather strangely, it would seem. I just had this response: "However, I am a language model and do not have the ability to edit or revise my responses once they have been generated."

Except I've had no problem getting it to do just that previously...

I'm curious about its training data too, as I've managed to find a few things it knows nothing about (despite them having Wikipedia pages and multiple dedicated websites, and having been around for 10+ years).
jb1991 · over 2 years ago
ChatGPT is trained using a combination of supervised and unsupervised learning. For supervised learning, it is trained on a large dataset of human-generated text, such as dialogue data or online conversations. This allows it to learn the structure and style of natural language. For unsupervised learning, it is trained using a language modeling objective, which involves predicting the next word in a sequence of text. This allows it to learn the broader patterns and characteristics of language, and to generate text that is fluent and coherent.

ChatGPT and GPT-3 are both large language models trained by OpenAI, but they have some important differences. GPT-3 is a more general-purpose language model, which means it is trained on a broader range of data and can generate a wider range of responses. It is also much larger than ChatGPT, with 175 billion parameters compared to ChatGPT's 2.6 billion parameters. This makes GPT-3 more powerful and capable of generating more realistic and diverse text, but also makes it more expensive and resource-intensive to use.

In case you are curious, the above information was written entirely by ChatGPT when asking it about itself.
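(The "predicting the next word" objective mentioned above is standard autoregressive language modeling. Written out, the model maximizes the log-likelihood of each token given the tokens before it:)

```latex
\mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta\left(w_t \mid w_1, \ldots, w_{t-1}\right)
```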
greenSunglass · over 2 years ago
How much RAM/GPU/CPU is required to run the ChatGPT / GPT-3 model (aka text-davinci-003)?
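(OpenAI hasn't published the hardware requirements, but a back-of-envelope estimate from the 175B-parameter figure cited elsewhere in this thread is straightforward. Note this counts only the weights; activations and attention caches add more on top:)

```python
# Rough memory needed just to hold 175B parameters.
params = 175e9
print(f"fp32 weights: {params * 4 / 1e9:.0f} GB")  # ~700 GB
print(f"fp16 weights: {params * 2 / 1e9:.0f} GB")  # ~350 GB
# Either way, far more than a single GPU: serving a model this size
# means sharding it across a multi-GPU cluster.
```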
Doorstep2077 · over 2 years ago
It's definitely a step up from GPT-3, but I'm curious how much further it has to go before it's actually scary. Right now, I feel like there's still quite a bit of progress to be made.
Trampoflix · over 2 years ago
Here is a pretty interesting talk about the foundation model used in ChatGPT:

https://m.youtube.com/watch?v=r8ajJKDiT6s
touringa · over 2 years ago
https://lifearchitect.ai/chatgpt/
DemocracyFTW2 · over 2 years ago
What baffles me, and makes it hard to wrap my brain around the statement that ChatGPT is "just predictive text analysis," is that it certainly feels like it goes into detail, picks up and repeats the particulars, and presents a line of reasoning that sounds, well, reasonable, until you realize it got a number of things terribly wrong, often in deceptively subtle ways:

Me: Which number is bigger, 873593742593454623 or 873593742593554623?

ChatGPT: The second number, 873593742593554623, is larger than the first number, 873593742593454623. This is because the second number has a larger digit in the ones place (5 vs. 4), so it is greater overall. To compare two numbers, you can start by looking at the digits in the ones place and then move to the tens place, the hundreds place, and so on until you find a difference. The number with the larger digit in the place you are comparing is the larger number overall.

So here it correctly repeats my numbers to the last digit, and does find the second one to be bigger because the two numbers are of equal length and differ only in a single digit, which is indeed 5 vs. 4. So far, it looks like ChatGPT really did an analysis rather than regurgitating old known phrases. However, that difference is not in the ones place, and the algorithm it gives is patently wrong: it doesn't mention that longer numbers are bigger than shorter numbers, and that numbers of equal length should be compared starting at the highest place, not the lowest. According to ChatGPT, 53 should be bigger than 123, so let's ask it:

Me: Apply your reasoning for finding the bigger of two numbers to the compare 123 and 53. (Grammar mistake in the original)

ChatGPT: To compare the numbers 123 and 53, we can start by looking at the digits in the ones place. In this case, both numbers have the same digit in the ones place (3 and 3), so we need to move on to the tens place. In the tens place, the number 123 has a larger digit (2 vs. 0), so 123 is the larger number overall.

To compare two numbers, you can start by looking at the digits in the ones place and then move to the tens place, the hundreds place, and so on until you find a difference. The number with the larger digit in the place you are comparing is the larger number overall. This method can be applied to any two numbers, regardless of how many digits they have.

That's somewhere between cringeworthy and deluded. It somehow "knows" that 123 is in fact bigger than 53, but also that that fact is incompatible with the procedure given. It still sticks to the procedure, though, and instead of adapting the theory to the facts, it hallucinates convenient but wrong facts, claiming that 53 has a 0 in the tens place as compared to the 2 in 123, so that according to the theory, 123 > 53.

Yeah, I guess we should totally replace people with ChatGPT wherever possible.
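(For contrast, the correct procedure the comment alludes to, longer number wins, otherwise compare from the highest place, fits in a few lines. This assumes non-negative integers written without leading zeros:)

```python
def bigger(a: str, b: str) -> str:
    # More digits means a bigger number (no leading zeros assumed).
    if len(a) != len(b):
        return a if len(a) > len(b) else b
    # Equal length: compare from the highest place value (leftmost digit).
    for da, db in zip(a, b):
        if da != db:
            return a if da > db else b
    return a  # the numbers are equal

assert bigger("123", "53") == "123"
assert bigger("873593742593454623", "873593742593554623") == "873593742593554623"
```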
birdyrooster · over 2 years ago
Why didn’t you ask ChatGPT?
jupp0r · over 2 years ago
I asked ChatGPT, and here is its answer:

Sure, I'd be happy to help. An artificial neural network (ANN) is a type of machine learning algorithm that is designed to mimic the way the human brain processes information. At its core, an ANN is composed of a large number of interconnected processing nodes, called neurons, which are organized into multiple layers. These layers are interconnected in a way that allows them to pass information back and forth, with each layer building on the information provided by the previous layer to make a decision or generate a response.

Transformers are a type of ANN that was introduced in 2017 by a team of researchers at Google. They are called "transformers" because they use a technique called "self-attention" to transform the input data before passing it through the network. This allows them to better capture the underlying relationships in the data, which makes them particularly well-suited for tasks like natural language processing and machine translation.

ChatGPT is a type of transformer that has been specifically designed for use in chatbots and other conversational AI systems. It is an extension of the GPT-3 model, which was developed by the same team of researchers at OpenAI. GPT-3 is a large language model that has been trained on a massive amount of text data and is able to generate human-like responses to a wide range of questions. By building on the capabilities of GPT-3, ChatGPT is able to provide even more sophisticated and natural-sounding responses in a conversational setting.

As for the size of the network, GPT-3 is a very large model, with 175 billion parameters. This means that it has a massive number of interconnected neurons, which allows it to capture a wide range of knowledge and generate responses to a wide variety of questions. Of course, having a large network also requires a lot of computational power, which is why GPT-3 and other large language models are typically run on powerful servers or specialized hardware.
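(The "layers of interconnected neurons, each building on the previous layer" picture reduces to repeated matrix multiplies plus a nonlinearity. A minimal NumPy sketch with arbitrary sizes:)

```python
# Two stacked fully-connected layers: each transforms the previous
# layer's output, which is the "building on" described above.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0, x @ w + b)  # ReLU activation

x = rng.normal(size=(1, 16))                     # input features
w1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 8)), np.zeros(8)

hidden = layer(x, w1, b1)        # first layer builds on the raw input
output = layer(hidden, w2, b2)   # second layer builds on the first
print(output.shape)              # (1, 8)
```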
xenospn · over 2 years ago
ChatGPT is a variant of the popular GPT-3 language model, specifically designed for chatbot applications. It uses a combination of deep learning and natural language processing techniques to generate human-like responses to text input in a conversation.

The way it works is by first pre-training the model on a large corpus of text data, which could include things like social media conversations, movie scripts, books, etc. This allows the model to learn the general structure and patterns of language.

Then, when given an input in the form of a question or statement, the model uses its pre-trained knowledge to generate a response. It does this by predicting the next word in the sentence, and then continuing to predict subsequent words until it reaches the end of the response.

Overall, the goal of ChatGPT is to enable chatbots to have more natural, human-like conversations with users.

(I asked ChatGPT to tell me how it works.)
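(A sketch of the decode loop this describes. `next_token_distribution` is a hypothetical stand-in for the model; the key point is that generation happens one token at a time and stops when the model itself emits an end-of-sequence token or hits a length cap, which also speaks to the "how does it know when to stop" question above.)

```python
# Autoregressive sampling loop: append one sampled token at a time.
import random

EOS = "<|endoftext|>"  # assumed end-of-sequence token

def generate(prompt_tokens, next_token_distribution, max_tokens=200):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        probs = next_token_distribution(tokens)  # dict: token -> probability
        choices, weights = zip(*probs.items())
        token = random.choices(choices, weights=weights)[0]
        if token == EOS:       # the model itself decides it is done
            break
        tokens.append(token)   # otherwise, extend the context and repeat
    return tokens
```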
dtagames · over 2 years ago
ChatGPT is really very simple. Imagine you could analyze a million books and identify all the words within them -- not the meanings of the words, just the actual letters they contain.

Now, when someone asks you about the history of France (or why the sky is blue), you could simply pluck out of your library the most common strings of words that seem to follow the words that were in your question!

It's like a kid in the '80s who thinks the answer to an essay question is to copy it from an encyclopedia, only the "encyclopedia" is very large and contains multiple sources.

So, the big takeaway needs to be that there is absolutely no understanding, no cognizance of any kind, no language comprehension going on. The answers look good because they contain all the same words as the most popular answers people have already written, which the system scanned.

So ChatGPT turns out to be great for parsing and summarizing documents, if that's something you need. But since it doesn't know fact from fiction, cannot apply logic or math, and cannot perform reasoning or analysis, it's not good for finding out facts or discerning truth.

Another great failing of LLM software is that the user being spoken to is generic. The answers are not modeled for you; they're the same models for everyone. But a human teacher does their job by being exactly the opposite of this -- someone who is finely tuned to the needs and understandings of their audience. A good journalist or writer does the same.
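(A toy version of the "most common continuation" analogy: a bigram table that predicts whichever word most often followed the previous word in the training text. Real LLMs learn vastly richer statistics than this, but the "continue the text" framing is the same.)

```python
# Bigram next-word predictor: count which word follows which, then
# always emit the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the sky is blue because the sky scatters blue light".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "sky"
print(predict("sky"))  # -> "is" (ties broken by first-seen order)
```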