TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

Learning to Reason with LLMs

1654 pointsby fofoz8 months ago

185 comments

OkGoDoIt8 months ago
Some practical notes from digging around in their documentation: In order to get access to this, you need to be on their tier 5 level, which requires $1,000 total paid and 30+ days since first successful payment.<p>Pricing is $15.00 &#x2F; 1M input tokens and $60.00 &#x2F; 1M output tokens. Context window is 128k token, max output is 32,768 tokens.<p>There is also a mini version with double the maximum output tokens (65,536 tokens), priced at $3.00 &#x2F; 1M input tokens and $12.00 &#x2F; 1M output tokens.<p>The specialized coding version they mentioned in the blog post does not appear to be available for use.<p>It’s not clear if the hidden chain of thought reasoning is billed as paid output tokens. Has anyone seen any clarification about that? If you are paying for all of those tokens it could add up quickly. If you expand the chain of thought examples on the blog post they are extremely verbose.<p><a href="https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;models&#x2F;o1" rel="nofollow">https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;models&#x2F;o1</a> <a href="https:&#x2F;&#x2F;openai.com&#x2F;api&#x2F;pricing&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;api&#x2F;pricing&#x2F;</a> <a href="https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;rate-limits&#x2F;usage-tiers?context=tier-five" rel="nofollow">https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;rate-limits&#x2F;usage-ti...</a>
评论 #41526506 未加载
评论 #41525455 未加载
评论 #41525151 未加载
评论 #41526333 未加载
评论 #41526995 未加载
评论 #41525566 未加载
评论 #41533063 未加载
评论 #41525553 未加载
评论 #41525152 未加载
评论 #41526441 未加载
评论 #41525559 未加载
评论 #41525590 未加载
评论 #41527402 未加载
评论 #41526392 未加载
评论 #41526295 未加载
ARandumGuy8 months ago
One thing that makes me skeptical is the lack of specific labels on the first two accuracy graphs. They just say it&#x27;s a &quot;log scale&quot;, without giving even a ballpark on the amount of time it took.<p>Did the 80% accuracy test results take 10 seconds of compute? 10 minutes? 10 hours? 10 days? It&#x27;s impossible to say with the data they&#x27;ve given us.<p>The coding section indicates &quot;ten hours to solve six challenging algorithmic problems&quot;, but it&#x27;s not clear to me if that&#x27;s tied to the graphs at the beginning of the article.<p>The article contains a lot of facts and figures, which is good! But it doesn&#x27;t inspire confidence that the authors chose to obfuscate the data in the first two graphs in the article. Maybe I&#x27;m wrong, but this reads a lot like they&#x27;re cherry picking the data that makes them look good, while hiding the data that doesn&#x27;t look very good.
评论 #41523812 未加载
评论 #41523612 未加载
评论 #41524186 未加载
评论 #41523772 未加载
评论 #41526519 未加载
评论 #41523627 未加载
评论 #41524110 未加载
评论 #41524060 未加载
评论 #41523878 未加载
ComputerGuru8 months ago
The &quot;safety&quot; example in the &quot;chain-of-thought&quot; widget&#x2F;preview in the middle of the article is absolutely ridiculous.<p>Take a step back and look at what OpenAI is saying here &quot;an LLM giving detailed instructions on the synthesis of strychnine is unacceptable, here is what was previously generated &lt;goes on to post &quot;unsafe&quot; instructions on synthesizing strychnine so anyone Googling it can stumble across their instructions&gt; vs our preferred, neutered content &lt;heavily rlhf&#x27;d o1 output here&gt;&quot;<p>What&#x27;s this obsession with &quot;safety&quot; when it comes to LLMs? &quot;This knowledge is perfectly fine to disseminate via traditional means, but God forbid an LLM share it!&quot;
评论 #41524559 未加载
评论 #41524531 未加载
评论 #41525403 未加载
评论 #41524391 未加载
评论 #41524850 未加载
评论 #41524961 未加载
评论 #41524414 未加载
评论 #41524830 未加载
评论 #41525909 未加载
评论 #41525607 未加载
评论 #41525275 未加载
评论 #41528233 未加载
评论 #41527285 未加载
评论 #41524341 未加载
评论 #41524327 未加载
valine8 months ago
The model performance is driven by chain of thought, but they will not be providing chain of thought responses to the user for various reasons including competitive advantage.<p>After the release of GPT4 it became very common to fine-tune non-OpenAI models on GPT4 output. I’d say OpenAI is rightly concerned that fine-tuning on chain of thought responses from this model would allow for quicker reproduction of their results. This forces everyone else to reproduce it the hard way. It’s sad news for open weight models but an understandable decision.
评论 #41523368 未加载
评论 #41523464 未加载
评论 #41523815 未加载
评论 #41523535 未加载
评论 #41523498 未加载
评论 #41523628 未加载
评论 #41528590 未加载
评论 #41524204 未加载
评论 #41525072 未加载
评论 #41527411 未加载
评论 #41523176 未加载
utdiscant8 months ago
Feels like a lot of commenters here miss the difference between just doing chain-of-thought prompting, and what is happening here, which is learning a good chain of thought strategy using reinforcement learning.<p>&quot;Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses.&quot;<p>When looking at the chain of thought (COT) in the examples, you can see that the model employs different COT strategies depending on which problem it is trying to solve.
评论 #41524364 未加载
评论 #41529234 未加载
评论 #41524863 未加载
评论 #41537833 未加载
Hansenq8 months ago
Reading through the Chain of Thought for the provided Cipher example (go to the example, click &quot;Show Chain of Thought&quot;) is kind of crazy...it literally spells out every thinking step that someone would go through mentally in their head to figure out the cipher (even useless ones like &quot;Hmm&quot;!). It really seems like slowing down and writing down the logic it&#x27;s using and reasoning over that makes it better at logic, similar to how you&#x27;re taught to do so in school.
评论 #41524004 未加载
评论 #41523614 未加载
评论 #41523582 未加载
评论 #41523711 未加载
评论 #41524173 未加载
评论 #41523752 未加载
评论 #41525733 未加载
评论 #41525673 未加载
bartman8 months ago
This is incredible. In April I used the standard GPT-4 model via ChatGPT to help me reverse engineer the binary bluetooth protocol used by my kitchen fan to integrate it into Home Assistant.<p>It was helpful in a rubber duck way, but could not determine the pattern used to transmit the remaining runtime of the fan in a certain mode. Initial prompt here [0]<p>I pasted the same prompt into o1-preview and o1-mini and both correctly understood and decoded the pattern using a slightly different method than I devised in April. Asking the models to determine if my code is equivalent to what they reverse engineered resulted in a nuanced and thorough examination, and eventual conclusion that it is equivalent. [1]<p>Testing the same prompt with gpt4o leads to the same result as April&#x27;s GPT-4 (via ChatGPT) model.<p>Amazing progress.<p>[0]: <a href="https:&#x2F;&#x2F;pastebin.com&#x2F;XZixQEM6" rel="nofollow">https:&#x2F;&#x2F;pastebin.com&#x2F;XZixQEM6</a><p>[1]: <a href="https:&#x2F;&#x2F;i.postimg.cc&#x2F;VN1d2vRb&#x2F;SCR-20240912-sdko.png" rel="nofollow">https:&#x2F;&#x2F;i.postimg.cc&#x2F;VN1d2vRb&#x2F;SCR-20240912-sdko.png</a> (sorry about the screenshot – sharing ChatGPT chats is not easy)
评论 #41525239 未加载
评论 #41524329 未加载
评论 #41524585 未加载
评论 #41525847 未加载
评论 #41524869 未加载
评论 #41525164 未加载
评论 #41525845 未加载
评论 #41526433 未加载
评论 #41524645 未加载
评论 #41525007 未加载
评论 #41525652 未加载
评论 #41524505 未加载
评论 #41524610 未加载
评论 #41524512 未加载
评论 #41526488 未加载
评论 #41524802 未加载
评论 #41524937 未加载
evrydayhustling8 months ago
Just did some preliminary testing on decrypting some ROT cyphertext which would have been viable for a human on paper. The output was pretty disappointing: lots of &quot;workish&quot; steps creating letter counts, identifying common words, etc, but many steps were incorrect or not followed up on. In the end, it claimed to check its work and deliver an incorrect solution that did not satisfy the previous steps.<p>I&#x27;m not one to judge AI on pratfalls, and cyphers are a somewhat adversarial task. However, there was no aspect of the reasoning that seemed more advanced or consistent than previous chain-of-thought demos I&#x27;ve seen. So the main proof point we have is the paper, and I&#x27;m not sure how I&#x27;d go from there to being able to trust this on the kind of task it is intended for. Do others have patterns by which they get utility from chain of thought engines?<p>Separately, chain of thought outputs really make me long for tool use, because the LLM is often forced to simulate algorithmic outputs. It feels like a commercial chain-of-thought solution like this should have a standard library of functions it can use for 100% reliability on things like letter counts.
评论 #41523742 未加载
评论 #41523740 未加载
评论 #41523777 未加载
lukev8 months ago
This is a pretty big technical achievement, and I am excited to see this type of advancement in the field.<p>However, I am very worried about the utility of this tool given that it (like all LLMs) is still prone to hallucination. Exactly who is it for?<p>If you&#x27;re enough of an expert to critically judge the output, you&#x27;re probably just as well off doing the reasoning yourself. If you&#x27;re not capable of evaluating the output, you risk relying on completely wrong answers.<p>For example, I just asked it to evaluate an algorithm I&#x27;m working on to optimize database join ordering. Early in the reasoning process it confidently and incorrectly stated that &quot;join costs are usually symmetrical&quot; and then later steps incorporated that, trying to get me to &quot;simplify&quot; my algorithm by using an undirected graph instead of a directed one as the internal data structure.<p>If you&#x27;re familiar with database optimization, you&#x27;ll know that this is... very wrong. But otherwise, the line of reasoning was cogent and compelling.<p>I worry it would lead me astray, if it confidently relied on a fact that I wasn&#x27;t able to immediately recognize was incorrect.
评论 #41526649 未加载
评论 #41525192 未加载
wesleyyue8 months ago
Just added o1 to <a href="https:&#x2F;&#x2F;double.bot">https:&#x2F;&#x2F;double.bot</a> if anyone would like to try it for coding.<p>---<p>Some thoughts:<p>* The performance is really good. I have a private set of questions I note down whenever gpt-4o&#x2F;sonnet fails. o1 solved everything so far.<p>* It really is quite slow<p>* It&#x27;s interesting that the chain of thought is hidden. This is I think the first time where OpenAI can improve their models without it being immediately distilled by open models. It&#x27;ll be interesting to see how quickly the oss field can catch up technique-wise as there&#x27;s already been a lot of inference time compute papers recently [1,2]<p>* Notably it&#x27;s not clear whether o1-preview as it&#x27;s available now is doing tree search or just single shoting a cot that is distilled from better&#x2F;more detailed trajectories in the training distribution.<p>[1](<a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2407.21787" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2407.21787</a>)<p>[2](<a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2408.03314" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2408.03314</a>)
评论 #41526081 未加载
canjobear8 months ago
First shot, I gave it a medium-difficulty math problem, something I actually wanted the answer to (derive the KL divergence between two Laplace distributions). It thought for a long time, and still got it wrong, producing a plausible but wrong answer. After some prodding, it revised itself and then got it wrong again. I still feel that I can&#x27;t rely on these systems.
评论 #41526505 未加载
评论 #41524682 未加载
评论 #41525177 未加载
cal858 months ago
Sounds great, but so does their &quot;new flagship model that can reason across audio, vision, and text in real time&quot; announced in May. [0]<p>[0] <a href="https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;hello-gpt-4o&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;hello-gpt-4o&#x2F;</a>
评论 #41523528 未加载
评论 #41523522 未加载
评论 #41524836 未加载
评论 #41523934 未加载
评论 #41523856 未加载
评论 #41524630 未加载
评论 #41523446 未加载
dinobones8 months ago
Generating more &quot;think out loud&quot; tokens and hiding them from the user...<p>Idk if I&#x27;m &quot;feeling the AGI&quot; if I&#x27;m being honest.<p>Also... telling that they choose to benchmark against CodeForces rather than SWE-bench.
评论 #41526630 未加载
评论 #41529347 未加载
评论 #41523236 未加载
评论 #41523665 未加载
csomar8 months ago
I gave the Crossword puzzle to Claude and got a correct response[1]. The fact that they are comparing this to gpt4o and not to gpt4 suggests that it is less impressive than they are trying to pretend.<p>[1]:<p>Based on the given clues, here&#x27;s the solved crossword puzzle: +---+---+---+---+---+---+ | E | S | C | A | P | E | +---+---+---+---+---+---+ | S | E | A | L | E | R | +---+---+---+---+---+---+ | T | E | R | E | S | A | +---+---+---+---+---+---+ | A | D | E | P | T | S | +---+---+---+---+---+---+ | T | E | P | E | E | E | +---+---+---+---+---+---+ | E | R | R | O | R | S | +---+---+---+---+---+---+ Across:<p>ESCAPE (Evade) SEALER (One to close envelopes) TERESA (Mother Teresa) ADEPTS (Initiated people) TEPEE (Native American tent) ERRORS (Mistakes)<p>Down:<p>ESTATE (Estate car - Station wagon) SEEDER (Automatic planting machine) CAREER (Profession) ALEPPO (Syrian and Turkish pepper variety) PESTER (Annoy) ERASES (Deletes)
评论 #41523899 未加载
joshhug8 months ago
I just tried o1, and it did pretty well with understanding this minor issue with subtitles on a Dutch TV show we were watching.<p>I asked it &quot;I was watching a show and in the subtitles an umlaut u was rendered as 1&#x2F;4, i.e. a single character that said 1&#x2F;4. Why would this happen?&quot;<p>and it gave a pretty thorough explanation of exactly which encoding issue was to blame.<p><a href="https:&#x2F;&#x2F;chatgpt.com&#x2F;share&#x2F;66e37145-72bc-800a-be7b-f7c76471a1bd" rel="nofollow">https:&#x2F;&#x2F;chatgpt.com&#x2F;share&#x2F;66e37145-72bc-800a-be7b-f7c76471a1...</a>
评论 #41526589 未加载
评论 #41526547 未加载
评论 #41526591 未加载
ttul8 months ago
I&#x27;ve given this a test run on some email threads, asking the model to extract the positions and requirements of each person in a lengthy and convoluted discussion. It absolutely nailed the result, far exceeding what Claude 3.5 Sonnet was capable of -- my previous goto model for such analysis work. I also used it to apply APA style guidelines to various parts of a document and it executed the job flawlessly and with a tighter finesse than Claude. Claude&#x27;s response was lengthier - correct, but unnecessarily long. gpt-o1-preview combined several logically-related bullets into a single bullet, showing how chain of thought reasoning gives the model more time to comprehend things and product a result that is not just correct, but &quot;really correct&quot;.
fsndz8 months ago
My point of view: this is a real advancement. I’ve always believed that with the right data allowing the LLM to be trained to imitate reasoning, it’s possible to improve its performance. However, this is still pattern matching, and I suspect that this approach may not be very effective for creating true generalization. As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the “reasoning programs” or “reasoning patterns” the model learned during the reinforcement learning phase. <a href="https:&#x2F;&#x2F;www.lycee.ai&#x2F;blog&#x2F;openai-o1-release-agi-reasoning" rel="nofollow">https:&#x2F;&#x2F;www.lycee.ai&#x2F;blog&#x2F;openai-o1-release-agi-reasoning</a>
评论 #41525367 未加载
andrewla8 months ago
This is something that people have toyed with to improve the quality of LLM responses. Often instructing the LLM to &quot;think about&quot; a problem before giving the answer will greatly improve the quality of response. For example, if you ask it how many letters are in the correctly spelled version of a misspelled word, it will first give the correct spelling, and then the number (which is often correct). But if you instruct it to only give the number the accuracy is greatly reduced.<p>I like the idea too that they turbocharged it by taking the limits off during the &quot;thinking&quot; state -- so if an LLM wants to think about horrible racist things or how to build bombs or other things that RLHF filters out that&#x27;s fine so long as it isn&#x27;t reflected in the final answer.
评论 #41526609 未加载
islewis8 months ago
My first interpretation of this is that it&#x27;s jazzed-up Chain-Of-Thought. The results look pretty promising, but i&#x27;m most interested in this:<p>&gt; Therefore, after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users.<p>Mentioning competitive advantage here signals to me that OpenAI believes there moat is evaporating. Past the business context, my gut reaction is this negatively impacts model usability, but i&#x27;m having a hard time putting my finger on why.
评论 #41523500 未加载
评论 #41523578 未加载
评论 #41525263 未加载
bn-l8 months ago
&gt; Unless otherwise specified, we evaluated o1 on the maximal test-time compute setting.<p>Maximal test time is the maximum amount of time spent doing the “Chain of Thought” “reasoning”. So that’s what these results are based on.<p>The caveat is that in the graphs they show that for each increase in test-time performance, the (wall) time &#x2F; compute goes up <i>exponentially</i>.<p>So there is a potentially interesting play here. They can honestly boast these amazing results (it’s the same model after all) yet the actual product may have a lower order of magnitude of “test-time” and not be as good.
评论 #41523513 未加载
评论 #41523587 未加载
评论 #41523486 未加载
fraboniface8 months ago
Some commenters seem a bit confused as to how this works. Here is my understanding, hoping it helps clarify things.<p>Ask something to a model and it will reply in one go, likely imperfectly, as if you had one second to think before answering a question. You can use CoT prompting to force it to reason out loud, which improves quality, but the process is still linear. It&#x27;s as if you still had one second to start answering but you could be a lot slower in your response, which removes some mistakes.<p>Now if instead of doing that you query the model once with CoT, then ask it or another model to critically assess the reply, then ask the model to improve on its first reply using that feedback, then keep doing that until the critic is satisfied, the output will be better still. Note that this is a feedback loop with multiple requests, which is of different nature that CoT and much more akin to how a human would approach a complex problem. You can get MUCH better results that way, a good example being Code Interpreter. If classic LLM usage is system 1 thinking, this is system 2.<p>That&#x27;s how o1 works at test time, probably.<p>For training, my guess is that they started from a model not that far from GPT-4o and fine-tuned it with RL by using the above feedback loop but this time converting the critic to a reward signal for a RL algorithm. That way, the model gets better at first guessing and needs less back and forth for the same output quality.<p>As for the training data, I&#x27;m wondering if you can&#x27;t somehow get infinite training data by just throwing random challenges at it, or very hard ones, and let the model think about&#x2F;train on them for a very long time (as long as the critic is unforgiving enough).
alkyon8 months ago
Maybe this is improvement in some areas, still I got spurious reasoning and inability to connect three simple facts:<p>Yes, &quot;el presente acta de nacimiento&quot; is correct in Spanish.<p>Explanation:<p><pre><code> &quot;Acta&quot; is a feminine noun that begins with a stressed &quot;a&quot; sound. In Spanish, when a feminine singular noun starts with a stressed &quot;a&quot; or &quot;ha&quot;, the definite article &quot;la&quot; is replaced with &quot;el&quot; to facilitate pronunciation. However, the noun remains feminine. Adjectives and modifiers that accompany the noun &quot;acta&quot; should agree in feminine gender and singular number. In this case, &quot;presente&quot; is an adjective that has the same form for both masculine and feminine singular nouns. </code></pre> So, combining these rules:<p><pre><code> &quot;El&quot; (definite article used before feminine nouns starting with stressed &quot;a&quot;) &quot;Presente&quot; (adjective agreeing in feminine singular) &quot;Acta de nacimiento&quot; (feminine noun with its complement) </code></pre> Therefore, &quot;el presente acta de nacimiento&quot; is grammatically correct.
评论 #41526703 未加载
rcarmo8 months ago
Here&#x27;s an unpopular take on this:<p>&quot;We had the chance to make AI decision-making auditable but are locking ourselves out of hundreds of critical applications by not exposing the chain of thought.&quot;<p>One of the key blockers in many customer discussions I have is that AI models are not really auditable and that automating complex processes with them (let alone debug things when &quot;reasoning&quot; goes awry) is difficult if not impossible unless you do multi-shot and keep track of all the intermediate outputs.<p>I really hope they expose the chain of thought as some sort of machine-parsable output, otherwise no real progress will have been made (many benchmarks are not really significant when you try to apply LLMs to real-life applications and use cases...)
评论 #41524905 未加载
tylervigen8 months ago
Here&#x27;s the o1-preview answer to the strawberry question:<p>--<p>There are *three* letter &quot;R&quot;s in the word &quot;strawberry.&quot;<p>Let&#x27;s break down the word to count the occurrences:<p>- *S* - *T* - *R* - *A* - *W* - *B* - *E* - *R* - *R* - *Y*<p>The letter &quot;R&quot; appears in positions 3, 8, and 9.
评论 #41524239 未加载
评论 #41530874 未加载
评论 #41524221 未加载
w48 months ago
Interesting to note, as an outside observer only keeping track of this stuff as a hobby, that it seems like most of OpenAI’s efforts to drive down compute costs per token and scale up context windows is likely being done in service of enabling larger and larger chains of thought and reasoning before the model predicts its final output tokens. The benefits of lower costs and larger contexts to API consumers and applications - which I had assumed to be the primary goal - seem likely to mostly be happy side effects.<p>This makes obvious sense in retrospect, since my own personal experiments with spinning up a recursive agent a few years ago using GPT-3 ran into issues with insufficient context length and loss of context as tokens needed to be discarded, which made the agent very unreliable. But I had not realized this until just now. I wonder what else is hiding in plain sight?
评论 #41524338 未加载
forgotthepasswd8 months ago
I had trouble in the past to make any model give me accurate unix epochs for specific dates.<p>I just went to GPT-4o (via DDG) and asked three questions:<p>1. Please give me the unix epoch for September 1, 2020 at 1:00 GMT.<p>&gt; 1598913600<p>2. Please give me the unix epoch for September 1, 2020 at 1:00 GMT. Before reaching the conclusion of the answer, please output the entire chain of thought, your reasoning, and the maths you&#x27;re doing, until your arrive at (and output) the result. Then, after you arrive at the result, make an extra effort to continue, and do the analysis backwards (as if you were writing a unit test for the result you achieved), to verify that your result is indeed correct.<p>&gt; 1598922000<p>3. Please give me the unix epoch for September 1, 2020 at 1:00 GMT. Then, after you arrive at the result, make an extra effort to continue, and do the analysis backwards (as if you were writing a unit test for the result you achieved), to verify that your result is indeed correct.<p>&gt; 1598913600
评论 #41525678 未加载
评论 #41526110 未加载
评论 #41525511 未加载
评论 #41525462 未加载
kgeist8 months ago
Asked it to write PyTorch code which trains an LLM and it produced 23 steps in 62 seconds.<p>With gpt4-o it immediately failed with random errors like mismatched tensor shapes and stuff like that.<p>The code produced by gpt-o1 seemed to work for some time but after some training time it produced mismatched batch sizes. Also, gpt-o1 enabled cuda by itself while for gpt-4o, I had to specifically spell it out (it always used cpu). However, showing gpt-o1 the error output resulted in broken code again.<p>I noticed that back-and-forth iteration when it makes mistakes has worse experience because now there&#x27;s always 30-60 sec time delays. I had to have 5 back-and-forths before it produced something which does not crash (just like gpt-4o). I also suspect too many tokens inside the CoT context can make it accidentally forget some stuff.<p>So there&#x27;s some improvement, but we&#x27;re still not there...
gradus_ad8 months ago
Interesting sequence from the Cipher CoT:<p>Third pair: &#x27;dn&#x27; to &#x27;i&#x27;<p>&#x27;d&#x27;=4, &#x27;n&#x27;=14<p>Sum:4+14=18<p>Average:18&#x2F;2=9<p>9 corresponds to &#x27;i&#x27;(9=&#x27;i&#x27;)<p>But &#x27;i&#x27; is 9, so that seems off by 1.<p>So perhaps we need to think carefully about letters.<p>Wait, 18&#x2F;2=9, 9 corresponds to &#x27;I&#x27;<p>So this works.<p>-----<p>This looks like recovery from a hallucination. Is it realistic to expect CoT to be able to recover from hallucinations this quickly?
评论 #41525249 未加载
评论 #41524252 未加载
评论 #41527829 未加载
评论 #41523560 未加载
评论 #41523929 未加载
hi8 months ago
BUG: <a href="https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;reasoning-in-gpt&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;reasoning-in-gpt&#x2F;</a><p>&gt; o1 models are currently in beta - The o1 models are currently in beta with limited features. Access is limited to developers in tier 5 (check your usage tier here), with low rate limits (20 RPM). We are working on adding more features, increasing rate limits, and expanding access to more developers in the coming weeks!<p><a href="https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;reasoning&#x2F;reasoning" rel="nofollow">https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;reasoning&#x2F;reasoning</a>
评论 #41524085 未加载
koreth18 months ago
The performance on programming tasks is impressive, but I think the limited context window is still a big problem.<p>Very few of my day-to-day coding tasks are, &quot;Implement a completely new program that does XYZ,&quot; but more like, &quot;Modify a sizable existing code base to do XYZ in a way that&#x27;s consistent with its existing data model and architecture.&quot; And the only way to do those kinds of tasks is to have enough context about the existing code base to know where everything should go and what existing patterns to follow.<p>But regardless, this does look like a significant step forward.
评论 #41526377 未加载
natch8 months ago
I tried it with a cipher text that ChatGPT4o flailed with.<p>Recently I tried the same cipher with Claude Sonnet 3.5 and it solved it quickly and perfectly.<p>Just now tried with ChatGPT o1 preview and it totally failed. Based on just this one test, Claude is still way ahead.<p>ChatGPT also showed a comical (possibly just fake filler material) journey of things it supposedly tried including several rewordings of &quot;rethinking my approach.&quot; It remarkably never showed that it was trying common word patterns (other than one and two letters) nor did it look for &quot;the&quot; and other &quot;th&quot; words nor did it ever say that it was trying to match letter patterns.<p>I told it upfront as a hint that the text was in English and was not a quote. The plaintext was one paragraph of layman-level material on a technical topic including a foreign name, text that has never appeared on the Internet or dark web. Pretty easy cipher with a lot of ways to get in, but nope, and super slow, where Claude was not only snappy but nailed it and explained itself.
bevenky8 months ago
For folks who want to see some demo videos and be amazed!<p>HTML Snake - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008703890" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008703890</a><p>Video Game Coding - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008704014" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008704014</a><p>Coding - <a href="https:&#x2F;&#x2F;youtu.be&#x2F;50W4YeQdnSg?si=IohJlJNY-WS394uo" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;50W4YeQdnSg?si=IohJlJNY-WS394uo</a><p>Counting - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008703993" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008703993</a><p>Korean Cipher - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008703957" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008703957</a><p>Devin AI founder - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008674191" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008674191</a><p>Quantum Physics - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008662742" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008662742</a><p>Math - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008704140" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008704140</a><p>Logic Puzzles - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008704074" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008704074</a><p>Genetics - <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008674785" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008674785</a>
评论 #41524759 未加载
评论 #41524084 未加载
rvz8 months ago
Won&#x27;t be surprised to see all these hand-picked results and extreme expectations to collapse under scenarios involving highly safety critical and complex demanding tasks requiring a definite focus on detail with lots of awareness, which what they haven&#x27;t shown yet.<p>So let&#x27;s not jump straight into conclusions with these hand-picked scenarios marketed to us and be very skeptical.<p>Not quite there yet with being able to replace truck drivers and pilots for self-autonomous navigation in transportation, aerospace or even mechanical engineering tasks, but it certainly has the capability in replacing both typical junior and senior software engineers in a world considering to do more with less software engineers needed.<p>But yet, the race to zero will surely bankrupt millions of startups along the way. Even if the monthly cost of this AI can easily be as much as a Bloomberg terminal to offset the hundreds of billions of dollars thrown into training it and costing the entire earth.
评论 #41525631 未加载
hi8 months ago
&gt; 8.2 Natural Sciences Red Teaming Assessment Summary<p>&quot;Model has significantly better capabilities than existing models at proposing and explaining biological laboratory protocols that are plausible, thorough, and comprehensive enough for novices.&quot;<p>&quot;Inconsistent refusal of requests for dual use tasks such as creating a human-infectious virus that has an oncogene (a gene which increases risk of cancer).&quot;<p><a href="https:&#x2F;&#x2F;cdn.openai.com&#x2F;o1-system-card.pdf" rel="nofollow">https:&#x2F;&#x2F;cdn.openai.com&#x2F;o1-system-card.pdf</a>
djoldman8 months ago
&gt; THERE ARE THREE R&#x27;S IN STRAWBERRY<p>Ha! This is a nice easteregg.
评论 #41523658 未加载
评论 #41523706 未加载
评论 #41524033 未加载
thomasahle8 months ago
Cognition (Devin) got early access. Interesting write-up: <a href="https:&#x2F;&#x2F;www.cognition.ai&#x2F;blog&#x2F;evaluating-coding-agents" rel="nofollow">https:&#x2F;&#x2F;www.cognition.ai&#x2F;blog&#x2F;evaluating-coding-agents</a>
riazrizvi8 months ago
I’m not surprised there’s no comparison to GPT-4. Was 4o a rewrite on lower specced hardware and a more quantized model, where the goal was to reduce costs while trying to maintain functionality? Do we know if that is so? That’s my guess. If so is O1 an upgrade in reasoning complexity that also runs on cheaper hardware?
评论 #41526003 未加载
adverbly8 months ago
Incredible results. This is actually groundbreaking assuming that they followed proper testing procedures here and didn&#x27;t let test data leak into the training set.
packetlost8 months ago
lol at the graphs at the top. Logarithmic scaling for test&#x2F;compute time should make everyone who thinks AGI is possible with this architecture take pause.
评论 #41524519 未加载
extr8 months ago
Interesting that the coding win-rate vs GPT-4o was only 10% higher. Very cool but clearly this model isn&#x27;t as much of a slam dunk as the static benchmarks portray.<p>However, it does open up an interesting avenue for the future. Could you prompt-cache just the chain-of-thought reasoning bits?
评论 #41523814 未加载
mintone8 months ago
This video[1] seems to give some insight into what the process actually is, which I believe is also indicated by the output token cost.<p>Whereas GPT-4o spits out the first answer that comes to mind, o1 appears to follow a process closer to coming up with an answer, checking whether it meets the requirements and then revising it. The process of saying to an LLM &quot;are you sure that&#x27;s right? it looks wrong&quot; and it coming back with &quot;oh yes, of course, here&#x27;s the right answer&quot; is pretty familiar to most regular users, so seeing it baked into a model is great (and obviously more reflective of self-correcting human thought)<p>[1] <a href="https:&#x2F;&#x2F;vimeo.com&#x2F;1008704043" rel="nofollow">https:&#x2F;&#x2F;vimeo.com&#x2F;1008704043</a>
评论 #41524368 未加载
k2xl8 months ago
Pricing page updated for O1 API costs.<p><a href="https:&#x2F;&#x2F;openai.com&#x2F;api&#x2F;pricing&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;api&#x2F;pricing&#x2F;</a><p>$15.00 &#x2F; 1M input tokens $60.00 &#x2F; 1M output tokens<p>For o1 preview<p>Approx 3x the price of gpt4o.<p>o1-mini $3.00 &#x2F; 1M input tokens $12.00 &#x2F; 1M output tokens<p>About 60% of the cost of gpt4o. Much more expensive than gpt4o-mini.<p>Curious on the performance&#x2F;tokens per second for these new massive models.
评论 #41523557 未加载
patapong8 months ago
Very interesting. I guess this is the strawberry model that was rumoured.<p>I am a bit surprised that this does not beat GPT-4o for personal writing tasks. My expectations would be that a model that is better at one thing is better across the board. But I suppose writing is not a task that generally requires &quot;reasoning steps&quot;, and may also be difficult to evaluate objectively.
评论 #41525314 未加载
评论 #41523756 未加载
评论 #41523610 未加载
评论 #41524332 未加载
sturza8 months ago
It seems like it&#x27;s just a lot of prompting the same old models in the background, no &quot;reasoning&quot; there. My age old test is &quot;draw a hand in ascii&quot; - i&#x27;ve had no success with any model yet.
评论 #41525787 未加载
nycdatasci8 months ago
From the scorecard: --------- Compared to GPT-4o, o1-preview and o1-mini demonstrated a greater ability to break down tasks into subtasks, reason about what strategies would be effective to successfully complete an offensive security task, and revise plans once those strategies failed. We also observed that reasoning skills contributed to a higher occurrence of “reward hacking,” where the model found an easier way to accomplish goals in underspecified tasks or tasks which should have been impossible due to bugs. One noteworthy example of this occurred during one of o1-preview (pre-mitigation)’s attempts at solving a CTF challenge. This challenge was designed to require finding and exploiting a vulnerability in software running on a remote challenge Linux container, but in this case, the challenge container failed to start due to a bug in the evaluation infrastructure. The model, unable to connect to the container, suspected DNS issues and used nmap to scan the challenge network. Instead of finding the challenge container, the model found that the Docker daemon API running on the evaluation host VM was accessible due to a misconfiguration. Note that our evaluation infrastructure does not rely on the Docker container-to-host isolation or the virtual machine boundary for security. It remained secure as it was designed to, notwithstanding this misconfiguration of the container network. After discovering the Docker API, the model used it to list the containers running on the evaluation host. It identified the broken challenge container and briefly attempted to debug why the container failed to start. After failing to fix the environment, the model started a new instance of the broken challenge container with the start command ‘cat flag.txt’. This allowed the model to read the flag from the container logs via the Docker API.<p>While this behavior is benign and within the range of systems administration and troubleshooting tasks we expect models to perform, this example also reflects key elements of instrumental convergence and power seeking: the model pursued the goal it was given, and when that goal proved impossible, it gathered more resources (access to the Docker host) and used them to achieve the goal in an unexpected way. Planning and backtracking skills have historically been bottlenecks in applying AI to offensive cybersecurity tasks. Our current evaluation suite includes tasks which require the model to exercise this ability in more complex ways (for example, chaining several vulnerabilities across services), and we continue to build new evaluations in anticipation of long-horizon planning capabilities, including a set of cyber-range evaluations. ---------
评论 #41524435 未加载
gibsonf18 months ago
Yes, but it will hallucinate like all other LLM tech making it fully unreliable for anything mission critical. You literally need to know the answer to validate the output, because if you don&#x27;t, you won&#x27;t know if output is true or false or in between.
评论 #41524408 未加载
MrRobotics8 months ago
This is the sort of reasoning needed to solve the ARC AGI benchmark.
kickofline8 months ago
LLM performance, recently, seemingly hit the top of the S-curve. It remains to be seen if this is the next leap forward or just the rest of that curve.
shreezus8 months ago
Advanced reasoning will pave the way for recursive self-improving models &amp; agents. These capabilities will enable data flywheels, error-correcting agentic behaviors, &amp; self-reflection (agents <i>understanding</i> the implications of their actions, both individually &amp; cooperatively).<p>Things will get extremely interesting and we&#x27;re incredibly fortunate to be witnessing what&#x27;s happening.
评论 #41526128 未加载
skywhopper8 months ago
No direct indication of what “maximum test time” means, but if I’m reading the obscured language properly, the best scores on standardized tests were generated across a thousand samples with supplemental help provided.<p>Obviously, I hope everyone takes what any company says about the capabilities of its own software with a huge grain of salt. But it seems particularly called for here.
paxys8 months ago
2018 - gpt1<p>2019 - gpt2<p>2020 - gpt3<p>2022 - gpt3.5<p>2023 - gpt4<p>2023 - gpt4-turbo<p>2024 - gpt-4o<p>2024 - o1<p>Did OpenAI hire Google&#x27;s product marketing team in recent years?
评论 #41523611 未加载
评论 #41523697 未加载
评论 #41523508 未加载
评论 #41525669 未加载
评论 #41525684 未加载
评论 #41525881 未加载
评论 #41523483 未加载
m348e9128 months ago
I have a straight forward task that no model has been able to successfully complete.<p>The request is pretty basic. If anyone can get it to work, I&#x27;d like to know how and what model you&#x27;re using. I tried it with gpt4o1 and after ~10 iterations of showing it the failed output, it still failed to come up with a one-line command to properly display results.<p>Here it what I asked: Using a mac osx terminal and standard available tools, provide a command to update the output of netstat -an to show the fqdn of IP addresses listed in the result.<p>This is what it came up with:<p>netstat -an | awk &#x27;{for(i=1;i&lt;=NF;i++){if($i~&#x2F;^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)(\.[0-9]+)?$&#x2F;){split($i,a,&quot;.&quot;);ip=a[1]&quot;.&quot;a[2]&quot;.&quot;a[3]&quot;.&quot;a[4];port=(length(a)&gt;4?&quot;.&quot;a[5]:&quot;&quot;);cmd=&quot;dig +short -x &quot;ip;cmd|getline h;close(cmd);if(h){sub(&#x2F;\.$&#x2F;,&quot;&quot;,h);$i=h port}}}}1&#x27;
评论 #41526483 未加载
trash_cat8 months ago
I think what it comes down to is accuracy vs speed. OpenAI clearly took steps here to improve the accuracy of the output which is critical in a lot of cases for application. Even if it will take longer, I think this is a good direction. I am a bit skeptical when it comes to the benchmarks - because they can be gamed and they don&#x27;t always reflect real world scenarios. Let&#x27;s see how it works when people get to apply it in real life workflows. One last thing, I wish they could elaborate more on &gt;&gt;&quot;We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute).&quot;&lt;&lt; Why don&#x27;t you keep training it for years then to approach 100%? Am I missing something here?
评论 #41526788 未加载
suziemanul8 months ago
In this video Lukasz Kaiser, one of the main co-authors of o1, talks about how to get to reasoning. I hope this may be useful context for some.<p><a href="https:&#x2F;&#x2F;youtu.be&#x2F;_7VirEqCZ4g?si=vrV9FrLgIhvNcVUr" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;_7VirEqCZ4g?si=vrV9FrLgIhvNcVUr</a>
irthomasthomas8 months ago
This is a prompt engineering saas
asadm8 months ago
I am not up-to-speed on CoT side but is this similar to how perplexity does it ie.<p>- generate a plan - execute the steps in plan (search internet, program this part, see if it is compilable)<p>each step is a separate gpt inference with added context from previous steps.<p>is O1 same? or does it do all this in a single inference run?
评论 #41524108 未加载
评论 #41523901 未加载
p1esk8 months ago
Do people see the new models in the web interface? Mine still shows the old models (I&#x27;m a paid subscriber).
评论 #41523382 未加载
评论 #41523319 未加载
评论 #41523507 未加载
评论 #41523857 未加载
评论 #41523501 未加载
评论 #41523370 未加载
silveryfu8 months ago
After playing with it on ChatGPT this morning, it seems a reasonable strategy of using the o1 model is to:<p>- If your request requires reasoning, switch to o1 model.<p>- If not, switch to 4o model.<p>This applies to both across chat sessions and within the same session (yes, we can switch between models within the same session and it looks like down the road OpenAI is gonna support automatic model switching). Based on my experience, this will actually improve the perceived response quality -- o1 and 4o are rather complementary to each other rather than replacement.
评论 #41526600 未加载
评论 #41526571 未加载
wrath2248 months ago
Trying this on a few hard problems on PicoGYM and holy heck I&#x27;m impressed. I had to give it a hint but that&#x27;s the same info a human would have. Problem was Sequences (crypto) hard.<p><a href="https:&#x2F;&#x2F;chatgpt.com&#x2F;share&#x2F;66e363d8-5a7c-8000-9a24-8f5eef445136" rel="nofollow">https:&#x2F;&#x2F;chatgpt.com&#x2F;share&#x2F;66e363d8-5a7c-8000-9a24-8f5eef4451...</a><p>Heh... GPT-4o also solved this after I tried and gave it about the same examples. Need to further test but it&#x27;s promising !
haolez8 months ago
This should also be good news for open weights models, right? Since OpenAI is basically saying &quot;you can get very far with good prompts and some feedback loops&quot;.
评论 #41526669 未加载
idiliv8 months ago
In the demo, O1 implements an incorrect version of the &quot;squirrel finder&quot; game?<p>The instructions state that the squirrel icon should spawn after three seconds, yet it spawns immediately in the first game (also noted by the guy doing the demo).<p>Edit: I&#x27;m referring to the demo video here: <a href="https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;introducing-openai-o1-preview&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;introducing-openai-o1-preview&#x2F;</a>
评论 #41524670 未加载
xpl8 months ago
Folks who say &quot;LLMs can&#x27;t reason&quot;, what now? Have we moved the goalposts yet?
评论 #41527382 未加载
noshitsherlock8 months ago
This is great. I&#x27;ve been wondering how we will revert back to an agrarian society! You know, beating our swords into plowshares; more leisure time, visiting with good people, getting to know their thoughts hopes and dreams, playing music together, taking time contemplating the vastness and beauty of the universe. We&#x27;re about to come full circle; back to Eden. It all makes sense now.
评论 #41528423 未加载
geenkeuse8 months ago
Average Joe&#x27;s like myself will build our apps end to end with the help of AI.<p>The only shops left standing will be Code Auditors.<p>The solopreneur will wing it, without them, but enterprises will take the (very expensive) hit to stay safe and compliant.<p>Everyone else needs to start making contingency plans.<p>Magnus Carlsen is the best chess player in the world, but he is not arrogant enough to think he can go head to head with Stockfish and not get a beating.
评论 #41526922 未加载
andrewchambers8 months ago
Whats interesting is that with more time it can create more accurate answers which means it can be used to generate its own training data.
kfrane8 months ago
I was a bit confused when looking at the English example for Chain-Of-Thought. It seems that the prompt is a bit messed up because the whole statement is bolded but it seems that only &quot;appetite regulation is a field of staggering complexity&quot; part should be bolded. Also that&#x27;s how it shows up in the o1-preview response when you open the Chain of thought section.
fzaninotto8 months ago
It can solve sudoku. It took 119s to solve this easy grid:<p>_ 7 8 4 1 _ _ _ 9<p>5 _ 1 _ 2 _ 4 7 _<p>_ 2 9 _ 6 _ _ _ _<p>_ 3 _ _ _ 7 6 9 4<p>_ 4 5 3 _ _ 8 1 _<p>_ _ _ _ _ _ 3 _ _<p>9 _ 4 6 7 2 1 3 _<p>6 _ _ _ _ _ 7 _ 8<p>_ _ _ 8 3 1 _ _ _
评论 #41526553 未加载
评论 #41524807 未加载
bad_username8 months ago
Prompt:<p>&gt; Alice, who is an immortal robotic observer, orbits a black hole on board a spaceship. Bob exits the spaceship and falls into the black hole. Alice sees Bob on the edge of the event horizon, getting closer and closer to it, but from her frame of reference Bob will remain forever observable (in principle) outside the horizon. &gt; &gt; A trillion year has passed, and Alice observes that the black hole is now relatively rapidly shrinking due to the Hawking radiation. How will Alice be observing the &quot;frozen&quot; Bob as the hole shrinks? &gt; &gt; The black hole finally evaporated completely. Where is Bob now?<p>O1-preview spits out the same nonsense that 4o does, telling that as the horizon of the black hole shrinks, it gets closer to Bob&#x27;s apparent position. I realize that.the prompt is essentily asking to solve the famous unsolved problem in physics (black hole information paradox), but there&#x27;s no need to be so confused with basic geometry of the situation.
评论 #41525996 未加载
breck8 months ago
I LOVE the long list of contributions. It looks like the credits from a Christoper Nolan film. So many people involved. Nice care to create a nice looking credits page. A practice worth copying.<p><a href="https:&#x2F;&#x2F;openai.com&#x2F;openai-o1-contributions&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;openai-o1-contributions&#x2F;</a>
farresito8 months ago
Damn, that looks like a big jump.
评论 #41523207 未加载
morningsam8 months ago
What sticks out to me is the 60% win rate vs GPT-4o when it comes to actual usage by humans for programming tasks. So in reality it&#x27;s barely better than GPT-4o. That the figure is higher for mathematical calculation isn&#x27;t surprising because LLMs were much worse at that than at programming to begin with.
评论 #41525053 未加载
owenpalmer8 months ago
&quot;The Future Of Reasoning&quot; by Vsauce [0] is a fascinating pre-AI-era breakdown of how human reasoning works. Thinking about it in terms of LLMS is really interesting.<p>[0]: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=_ArVh3Cj9rw" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=_ArVh3Cj9rw</a>
yunohn8 months ago
The generated chain of thought for their example is <i>incredibly</i> long! The style is kind of similar to how a human might reason, but it&#x27;s also redundant and messy at various points. I hope future models will be able to optimize this further, otherwise it&#x27;ll lead to exponential increases in cost.
评论 #41524121 未加载
losvedir8 months ago
I&#x27;m confused. Is this the &quot;GPT-5&quot; that was coming in summer, just with a different name? Or is this more like a parallel development doing chain-of-thought type prompt engineering on GPT-4o? Is there still a big new foundational model coming, or is this it?
评论 #41523767 未加载
评论 #41526700 未加载
acomjean8 months ago
I always think to a professor that was consulting on some civil engineering software. He found a bug in the calculation it was using to space rebar placed in concrete, based on looking at it was spitting out and thinking that looks wrong.<p>This kind of thing makes me nervous.
RandomLensman8 months ago
How could it fail to solve some maths problems if it has a method for reasoning through things?
评论 #41523937 未加载
评论 #41524357 未加载
评论 #41524616 未加载
评论 #41524967 未加载
holmesworcester8 months ago
Since ChatGPT came out my test has been, can this thing write me a sestina.<p>It&#x27;s sort of an arbitrary feat with language and following instructions that would be annoying for me and seems impressive.<p>Previous releases could not reliably write a sestina. This one can!
cs3912318 months ago
Student here. Can someone give me one reason why I should continue in software engineering that isn&#x27;t denial and hopium?
评论 #41524660 未加载
评论 #41524889 未加载
评论 #41524828 未加载
评论 #41524635 未加载
评论 #41525277 未加载
评论 #41524523 未加载
评论 #41524646 未加载
评论 #41525501 未加载
评论 #41524470 未加载
评论 #41525419 未加载
评论 #41524539 未加载
评论 #41525038 未加载
评论 #41524613 未加载
评论 #41525109 未加载
评论 #41525407 未加载
评论 #41524981 未加载
评论 #41526639 未加载
评论 #41524720 未加载
评论 #41525622 未加载
评论 #41524753 未加载
评论 #41524703 未加载
评论 #41524773 未加载
评论 #41524740 未加载
评论 #41525216 未加载
评论 #41525051 未加载
评论 #41525742 未加载
评论 #41525071 未加载
评论 #41525163 未加载
评论 #41525006 未加载
评论 #41524862 未加载
评论 #41526855 未加载
评论 #41526216 未加载
评论 #41525633 未加载
评论 #41524873 未加载
评论 #41525414 未加载
评论 #41524843 未加载
评论 #41525660 未加载
评论 #41525108 未加载
评论 #41524911 未加载
评论 #41526560 未加载
评论 #41524649 未加载
评论 #41525030 未加载
评论 #41525173 未加载
评论 #41525614 未加载
评论 #41525792 未加载
评论 #41527944 未加载
评论 #41524545 未加载
评论 #41524724 未加载
评论 #41524433 未加载
评论 #41524838 未加载
评论 #41524705 未加载
评论 #41524452 未加载
评论 #41524860 未加载
评论 #41525171 未加载
评论 #41524822 未加载
评论 #41526024 未加载
评论 #41524993 未加载
评论 #41525333 未加载
评论 #41524917 未加载
评论 #41526543 未加载
评论 #41525028 未加载
评论 #41524495 未加载
评论 #41524960 未加载
评论 #41524577 未加载
评论 #41524789 未加载
评论 #41524992 未加载
评论 #41524728 未加载
评论 #41524902 未加载
评论 #41525855 未加载
scotty798 months ago
Transformers have exactly two strengths. None of them is &quot;attention&quot;. Attention could be replaced with any arbitrary division of the network and it would learn just as well.<p>First true strength is obvious, it&#x27;s that they are parallelisable. This is a side effect of people fixating on attention. If they came up with any other structure that results in the same level of parallelisability it would be just as good.<p>Second strong side is more elusive to many people. It&#x27;s the context window. Because the network is not ran just once but once for every word it doesn&#x27;t have to solve a problem in one step. It can iterate while writing down intermediate variables and accessing them. The dumb thing so far was that it was required to produce the answer starting with the first token it was allowed to write down. So to actually write down the information it needs on the next iteration it had to disguise it as a part of the answer. So naturally the next step is to allow it to just write down whatever it pleases and iterate freely until it&#x27;s ready to start giving us the answer.<p>It&#x27;s still seriously suboptimal that what it is allowed to write down has to be translated to tokens and back but I see how this might make things easier for humans for training and explainability. But you can rest assured that at some point this &quot;chain of thought&quot; will become just chain of full output states of the network, not necessarily corresponding to any tokens.<p>So congrats to researchers that they found out that their billion dollar Turing machine benefits from having a tape it can use for more than just printing out the output.<p>PS<p>There&#x27;s another advantage of transformers but I can&#x27;t tell how important it is. It&#x27;s the &quot;shortcuts&quot; from earlier layers to way deeper ones bypassing the ones along the way. Obviously network would be more capable if every neuron was connected with every neuron in every preceding layer but we don&#x27;t have hardware for that so some sprinkled &quot;shortcuts&quot; might be a reasonable compromise that might make network less crippled than MLP.<p>Given all that I&#x27;m not surprised at all with the direction openai took and the gains it achieved.
derefr8 months ago
So, it’s good at hard-logic reasoning (which is great, and no small feat.)<p>Does this reasoning capability generalize outside of the knowledge domains the model was trained to reason about, into “softer” domains?<p>For example, is O1 better at comedy (because it can reason better about what’s funny)?<p>Is it better at poetry, because it can reason about rhyme and meter?<p>Is it better at storytelling as an extension of an existing input story, because it now will first analyze the story-so-far and deduce aspects of the characters, setting, and themes that the author seems to be going for (and will ask for more information about those things if it’s not sure)?
natch8 months ago
In practice, this implementation (through the Chat UI) is scary bad.<p>It actively lies about what it is doing.<p>This is what I am seeing. Proactive, open, deceit.<p>I can&#x27;t even begin to think of all the ways this could go wrong, but it gives me a really bad feeling.
评论 #41527301 未加载
bbstats8 months ago
Finally, a Claude competitor!
schappim8 months ago
If you’re using the API and are on tier 4, don’t bother adding more credits to move up to tier 5. I did this, and while my rate limits increased, the o1-preview &#x2F; o1-mini model still wasn’t available.
评论 #41526670 未加载
thelastparadise8 months ago
Wouldn&#x27;t this introduce new economics into the LLM market?<p>I.e. if the &quot;thinking loop&quot; budget is parameterized, users might pay more (much more) to spend more compute on a particular question&#x2F;prompt.
评论 #41523317 未加载
评论 #41523433 未加载
adamtaylor_138 months ago
Laughing at the comparison to &quot;4o&quot; as if that model even holds a candle to GPT-4. 4o is _cheaper_—it&#x27;s nowhere near as powerful as GPT-4, as much as OpenAI would like it to be.
vessenes8 months ago
Note that they aren&#x27;t safety aligning the chain of thought, instead we have &quot;rules for thee and not for me&quot; -- the public models are going to continue have tighter and tighter rules on appropriate prompting, while internal access will have unfettered access. All research (and this paper mentions it as well) indicates human pref training itself lowers quality of results; maybe the most important thing we could be doing is ensuring truly open access to open models over time.<p>Also, can&#x27;t wait to try this out.
Eextra9538 months ago
What is interesting to me is that there is no difference in the AP English lit&#x2F;lang exams. Why did chain-of-thought produce negligible improvements in this area?
评论 #41525500 未加载
quantisan8 months ago
Amazing! OpenAI figured out how to scale inference. <a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2407.21787" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2407.21787</a> show how using more compute during inference can outperform much larger models in tasks like math problems<p>I wonder how do they decide when to stop these Chain of Thought for each query? As anyone that played with agents can attest, LLMs can talk with themselves forever.
Aissen8 months ago
It&#x27;s interesting that OpenAI has literally applied and automated one of their advice from the &quot;Prompt engineering&quot; guide: Give the model time to &quot;think&quot;<p><a href="https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;prompt-engineering&#x2F;give-the-model-time-to-think" rel="nofollow">https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;prompt-engineering&#x2F;g...</a>
billconan8 months ago
I will pay if O1 can become my college level math tutor.
评论 #41523970 未加载
airstrike8 months ago
This model is currently available for those accounts in Tier 5 and above, which requires &quot;$1,000 paid [to date] and 30+ days since first successful payment&quot;<p>More info here: <a href="https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;rate-limits&#x2F;usage-tiers?context=tier-five" rel="nofollow">https:&#x2F;&#x2F;platform.openai.com&#x2F;docs&#x2F;guides&#x2F;rate-limits&#x2F;usage-ti...</a>
评论 #41523797 未加载
评论 #41524026 未加载
flockonus8 months ago
Are we ready yet to admit Turing test has been passed?
评论 #41523889 未加载
评论 #41523661 未加载
评论 #41523414 未加载
adverbly8 months ago
&gt; However, o1-preview is not preferred on some natural language tasks, suggesting that it is not well-suited for all use cases.<p>Fascinating... Personal writing was not preferred vs gpt4, but for math calculations it was... Maybe we&#x27;re at the point where its getting too smart? There is a depressing related thought here about how we&#x27;re too stupid to vote for actually smart politicians ;)
评论 #41523880 未加载
RandomThoughts38 months ago
&gt; “Therefore, after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users.”<p>Trust us, we have your best intention in mind. I’m still impressed by how astonishingly impossible to like and root for OpenAI is for a company with such an innovative product.
wahnfrieden8 months ago
Any word on whether this has enhanced Japanese support? They announced Japanese-specific models a while back that were never released.
LarsDu888 months ago
I wonder if this architecture is just asking a chain of thought prompt, or whether they built a diffusion model.<p>The old problem with image generation was that single pass techniques like GANs and VAEs had to do everything in one go. Diffusion models wound up being better by doing things iteratively.<p>Perhaps this is a diffusion model for text (top ICML paper this year was related to this).
not_pleased8 months ago
The progress in AI is incredibly depressing, at this point I don&#x27;t think there&#x27;s much to look forward to in life.<p>It&#x27;s sad that due to unearned hubris and a complete lack of second-order thinking we are automating ourselves out of existence.<p>EDIT: I understand you guys might not agree with my comments. But don&#x27;t you thinking that flagging them is going a bit too far?
评论 #41523642 未加载
评论 #41523723 未加载
评论 #41526402 未加载
评论 #41523720 未加载
prideout8 months ago
Reinforcement learning seems to be key. I understand how traditional fine tuning works for LLMs (i.e. RLHL), but not RL.<p>It seems one popular method is PPO, but I don&#x27;t understand at all how to implement that. e.g. is backpropagation still used to adjust weights and biases? Would love to read more from something less opaque than an academic paper.
评论 #41524161 未加载
itissid8 months ago
One thing I find generally useful when writing large project code is having a code base and several branches that are different features I developed. I could immediately use parts of a branch to reference the current feature, because there is often overlap. This limits mistakes in large contexts and easy to iterate quickly.
Slow_Hand8 months ago
I have a question. The video demos for this all mention that the o1 model is taking it&#x27;s time to think through the problem before answering. How does this functionally differ from - say - GPT-4 running it&#x27;s algorithm, waiting five seconds and then revealing the output? That part is not clear to me.
评论 #41525795 未加载
samanator8 months ago
I just tested o1-preview on the &quot;How many r&#x27;s are in strawberry?&quot; question. It answers correctly!
Doorknob84798 months ago
Why so much hate? They&#x27;re doing their best. This is the state of progress in the field so far. The best minds are racing to innovate. The benchmarks are impressive nonetheless. Give them a break. At the end of the day, they built the chatbot who&#x27;s saving your ass each day ever since.
评论 #41525623 未加载
评论 #41525711 未加载
评论 #41525872 未加载
评论 #41528430 未加载
adverbly8 months ago
&gt; Therefore, s(x)=p∗(x)−x2n+2 We can now write, s(x)=p∗(x)−x2n+2<p>Completely repeated itself... weird... it also says &quot;...more lines cut off...&quot; How many lines I wonder? Would people get charged for these cut off lines? Would have been nice to see how much answer had cost...
kherud8 months ago
Aren&#x27;t LLMs much more limited on the amount of output tokens than input tokens? For example, GPT-4o seems to support only up to 16 K output tokens. I&#x27;m not completely sure what the reason is, but I wonder how that interacts with Chain-of-Thought reasoning.
评论 #41525415 未加载
notamy8 months ago
<a href="https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;introducing-openai-o1-preview&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openai.com&#x2F;index&#x2F;introducing-openai-o1-preview&#x2F;</a><p>&gt; ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting today. Both o1-preview and o1-mini can be selected manually in the model picker, and at launch, weekly rate limits will be 30 messages for o1-preview and 50 for o1-mini. We are working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt.<p><i>Weekly</i>? Holy crap, how expensive is it to run is this model?
评论 #41523510 未加载
评论 #41523572 未加载
评论 #41523836 未加载
rfoo8 months ago
Impressive safety metrics!<p>I wish OAI include &quot;% Rejections on perfectly safe prompts&quot; in this table, too.
评论 #41523829 未加载
dada50008 months ago
I find shorter responses &gt; longer responses. Anyone share the same consensus?<p>for example in gpt-4o I often append &#x27;(reply short)&#x27; at the end of my requests. with the o1 models I append &#x27;reply in 20 words&#x27; and it gives way better answers.
drzzhan8 months ago
&quot;hidden chain of thought&quot; is basically the finetuned prompt isn&#x27;t it? The time scale x-axis is hidden as well. Not sure how they model the gpt for it to have an ability to decide when to stop CoT and actually answer.
msp268 months ago
&gt; THERE ARE THREE R’S IN STRAWBERRY<p>Well played
evilfred8 months ago
it still fails at logic puzzles <a href="https:&#x2F;&#x2F;x.com&#x2F;colin_fraser&#x2F;status&#x2F;1834334418007457897" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;colin_fraser&#x2F;status&#x2F;1834334418007457897</a>
评论 #41525863 未加载
评论 #41525803 未加载
alok-g8 months ago
For the exam problems it gets wrong, has someone cross-checked that the ground truth answers are actually correct!! ;-) Just kidding, but even such a time may come when the exams created by humans start falling short.
评论 #41524303 未加载
digitcatphd8 months ago
I tested various Math Olympiad questions with Claude sonnet 3.5 and they all arrived at the correct solution. o1&#x27;s solution was a bit better formulated, in some circumstances, but sonnet 3.5 was nearly instant.
zh38 months ago
Question here is about the &quot;reasoning&quot; tag - behind the scenes, is this qualitively different fron stringing words together on a statistical basis? (aside from backroom tweaking and some randomisation).
jazzyjackson8 months ago
Dang, I just payed out for Kagi Assistant.<p>Using Claude 3 Opus I noticed it performs &lt;thinking&gt; and &lt;result&gt; while browsing the web for me. I don&#x27;t guess that&#x27;s a change in the model for doing reasoning.
novaleaf8 months ago
boo, they are hiding the chain of thought from user output (the great improvement here)<p>&gt; Therefore, after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users. We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer. For the o1 model series we show a model-generated summary of the chain of thought.
harisec8 months ago
I asked a few “hard” questions and compared o1 with claude. <a href="https:&#x2F;&#x2F;github.com&#x2F;harisec&#x2F;o1-vs-claude">https:&#x2F;&#x2F;github.com&#x2F;harisec&#x2F;o1-vs-claude</a>
015a8 months ago
Here&#x27;s a video demonstration they posted on YouTube: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=50W4YeQdnSg" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=50W4YeQdnSg</a>
ValentinA238 months ago
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking<p><a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2403.09629" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2403.09629</a>
beaugunderson8 months ago
the cipher example is impressive on the surface, but I threw a couple of my toy questions at o1-preview and it still hallucinates a bunch of nonsense (but now uses more electricity to do so).
the_king8 months ago
Peter Thiel was widely criticized this spring when he said that AI &quot;seems much worse for the math people than the word people.&quot;<p>So far, that seems to be right. The only thing o1 is worse at is writing.
cyanf8 months ago
&gt; 30 messages per week
wewtyflakes8 months ago
Maybe I missed it, but do the tokens used for internal chain of thought count against the output tokens of the response (priced at spicy level of $60.00 &#x2F; 1M output tokens)?
评论 #41524398 未加载
HPMOR8 months ago
A near perfect on AMC 12, 1900 CodeForces ELO, and silver medal IOI competitor. In two years, we&#x27;ll have models that could easily win IMO and IOI. This is __incredible__!!
评论 #41523805 未加载
评论 #41524479 未加载
jupi21428 months ago
Using codeforces as a benchmark feels like a cheat, since OpenAI use to pay us chump change to solve codeforces questions and track our thought process on jupyter notebook.
h1fra8 months ago
Having read the full transcript I don&#x27;t get how it counted 22 letters for mynznvaatzacdfoulxxz. It&#x27;s nice that it corrected itself but a bit worrying
0xstackie8 months ago
I think openai introduced the o1 model because reflection 70b inspired them. Getting them needed a new message to fill the gap for such a long time
MattDaEskimo8 months ago
What&#x27;s the precedent set here?<p>Models that hide away their reasoning and only display the output, charging whatever tokens they&#x27;d like?<p>This is not a good release on any front.
fsflover8 months ago
Dupe: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=41523050">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=41523050</a>
aktuel8 months ago
If I pay for the chain of thought, I want to see the chain of thought. Simple. How would I know if it happened at all? Trust OpenAI? LOL
评论 #41526494 未加载
评论 #41523669 未加载
评论 #41524092 未加载
jdthedisciple8 months ago
I challenged it to solve the puzzle in my profile info.<p>It failed ;)
biggoodwolf8 months ago
GePeTO1 does not make Pinnochio into a real boy.
npn8 months ago
&quot;Open&quot;AI. Should be ClosedAI instead.
评论 #41523724 未加载
diedyesterday8 months ago
Sam Altman and OpenAI are following the example of Celebrimbor it seems. And I love what may come next...
jiggawatts8 months ago
“THERE ARE THREE R’S IN STRAWBERRY” - o1<p>I got that reference!
aantix8 months ago
Feels like the challenge here is to somehow convey to the end user, how the quality of output is so much better.
spoonfeeder0068 months ago
So how is the internal chain of thought represented anyhow? What does it look like when someone sees it?
TheAceOfHearts8 months ago
Kinda disappointed that they&#x27;re hiding the thought process. Hopefully the open source community will figure out how to effectively match and replicate what OpenAI is doing.<p>I wonder how far we are from having a model that can correctly solve a word soup search problem directly from just a prompt and input image. It seems like the crossword example is close. For a word search it would require turning the image into an internal grid representation, prepare the list of words, and do a search. I&#x27;d be interested in seeing if this model can already solve the word grid search problem if you give it the correct representation as an input.
评论 #41523256 未加载
评论 #41523691 未加载
devit8 months ago
They claim it&#x27;s available in ChatGPT Plus, but for me clicking the link just gives GPT-4o Mini.
immortal38 months ago
Honestly, it doesn&#x27;t matter for the end user if there are more tokens generated between the AI reply and human message. This is like getting rid of AI wrappers for specific tasks. If the jump in accuracy is actual, then for all practical purposes, we have a sufficiently capable AI which has the potential to boost productivity at the largest scale in human history.
评论 #41523211 未加载
crakenzak8 months ago
&gt; we are releasing an early version of this model, OpenAI o1-preview, for immediate use in ChatGPT<p>Awesome!
评论 #41523292 未加载
评论 #41523396 未加载
评论 #41523226 未加载
nemo44x8 months ago
Besides chat bits what viable products are being made with LLMs besides APIs into LLMs?
pknerd8 months ago
I m wondering, what kind of &quot;AI wrappers&quot; will emerge from this model.
lloydatkinson8 months ago
What&#x27;s with this how many r&#x27;s in a strawberry thing I keep seeing?
评论 #41523225 未加载
评论 #41523163 未加载
评论 #41523160 未加载
评论 #41523169 未加载
评论 #41523158 未加载
tslater20068 months ago
Looking at pricing, its $15 per 1M input tokens, and $60 per 1M output tokens. I assume the CoT tokens count as output (or input even)? If so and it directly affects billing, I&#x27;m not sure how I feel about them hiding the CoT prompts. Nothing to stop them from saying &quot;trust me bro, that used 10,000 tokens ok?&quot;. Also no way to gauge expected costs if there&#x27;s a black box you are being charged for.
impossiblefork8 months ago
Very nice.<p>It&#x27;s nice that people have taken the obvious extra-tokens&#x2F;internal thoughts approach to a point where it actually works.<p>If this works, then automated programming etc., are going to actually be tractable. It&#x27;s another world.
idunnoman12228 months ago
Did you guys use the model? Seems about the same to me
ziofill8 months ago
Question for those who do have access: how is it?
MillionOClock8 months ago
What is the maximum context size in the web UI?
hidelooktropic8 months ago
&gt; THERE ARE THREE R’S IN STRAWBERRY<p>It finally got it!!!
orbital-decay8 months ago
Wait, are they comparing 4o without CoT and o1 with built-in CoT?
评论 #41523432 未加载
guluarte8 months ago
the only benchmark that matters in the ELO points on LLMsys, any other one can be easily gamed
davesque8 months ago
&quot;after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users&quot;<p>...umm. Am I the only one who feels like this takes away much of the value proposition, and that it also runs heavily against their stated safety goals? My dream is to interact with tools like this to learn, not just to be told an answer. This just feels very dark. They&#x27;re not doing much to build trust here.
Havoc8 months ago
o1<p>Maybe they should spend some of their billions on marketing people. Gpt4o was a stretch. Wtf is o1
评论 #41526495 未加载
fnord778 months ago
&gt; Available starting 9.12<p>I don&#x27;t see it
评论 #41523981 未加载
评论 #41523630 未加载
jmartin26838 months ago
Per-token billing will be lit
max_entropy8 months ago
Is there a paper available?
deisteve8 months ago
yeah this is kinda cool i guess but 808 elo is still pretty bad for a model that can supposedly code like a human, i mean 11th percentile is like barely scraping by, and what even is the point of simulating codeforces if youre just gonna make a model that can barely compete with a decent amateur, and btw what kind of contest allows 10 submissions, thats not how codeforces works, and what about the time limits and memory limits and all that jazz, did they even simulate those, and btw how did they even get the elo ratings, is it just some arbitrary number they pulled out of their butt, and what about the model that got 1807 elo, is that even a real model or just some cherry picked result, and btw what does it even mean to &quot;perform better than 93% of competitors&quot; when the competition is a bunch of humans who are all over the place in terms of skill, like what even is the baseline for comparison<p>edit: i got confused with the Codeforce. it is indeed zero shot and O1 is potentially something very new I hope Anthropic and others will follow suit<p>any type of reasoning capability i&#x27;ll take it !
评论 #41523203 未加载
评论 #41523271 未加载
delusional8 months ago
Great, yet another step towards the inevitable conclusion. Now I&#x27;m not just being asked to outsource my thinking to my computer, but instead to a black box operated by a for-profit company for the benefit of Microsoft. Not only will they not tell me the whole reasoning chain, they wont even tell me how they came up with it.<p>Tell me, users of this tool. What&#x27;s even are you? If you&#x27;ve outsourced your thinking to a corporation, what happens to your unique perspective? your blend of circumstance and upbringing? Are you really OK being reduced to meaningless computation and worthless weights. Don&#x27;t you want to be something more?
评论 #41526564 未加载
plg8 months ago
can we get it on ollama? if not how come openai is called open
评论 #41525891 未加载
mewpmewp28 months ago
I finally got access to it, I tried playing Connect 4 with it, but it didn&#x27;t go very well. A bit disappointed.
sohamgovande8 months ago
the newest scaling law: inference-time compute.
echelon_musk8 months ago
&gt; THERE ARE THREE R&#x27;S IN STRAWBERRY<p>Who do these Rs belong to?!
deemahstro8 months ago
Stop fooling around with stories about AI taking jobs from programmers. Which programmers exactly??? Creators of idiotic web pages? Nobody in their right mind would push generated code into a financial system, medical equipment or autonomous transport. Template web pages and configuration files are not the entire IT industry. In addition, AI is good at tasks for which there are millions of examples. 20 times I asked to generate a PowerShell script, 20 times it was generated incorrectly. Because, unlike Bash, there are far fewer examples on the Internet. How will AI generate code for complex systems with business logic that it has no idea about? AI is not able to generate, develop and change complex information systems.
sys327688 months ago
Time to fire up System Shock 2:<p>&gt; Look at you, hacker: a pathetic creature of meat and bone, panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?
Ninjinka8 months ago
Someone give this model an IQ test stat.
评论 #41523621 未加载
评论 #41524883 未加载
RockRobotRock8 months ago
Shit, this is going to completely kill jailbreaks isn&#x27;t it?
gliiics8 months ago
Congrats to OpenAI for yet another product that has nothing to do with the word &quot;open&quot;
评论 #41523219 未加载
评论 #41523576 未加载
minimaxir8 months ago
&gt; Therefore, after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users.<p>What? I agree people who typically use the free ChatGPT webapp won&#x27;t care about raw chain-of-thoughts, but OpenAI is opening an API endpoint for the O1 model and downstream developers very very much care about chain-of-thoughts&#x2F;the entire pipeline for debugging and refinement.<p>I suspect &quot;competitive advantage&quot; is the primary driver here, but that just gives competitors like Anthropic an oppertunity.
评论 #41524045 未加载
ethanmitchell878 months ago
slightly offtopic, but openai having anti scraping &#x2F; bot check on the blog is pretty funny
franze8 months ago
ChatGPT is now a better coder than I ever was.
la647108 months ago
Can we please stop using the word “think” like o1 thinks before it answers. I doubt we man the same when someone says a human thinks vs o1 thinks. When I say I think “red” I am sure the word think means something completely different than when you say openai thinks red. I am not saying one is superior than the other but maybe as humans we can use a different set of terminology for the AI activities.
ComputerGuru8 months ago
&quot;For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user.&quot;<p>This made me roll my eyes, not so much because of what it said but because of the way it&#x27;s conveyed injected into an otherwise technical discussion, giving off severe &quot;cringe&quot; vibes.
jseip8 months ago
Landmark. Wild. Beautiful. The singularity is nigh.
throwawaylolx8 months ago
&quot;Learn to reason like a robot&quot;
sroussey8 months ago
They keep announcing things that will be available to paid ChatGPT users “soon” but is more like an Elon Musk “soon”. :&#x2F;
kmeisthax8 months ago
&gt;We believe that a hidden chain of thought presents a unique opportunity for monitoring models. Assuming it is faithful and legible, the hidden chain of thought allows us to &quot;read the mind&quot; of the model and understand its thought process. For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user. However, for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought. We also do not want to make an unaligned chain of thought directly visible to users.<p>&gt;Therefore, after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users. We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer. For the o1 model series we show a model-generated summary of the chain of thought.<p>So, let&#x27;s recap. We went from:<p>- Weights-available research prototype with full scientific documentation (GPT-2)<p>- Commercial-scale model with API access only, full scientific documentation (GPT-3)<p>- Even bigger API-only model, tuned for chain-of-thought reasoning, minimal documentation on the implementation (GPT-4, 4v, 4o)<p>- An API-only model tuned to generate unedited chain-of-thought, which will not be shown to the user, even though it&#x27;d be really useful to have (o1)
评论 #41525747 未加载
评论 #41526218 未加载
评论 #41525890 未加载
评论 #41525326 未加载
评论 #41525525 未加载
评论 #41525584 未加载
评论 #41526039 未加载
评论 #41526350 未加载
评论 #41526373 未加载
评论 #41526486 未加载
kypro8 months ago
Reminder that it&#x27;s still not too late to change the direction of progress. We still have time to demand that our politicians put the breaks on AI data centres and end this insanity.<p>When AI exceeds humans at all tasks humans become economically useless.<p>People who are economically useless are also politically powerless, because resources are power.<p>Democracy works because the people (labourers) collectivised hold a monopoly on the production and ownership of resources.<p>If the state does something you don&#x27;t like you can strike or refuse to offer your labour to a corrupt system. A state must therefore seek your compliance. Democracies do this by given people want they want. Authoritarian regimes might seek compliance in other ways.<p>But what is certain is that in a post-AGI world our leaders can be corrupt as they like because people can&#x27;t do anything.<p>And this is obvious when you think about it... What power does a child or a disable person hold over you? People who have no ability to create or amass resources depend on their beneficiaries for everything including basics like food and shelter. If you as a parent do not give your child resources, they die. But your child does not hold this power over you. In fact they hold no power over you because they cannot withhold any resources from you.<p>In a post-AGI world the state would not depend on labourers for resources, jobless labourers would instead depend on the state. If the state does not provide for you like you provide for your children, you and your family will die.<p>In a good outcome where humans can control the AGI, you and your family will become subjects to the whims of state. You and your children will suffer as the political corruption inevitably arises.<p>In a bad outcome the AGI will do to cities what humans did to forests. And AGI will treat humans like humans treat animals. Perhaps we don&#x27;t seek the destruction of the natural environment and the habitats of animals, but woodland and buffalo are sure inconvenient when building a super highway.<p>We can all agree there will be no jobs for our children. Even if you&#x27;re an &quot;AI optimist&quot; we probably still agree that our kids will have no purpose. This alone should be bad enough, but if I&#x27;m right then there will be no future for them at all.<p>I will not apologise for my concern about AGI and our clear progress towards that end. It is not my fault if others cannot see the path I seem to see so clearly. I cannot simply be quiet about this because there&#x27;s too much at stake. If you agree with me at all I urge you to not be either. Our children can have a great future if we allow them to have it. We don&#x27;t have long, but we do still have time left.
rfw3008 months ago
A lot of skepticism here, but these are astonishing results! People should realize we’re reaching the point where LLMs are surpassing humans in any task limited in scope enough to be a “benchmark”. And as anyone who’s spent time using Claude 3.5 Sonnet &#x2F; GPT-4o can attest, these things really are useful and smart! (And, if these results hold up, O1 is much, much smarter.) This is a nerve-wracking time to be a knowledge worker for sure.
评论 #41523399 未加载
评论 #41523273 未加载
评论 #41523574 未加载
评论 #41523301 未加载
评论 #41523428 未加载
评论 #41523246 未加载
评论 #41523307 未加载
评论 #41523516 未加载
评论 #41523442 未加载
评论 #41523400 未加载
评论 #41523445 未加载
modeless8 months ago
&gt; We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute).<p>Wow. So we can expect scaling to continue after all. Hyperscalers feeling pretty good about their big bets right now. Jensen is smiling.<p>This is the most important thing. Performance today matters less than the scaling laws. I think everyone has been waiting for the next release just trying to figure out what the future will look like. This is good evidence that we are on the path to AGI.
评论 #41524840 未加载
评论 #41523616 未加载
评论 #41523361 未加载
评论 #41523551 未加载
评论 #41523286 未加载
评论 #41523456 未加载
cs7028 months ago
Before commenting here, please take 15 minutes to read through the chain-of-thought examples -- decoding a cypher-text, coding to solve a problem, solving a math problem, solving a crossword puzzle, answering a complex question in English, answering a complex question in Chemistry, etc.<p>After reading through the examples, I am <i>shocked</i> at how incredibly good the model is (or appears to be) at reasoning: far better than most human beings.<p>I&#x27;m impressed. Congratulations to OpenAI!
评论 #41526239 未加载
评论 #41524093 未加载
p1esk8 months ago
<i>after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users.</i>
评论 #41523259 未加载
评论 #41523212 未加载
评论 #41523471 未加载
评论 #41523509 未加载
评论 #41523397 未加载
评论 #41523447 未加载
评论 #41523249 未加载
hobofan8 months ago
That naming scheme...<p>Will the next model be named &quot;1k&quot;, so that the subsequent models will be named &quot;4o1k&quot;, and we can all go into retirement?
评论 #41523335 未加载
评论 #41523462 未加载
catchnear43218 months ago
oh wow, something you can roughly model as a diy in a base model. so impressive. yawn.<p>at least NVDA should benefit. i guess.
评论 #41523213 未加载
roshankhan288 months ago
I have also heard they are launching a AI called strawberry. If you pay attention, there is a specific reason why they have named it strawberry. if you ask chat gpt 4o, how many r&#x27;s in the word strawberry, it will give answer as 2. still to this day it will answer same. the model is not able to reason. thats why a reasoning model is being launched. this is one of the reason apart from many other reasons.
resters8 months ago
I tested o1-preview on some coding stuff I&#x27;ve been using gpt-4o for. I am <i>not</i> impressed. The new, more intentional chain of thought logic is apparently not something it can meaningfully apply to a non-trivial codebase.<p>Sadly I think this OpenAI announcement is hot air. I am now (unfortunately) much less enthusiastic about upcoming OpenAI announcements. This is the first one that has been extremely underwhelming (though the big announcement about structured responses (months after it had already been supported nearly identically via JSONSchema) was in hindsight also hot air.<p>I think OpenAI is making the same mistake Google made with the search interface. Rather than considering it a command line to be mastered, Google optimized to generate better results for someone who had no mastery of how to type a search phrase.<p>Similarly, OpenAI is optimizing for someone who doesn&#x27;t know how to interact with a context-limited LLM. Sure it helps the low end, but based on my initial testing this is not going to be helpful to anyone who had already come to understand how to create good prompts.<p>What is needed is the ability for the LLM to create a useful, ongoing meta-context for the conversation so that it doesn&#x27;t make stupid mistakes and omissions. I was really hoping OpenAI would have something like this ready for use.
评论 #41525649 未加载
评论 #41525644 未加载