Jeez, I'm a Sam Altman skeptic but this is just another level. How about instead of attacking this guy and literally deconstructing a single-sentence tweet to make him out as an evil bogeyman, we make some constructive arguments for these so-called 'more reliable' ways of building AI and why the current approach is 'deeply flawed'?
The manoeuvre to become a for-profit organisation after years of not paying taxes says something deeper about how OpenAI really operates.
Accepting deals with people who have no shame about pulling such a trick is accepting that they will do it again on a much bigger scale.
For what it's worth, try to take it easy on each other in this thread: I cannot think of a worse messenger than Gary Marcus for this; he's directly equivalent to Sam, but on the downside.
AI-as-meme has been around long enough now that it's being force-interpreted into two camps: "it's all a scam; at best it only knows how to reproduce exact training data" and "its glory shall have us working 0 hours by 2030". This article won't shed light on reality, the in-between.
For example, let's walk through the opening:
1) "How do you convince the world your ideas and business might ultimately be worth $7 trillion dollars" -- he's referring to an unsourced rumor, which never made any sense, that Mr. Altman approached the Saudis for $7 trillion to *build GPUs*. Even the nonsensical gossip-rag rumor explicitly has nothing to do with OpenAI; part of the "intrigue" was the perfidy of Sam doing it separately from OpenAI.
2) "Sam Altman is on a tour to raise money and raise valuations...at some of top universities in the world" -- figuratively, maybe, but not actually: they just raised a round in December, and this isn't a company that needs to do PR at universities to catch investor attention.
3) "A few days ago at Stanford, Sam promised that AGI will be worth it, no matter how much it costs" -- actual quote: "I don't care if we burn $50 billion a year, we're building AGI and it's going to be worth it" -- that's not a "promise" nor "no matter how much it costs". Yes, $50B is functionally 'no matter how much', so I'd give Gary charity of interpretation on that too -- except as long as we're doing that, why are we over-reading Sam?
Long-time HN-er.
I'm flagging this. It's not because I give a hoot about sama but because this kind of crap is posted only to lead to endless discussion.
Get to work. Focus. Whatever the hell Sam is can wait for another day.
GPT-4 is absolutely incredible, and even if we never get beyond it, the world is a much better place with it than without it. It makes total sense to bet on the team that made it being the best-placed people in the world to advance it.
Sam has always been very skilled at dealing with the media: grand pronouncements, apocalyptic statements, save-the-world predictions, rags-to-riches hero stories. Which is fine. It's hard to get anyone to pay attention to anything, and he's got a nice playbook (and product) to get people to pay attention.
The more interesting question, I think: is OpenAI actually a good business? Can they generate the resources they need to meet their goals and keep control, without selling to big tech companies that will derail their plans? Do they have enough of a moat, and can they benefit from network effects to make their products more valuable over time, without getting copied? They realised they need a lot more capital than they initially thought. Time will tell if Microsoft is able to take over; see what recently happened with Inflection AI: "After raising $1.3B, Inflection got eaten alive by its biggest investor, Microsoft" https://techcrunch.com/2024/03/19/after-raising-1-3b-inflection-got-eaten-alive-by-its-biggest-investor-microsoft/
>(again without presenting evidence that historically extremely difficult problems are close to being solved)
This tells you everything you need to know about the author. Anyone who has solved difficult problems knows that evidence of being close to a solution is not a thing. In fact, by the time you're close, proving you're close is _harder_ than finishing the solution.
Gary Marcus has become attention-seeking lately. I unfollowed him. Most of his posts were attacks on other people instead of genuine contributions on how we can make AI actually better and safer.
It's easy to criticize, much harder to offer effective solutions.
Sam is the kind of man who holds a pair of 2s and boasts he'll go all-in if someone raises above his $20 after 4-7-9 comes out on the flop.
Hard to take Gary seriously calling out "outlandish claims" with "no substance" when he does the same thing in the opposite direction.
Garbage article for clicks to pay for his lifestyle, now that he's grifted his way into being an "AI Expert" paid to pontificate with no skin in the game.
The current AI wave is a perfect application of Conway's law: the bullshit industry has generated the perfect bullshit machines, pretending to show intelligence when they only parrot what they've heard elsewhere - badly but convincingly.
If you know about the author of this post, Gary Marcus, you can just as easily ascribe to him accusations of fear, denial of uncertainties, hype, and self-promotion/grifting.
This post is not going to age well. In fact, when GPT-5 comes out soon, it's going to look positively dumb.
Also, GPT-4 is still leaps and bounds more capable than the competition. Anyone pushing large language models to their limits today can easily attest to that. Claude 3 Opus comes close, but is significantly more expensive and much harder to do function calling with.
Approximately 8 billion people have accomplished less in their lives than Sam Altman... maybe criticize them instead? And so what if he's selling a vision of the future? That's a large part of entrepreneurship.
He can go ahead and create OpenAI; the second he creates it, however, it should be taken away from him and become public domain, or we need a change in system. A true AGI should be able to automate every single industry, and if so, I think everyone knows what system is the only one left to implement once humans are no longer required to work and every single factory/company produces much higher output.