
Before Altman’s ouster, OpenAI’s board was divided and feuding

304 points by vthommeret, over 1 year ago

36 comments

pasltd, over 1 year ago
Here you go: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html?unlocked_article_code=1.AU0.0Zli.ur35-H19FBLR&smid=url-share
neonate, over 1 year ago
https://archive.ph/eN5PY
gwern, over 1 year ago
None of the comments thus far seem to clearly explain why this matters. Let me summarize the implications:

Sam Altman expelling Toner with the pretext of an inoffensive page (https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding-Intentions.pdf#page=30) in a paper no one read* would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority - the board, which is the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well - which is why they had been unable to agree on replacement board members all this time.

So when he 'reprimanded' her for her 'dangerous' misconduct and started talking seriously about how 'inappropriate' it was for a 'board member' to write anything which was not cheerleading, and started leading discussions about "whether Ms Toner should be removed"...

* I actually read CSET papers, and I still hadn't bothered to read this one, nor would I have found anything remarkable about that page, which Altman says was so bad that she needed to be expelled immediately from the board.
CSMastermind, over 1 year ago
The only specific accusation made in the article is that Sam criticized Helen Toner for writing a paper: https://cset.georgetown.edu/publication/decoding-intentions/

That says Anthropic has a better approach to AI safety than OpenAI.

Sam apparently said she should have come to him directly if she had concerns about the company's approach, and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks, and he started to look for ways to get her off the board.

All of that ... seems completely reasonable?

Like, I've heard a lot of vague accusations thrown at Sam over the last few days, and yet based on this account I think he reacted the exact same way any CEO would.

I'm much more interested in how Helen managed to get on this board at all.
winenbug, over 1 year ago
https://openai.com/our-structure

This whole thing was so, SO poorly executed, but the independent people on the board were gathered *specifically* to prioritize humanity & AI safety *over* OpenAI. It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Yes, Sam made LLMs mainstream and is the face of AI, but if the board believes that that course of action could destroy humanity, it's literally the board's mission to stop it — whether that means destroying OpenAI or not.

What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place. I don't think either side is purely in the wrong here, but they're two sides of an incredibly badly thought-out charter.
clnq, over 1 year ago
A lot of people in tech say that executives are excessively diplomatic and do not speak their truth. But this is what happens when they do too much, too ardently, too often. This is why diplomacy and tact are so important in these roles.

Things do not go well if everyone keeps poking each other with sticks and cannot let their own frame of reference go for the sake of the bigger picture.

Ultimately, I don't think Altman believes ethics and safety are unimportant. And I don't think Toner fails to realize that OpenAI is only in a place to dictate what AI will be due to its commercial principles. And they probably both agree that there is a conflict there. But what tactful leadership would have done is found a solution behind closed doors. Yet from their communication, it doesn't even look like they defined the problem statement — everyone offers a different idea of the problem that they had to face together. It looks more like immature people shouting past each other for a year (not saying it was that, but it looks that way).

Moral of the story: tact, grace, and diplomacy are important. So is speaking one's truth, but there is a tactful time, place, and manner. And also, no matter how brilliant someone is, if they can't develop these traits, they end up rocking the boat a lot.
lwneal, over 1 year ago
The relevant passage from the paper co-written by board member Helen Toner [1]:

"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...

A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."

[1] https://cset.georgetown.edu/publication/decoding-intentions/
PepperdineG, over 1 year ago
> Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that mission would be fulfilled.

Now we know where that came from.
jafitc, over 1 year ago
Sam Altman's Actions

- Sam complained that Helen Toner's research paper criticized OpenAI's safety approach and praised Anthropic's, seeing it as dangerous for OpenAI
- He reprimanded her for the paper, saying that as a board member she should have brought concerns to him first rather than publishing something critical right when OpenAI was in a tricky spot with the FTC
- He then began looking for ways to remove her from the board in response to the paper

---

Helen Toner's Perspective

- She believes the board's mission is to ensure OpenAI makes AI that benefits humanity, so destroying the company could fulfill that mission
- This suggests she prioritizes that mission over the company itself, seeing humanitarian concerns as more important than OpenAI's success

---

Microsoft Partnership

- The Microsoft partnership concentrated too much power in one company and went against the mission of OpenAI being "open"
- It gave Microsoft full access to OpenAI's core technologies, against the safety-focused mission

---

Governance Issues

- The conflict shows the adversarial tensions inherent in OpenAI's structure between nonprofit and for-profit sides
- The board's mandate to act as a check and balance on OpenAI seems to be working as intended in this case

---

Criticisms of Players

- Altman appears reckless in his actions, while Toner seems naive about the consequences of destroying OpenAI
- Their behavior calls into question whether anyone should have this kind of power over the development of AI

---

Future of AI Development

- Attempts at alignment and safeguards by companies like OpenAI may be ineffective if other actors are developing AI without such considerations
- Who controls advanced AI is more important than whether the AI is friendly
- Nationalization of AI projects may occur
jessenaser, over 1 year ago
> Mr. Altman complained that the research paper seemed to criticize OpenAI's efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

> OpenAI's board of directors approached rival Anthropic's CEO about replacing chief Sam Altman and potentially merging the two AI startups, according to two people briefed on the matter. (https://www.reuters.com/technology/openais-board-approached-anthropic-ceo-about-top-job-merger-sources-2023-11-21/)

It all makes sense.
zoogeny, over 1 year ago
> Mr. Sutskever's frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out.

So Altman faced another similar challenge to his authority and prevailed. I recall hearing that Anthropic started because the people who had left were unhappy with OpenAI's track record on AI safety.

> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company,

> Mr. Altman complained that the research paper seemed to criticize OpenAI's efforts to keep its A.I. technologies safe

> Senior OpenAI leaders [...] later discussed whether Ms. Toner should be removed

That paints a pretty bleak picture that isn't favorable to Altman. Twice he was challenged about OpenAI's safety, and both times he worked to purge those who opposed him.

I can't tell if this is a contention between accelerationism and decelerationism, or a contention between safety and recklessness. Is Altman ignoring the warnings his employees/peers are giving him and retaliating against them? Or is he facing insubordination?

I wish OpenAI would split neatly into two. But based on the heightened emotions caused by the insane amount of PR, I only see two outcomes: Altman returns and OpenAI stays unified, or Altman stays at MS and OpenAI is obliterated. I am guessing Altman is hoping that senior management will choose a unified OpenAI at all costs, including ignoring the red flags above. He has engineered a situation where the only way OpenAI remains unified is if he returns.
throw20away, over 1 year ago
Ms. Toner's motive is now clearer, having been criticized over her research. Mr. D'Angelo had a competing commercialization product with Poe. Mr. Sutskever seems easy to manipulate emotionally and is in constant ideological battles. Mrs. McCauley, a.k.a. Joseph Gordon-Levitt's wife: what's her motive?
hintymad, over 1 year ago
> Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that mission would be fulfilled.

Doesn't this sound like Toner herself is playing god? That is, only she is the gatekeeper of humanity and only she knows what is good for all of humanity, when many of us believe that OpenAI's tech is perfectly safe and amazingly beneficial?
tempusalaria, over 1 year ago
This claims that the Anthropic founders also tried to throw Sam out before leaving. They claim three sources, but it's not clear how strong those are - technically the current board could be three sources.
jdprgm, over 1 year ago
I really don't understand why the three board members have been completely silent online since things hit the fan, with no activity or with profiles made private. You would think that if this was some sort of ideological, premeditated coup, you would have a full PR plan in place and be pushing your message aggressively.

This whole thing has been bizarre, and it still feels like there has to be some big key piece missing that somehow nobody has revealed.
fnordpiglet, over 1 year ago
"""Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that could be consistent with its mission."""

This is remarkable and unhinged. I gave the board the benefit of the doubt at first, but as this unfolds, it becomes clear that the board is held hostage by ideological zealots who can't compromise. The story also seems to paint Ilya as a manipulated figure, being bent against his beliefs by crazies like Toner exploiting his concerns about AI safety. What an absolute shame. I am entirely ambivalent about Altman overall, but he becomes more sympathetic as the days go by.
QuantumGood, over 1 year ago
It sounds like the board became ever more concentrated, and ever less sane, over time.
breadwinner, over 1 year ago
The main takeaway from this whole saga is: be careful who you allow on your board. Nearly everyone on the OpenAI board is young, inexperienced, and unqualified to be on the board of one of the most important companies on the planet.

Here's Ms. Toner's LinkedIn: https://www.linkedin.com/in/helen-toner-4162439a/
coffeeshopgoth, over 1 year ago
Having served on boards, the one key thing I have not seen (and maybe I just have not been lucky in my searches) is anything that answers this question: what is the language around releasing information about the company to the press or in a research capacity? Typically there are rules that give the board notice of anything going out to the public, be it an article or a research paper, allowing for a heads-up or time to discuss the implications. While whistleblowers are necessary, was there a need for a sort of whistleblower in this case? Was there adequate board discussion about the subject and the paper before the release? If not, nobody needs a rogue board member like that, and it was definitely not in the interest of the company - she is the one at fault here. If that process did happen, she definitely did the right thing, and shame on the board for not getting out in front of it.
foota, over 1 year ago
Helen definitely seems to be emerging as the main leader of the coup: "Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that could be consistent with its mission. In the board's view, OpenAI would be stronger without Mr. Altman."
brigandish, over 1 year ago
As an aside, I must say that I'm fascinated by this use of the word *ouster*. In British English I've not heard it used this way; we'd use *ousting* in that place, and the ouster would be the one doing the ousting, i.e. the person doing the pushing.

I thought I'd check with a quick search of the Guardian, and on two different days it used both in the same sense.

From the 18th of November edition, technology section [1]:

> The crisis at OpenAI deepened this weekend when the company said Altman's *ousting* was for allegedly misleading the board.

From the 17th of November edition, also technology section [2]:

> The announcement blindsided employees, many of whom learned of the sudden *ouster* from an internal announcement and the company's public facing blog.

I could only find one instance in the Telegraph prior to Altman's, erm, ousting [3]:

> Johnson dropped into the COP27 UN climate change conference in October, joking unusual summer heat had played a part in his *ouster*, and has vowed to keep championing Ukraine.

They stick to *ousting* every other time.

I wonder if it may be an artefact of newspapers using news services to get copy from, or a stylistic rule for international editions, as the Guardian does use *ouster* several other times, but always in stories regarding US news.

Also fascinating that we still don't know who that is. Neither do most of the staff, and even the board, apparently. A true mastermind!

Maybe it is GPT-5 after all... <cue ominous music, or perhaps, a Guns N' Roses album?>

[1] https://www.theguardian.com/technology/2023/nov/18/earthquake-at-chatgpt-developer-as-senior-staff-quit-after-sacking-of-boss-sam-altman

[2] https://www.theguardian.com/technology/2023/nov/17/openai-ceo-sam-altman-fired

[3] https://www.telegraph.co.uk/news/2022/12/27/boris-johnson-will-make-political-comeback-2023/
alex_young, over 1 year ago
This is the most amazing part of the article, IMO:

> Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that could be consistent with its mission. In the board's view, OpenAI would be stronger without Mr. Altman.

Corporate suicidal ideation.
didibus, over 1 year ago
> Ms. Toner disagreed. The board's mission is to ensure that the company creates artificial intelligence that "benefits all of humanity," and if the company was destroyed, she said, that could be consistent with its mission

We're in true sci-fi land when we're discussing whether it might be best to destroy SkyNet before it's too late.
1vuio0pswjnm7, over 1 year ago
"In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology."
Bukhmanizer, over 1 year ago
I think it’s getting to the point where the noise is overwhelming the signal in this story.
oraphalous, over 1 year ago
Fascinating! Apparently central to the complete farce going on at OpenAI is this paper by one of the board members:

https://cset.georgetown.edu/publication/decoding-intentions/

It's about my favourite topic: costly signaling!

How curious that such a topic would be the possible catalyst for one of the most extraordinary collapses of a leading tech company.

Specifically, the paper is about using costly signalling as a means to align (or demonstrate the alignment of) various kinds of AI-interested entities (governments, private corporations, etc.) with the public good.

The gist of costly signalling: to try to convince others you really mean what you say, you use a signal that is very expensive to you in some respect. You don't just ask a girl to marry you, you buy a big diamond ring! The idea being that cheaters are much less likely to suffer such expense.

Apparently the troubles at OpenAI escalated when one of the board members, Helen Toner, published this paper. It is critical of OpenAI, and Sam Altman was pissed at the reputational damage to the company and wanted her removed. The board instead removed him. The gist of the paper's criticisms is that while OpenAI has invested in some costly signals to indicate its alignment with AI safety, overall it judges those signals to be rather weak (or cheap).

Now here is what I find fascinating about all this: up until reading this paper I had found the actions of the OpenAI board completely baffling, but now suddenly their actions make a kind of insane sense. They are just taking their thinking on costly signalling to its logical conclusion. By putting the entire fate of the company and its amazing market position at risk, they are sending THE COSTLIEST SIGNAL possible, relatively speaking: willingness to suffer self-annihilation.

Academics truly are wondrous people... they can lose themselves in a system of thought so deeply, in a way regular people can't. I can't help but have a genuine, sublime appreciation for this, even while thinking they are some of the silliest people on this planet.

Here's where I feel they went wrong. Costly signals by and large should be without explicit intention. If you are consciously sending various signals that are costly, you are probably a weirdo. Systems of costly signalling work because they are implicit, shared, and in many respects innate. That's why even insects can engage in costly signalling. But these folk see costly signals as an explicit activity to be engaged in as part of explicit public policy - and, unsurprisingly, see it riddled with ambiguity. Of course it would be - individual agents can't just make signals up and expect the system to understand them. Semiotics, biatch...

But rather than reflect on this, they double down on signalling as an explicit policy choice. How do they propose to reduce ambiguity? Even costlier signals! It's no wonder, then, that they see it as entirely rational to accept self-destruction as a possibility. That's how they escape the horrid existential dread of being doubted by the other. In biology, though, no creatures survived in the long run to reproduce where they invested in costly signals that didn't confer at least as much benefit, if not more, in excess of what they paid in the first place.

Those that ignore this basic cost-benefit analysis in their signalling will suffer the ignominy of being perceived as ABSOLUTE NUTTERS. Which is exactly how the world is thinking about the OpenAI board. The world doesn't see a group of highly safety-aligned AI leaders. The world sees a bunch of dysfunctional crazy people.
jacquesm, over 1 year ago
I&#x27;m beginning to strongly associate Effective Altruism with out-of-control naivety.
6gvONxR4sf7o, over 1 year ago
tl;dr: The new part of the story this adds is that Altman's firing was partly in response to him trying to kick out one of the other board members.
1vuio0pswjnm7, over 1 year ago
"OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that can do everything a human brain can do."

Sounds absurd. No one even knows everything the human brain does. It is poorly understood.
crmd, over 1 year ago
I thought the drug dealers in my hometown were ruthless sociopaths until I had to start doing deals with change-the-world, mild-mannered, Old Navy-wearing Stanford Silicon Valley dudes.
jejeyyy77, over 1 year ago
Looks like the board did the right thing after all.
jibal, over 1 year ago
Why are so many people so keen on this prepper?
kragen, over 1 year ago
Keep in mind that the lead reporter on this, Cade Metz, is the one who wrote the character-assassination piece on Scott Alexander repeatedly insinuating he was a neo-Nazi (and de-anonymizing him, costing him his job due to some kind of weird psychiatrist ethical code).

So while probably nothing in here is literally *false*, it's quite likely calculated to give false impressions; read with caution.

(Well, it's literally false that "Greg Brockman (...) quit his role[] as (...) board chairman", but only slightly; that was the role he was fired from, as explained in the next paragraph of the article. That's not the kind of lie to watch out for.)
Uptrenda, over 1 year ago
How do I filter out news like this?
webappguy, over 1 year ago
How can anyone not think Adam D'Angelo is upset about OpenAI crushing his shitty Poe?
GreedClarifies, over 1 year ago
Sounds like Sam was attempting to fix the board's membership.

OpenAI had grown massively since some of the board members were installed. Some of them were simply not the caliber of people one would have running such a prestigious institution, especially not with the weight they carried due to the board being depopulated. Sam realized this and *maybe* was attempting to address the issue.

Some of the members (ahem, Helen, Tasha, and to a lesser extent Adam) liked their positions and struck first, probably convincing poor Ilya that this was about AI safety.

Being lightweights, they did not do any pre-work or planning; they just plowed ahead. They didn't think through that Sam and Greg have added tremendous value to the company, and that the company would favor them far over a board that added zero value. They didn't think through that tech in general would see rainmakers and value creators being cut loose and side with them instead of figureheads. They didn't think through that partners and customers, who dealt with Sam and Greg daily, would find the move disconcerting (at a minimum). They didn't even think through who would be the next CEO.

Maybe they didn't think it through because they didn't care. There was only upside for them, since Sam was going to get rid of them sooner or later. They didn't see that having been on the OpenAI board was an honor and an enormous career boost. Or maybe their ambition was so great that nothing mattered but controlling OpenAI.

Further, they thought that if they slandered Sam, he would be cowed and they would retain their power. I wonder how many times they have pulled this stunt in the past and it worked?