
Open Philanthropy Project awards a grant of $30M to OpenAI

236 points, by MayDaniel, about 8 years ago

11 comments

hawkice, about 8 years ago

There's lots of concern about the bizarre relationship disclosure. But perhaps even more bizarre is that this deal has a structure closer to a strategic move than actual philanthropy. Am I massively misreading this?

This page details how their main goal with the $30M isn't to increase OpenAI's pledged funds by 3%, thereby reducing the marginal "AI risk" by less than 3%. The goal is to have a seat on the board (basically -- they use a lot more words to say this in the announcement). What on earth is going on where a charitable organization with Open in its name feels it needs to buy its way onto the board of a prominent non-profit in order to:

"Improve our understanding of the field of AI research"

"[get] opportunities to become closely involved with any of the small number of existing organizations in 'industry'"

and "Better position us to generally promote the ideas and goals that we prioritize"

Isn't the whole point of "open philanthropy" that you can direct funds to organizations that are more open about what's going on?!
idlewords, about 8 years ago

Scroll to the end. This is a $30M grant to the guy's roommate and future brother-in-law.

Unbelievable.
vpontis, about 8 years ago

That's awesome! Open Philanthropy reminds me of https://80000hours.org/.

From their relationship disclosure:

> OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario's sister Daniela.

This is so tangled. I don't mean it as a criticism; I'm sure a lot of SV investments would have much longer Relationship Disclosure sections. So props to them for including this.
dilemma, about 8 years ago

Two organizations that exploit the connotations of the word "Open", as it is used in the world of technology, to market their own private companies and organizations.
jonmc12, about 8 years ago
When OpenAI was announced, they mentioned having $1B in funding. Why the additional $30M?
frik, about 8 years ago

Can someone explain why both orgs contain the word "Open"? I would say it's pretty misleading.

OpenAI hasn't released any open code or anything open.

And is OpenAI even about A.I.? (As several others here have mentioned, it's not AI.)
itchyjunk, about 8 years ago

If some of the comments I read on other AI-related articles here on HN are correct:

$1M / year per expert × 10 experts = $10M / year, i.e. $30M over 3 years.

Maybe $30M isn't as much as we think it is in the AI business?
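The back-of-envelope estimate above can be sketched out explicitly. Note the per-expert salary and headcount figures are the commenter's assumptions, not confirmed numbers:

```python
# Back-of-envelope check of the staffing-cost estimate above.
# Assumed inputs (from the comment, not verified figures):
#   $1M fully-loaded cost per expert per year, 10 experts, a 3-year horizon.
cost_per_expert_per_year = 1_000_000
num_experts = 10
years = 3

annual_burn = cost_per_expert_per_year * num_experts   # $10M per year
total = annual_burn * years                            # $30M over 3 years

print(f"annual burn: ${annual_burn:,}")  # → annual burn: $10,000,000
print(f"3-year total: ${total:,}")       # → 3-year total: $30,000,000
```

On those assumptions, the grant covers roughly a ten-person research team for three years, which is the commenter's point about its modest scale.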
t3io, about 8 years ago

Assume for a minute that AGI is being developed, and that in no way, shape, or form does it function (or is it formed) in the manner that mainstream AI efforts focus on.

That hypothetical could very well be the reality on the horizon.

What of safety/control research that has fundamentally nothing to do with such a system, or even with the philosophy that the broad majority of these institutions and ventures are centered on? What of deep-learning-centric methodologies that are incompatible?

Safety/control software and systems development isn't a research topic. It's an engineering practice best suited to well-qualified, practiced engineers who design the safety-critical systems present all around you.

Safety/control engineering isn't a 'lab experiment'. If one were aiming to secure, control, and ensure the safety of a system, one would likely hire a grey-bearded team of engineers with proven careers doing exactly that. A particular system's design can be imparted to well-qualified engineers. This happens every day.

Without a system design, or even a systems philosophy, these efforts are just intellectual shots in the dark. Furthermore, has anyone even stopped to consider that these problems might get worked out naturally during the development of such a technology?

Modern-day AI algorithms and solutions center on mathematical optimization. AGI centers on far deeper and more elusive constructs. One can ignore this all-too-clear truth all one likes.

So: if one's real concern is the development of AGI and understanding therein, I think it's high time to admit that it might not come from the race horses everybody's betting on. As such, it is much more worth one's penny to start funding a diverse range of people and groups pursuing it who have sound ideas and solid approaches.

This advice can continue to be ignored, as it currently is and has been for a number of years. It can persist across rather narrow hiring practices.

The closed/open door will or won't swing both ways.
WillyOnWheels, about 8 years ago

Reading too much Bloomberg news makes me think AI research will only be used to classify ads and make securities-trading algorithms more efficient.

I would love to be proven wrong, though.
seagreen, about 8 years ago

> When OpenAI launched, it characterized the nature of the risks - and the most appropriate strategies for reducing them - in a way that we disagreed with. In particular, it emphasized the importance of distributing AI broadly; our current view is that this may turn out to be a promising strategy for reducing potential risks, but that the opposite may also turn out to be true (for example, if it ends up being important for institutions to keep some major breakthroughs secure to prevent misuse and/or to prevent accidents). Since then, OpenAI has put out more recent content consistent with the latter view, and we are no longer aware of any clear disagreements.

Really, really happy to see this being carefully considered. Good job to the Open Philanthropy folks!

EDIT: That Slate Star link is amazing: "Both sides here keep talking about who is going to 'use' the superhuman intelligence a billion times more powerful than humanity, as if it were a microwave or something."
mankash666, about 8 years ago

I think there are more important causes than "reducing potential risks from advanced AI". Honest to god, $30M would go a long way toward saving lives TODAY. Flint, MI, anyone?