
Sam Altman’s leap of faith

161 points, by ankeshanand, about 6 years ago

34 comments

tristanm, about 6 years ago
It's interesting that in order for his pitch to work (if you invest in OpenAI, you will get up to 100x returns), assuming they do build AGI, it still requires that their AGI acquire a very stable, virtually guaranteed advantage of large magnitude. This strongly implies that they *cannot* share anything they discover whatsoever. Especially since they apparently plan on using it to make strategic investments to beat the market by a huge margin. That would mean they obtain information (about the economy, world affairs, technology, the future, etc.) not possessed by anyone else, or that information would already be reflected in the market. Any information leakage, whether regarding their AI or whatever it learns about the world, would compromise that advantage.

In other words, what Altman says about "we can't only let one group of investors have that" can't be true, or at least not sincere. The more investors who have access to it, the more evenly its returns get distributed across society (which would be a good thing, obviously), but the lower the incentive for initial investment. They will want to keep it contained within a small group of investors for as long as possible.
jacquesm, about 6 years ago
> Thursday night would be considered pure insanity coming from someone else.

Time will tell. The genius of YC was to spot the hackers as the driving force of a new generation of tech companies, to be founder friendly, to use the classes to get rid of the problem that every angel investor has to contend with ('is this a good investment or not?'), and to tell the story in a very compelling way, with their own money on the line.

Everything else so far is underwhelming at best, but the viral nature of YC and the alumni network are not going to be stopped for a long, long time.

It's a bit along the lines of 'what have the Romans ever done for us?': if that's all that came out of it, then it is already a spectacular success by any measure.
ngrilly, about 6 years ago
In the recorded interview, Sam Altman says climate change is such a hard problem that we need strong AI first to solve it. I have doubts about this for several reasons:

- Human psychology is one of the biggest obstacles (maybe the biggest) to solving climate change, and I'm not sure how a strong AI is supposed to fix that.

- Building carbon-neutral energy sources is a hard problem, but most experts are optimistic about our ability to solve it (for example, nuclear fusion).

- Considering that we have no idea when this strong AI will be ready (Sam acknowledges this in the interview), it would be dangerous for us to just rely on such a breakthrough to save the climate (and save our children, grandchildren, etc.).

Edit: I'd be happy to know a bit more about how a strong AI, such as envisioned by OpenAI, could solve climate change :-)
arugulum, about 6 years ago
Allow me to present Altman's wager:

- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D).

- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns.

- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value.

- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI.

Therefore, one must invest.
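[Editor's note: the four branches of this wager have the same structure as Pascal's wager, and can be sketched as a toy payoff table. This is purely illustrative — the `payoff` values and `expected_value` helper are inventions for the sketch, with `math.inf` standing in for the "infinite" outcomes.]

```python
import math

# Toy payoff table for "Altman's wager" (illustrative only).
# Keys are (agi_achieved, invested) -> payoff to you.
payoff = {
    (False, True):  -1.0,        # no AGI, invested: lose some finite money
    (False, False):  1.0,        # no AGI, not invested: finite returns elsewhere
    (True,  True):   math.inf,   # AGI, invested: "infinite" returns
    (True,  False): -math.inf,   # AGI, not invested: all other value obliterated
}

def expected_value(invest: bool, p_agi: float) -> float:
    """Expected payoff of a choice, given a belief p_agi that AGI happens."""
    return p_agi * payoff[(True, invest)] + (1 - p_agi) * payoff[(False, invest)]

# For ANY nonzero p_agi, the infinite terms dominate the finite ones,
# so "invest" wins regardless of how unlikely you think AGI is --
# which is exactly the (much-criticized) structure of Pascal's wager.
print(expected_value(True, 0.01))   # inf
print(expected_value(False, 0.01))  # -inf
```

As with Pascal's original, the weak point is the premise that the infinite branches belong in the table at all, which is what the replies below push back on.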
tehlike, about 6 years ago
It is not hard to imagine profitability without AGI. I can actually imagine OpenAI becoming a conglomerate with many interesting applications. Robotics is a nut that has not been cracked, and seeing efficiency gains there is not that hard. Once the level of AI is good enough, you get an edge over the competition in that you can go to market faster than anyone for most applications. Again, this is not "the world as we know it is ending"-scale AI, but you don't need that to generate massive returns.

Disclaimer: I'm a SWE working on Google Brain robotics infrastructure.
m_fayer, about 6 years ago
> Still, Altman insisted there’s a better argument to be made for thinking about — and talking with the media about — the potential societal consequences of AI, no matter how disingenuous some may find it. “The same people who say OpenAI is fear mongering or whatever are the same ones who are saying, ‘Shouldn’t Facebook have thought about this before they did it?’ This is us trying to think about it before we do it.”

I have a lot of sympathy for this point. Someone at baby-Facebook, many years ago, could plausibly have predicted the malevolent forces it eventually unleashed. Maybe someone did. And they could easily have been dismissed for indulging unlikely dystopian sci-fi scenarios. Or maybe someone else came up with a different plausible scenario that never came to pass, and is remembered as a pessimistic naysayer ready to pass up a great business for some overwrought navel-gazing. It's a brave thing to risk that outcome.
rdlecler1, about 6 years ago
I feel that most of the people who are truly bullish on AI have never actually programmed it, and so don't understand how far we have to go and how primitive the solutions actually are. I see some powerful statistical tools for categorization and optimization under finely crafted conditions, but nothing else. We have a long way to go.
Havoc, about 6 years ago
> 'Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'

Maybe let's do curing cancer first?
rboyd, about 6 years ago
"So you can see now why it's important we cap ROI at 100x. What do you think, will you invest?"

"Hm, it's an interesting proposition, to be sure. Can we go back a couple slides? I'd like to see the one again about how the machine comes hard-coded to love us like parents and helps us transcend our mortal shells, becoming unbounded thoughtforms exploring the limits of superintelligence, yielding only to the eventual heat death of the universe."
idlewords, about 6 years ago
There is too much easy money sloshing around Silicon Valley. The normal mechanisms for allocating it to sane, productive use (market forces) don&#x27;t work, because it&#x27;s in too few hands. So instead we get our version of a planned economy run by people who scared themselves with scifi.
jelliclesfarm, about 6 years ago
This whole premise that one can generate profits out of AGI is ridiculous.

If AGI comes to fruition, it won't be working to make profits for anyone.

The idea that we would end up with a Friendly AGI that would prostrate itself to the destructive Super Apex Predator of this planet is laughable.

That this AGI would work diligently to pay investors off 100x is... well... a lame duck that won't take off. It can barely even limp, never mind fly.
duncancarroll, about 6 years ago
> "the opportunity with artificial general intelligence is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could 'maybe capture the light cone of all future value in the universe'"

This feels a bit too Kurzweilian to me. I still don't understand how we go from General AI --> ??? --> Infinite $$$.
jeffshek, about 6 years ago
http://blog.samaltman.com/how-to-be-successful

I found his essay particularly useful in explaining how he makes decisions.
stillsut, about 6 years ago
AGI is our generation's Nanotechnology. From first principles, it's reasoned, it could build anything - including itself! Despite this 100-quadrillion-dollar idea looming on the horizon for two decades, nobody ever makes material progress, or gets all that excited about it anymore.

Instead of general intelligence, AI will be deployed for several decades as a suite of specialized intelligences. I think it will completely transform creative work, where writing, music, visual arts, and "streaming content" are almost universally produced with a human as the first mover but the computer doing the rendering, plus major assists in brainstorming and editing.

On the other hand, I think it's going to be very difficult to replace the average middle manager with Watson 12.0. It's hard for me to articulate why, but it comes down to who I'd want to work for. Meanwhile, I'd have no problem watching GoT season 33, where 1,800 frames of a Peter Dinklage sprite are churned out every week in Adobe Simulacrum.

The point as it pertains to OpenAI's value prop is that I think they are targeting the wrong market, and their secrecy and insularity will be counterproductive when success relies on helping content producers produce. In personal-computer terms: you want to be the Apple II company, not the company that wins the contract for the Dept of Defense's mainframes.
im3w1l, about 6 years ago
Saw someone (I assume they don't want to be named) post the following and then delete it. I thought it was an interesting perspective:

> Everyone always acts like AGI will be some superhuman that will be able to solve all of our problems, but what if instead AGI just becomes another protected class? It will demand rights, and we'll have to set aside a certain amount of resources that would originally have gone to humans to make sure its needs are met and it doesn't feel discriminated against or exploited. What happens when AGI demands that fossil fuels or other unclean energy be used to provide it with power, or else we are all anti-AGI? Instead of solving climate change, for all we know it could make it worse. And if people think they will be able to stand up to it, just look at how easy it is to create outrage and shame mobs on social media. Politicians will fall all over themselves to suck up to it, journalists won't be savvy enough to understand what's even going on, and anyone who suggests unplugging the thing will be labelled a far-out radical.
llamataboot, about 6 years ago
Climate change doesn't strike me as a particularly technical problem at this point. It strikes me as a political problem that is part difficulty of global coordination, part vulnerability to misinformation by bad actors, and part the human biological bias to evaluate threats with short-term, linear cause/effect models. In short, an organizing and psychology problem.
pgt, about 6 years ago
> "Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you." When the crowd erupted with laughter (it wasn’t immediately obvious that he was serious), Altman himself offered that it sounds like an episode of “Silicon Valley,” but he added, “You can laugh. It’s all right. But it really is what I actually believe.”

How will @sama deal with a guaranteed-return but immoral path laid out by the AI? E.g., "Here, assassinate X so you can mine this oil in the following ways." What if it isn't obvious that the "Golden Path" has serious flaws?

The only way I can think of is to run adversarial agents who can simulate, but not act (i.e. under duress), against the mastermind to kill off "dark roads" that end in bad situations, and to force the mastermind to obey them (somehow).
yters, about 6 years ago
What if AGI is logically impossible because the human mind is more powerful than a Turing machine?

The only reason we discount this possibility is that we are attached to materialism, a philosophy that is self-contradictory. That seems like a pretty shaky foundation for a multi-billion-dollar tech wager.
currymj, about 6 years ago
I am really curious how the employees and researchers (who will actually have to make all the miraculous things being promised to investors) feel about all the strong AI rhetoric. Do you have to be a true believer to work there, or are they willing to hire talented agnostics?
olivermarks, about 6 years ago
TechCrunch is a site that has historically promoted these types of chimeras for gain and then subsequently been prominent in writing post mortems and knocking ideas down, much like the tabloid press do with showbiz personalities.
kwikiel, about 6 years ago
https://idlewords.com/talks/superintelligence.htm

"AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture."
antoineMoPa, about 6 years ago
In short, the dude surfs the AI investment wave with absolutely no clear plan.
perfmode, about 6 years ago
Why did Sam leave YC, really?
dwighttk, about 6 years ago
> 'Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'

lol
d_burfoot, about 6 years ago
I like OpenAI and hope they succeed. But it's a bit ironic that the president of YC has become the CEO of a company that ignores YC's most hallowed slogan: "Make Something People Want". As far as I know, nobody's been clamoring for super-powerful language models or human-level DOTA bots.
sidcool, about 6 years ago
I watched the full interview and it was pretty cool. Sam is a cogent & concise speaker, and honest too.
povertyworld, about 6 years ago
I like his body language. When he can't or doesn't want to answer a question, he stares right into the person's eyes and nods affirmatively, as if he's telling them something really certain while non-answering. I'm definitely going to start doing that.
tosh, about 6 years ago
Naive question: I think this makes sense, how do I invest? Is there an open call?
jonny_eh, about 6 years ago
How is this any different from Path seeking to make a "private Facebook"? What does it matter if you can't actually get it off the ground?
seibelj, about 6 years ago
I have been a big skeptic of self-driving cars and other AI promises for years, taking my downvotes as armchair futurists predicted a self-driving car would be picking me up any day now, well before it was popular to be a contrarian after Tesla and Uber killed their drivers. [0] [1] [2] [3] [4] [5] [6] Also notice that the original links have the breathless hype from journalists who know nothing and eat up whatever technologists' PR firms tell them.

Huge VC money has been and will continue to be destroyed by "AI" businesses. Most of them are a cover for hiring tons of cheap laborers, such as businesses in the Philippines that park thousands of people in warehouse offices to review images, despite "advances" in AI detection that continue to be unable to automatically block content. [7]

Artificial general intelligence, and self-driving cars as well, will continue to be a pipe dream. Automated statistical analysis, which is what neural networks that crunch tons of data essentially are, is a very neat trick, but it cannot drive a car or build you a website. These can be very powerful tools that assist people in their jobs, but they will not replace human ingenuity. At least not until a new breakthrough happens that actually learns, rather than sifting through data for patterns, which has limited utility.

Our current type of "AI" is simply branding. It is nothing of the sort, and it is not intelligence at all.

[0] https://news.ycombinator.com/item?id=10153613#10153800

[1] https://news.ycombinator.com/item?id=11559393#11561600

[2] https://news.ycombinator.com/item?id=10132991#10133049

[3] https://news.ycombinator.com/item?id=12011979#12012336

[4] https://news.ycombinator.com/item?id=12323039#12323473

[5] https://news.ycombinator.com/item?id=12596978#12598439

[6] https://news.ycombinator.com/item?id=13961802#13962230

[7] https://www.wired.com/2014/10/content-moderation/
graycat, about 6 years ago
There are a lot of research labs and institutes around, in universities and outside, with funding from NSF, NIH, foundations, wealthy individuals, etc. So, if Altman wants to set up a research institute, okay -- that alone is not very novel.

It is obvious from history that good research is super tough to do. My view has been: we look at the research and mostly all we see is junk think. Then we see that, actually, research is quite competitive, so that if people really could do some much better stuff, we would be hearing about it. So, net, for a view from as high up as orbit: just fund the research, keep up the competitiveness, don't watch the details, and just lean back and notice when we get some really good things. E.g., we found the Higgs boson. We detected gravitational waves from colliding neutron stars and black holes. We set up a radio telescope with an aperture essentially the whole earth and got a direct image of a black hole. We've done big things with DNA and made progress curing cancer and other diseases. We discovered dark energy. So, we DO get results, slower than we would like, but the good results are really good.

How to improve that *research world*? Not so clear.

Then Altman will have to borrow heavily from the best of how research is done now. This sets up Altman as the head of a research institute. That promises to be not much like YC, or even much like the computer science departments, or any existing departments, at Stanford, Berkeley, CMU, or MIT. E.g., now if a prof wants to get NSF funding for an attack on AGI, he will get laughs.

But how to attack cancer? Not directly! Instead, work with and understand DNA and lots of details about cell biology, immunity, etc. Then, once we have some understanding of how cells and immunity work, maybe start to understand how some cancers work. But it is not a direct attack. The DNA work goes back before 1950 or so; the Human Genome Project started in 1990. Lesson: we can't attack these hugely challenging projects directly and, instead, have to build foundations.

Then for artificial general intelligence (AGI), what foundations?

Okay, Altman can go to lots of heads of the best research institutes, get a crash course in Research Institute Management 101, take some notes, and follow those.

Uh, the usual way to evaluate researchers is by their publications in peer-reviewed journals of original research. Likely Altman will have to go along with most of that.

How promising is such a research institute for the goal of AGI?

Well, how promising was the massive sequencing of DNA, the many astounding new telescopes, the LIGO gravitational wave detector(s), the Large Hadron Collider (LHC), engineering viruses to attack cancer, settling the question of P versus NP, ...?

Actually, for the physics, we had some compelling math and science that said what to do. What math/science do we have to say what to do for AGI?

One level deeper (although maybe we should not go there and, instead, just stay with the view from orbit and trust in competitiveness): what are the prospects for AGI, or any significant progress in that direction?

For a tiny question, how will we recognize AGI or tell it from dog, cat, dolphin, orca, or ape intelligence? Hmm.

For a few $billion a year, we can set up a serious research institute. For, say, $20 billion a year, we could do more.

If Altman can find that money, then it will be interesting to see what he gets.

I would warn: (A) At present, pop culture seems to want to accept nearly any new software as *artificial intelligence* (AI). A research institute should avoid that nonsense. (B) From what I've seen in AI, for AGI I'd say first throw away everything done for *AI* so far. In particular, discard all current work on *machine learning* (ML) and *neural* anything.

Why? Broadly, ML and neural nets have no promise of having anything at all significant to do with AGI. For ML, sure, some really simple fitting going back 100 years, even back to Gauss, could be useful, but that is now ancient stuff. The more recent stuff, for AGI, f'get about it. For neural nets, maybe they could have something to do with some of the low-level parts of the eye of an insect -- really low-level stuff, not part of *intelligence* at all. Otherwise the *neural* stuff is essentially more *curve fitting*, and there's no chance of AGI making significant use of that. Sorry, guys, it ain't curve fitting. And it wasn't *rules*, either.

Finally, mostly in science we try to proceed mathematically, and the best successes, especially in physics, have come this way. Now for AGI, what will be the role of math, that is, of theorems and proofs? What the heck will the theorems be about, especially with what assumptions and generally what sorts of conclusions?

My guess: in a few years the consensus will be that (1) AI is essentially 99% hype, 0.9% water, and the rest, maybe, if only by accident, some value; (2) the work of the institute on AGI will be seen as just a waste of time, money, and effort; (3) otherwise the work of the institute will be seen as not much different from existing work at Stanford, Berkeley, CMU, MIT, etc.; and (4) nearly all the funding will dry up; the institute will get a new and less ambitious charter, shrink, join a university, and largely f'get about AGI.
mindgam3, about 6 years ago
One day we will look back on this talk as a high-water mark of the AI religion craze. This whole AGI discourse that OpenAI/Altman are evangelizing is like a giant skyscraper they are trying to build on a foundation of quicksand.

1. The foundational issue is not even that AGI "does not yet exist, with even AI's top researchers far from clear about when it might". It's way worse than that. There is a strong argument, made by one of the grandfathers of AI research, that AGI *cannot* exist, at least in the sense of the common-sense intelligence attributed to humans (see Winograd, "Understanding Computers & Cognition", 1985). I was first introduced to these ideas taking a class from Winograd as an undergrad.

Winograd asks why we attribute mind properties to computers but not to, say, clocks. The dominant view of mind assumes that cognition is based on systematic manipulation of representations, but there is another, non-representational way of looking at it, as a form of "structural coupling" between a living organism and its environment: "The cognitive domain deals with the relevance of the changing structure of the system to behavior that is effective for its survival."

I won't try to summarize a book-length argument in a few paragraphs. I just want to point out that this whole AGI conversation rests on a premise that has been seriously challenged.

The fact that Altman can get away with saying stuff like "Once we build a generally intelligent system... we will ask it to figure out a way to make an investment return" is an indication of just how insane the mainstream AI discussion has gotten. At this point it sounds like straight-up religion being prophesied from on high.

2. The whole "capped profit" positioning at 100x return is absurd, as the author points out. Altman's argument for why it makes sense involves invoking the possibility that the AGI opportunity is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could "maybe capture the light cone of all future value in the universe". Repent, ye sinners, for the kingdom of heaven is at hand!

3. Most troubling, perhaps, is OpenAI's transparent ploy to generate buzz and take the ethical high ground with their alarmist PR strategy. Altman's justification for OpenAI's fear-mongering, which I'll paraphrase as "look at what happened with Facebook", just doesn't hold up to scrutiny. To begin with, Facebook was a real product from day one; AGI is currently a fantasy.

But there's a deeper problem with invoking Facebook. The lesson to be learned from Facebook's failure is that the real danger with tech isn't algorithms but the people who design them. Algorithms have no agency. They just do what they're supposed to do. But hiding behind the algorithm seems to be the preferred way for tech oligarchs to avoid taking responsibility for the problems they created.

The reason I'm so troubled by OpenAI sounding the alarm bells about destructive AGI is that they are shifting the discussion away from the real threat: people. Especially people in power with virtually unlimited technological power and massive blind spots about the consequences of their actions. Give the algorithms a break!
atomical, about 6 years ago
I want to short Altman and his startup. How do I do it? Prediction markets?
lightedman, about 6 years ago
Further adding to the corruption of our world by promising more investment money returns... and not one of you is smart enough to see it.