
Ask HN: Why does no one seem to care that AI gives wrong answers?

70 points by arduinomancer 10 months ago
If you had a piece of code or software that sometimes produced totally wrong output, we would consider that a bug.

Yet it seems like with AI, all the investors/founders/PMs don't really care and just ship a broken product anyway.

I feel like I'm going crazy seeing all the AI stuff shipped in products that gives straight-up wrong outputs.

It's like a big collective delusion where we just ignore it, or hand-wave that it'll magically get fixed eventually.

50 comments

0x00_NULL 10 months ago
My graduate research was in this area. My lab group developed swarm robots for various terrestrial and space exploration tasks. I spent a lot of time probing why our swarm robots developed pathological behavioral breakdowns - running away from construction projects, burying each other, and so on. The issue was so fundamental to our machine learning methods that we never found a way to reliably address it - by the time I left, anyway. No matter how we reconfigured the neural networks, trained, punished, deprived, forced forgetting, or fine-tuned, nothing eliminated the catastrophic behavioral edge cases - nothing except dramatically simplifying the neural networks.

Once I started seeing these behaviors in our robots, they became much more apparent every time I dug deeply into proposed ML systems: autonomous vehicles, robotic assistants, chatbots, and LLMs.

As I've had time to reflect on our challenges, I think that neural networks tend to overfit very quickly, and deep neural networks are incomparably overfitted. That condition makes them sensitive to hidden attractors that cause the system to break down - catastrophically - when it is near these areas.

How do we define "near"? That would have to be determined using some topological method. But these systems are so complicated that we can't analyze their networks' topology or even brute-force probe their activations. Further, the larger, deeper, and more highly connected the network, the harder these hidden attractors are to find.

I was bothered by this topic a decade ago, and nothing I have seen today has alleviated my concern. We are building larger, deeper, and more connected networks on the premise that we'll eventually reach a state so unimaginably overfitted that it becomes stable again. I am unnerved by this idea, and by the amount of money flowing in that direction with reckless abandon.
chankstein38 10 months ago
Personally, I and the people I've spoken with use LLMs less and less because of how often they're wrong. The other day I asked ChatGPT about a specific built-in method in Java, and it told me the method couldn't do one specific thing. I was already using it in that context, so I pushed back, and it said "Oh yeah, you're right, sorry."

I feel like I can't trust anything it says. Mostly I use it to parse things I don't understand, and then do my own verification that it's correct.

All that to say, from my perspective, they're losing some small amount of ground. The other side is that the big corporations that run them don't want their golden geese to be cooked, so they keep pushing them and shoving them into everything unnecessarily, and we just have to eat it.

So I think it's a perception thing. The corporations want us to think it's super useful so it continues to give them record profits, while the rest of us slowly wake up to how useless these tools are if they confidently give incorrect answers, and move away from them.

So you may just be seeing sleazy marketing at work here.
yaj54 10 months ago
LLM would be better nomenclature than AI in this context.

LLMs are not factual databases. They are not trained to retrieve or produce factual statements.

LLMs give you the most likely word after some prior words. They are *incredibly* accurate at estimating the probabilities of the next word.

It is a weird accident that you can use auto-regressive next-word prediction to make a chatbot. It's even weirder that you can ask the chatbot questions, give it requests, and have it appear to produce coherent answers and responses.

LLMs are best thought of as language generators (or "writers"), not as repositories of knowledge and facts.

LLM chatbots were a happy and fascinating (and for some, very helpful) accident. But they were not designed to be "factually correct"; they were designed to predict words.

People don't care about (or are willing to accept) the "wrong answers" because there are enough use cases for "writing" that don't require factual accuracy (see, for instance, the entire genre of fiction writing).

I would argue that it is precisely LLMs' ability to escape the strict accuracy requirements of the rest of CS and just write/hallucinate some fiction that makes this tech fascinating and uniquely novel.
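
That next-word loop is small enough to sketch. A minimal illustration, assuming the Hugging Face transformers library with GPT-2 as a stand-in for any autoregressive model:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 is used here purely as a small, public stand-in for a modern LLM.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(ids).logits[0, -1]      # a score for every candidate next token
        probs = torch.softmax(logits, dim=-1)  # scores -> probability distribution
        next_id = torch.multinomial(probs, 1)  # sample one token; no step checks facts
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)
    print(tok.decode(ids[0]))

Nothing in the loop consults a knowledge base; "Paris" comes out only because it is the statistically likely continuation.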
rossdavidh 10 months ago
If you dealt with a person who knew all the vocabulary of a field, could make well-constructed sentences about that field, and sounded confident, it would almost always mean they had spent a lot of time studying it. That in turn tends to mean that, although they may occasionally make a mistake, they will usually be correct. People apply the same intuition to LLMs, and because an LLM is not a person (and is not intelligent), this intuition is way off.

There is, additionally, the fact that there is no easy (or even medium-difficult) way to fix this aspect of LLMs, which means the choices are either: 1) ship it now anyway and hope people pay for it regardless, or 2) admit that this is a niche product, useful in certain situations but not most.

Option 1 means you get a lot of money (at least for a little while). Option 2 doesn't.
taylodl 10 months ago
I haven't found a human that answers every single question correctly, either. You know whom to ask a question based on that person's domain of expertise. Well, AI's domain of expertise is *everything* (supposedly).

What gets difficult is evaluating the response, but let's not pretend that's any easier when interacting with a human. Experts give wrong answers all the time; it's generally other experts who point out the wrong answers provided by one of their peers.

My solution? Query multiple LLMs. I'd like to have three so I can establish a quorum on an answer, but I only have two. If they agree, then I'm reasonably confident the answer is correct. If they don't agree - well, that's where some digging is required.

To your point, nobody expects these systems to be infallible, because I think we intuitively understand that nothing knows everything. I wouldn't be surprised if someone wrote a paper on this very topic.
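
The quorum idea is simple to mechanize. A sketch, where each "client" is just a callable returning a model's answer as a string (ask_gpt and ask_claude below are hypothetical wrappers for whatever APIs you actually have):

    from collections import Counter

    def quorum(question, clients):
        # Ask several LLM clients the same question; report any majority answer.
        answers = [c(question).strip().lower() for c in clients]
        best, votes = Counter(answers).most_common(1)[0]
        return best if votes >= 2 else None  # None = no quorum, dig manually

    # answer = quorum("In what year was the transistor invented?",
    #                 [ask_gpt, ask_claude])

Exact string matching is the weak point: two models rarely phrase the same fact identically, so in practice you would normalize answers (or compare embeddings) before counting votes.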
threeseed 10 months ago
> investors/founders/PMs don't really care

Garry Tan from YC is a great example of this.

It's not that he doesn't care. It's that he believes the *next model* will be the one that fixes it, and that companies that jump on board now can simply update their model and be in prime position. Similar to how Tesla FSD is always two weeks away from perfection, and when it happens they will dominate the market.

And because companies are experimenting with how to apply AI, these startups are making money, so investors jump in on the optimism.

The problem is that for many use cases - e.g. AI agents, assistants, search, process automation - users very much do care about accuracy, and they are starting to run out of patience with the empty promises. So there is a reckoning coming for AI in the next year or two, and it will be brutal, especially in this fundraising environment.
f0e4c2f7 10 months ago
When I was a kid there was this new thing that came out called Wikipedia. I couldn't convince anyone it was useful, though, because they pointed out it was sometimes wrong. Eventually they came around.

AI is like that right now. It's only right sometimes, and you need to use judgment. Still useful, though.
rfjimen 10 months ago
Your point is valid only if you believe LLM/generative AI is deterministic; it is not. It is inference-based, and thus it can provide different answers even given the same input.

The question then becomes: "How wrong can it be and still be useful?" This depends on the use case. Wrongness is much harder to tolerate for applications that require highly deterministic output, and less important for those that do not. So yes, it does produce wrong outputs, but it depends on what the output is and your tolerance for variation. In a question-and-answer context where there is only one right answer, it may seem wrong, but it could also provide the right answer in three different ways. Therefore, understanding your tolerance for variation matters most, in my humble opinion.
minimaxir 10 months ago
Lots of people care.

From a coding perspective, proper technical systems already have checks and balances (e.g. test cases) to catch bad code, and those are important to have regardless of generative AI usage.

From a creative/informational perspective, there are stories every day about hallucinations, and the tech companies are correctly dunked on because of them. That's more a product management error than an AI error.

AI hallucination isn't a showstopper issue; it just has to be worked around.
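
A toy version of those checks and balances, assuming pytest; the function name and test values are invented for illustration. The test neither knows nor cares whether a human or a model wrote slugify, and fails either way if the code is wrong:

    import re

    def slugify(title: str) -> str:
        # Pretend this body came straight from an LLM suggestion.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces  ") == "spaces"
        assert slugify("Already-Clean") == "already-clean"

The same gate that catches a tired colleague's bug catches a confident model's hallucinated code.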
ungreased0675 10 months ago
I'm also curious about this. This morning I needed to generate summaries of news articles, and I noticed Bing AI was inserting facts that weren't present in the source article (and weren't correct at all). It really hurts the potential of what AI could do if I have to double-check everything it generates. We wouldn't accept a spreadsheet program that required double-checking with a calculator, so why do LLMs get a pass?
chrisjj 10 months ago
The reason they don't care is that the typical user doesn't notice. He asks the bot questions to which he does not know the answer, leaving him unable to detect when the bot's answer is wrong.
nunez 10 months ago
My completely baseless theory is that there is an unbelievable amount of astroturfing happening in defense of this technology, at a rate never seen before, because of how badly people with capital want this to work and how "close" the tech is to achieving that.

If that is correct, then it's less that "people don't care" and more that "the hype is louder than them."

That said: I, too, am completely perplexed by people within the tech community using LLMs heavily to make software while, unironically, saying that they have to keep an eye on the output since it might be incorrect.
bpp 10 months ago
I work in AI product engineering for a larger company. The honest answer is that with good RAG and few-shot prompting, we can treat actual incorrect output as a serious and reproducible bug. This means that when we call LLMs in production, we see about the same wrong-answer rate as we do for any other kind of product engineering bug.
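
For readers unfamiliar with the jargon, here is a rough sketch of what "RAG plus few-shot prompting" looks like; retrieve_passages is a placeholder for whatever retrieval stack you run, and the example Q&A pairs are invented:

    FEW_SHOT = """\
    Q: What is our refund window?
    Context: "Refunds are accepted within 30 days of purchase."
    A: 30 days.

    Q: Do we ship to Mars?
    Context: (no relevant passage found)
    A: I don't know based on the provided documents.
    """

    def build_prompt(question, retrieve_passages):
        passages = retrieve_passages(question, k=3)  # hypothetical retriever
        context = "\n".join(f'- "{p}"' for p in passages) or "(no relevant passage found)"
        return FEW_SHOT + f"\nQ: {question}\nContext:\n{context}\nA:"

The few-shot examples pin down the behavior that matters: answer from the retrieved context, and say "I don't know" rather than invent something. Wrong answers then become reproducible against a fixed document set, which is what makes them bug-reportable.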
nashashmi 10 months ago
Think of it as a system returning complex results after a search. Think of it as a synthetic search result. Think of it as a result where you still have to evaluate the source for reliability. Think of it as a junior engineer making mistakes. Think of it as a reason why you will have job security for complex tasks, because the easy tasks can be done by dumber project managers. Think of it like you are now a senior engineer with a junior engineer doing the mundane stuff.

Do the mundane stuff in school/college/boot camp. Do the cool stuff at work.
geor9e 10 months ago
In 1942, atom bombs didn't work at all. Does that mean nobody on the Manhattan Project cared? When I hear that nobody on earth has achieved something, yet a massive bubble of people are feverishly working on it, my conclusion is that a lot of people care a lot. I assume from how your post is phrased that you're using the term AI to mean LLM. You noticed that 100% of LLMs - every last one, in research labs and commercial businesses - hallucinate and give wrong answers when prompted for something very different from their training data. Thousands are this way; none at all exist that are the other way. A very peculiar property of an entire technology. But your conclusion wasn't that this is an inherent property of LLMs (a statistics machine), or that we need to move beyond LLMs to achieve AGI, or that LLMs have a lot of powerful uses when kept inside the narrow scope of their training. Your conclusion was that, across the entire earth, without exception, "investors/founders/PMs don't really care". I'm sorry, I am not following - perhaps if you elaborate on each logical step you took to get to that conclusion, we can shed more light on what you're missing.
ratherbefuddled 10 months ago
So far this year the following companies have asked me for money for their new "AI" features:

    - Slack
    - GitHub
    - Microsoft
    - Google
    - Atlassian
    - Notion
    - ClickUp
    - HubSpot

So ask yourself: who benefits from the hype? And who would benefit from a better general understanding of the flaws?
jrflowers 10 months ago
Hype. Every so many years some sort of semi-novel software gets invented or improved, and some dude puts 100mg of 2C-B in his Huel shake and realizes that he's invented the godhead. This dude invariably has buddies who do VC.

It's the same reason we heard about blockchain for years despite it having near-zero practical uses.
QuantumGood 10 months ago
Isn't this just about unrealistic expectations? Lots of people derive lots of value from AI, but it isn't good for everything and can't be trusted on its own for many things.
b20000 10 months ago
Because everyone incorrectly assumes there is intelligence at work, and because people don't want to critically evaluate answers, since that takes a lot of time.
great_psy 10 months ago
I have found LLMs to be quite useful, so the time saved is worth the effort of double-checking answers.

Granted, it might have to do with how I use LLMs. If you just blindly ask a question, you increase the chance of hallucinations. If you give a lengthy input, and the output is highly dependent on that input, then you will get better results. Think email rewriting, summarizing, translation.
big-green-man 10 months ago
Think about this: when was the last time SV or the broader tech industry brought a revolutionary innovation to the consumer that improved their lives? The smartphone? Video streaming? I can't think of a single thing since, and both of those were over a decade ago.

The tech industry is an environment composed almost entirely of companies running a loss to prove viability (and they don't see that as ironic) in order to raise more funding from investors. AI is just the latest in a long series of empty hype to keep the gravy train running - last year it was VR - and at this point it looks like the whole thing is teetering on a cliff. It's a bunch of MBAs scrambling for a sales pitch.

LLMs are useful. But "extremely lossy compression of documents with natural-language lookup built in" doesn't sell endless subscriptions as well as "we created a mind." So they sell hype, which of course they cannot live up to, because LLMs aren't minds.
uyzstvqs 10 months ago
Because 99% of the people who have this issue ask a question of just a bare LLM. What you do instead is add RAG, or make it an agent that can retrieve information, and suddenly it's *very* accurate.

LLMs are language models, not magical information models with all the information in the world somehow fit into several gigabytes. Use them right.
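
A toy version of that retrieval step, using TF-IDF from scikit-learn so it stays self-contained (a production system would use a vector database and learned embeddings, but the shape is the same, and the documents here are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    DOCS = [
        "Refunds are accepted within 30 days of purchase.",
        "Support is available Monday through Friday, 9am to 5pm.",
        "All plans include two-factor authentication.",
    ]

    vec = TfidfVectorizer().fit(DOCS)
    doc_matrix = vec.transform(DOCS)

    def retrieve(question, k=2):
        # Return the k documents most similar to the question.
        sims = cosine_similarity(vec.transform([question]), doc_matrix)[0]
        return [DOCS[i] for i in sims.argsort()[::-1][:k]]

    print(retrieve("Are refunds accepted after 30 days?"))
    # The refund sentence ranks first; pasted into the prompt, it lets the
    # model answer from text that actually exists instead of free-associating.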
scottLobster 10 months ago
If people are paying for their product, why should they care?

As for why people are paying for a product that returns incorrect results, it could be any number of reasons:

- People buy into the hype/marketing and actually think the AI thing is going to replace some part of their workflow.

- People want to experiment and see how well it does at replacing part of their workflow.

- Whatever the AI thing does for a customer is not reliant on it being correct, so producing incorrect output simply doesn't matter.

A good example is my company's corporate IT AI bot, which is effectively a very poor search engine for internal corporate wiki self-help articles on IT and HR topics. The actual IT/HR portal has a traditional search that, if you know the terms to search for, does a much better job. So most people ignore the AI, but I'm pretty sure we bought the engine from someone.
kardos 10 months ago
LLMs are basically useless for anything requiring reasoning or for helping solve real problems, but the big use case is that LLMs are competing with Google search. Google search is so polluted with low-quality, ads-first SEO garbage that the may-be-hallucinating LLM is a more effective way to find some sorts of information than manually slogging through the garbage and possibly getting the wrong answer anyway.

I suppose there is also some hope that the hallucination problem will erode as more effort/compute is poured into training. There may need to be a paradigm shift, though; the current structure built around generating tokens based on probabilities seems like it will forever be a "regurgitator".
brigadier132 10 months ago
It can give wrong answers sometimes and still be useful. There are also many tasks where it almost always gives correct answers; text-to-speech with function calling is 100x better now than it was two years ago. And in some spaces, correctness is a fuzzy concept anyway (creative spaces).
runjake 10 months ago
Of course most everyone cares, but the value proposition is high enough that people aren't going to hold off on using it until it is perfect.

Nonetheless, as with autopilot, you don't want to substitute it for paying attention. "Trust, but verify," as Reagan said.
brokenalgo 10 months ago
Probably for the same reason that tech hasn't improved anything in a long time.

Tolerable pizza delivery is ruined. The Internet is a walled wasteland now. There is far too much "content" that doesn't need to exist. Everything is an ad.

None of our lives have been improved by software.
gremgoth 10 months ago
We built a correctness checker for LLM-generated SQL code for the military before LLMs were commercially available; it is going live soon on http://sql.ai. Some people do care about this problem, but it is hard to solve: even for SQL alone, it requires significant computer algebra, automated theorem proving, having to define what "correct" even means, and much else.
BenFranklin100 10 months ago
To paraphrase Upton Sinclair: it is difficult to get a man to care about something when his salary depends on his not caring about it.

A lot of money has poured into AI, potentially well in excess of the return on investment over the next several years. The field, from investors to CEOs on down to developers, is in a state of collective suspension of disbelief. There are going to be a lot of people out of work when reality reasserts itself.
Vanit 10 months ago
The wrongness isn't apparent if you're not already an expert in the domain you're querying, so users trust the answers, especially because they're delivered with an air of confidence (until you challenge them).

Unfortunately that's good enough for a lot of people, especially when you don't actually care and just need an output to hand to someone else (office jobs, etc.).
proc0 10 months ago
There are use cases where it doesn't matter, e.g. creative writing. Additionally, I don't think AI engineers have even figured out the path for LLMs to become hallucination-free and extremely accurate. It's better to ship something that is not perfect (or even not great) now; that way the industry gains experience and the tools slowly but surely get better.
donpark 10 months ago
It's because AI is useful enough despite its current limitations.

Developers work with what we have on the table, not what we may have years later.
bartimus 10 months ago
It depends on how you look at it. The creative process isn't just a piece of code. It usually involves trying, tweaking, testing, and tuning before an optimal solution is reached. In both the real world and software development, achieving a perfect result in a single shot is the exception rather than the rule.
woodruffw 10 months ago
"Show me the incentive, and I'll show you the outcome."

There is a belief, cynical or otherwise, that AI will make (a very small number of) people extraordinarily wealthy. The drive to stuff it into every facet of the digital experience reflects this belief.
mattmaroon 10 months ago
I'll be honest: every day people tell me the magical things all these new AI tools can do, and when I try them I usually find the results useless.

Every AI chatbot I've ever interacted with has been unable to help me. The things I've had them write usually pass the Turing test, but are rarely even close to as good as what I could write myself. (I admit that, having been self-employed for a long time, I can avoid a lot of busywork that many people cannot, so I may be missing lots of great use cases there. I never find myself having to write something that doesn't need to be great and just wanting to get it over with. AI might be great if you do.)

I've been trying to use image/video generation for lots of other things and haven't come close to getting anything usable.

I appreciate certain things (the ability to summarize, great voice-to-text transcription, etc.) but find a lot of it not very useful, and overhyped in its current form.
PaulHoule 10 months ago
(1) Some problems are probabilistic, in theory or in practice. For instance, there could be a sentiment analysis problem where the state of the art was 67% accuracy five years ago, and with an LLM it is easy to get 85% accuracy. 100% accuracy is impossible anyway, because sometimes you really can't figure out how somebody feels.

(2) This is a big topic that could be addressed in different ways, but I'll boil it down to "people are sloppy": many people become uncomfortable with complex problems that have high-stakes answers, and will trade correctness for good vibes.

(3) LLMs are good at seducing people. To take an example: I know that I was born the same day as a famous baseball player, who was in turn born exactly a year before an even more famous cricket player. I tried to get Microsoft's Copilot to recognize this situation, but it struggled, thinking they were born on the same day, or a day apart, rather than a whole year. Once I laid it out explicitly, along with my personal connection, it offered effusive praise and said I must be really happy to be connected to some sports legends like that - which I am. That kind of praise works on people.

(4) A lot of people think that fixing LLMs is going to be easy. For instance, I'll point out that Copilot is completely unable to put items in orders that aren't excessively easy (like US states in reverse alphabetical order), and others will point out that Copilot could just write a Python program that does the sorting.

That's right, and it is part of the answer, but it just puts off the problem. What's really irksome about Copilot's inability to sort is that it doesn't know it can't sort: if you ask it the probability that it will sort a list in the right order, it will tell you the probability is very high. It's not so easy to know what is possible in terms of algorithms either; see

https://en.wikipedia.org/wiki/Collatz_conjecture

as evidence that it's (practically) impossible to completely understand even very simple programs. See the book

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

for interesting meditations on what a chatbot can and can't do. My take is that LLMs as we know them will reach an asymptote and not improve explosively with more investment - but who knows?
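
The "just write a Python program" workaround in (4) is worth making concrete: the model's sampled tokens carry no ordering guarantee, whereas a one-line tool does. A bare-bones sketch of that tool-calling split (the dispatch format here is an invented illustration, not any particular vendor's API):

    def sort_reverse(items):
        # Deterministic tool: the model asks for this instead of sorting itself.
        return sorted(items, reverse=True)

    # 1. The model emits a structured request instead of the sorted list, e.g.
    #    {"tool": "sort_reverse", "args": {"items": [...all 50 state names...]}}
    # 2. The host runs the tool, which cannot get the order wrong:
    states = ["Wyoming", "Alabama", "Ohio", "Texas"]  # abbreviated for the sketch
    print(sort_reverse(states))  # ['Wyoming', 'Texas', 'Ohio', 'Alabama']
    # 3. The result goes back to the model, which now only phrases, not sorts.

This moves correctness into ordinary code, which, unlike the model, also knows exactly what it can and cannot do.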
bitshiftfaced 10 months ago
You'd consider it a bug because you can fix a bug. It doesn't seem like the tech is there yet, but it's still good enough to be useful.
thiht 10 months ago
Honestly, I find LLMs to be a great tool when using them right, and with sufficient skill to know when they're wrong. And for some problems, you don't need a 100%-right answer.

Earlier today I asked ChatGPT to give me a Go script to parse a Go codebase (making heavy use of the Go AST libraries, which I never use normally), and it gave me a 90%-good solution that saved me a lot of time. To be clear, the solution was non-functional on its own, but it still saved me the exploration work and gave me a quick overview of the APIs I would need.

A few days ago it helped me generate code for some obscure AWS API using aws-sdk-go-v2. It was again almost fully working, and better than the examples I could find online.

I have examples like this every week. It's not as amazing as some people say, but it's still pretty useful. I rejected AI stuff at first, but I don't regret adding LLMs to my toolbelt.
tim333 10 months ago
In my usage it's competing with Google searches; often those bring up a lot of nonsense, and you have to filter it.
exabrial 10 months ago
Another question: why does nobody care about the enormous, absurd energy cost of training and running these models?
threeseed 10 months ago
Does anyone know why this post no longer appears on HN? It hasn't been flagged.
Ekaros 10 months ago
Because often the point is not providing a working solution; it is selling a solution. Just look at so many software projects in history. Did they deliver correctly working solutions, or did they generate a lot of billable work?
CamperBob2 10 months ago
I didn't care too much at first, because the rate of improvement seemed sufficient to cover a multitude of sins. But I'm starting to care, because it is becoming clear that progress hit an absolute brick wall with GPT-4. If anything, it has gone backward since then.

Just today, ChatGPT-4o screwed up a rudimentary arithmetic problem ( https://i.imgur.com/2jNXPBF.png ) that I'd swear the previous GPT-4 model would have gotten right.

And then there's *this* shitshow: https://news.ycombinator.com/item?id=40894167 - which is still happening as of this morning, only now all my previous history is gone. Nothing left but links to other people's chats. If someone at OpenAI still cares what they are doing, it's not obvious.
add-sub-mul-div 10 months ago
I think this community is the aberration. More often I see that "AI" has become synonymous with hallucinations and slop.
talldayo 10 months ago
Blind optimism.
kwertyoowiyop 10 months ago
Are you saying the emperor *has no clothes*!?
saomcomrad56 10 months ago
Everyone I know who doesn't work in tech is disappointed with AI assistants blurting out wrong answers. Investors will catch on soon.
islon 10 months ago
"It is difficult to get a man to understand something when his NVIDIA shares depend upon his not understanding it."
r0ckarong 10 months ago
IMHO it's because over the last two decades we have become so accustomed to lies and brokenness as the standard that it just doesn't matter to most people anymore.

Move fast, break things, and don't pay anyone - and when you do that long enough and burn billions in VC money, you end up rich. Why does that work?

Why can someone like Trump lie and lie and lie, be convicted of felonies, and turn up on the worst-people list, and nobody seems to care?

There are no more consequences. You break software, and people don't care if it's the only thing available in the walled garden. You fuck up games, and people don't care if you shove a terabyte of updates down their pipes later. You rug-pull millions of dollars and walk out unscathed; as long as someone made a profit, they will keep praising you.

You used to be actually shunned and driven out of the village for this kind of behavior. Not anymore. We find all kinds of ways to justify being terrible at things.

So along comes tech that costs us barely anything to use and produces meh results most of the time. That's amazing. It used to take thousands of talentless hacks to come up with all that mediocre, wrong output, and they all wanted a paycheck. It's progress, in a world where nothing means anything anymore.
JSDevOps 10 months ago
It's the grift. It doesn't matter. Just slap AI on ANYTHING as fast as you can, and hopefully no one will notice, and hopefully someone else will have fixed it. Oh, did I mention AI?