
The Sell ∀ ∃ as ∃ ∀ Scam

207 points by jmount about 2 years ago

30 comments

ftxbro about 2 years ago
In my opinion it's a weak take that only got so upvoted and commented on Hacker News because it has the existential and universal logical quantifier symbols in the title, and also because it uses the chiasmus rhetorical device both in the title and at the end of the article.

The argument is that several technologies don't work 'out of the box' and you have to tweak their settings for each problem that you face, and that this means it's a scam. For example, you have to change prompts in LLMs or change hyperparameters in other machine learning solutions. This argument is some combination of not insightful and not true.
analog31 about 2 years ago
>>> Build a system that solves problems, but with an important user-facing control. ...

>>> Convince the user that it is their job to find an instantiation or setting of this control to make the system work for their tasks.

By golly, you just described playing the cello.
badloginagain about 2 years ago
If I understand correctly, the meat of the argument is "that in a system for every (∀) task, there exists (∃) a setting that gives the correct answer for that one task."

My understanding of this (correct me if I'm wrong) is that the scam is convincing users that GPT-X can do anything with, say, the correct prompts.

This argument misses the mark for me. It's not that it solves all the problems, it's that the problems it does solve are economically impactful. Significantly economically impactful in some cases; the obvious examples are call centers and first-line customer support.
TZubiri about 2 years ago
Are compilers a scam as well?

There exists a program for every problem you have; you just have to find the code.
alanbernstein about 2 years ago
My biggest takeaway from the article is my new favorite word: cryptomorphic, meaning equivalent, but not obviously so.
skybrian about 2 years ago
I'll try to explain this better.

Sometimes you hope to get a system that works without constant tweaking. Once you've adjusted it properly, it should work without adjusting the settings. So there's *one* setting that works *for all* the inputs you care about.

But instead you have a system that you have to adjust all the time. For every input, you can get it to work. But you're never done tweaking.

This is actually fine if you're expecting the user to do some work. Consider doing a Google search, seeing what you get, and then modifying your query when you didn't get what you want.

It sucks if you're hoping to go into fully-automatic mode. You didn't get that; you got an interactive application. (Or maybe the tweaks aren't reasonable for the user to make themselves, so you got nothing?)

So the scam, if there is one, is selling something as fully automated when it's not. It's a tool that requires a person to work with a machine, and you're never going to be able to take the human out of the loop. (Let's say you're hoping for a driverless car but you never quite get there.)

Often, it's not the end of the world. An interactive tool can be very valuable! But you need to set your expectations appropriately.
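The two quantifier orderings being contrasted here can be written out explicitly. A minimal formalization, using a hypothetical predicate works(s, x) for "setting s handles input x":

```latex
% Hoped for: one setting s that works for every input x.
\exists s \,\forall x \;\; \mathrm{works}(s, x)

% Delivered: every input x has some setting s, possibly a different one each time.
\forall x \,\exists s \;\; \mathrm{works}(s, x)
```

The first statement implies the second, but not conversely; the article's claim is that products satisfying only the second get sold as if they satisfied the first.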
scrame about 2 years ago
> Also, as they leave the hyper-parameter selection fully to the users, they become not falsifiable. If they didn't work, it is because you didn't pick the right hyper-parameters or training procedures.

See also: Scrum certification.
afro88 about 2 years ago
The author says that the builders of these systems are also the ones that run the scam, and that they deliberately build them in a fashion that enables the scam. Seriously?

This is a gripe with sales and marketing. A tale as old as time.
_Microft about 2 years ago
Since there are certainly people unfamiliar with the symbols, here is the Wikipedia article on them:

https://en.wikipedia.org/wiki/Quantifier_(logic)
adamnemecek about 2 years ago
This argument is disingenuous. Hyperparameter optimization is not in the same category as prompt engineering, like, at all.

Also, no one claims half of the things the article claims people claim.
PaulHoule about 2 years ago
ChatGPT itself is *in on the scam*. The way I think about it is that ChatGPT is already superhuman at bullshitting, and many people want to give it credit for being more capable than it really is.

It is interesting to postulate whether it is the "most likely word" heuristic that leads to this behavior (e.g. never saying anything that startles people) or RLHF training systematically teaching it to say what people want to hear.
dvt about 2 years ago
I can walk and chew bubble gum at the same time: on one hand, yes, there's certainly a lot of Kool-Aid being drunk by the AI folks. Even on HN, I constantly argue with people who genuinely think LLMs are some kind of magical black box that contains "knowledge" or "intelligence" or "meaning" when in reality it's just a very fancy Markov chain. On the other hand, I think that language interfaces are probably the next big leap in how we interact with our computers. But more to the point of the article:

> To conclude: one must have different standards for developing systems than for testing, deploying, or using systems.

In my opinion, you unfortunately will never (and, in fact, *could* never) have reliable development and testing standards when designing purely stochastic systems like, e.g., large language models. Intuitively, the fact that these are stochastic systems is *why* we need things like hyper-parameters, fiddling with seeds, and prompt engineering.
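For readers unfamiliar with why "fiddling with seeds" matters here, a toy sketch of temperature sampling, the mechanism behind LLM non-determinism. The token scores below are made up; only the sampling logic is the point:

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from raw logits at a given temperature."""
    rng = rng or np.random.default_rng()
    # Higher temperature flattens the distribution (more randomness);
    # lower temperature sharpens it toward the most likely token.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate tokens

# Same logits, different seeds: potentially different outputs.
# Fixing the seed fixes the output, which is why seeds become a tuning knob.
print(sample_token(logits, rng=np.random.default_rng(0)))
print(sample_token(logits, rng=np.random.default_rng(1)))
```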
scandox about 2 years ago
I think there has always been a great deal of Stone Soup in the software world. We are often promising solutions that will work as long as all the ingredients are supplied, in the form of processes delivered by the customer.

Strangely, though, the very promise of the magic stone does allow solutions (soup) to emerge.
ummonk about 2 years ago
The author is saying ¬∃∀ and selling it as ¬∀∃. He's the one scamming.

AI researchers are rather upfront about the need to use fine-tuning and prompt engineering for current AI models.

As for random forest models, wasn't the whole point of hyperparameter sweeps to remove the need for manual hyperparameter selection?
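For context on the hyperparameter-sweep point, a minimal sketch of an automated sweep using scikit-learn's GridSearchCV with a random forest; the dataset and grid are illustrative, not from the article:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# The sweep searches the grid automatically via cross-validation,
# so no human has to hand-pick these values per problem.
grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```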
simondotau about 2 years ago
> *Convince the user that it is their job to find an instantiation or setting of this control to make the system work for their tasks.*

As opposed to convincing the user that it is their job to brief a suitably qualified contractor or employee to make a company perform the required work?

∴ humans are a scam
CapsAdmin about 2 years ago
Are general-purpose tools scams?

An operating system does many things. A general-purpose language when contrasted with a domain-specific language. A human programmer thinking they're a wizard who can solve all problems.

However, I think I get the sentiment: it's a little bit like snake oil and alternative medicine. They promise to solve all your problems.

What's interesting to think about is who is to "blame": the developers of the product, the salespeople of the product, the wild imagination of the buyer thinking the product can solve everything, or maybe the wild imagination of the developers behind the buyer. If the seller can set off a wild imagination in the buyer, maybe they've successfully scammed them.
rgbrenner about 2 years ago
Wait, how does this scam work for OpenAI? The product is free to use.

Also, they haven't claimed GPT is an AGI or that it can solve all your problems.
precompute about 2 years ago
It is difficult to conclude this is true, and it will be even more difficult in the future, because tech like "AI" has the ability to almost completely saturate the amount of data any human can ingest. The relationship will soon be symbiotic, with everything showing the extent of our progress, in the same manner that we can date movies by the kinds of phones they use. Many plots will become stale, many worldviews will be condensed to something that supports "AI", and the offending branches will be snipped. The only way to really forget this limitation is to be myopic enough to disregard everything else. With the way the internet is going, I'm sure one day these LLMs will be heralded as "free" and "open" media, their fuzzy recollections the only records we will have of the past, and extensive use will essentially remake civilization in their own image.
bitL about 2 years ago
This has always been present in subfields of AI. For example, in classical computer vision, one had to figure out, by hand, specific parameters that worked just for a single image or video scene. Machine learning can, in theory at least, make these parameters learnable, at the cost of complexity.
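A concrete example of the kind of hand-tuning meant here: OpenCV's Canny edge detector, whose two thresholds classically had to be picked per image. The filename and threshold values below are arbitrary placeholders:

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# The thresholds are the "for every image there exists a setting" part:
# 100/200 may look great on one scene and terrible on the next.
edges = cv2.Canny(img, threshold1=100, threshold2=200)
cv2.imwrite("edges.jpg", edges)
```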
renewiltord about 2 years ago
Ultimately, this is a tool like Google. If using a search engine doesn't work for you for some task, then it's not the tool for the job.

You even get to trial it for free.

You shouldn't use LLMs if you can't make them work. I can make them work for me for many things. That makes them useful for me.
mmaunder about 2 years ago
“Or: testing on your training data is a common way to cheat, but so is training on your test data”

The need to hold back training data for testing, and the issues around testing on variants of training data versus real-world data, are well known.
sebzim4500 about 2 years ago
> To conclude: one must have different standards for developing systems than for testing, deploying, or using systems. Or: testing on your training data is a common way to cheat, but so is training on your test data.

Isn't this already a solved problem? Every reasonable paper on ML separates its test data from its validation data already.
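For reference, the discipline being described is a three-way split: hyperparameters get tuned against validation data, and the test set is touched exactly once. A minimal sketch with scikit-learn; the dataset and proportions are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First carve off a held-out test set, then split the rest into train/validation.
# 0.25 of the remaining 80% yields a 60/20/20 train/val/test split.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

# Tune hyperparameters against (X_val, y_val); evaluate on (X_test, y_test) once.
```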
1023bytes about 2 years ago
So as long as it doesn't automagically solve any given problem, it's a scam?
bvoq about 2 years ago
In the second part of the essay the writer mentions the statistician's fallacy: "Adding more hyperparameters leads to overfitting." This is no longer true with large neural networks, and is part of the magic.
henearkr about 2 years ago
To be fair, on the pure logic side (forgetting about what the link is really about, i.e. selling scams), it is true that any ∀ ∃ can be turned into an ∃ ∀. It's called Skolemization.
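For readers who haven't seen it, Skolemization replaces the inner existential with a function of the universally quantified variable, so the quantifier swap comes at the price of introducing a (Skolem) function:

```latex
\forall x \, \exists y \; P(x, y)
\quad \rightsquigarrow \quad
\exists f \, \forall x \; P(x, f(x))
```

One way to read this through the article's lens: the Skolem function f packages up all the per-task tweaking, so the swapped statement is only as good as your ability to actually compute f.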
LastTrain about 2 years ago
And convince the user to validate the solutions to other users' problems while they're at it.
bflesch about 2 years ago
This is a clever argument.
8note about 2 years ago
"The sell for-all there-exists as there-exists for-all scam"?

What? Better title?
coding123 about 2 years ago
This is, like, a horrible way to describe something.
version_five about 2 years ago
It's a good point. I'd consider it (overfitting) a pitfall or common mistake in ML rather than the only mode. I'd agree that most ML models, and almost all state-of-the-art ones, are overfit to the point of being useless, but that's not an inevitability.