
AI founders will learn the bitter lesson

337 points | by gsky | 4 months ago

54 comments

CharlieDigital, 4 months ago
There's only one core problem in AI worth solving for most startups building AI-powered software: context.

No matter how good the AI gets, it can't answer about what it doesn't know. It can't perform a process for which it doesn't know the steps or the rules.

No LLM is going to know enough about some new drug in a pharma's pipeline, for example, because it doesn't know about the internal resources spread across multiple systems in an enterprise. (And if you've ever done a systems integration in any sufficiently large enterprise, you know that this is a "people problem" and usually not a technical problem.)

I think the startups that succeed will understand that it all comes down to classic ETL: identify the source data, understand how to navigate systems integration, pre-process and organize the knowledge, and train or fine-tune a model or have the right retrieval model to provide the context.

There's fundamentally no other way. AI is not magic; it can't know about trial ID 1354.006 beyond what it was trained on and what it can search for. Even coding assistants like Cursor are really solving a problem of ETL/context and always will be. The code generation is the smaller part; getting it right requires providing the appropriate context.
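
For illustration only, a minimal sketch of the ETL-then-retrieve pattern this comment describes; the source names, documents, and toy keyword-overlap ranking are invented placeholders, not anything from the thread:

    # Sketch: extract internal records, index them, retrieve context for an LLM.
    # All data and helper names below are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str
        text: str

    def extract() -> list[Document]:
        # In a real system: databases, file shares, CRMs, ticket systems...
        return [
            Document("trials_db", "Trial 1354.006: phase II, enrollment ongoing"),
            Document("wiki", "Compound X targets pathway Y; status lives in trials_db"),
        ]

    def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
        # Toy keyword-overlap ranking; production systems use vector search.
        words = set(query.lower().split())
        return sorted(docs,
                      key=lambda d: len(words & set(d.text.lower().split())),
                      reverse=True)[:k]

    docs = extract()
    context = "\n".join(d.text for d in retrieve("status of trial 1354.006", docs))
    prompt = "Answer only from this context:\n" + context + "\n\nQuestion: ..."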

jonnycat, 4 months ago
I think this argument only makes sense if you believe that AGI and/or unbounded AI agents are "right around the corner". For sure, we will progress in that direction, but when and if we truly get there, who knows?

If you believe, as I do, that these things are a lot further off than some people assume, I think there's plenty of time to build a successful business solving domain-specific workflows in the meantime, and eventually adapting the product as more general technology becomes available.

Let's say 25 years ago you had the idea to build a product that can now be solved more generally with LLMs, say a really effective spam filter. Even knowing what you know now, would it have been right at the time to say, "Nah, don't build that business, it will eventually be solved with some new technology"?

timabdulla, 4 months ago
I think one thing ignored here is the value of UX.

If a general AI model is a "drop-in remote worker", then UX matters not at all, of course. I would interact with such a system the same way I would with one of my colleagues, and I would also give it a high level of trust.

If the system still requires human supervision, or works to augment a human worker's work rather than replace it, then a specifically tailored user interface can be very valuable, even if the product is mostly just a wrapper around an off-the-shelf model.

After all, many SaaS products could be built on top of a general CRM or ERP, yet we often find a vertical-focused UX has a lot to offer. You can see this in the AI space with a product like Julius.

The article seems to assume that most of the value brought by AI startups right now is adding domain-specific reliability, but I think there's plenty of room to build great experiences atop general models that will bring enduring value.

If and when we reach AGI (the drop-in remote worker referenced in the article), then I personally don't see how the vast majority of companies - software and others - are relevant at all. That just seems like a different discussion, not one of business strategy.

NameError, 4 months ago
I think the core problem at hand for people trying to use AI in user-facing production systems is "how can we build a reliable system on top of an unreliable (but capable) model?". I don't think that's the same problem that AI researchers are facing, so I'm not sure it's sound to use "bitter lesson" reasoning to dismiss the need for software engineering outright and replace it with "wait for better models".

The article sits on an assumption that if we just wait long enough, the unreliability of deep learning approaches to AI will just fade away and we'll have a full-on "drop-in remote worker". Is that a sound assumption?

9dev, 4 months ago
Well. We had been working on a search engine for industry suppliers since before the whole AI hype started (we even applied to YC once), and hit a brick wall at some point where it got too hard to improve search-result quality algorithmically. To understand what that means: we gathered lots of data points from different sources, tried to reconcile them into unified records, then found the best match for a given sourcing case based on that. But in a lot of cases the data wasn't accurate enough to identify what a supplier was actually manufacturing, and the sourcing case itself wasn't properly defined, because users found it too hard to come up with good keywords for their search.

Then LLMs entered the stage. Suddenly, we became able both to derive vastly better output from the data we got, and to offer our users easier ways to describe what they were looking for, find good keywords automatically, and actually deliver helpful results!

This was only possible because AI augments our product well and really provides a benefit in that niche, something that would just not have been possible otherwise. If you plan on founding a company around AI, the best advice I can give you is to choose a problem that similarly benefits from AI, but does exist without it.
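
As a rough illustration, the "find good keywords automatically" step could look something like this; the model name, prompt, and function are assumptions, not the commenter's actual system:

    # Sketch: ask an LLM to turn a vague sourcing request into search keywords.
    # Uses the OpenAI Python SDK; model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def sourcing_keywords(request: str) -> list[str]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Extract 5 concise supplier-search keywords, one per line."},
                {"role": "user", "content": request},
            ],
        )
        return [line.strip() for line in resp.choices[0].message.content.splitlines()
                if line.strip()]

    print(sourcing_keywords("We need someone who can deep-draw stainless steel housings"))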

resiros, 4 months ago
The author discusses the problem from an engineering point of view, not a business one. When you look at it from a business perspective, there is a big advantage in not waiting and using whatever exists right now to solve the business problem, so that you can get traction, get funding, grab market share, and build a team. When a better model comes along the next day, you can rewrite your code, and you will be in a much better position to leverage whatever new capabilities the new models provide: you know your users, you have the funds, you built the right UX...

The best strategy, from experience, is to jump on a problem as soon as there is an opportunity to solve it and generate lots of business value within the next 6 months. The trick is finding that subproblem that is worth a lot right now and could not be solved 6 months ago. A couple of AI-sales startups "succeeded" quite well doing that (e.g. 11x); now they are in a good position to build from there (whether they will succeed in building a unicorn is another question, it just looks like they are in a good position now).

bko, 4 months ago
It's a little depressing how many highly valued startups are basically just wrappers around LLMs that they don't own. I'd be curious to see what percentage of YC's latest batch is just this.

> 70% of Y Combinator's Winter 2024 batch are AI startups. This is compared to ~57% of YC Summer 2023 companies and ~32% from the Winter batch one year ago (YC W23).

The thinking is that the models will get better, which will improve our product; but in reality, as the article states, the generalized models get better, so your value-add diminishes because there's no need to fine-tune.

On the other hand, the crypto fund made a killing off of "me too" blockchain technology before it got hammered again. So who knows about the 2-5 year term, but in 10 years there almost certainly won't be these billion-dollar companies that are wrappers around LLMs.

https://x.com/natashamalpani/status/1772609994610835505?mx=2

leviliebvin, 4 months ago
Controversial opinion: I don't believe in the bitter lesson. I just think that the current DNN+SGD approaches are not that good at learning deep, general, expressive patterns. With less inductive bias, the model memorizes a lot of scenarios and is able to emulate whatever real-world scenario you are trying to make it learn. However, it fails to simulate this scenario well. So it's kind of misleading to say that it's generally better to have less inductive bias. That is only true if your model architecture and optimization approach are just a bit crap.

My second controversial point regarding AI research and startups: doing research sucks. It's risky business. You are not guaranteed success. If you make it, your competitors will be hot on your tail and you will have to keep improving all the time. I personally would rather leave the model building to someone else and focus more on building products with the available models. There are exceptions, like fine-tuning for your specific product or training bespoke models for very specific tasks at hand.

tinco, 4 months ago
This might be true on a very long timescale, but that's not really relevant for VCs. Literally every single VC I've talked to has asked whether our moat is anything more than just having better prompts; it's usually the first question. If a VC really invested in a company whose moat got evaporated by o1, that's on the VC. Everyone saw technology like o1 coming from a mile away.

For the slightly more complex stuff, sure, at some point some general AI will probably be able to do it. But with two big caveats, the first being: when? And the second: for how much?

In theory, every deep and wide enough neural network should be able to be trained to do object detection in images, yet no one is doing that. Technologies specifically designed to process images, like CNNs, reign supreme. Likewise for the architectures of LLMs.

At some point your specialization might become obsolete, but that point might be a decade or more from now. Until then, specializations will have large economic and performance advantages, making the advancements in AI today available to the industry of tomorrow.

I think it's the role of the VC to determine not whether there's an AI breakthrough behind a startup's technology, but whether there's a market disruption, and whether that disruption can be leveraged to establish a dominant company. Similar to how Google leveraged a small and easily replicable algorithmic advantage into becoming one of the most valuable companies on earth.

DebtDeflation, 4 months ago
> Eventually, you'll just need to connect a model to a computer to solve most problems - no complex engineering required.

The word "eventually" is doing a lot of work here. Yes, it's true in the abstract, but over what time horizon? We have to build products that solve today's problems with today's technology, not wait for the generalized model that can do everything but may be decades away.

DesaiAshu, 4 months ago
This (particularly the figure 1 illustration) discounts the "distribution" layer for apps.

Single app/feature startups will lose (this was true long before AI). A few will grow large enough to entrench distribution and offer a suite of services, creating defensibility against competitors.

The distributors (e.g. a SaaS startup that rapidly landed/expanded) will continue to find bleeding-edge ways to offer a 6-12 month advantage against foundation models and incumbents.

GitLab is a great example of this model. The equivalent bitter lesson of the web is that every cutting-edge proprietary technology will eventually be offered as free open source. However, there is a commercial advantage to purchasing the bleeding-edge features with a strong SLA and customer service.

The mistake is to think technology is a business. Business has always been about business. Good technology reduces the cost of sale (CAC) and cost of goods sold (COGS) to create an 85-90% margin. Good technology does not create a moat.

Resilient businesses do not rely on singular technology advantages. They invest heavily in long-term R&D to stay ahead of EACH wave. Resting on one's laurels after catching a single wave, or sitting out of the competition because there will be bigger waves later, are both surefire ways to lose.

doctorpangloss, 4 months ago
More computation cannot improve the quality or domain of data. Maybe the real bitter lesson is: lobby bitterly for copyright laws that favor what you are doing, and for weakened antitrust, to give yourself the insurmountable moat of exclusive data in a walled-garden media network.

keybored, 4 months ago
> "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."

> He points out that throughout AI's history, researchers have repeatedly tried to improve systems by building in human domain knowledge. The "bitter" part of the title comes from what happens next: systems that simply use more computing power end up outperforming these carefully crafted solutions. We've seen this pattern in speech recognition, computer chess, and computer vision. If Sutton wrote his essay today, he'd likely add generative AI to that list. And he warns us: this pattern isn't finished playing out.

According to Chomsky (as he recalled in relatively recent years), this is why he didn't want to work at the AI/linguistics intersection when someone asked him to in the mid-'50s: because he thought the successful approaches would just use machine learning and have nothing to do with the linguistics he cares about.

scosman, 4 months ago
Great essay. This is 100% right about the technical side, but I think it misses the "product" aspect.

Building a quality product (AI or otherwise) involves design and data at all levels: UX, onboarding, marketing, etc. The companies learning important verticals and getting in the door with customers will have a pretty huge advantage as models get better, both in terms of install base and in knowing what customers need. Really great products don't simply do what a customer asks; they are built by talking to a ton of customers over and over, and solving their problems better than any one of them can articulate.

It's true we will need less and less custom software for problems. But it isn't realistic to say the software-wrapper effort is going to zero when models improve.

Plus: a lot of software effort is needed for getting the data AI needs. This is going to be a huge area - think Google Maps with satellites, camera cars, network-effect products (ratings), data collection (maps, traffic), etc.

iandanforth, 4 months ago
I disagree with the author, based on timelines and VC behaviour. There is sufficient time to create a product and raise massive capital before the next massive breakthrough hands the value back to OpenAI/Google/Anthropic/MS. Secondly, the execution of a solution in a vertical is sufficiently differentiating that even if the underlying problem *could* be solved by a next-gen model, there's very little reason to believe it will be. Big cos don't have the interest to attack niches while there are billion-user markets to go after. So don't build "photo share with AI"; build "plant-fungus photo share for farmers".

kopirgan, 4 months ago
Maybe I'm being dumb, but isn't the AI approach somewhat like brute-force hacking of a password? I mean, humans don't learn that way - yes, we do study code to become better coders, but not in the way AI training does.

Will someone truly discover a way to start from fundamentals and do better, not just crunch zillions of petabytes faster and faster?

Or is that completely wrong? Happy to stand corrected.

elisharobinson, 4 months ago
This is not only untrue, it has no basis in reality. In the "real" world there are tradeoffs and constraints: scaling does not work infinitely, and products delivered by good engineering cultures show non-linear growth (bad vs. very good).

Coming back to research: one of the frontier models, DeepSeek, was able to come close to SOTA on a relatively small budget because of their mixture-of-experts approach.

jillesvangurp, 4 months ago
I think this is largely right. I look at this space as somebody who has plugged together bits and pieces of software components for a living for a few decades. I don't need to deeply understand how each of those things works to be effective. I just need to know how to use them.

From that point of view, AI is more of the same. I've done a few things with the OpenAI APIs. Easy work. There's not much to it. Scarily simple, actually. With the right tools and frameworks we are talking a few lines of code, mostly. The rest is just the usual window dressing you need to turn that into an app or service. And LLMs can generate a lot of that these days.

The worry for VC-funded companies in this space is that a lot of stuff is becoming a commodity. For example, the Llama and Phi models are pretty decent, and you can run them yourself. Claude and OpenAI's models are a bit better and larger, so you can't run them yourself. But increasingly those cheap models that you can run yourself are actually good enough for a lot of things. Model quality is a really hard moat to defend long term. Mostly the advantage is temporary. And most use cases don't actually need a best-in-class LLM.

So I'm not a believer in the classic winner-takes-all outcome here, where one company turns into a trillion-dollar behemoth and the rest of the industry pays a tax to that one company in perpetuity. I don't see that happening. The reality already is that the richest company in this space is selling hardware, not models. Nvidia has a nice (temporary) moat. The point of selling hardware is that you want many customers, not just a few. And training requires more hardware than inference. So Nvidia is rich because there are a lot of companies busy training models.
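
As a sketch of how little code such an integration needs (the local endpoint and model name are placeholders; many self-hosted runners expose the same OpenAI-compatible API, which is why the same few lines cover both cases):

    # Sketch: the same few lines work against a hosted or self-hosted model,
    # because most local runners mimic the OpenAI API. Names are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="llama3",  # whatever model the local server has loaded
        messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
    )
    print(resp.choices[0].message.content)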

theptip, 4 months ago
I think this misses a crucial dynamic. It's not "custom scaffolding vs. wait for better models". There is also fine-tuning.

In specialized domains you can't just rely on OpenAI finding all the training data required to render your experts obsolete.

If you can build a data flywheel, you can fine-tune models and build that lead into a moat; whoever has the best training set will have the best product. In the short term you might start fine-tuning on OpenAI, but once you've established your dataset you can potentially gain independence by moving onto OSS models.

If you are building a software assistant, sure, this is clearly something that OpenAI will get better at very quickly, and Altman has commented as much: many companies are building things that are certain to get steamrolled by core product development. But I think there is a very interesting strategic question around areas like law, medicine, and probably more so niche technical areas, where OpenAI will not or cannot absorb knowledge as quickly.
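
One link in such a data flywheel can be as plain as logging expert-corrected interactions into a training file; a sketch using the common chat-formatted JSONL layout (the domain, fields, and file path are invented):

    # Sketch: append production interactions (with expert corrections) to a
    # training set that can later be used to fine-tune a model.
    import json

    def log_training_example(question: str, good_answer: str,
                             path: str = "train.jsonl") -> None:
        # One chat-formatted record per line; the content here is illustrative.
        record = {"messages": [
            {"role": "system", "content": "You are a contracts-review assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": good_answer},  # expert-corrected answer
        ]}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_training_example("Does clause 7.2 cap liability?",
                         "Yes: liability is capped at 12 months of fees, except for ...")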

diziet_sma, 4 months ago
> We've seen this pattern in speech recognition, computer chess, and computer vision.

I think the bitter lesson is true for AI researchers, but the OP overstates its relevance to actual products. For example, the best chess engines are still very much specific to chess. They incorporate more neural networks now, but they are still quite specific.

JohnCClarke, 4 months ago
The smarter AI app founders know that more advanced AI will easily replace their current tech. What they're trying to do now is lock in a userbase.

moomin, 4 months ago
The thing is, what do we do with the bitter lesson once we're essentially ingesting the entire internet? More computation runs the risk of overfitting, and there just isn't any more data. Is the bitter lesson here telling me that we've basically maxed out?

motbus3, 4 months ago
I agree the article presents good points, especially if you consider current LLM models and the business models (and questionable business practices) being used.

It seems unlikely at this point, but it might be the case that new methods, models and/or algorithms arise thanks to the trend of high expectations and investment in this area.

But yes, the AI business stuff will be swallowed by those companies, as much as the companies that are unwillingly providing content to them.

mike_hearn, 4 months ago
Of course, o1 is a lot more expensive than smaller models. So the startups that have spent time tuning their prompts may yet get the last laugh, at least in any competitive market where customers are price-sensitive. Bear in mind that OpenAI is supposedly losing money even at a $200-a-month price point, so it's unclear that the current cost structure of model access is genuinely sustainable.

highfrequency, 4 months ago
One day LLMs may replace traditional search engines… in the meantime, Google has built a $2 trillion business on specialized engineering and millions of human-designed features and optimizations.

The Bitter Lesson is an elegant asymptotic result. But from a business perspective it pays to distinguish the problems that general deep-learning approaches will disrupt in 1 year vs. 5 years vs. 30 years.

osigurdson, 4 months ago
It does feel like we are having the petfood.com moment in B2B AI: bespoke solutions and bespoke offerings for very narrow B2B needs. Of course, waiting around for AI to get so good that no bespoke solution is needed might be a bad strategy as well. I'm not sure how it will play out, but I am certain there will be significant consolidation in the B2B agent space.

sd9, 4 months ago
I think there is value in companies built around AI. The value comes from UX, proprietary supplementary datasets, and market capture. Businesses built now will be best positioned to take advantage of future improvements in general AI. That is why I am building in the AI space. I'm not naive about the predictable improvement in foundational models.

mercurialsolo, 4 months ago
Love it when a 25-year-old founder says we have been here multiple times in the past.

ano-ther, 4 months ago
So what is the appropriate course of action if you are a founder? Just wait until the models inevitably get better, or help a customer with their problem now?

Computing power has been increasing all the time, but that hasn't kept people from experimenting with the limited power of their time (which ultimately led to better solutions) rather than waiting for more powerful machines.

anon373839, 4 months ago
I think some people have been traumatized by the bitter lesson and now think it's a fundamental law of nature or something. But application development isn't just about technology. It isn't even primarily about technology. It's about people. Good applications reflect conscious and purposeful design choices based on an understanding of the user's needs. That is where the value-add is.

The best application developers deeply understand the interface design space, the quirks and limitations of the underlying technology, and their users' mental models, and know how to bring all of this into alignment to deliver a worthwhile product. I see no evidence that LLMs are on the precipice of acquiring (much less mastering) these capabilities.

Havoc, 4 months ago
Conceptually that seems right, but I think there is a decade-plus worth of runway left in working in the "glue" space, i.e. connecting the AI to our lived realities.

I don't see the general-ness of AI overcoming the challenges there... because there is a lot of non-obvious messiness.

mlepath, 4 months ago
The author appears to be confused about the difference between research and production. In research, more generic approaches typically win because they get resourced much better (plus the ever-growing compute and data have helped, but there is no guarantee that these will continue).

On the production side of "AI" (I don't love the term being thrown around this loosely, as true AI should include planning, etc., not just inference), the only question is how well you solve the one problem in front of you. In most business use cases today that problem is narrow.

LLMs drive a minuscule (but growing) amount of value today. Recommender systems drive a huge amount of value. Recommender systems are very specialized.

ryanackley, 4 months ago
I feel like the author glosses over some nuance in the point he is trying to make, and his conclusion doesn't incorporate it.

The nuance is that even though AI researchers have learned this lesson, they are still building purpose-built AIs, because it's not possible to build an AI that can learn to do anything (i.e. AGI). Therefore, building on top of an existing AI model to meet a vertical market demand is not that crazy.

It's the same risk as building on any software platform: the provider of the platform may add the feature/app you are building as a free addition, and your business becomes obsolete.

rnamerl, 4 months ago
I think the goal is just to get as much funding from VCs and government rackets as possible.

The previous administration had Harris as the "AI czar", which means that nothing was expected to happen.

The following administration has Sacks as the "Crypto & AI czar". I'm not aware that Sacks has any particular competence in the AI area, but he has connections. So government money is likely to flow, presumably to "defense" startups.

The All-In podcast has paid off: from peddling gut-bacteria supplements and wrong explanations of how moderator rods work in nuclear power plants to major influence.

k__, 4 months ago
I'm not an AI founder, but I'd say there is still some value to be added by better architectures.

These AI wrappers run ship engines on a kayak. For example, most coding companions ignore my code base the moment I close other files; somehow their context is minimal. Such systems can be improved dramatically just by changing the software around the LLM.

But I get it: you have to be quick and not waste time on MVPs. If you get critical mass, you can add that later... hopefully.
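
A toy sketch of the kind of "software around the LLM" being suggested here, pulling related project files into the context instead of only the open one (the relevance heuristic and character budget are made up):

    # Sketch: gather related project files into the prompt context instead of
    # only the file the user has open. Heuristics and limits are illustrative.
    from pathlib import Path

    def build_context(repo: str, active_file: str, budget_chars: int = 20_000) -> str:
        # Rank files crudely: same directory as the active file first, then the rest.
        active_dir = (Path(repo) / active_file).parent
        ranked = sorted(Path(repo).rglob("*.py"),
                        key=lambda p: (p.parent != active_dir, str(p)))
        parts, used = [], 0
        for path in ranked:
            text = path.read_text(errors="ignore")
            if used + len(text) > budget_chars:
                continue
            parts.append("# FILE: " + str(path) + "\n" + text)
            used += len(text)
        return "\n\n".join(parts)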

jvanderbot, 4 months ago
TFA admits that specialization can gain a temporary edge, but says that using that specialization is useless because the next generation will eclipse that edge using raw compute.

Even if it is true that the next generational change in AI is based on computational improvements, how can it be true that it's hopeless to build products by specializing this generation of tech?

Moreover, if I specialize on gen 1, and gen 2 is similar enough, can't specialization maintain an edge?

There seems to be a timescale mismatch.

bhouston, 4 months ago
In order to win you often have to start before the problem is fully solved, and then count on it being solved better as you build and scale.

Thus, starting with engineering effort to work around current AI limitations makes sense, but realize that many of those workarounds will be temporary and will be replaced in the future.

Then again, there is always a new tech or framework or something that emerges after you start, and adopting it will improve your product measurably.

anonu, 4 months ago
I've observed that every time OpenAI makes an announcement, two dozen startups die. The pace was quite rapid a year ago but has slowed down now. I've advised startups of this "bitter lesson". The value is in proprietary data and your ability to build a moat around domain expertise in a specific industry vertical.

asdev, 4 months ago
The main point here seems to be about boxing in the AI with a rigid set of rules and then not being able to adapt to model improvements over time. I think if those "rules" are just in the prompt, you can still adapt; but if you start coding a lot of rule-based logic into the product, you can get into trouble.

AlienRobot, 4 months ago
I wish I knew what "AI" is.

Please correct me if I'm wrong, but to my understanding, every single one of these "AI" products is based on a model, i.e. it's just a normal program with a very big configuration file. The only thing "AI" about it is the machine learning performed when training the model. At the generation stage, there is nothing "AI" about it anymore; the AI is done already. What runs is a normal program.

I know it's used in computer vision, recommendation algorithms, and generative software like ChatGPT, Stable Diffusion, etc. I don't know if there is anything besides these.

From what I know, the biggest problems with AI are: 1) the program pretends to be intelligent when it isn't, e.g. by using natural language; and 2) the program doesn't give the user enough control - they only get one text-box prompt and that's it, no forms, no dialogs - so the product has to somehow generate the right answer from the most uncontrolled piece of data imaginable.

These two things combined not only limit the product's potential by giving it unreasonable expectations, but also make it feel a bit of a scam: the product is a program, but what sells is its presentation. It has to present itself as being MORE than just a program by hiding as much of its internal workings from consumers as possible and covering its warts with additional layers of logic.

I don't know if the author would consider natural language to be a "hardcoding" created to temporarily solve a problem that should be solved by using more compute (AGI), but to me it feels like it is.

The best application of AI is probably going to be using AI internally to solve existing problems in a domain your business is familiar with, rather than trying to come up with some AI solution that you have to sell to other people.

EGreg, 4 months ago
I was going to keep this to myself to maintain a competitive advantage, but I will just drop two hints:

1) Have AI turn natural-language interactions into programs

2) Use test-driven development on the domain data

There is a third thing that's far more crucial, but if you want to find out what it is, contact me. I'm building a general-purpose solution for a few domains.

Unlike the other two platforms I built (Web 2.0 [1] and Web 3.0 [2]), I am not planning to open-source this project, so I don't want to discuss more. I will, however, say that open source is a huge game-changer - because organizations / customers want to be in control, and you don't want to be reliant on some third-party platform for an API provider. We all know how well that turned out for startups building on Web 2.0 platforms.

1. https://github.com/Qbix

2. https://github.com/Intercoin

alganet, 4 months ago
There is a circular argument going on in this article. Basically:

> "Flexible is better. Why? Because specific has been consistently worse."

I mean, I don't deny the historical trend. But really, why is it so? It makes sense to follow the trend, but knowing more about the reason why would be cool.

Also, I feel that the human cognitive aspects of "engineering some shit" are being ignored. People are engineering solutions not only to be efficient, but to get to specific vantage points from which they can see further. They do it so they can see what the "next-gen flexible stuff" looks like before others.

Finally, it assumes the option to scale computation is always available, and ignores the diminishing returns of trying to scale vanguard technology. The scale requirements for AI are getting silly real fast due to this unshakable belief in infinite scaling. To me, they're already too silly. Maybe we need to cool down, engineer some stuff, and figure out where the comfortable threshold lies.

danielmarkbruce, 4 months ago
The current, intricate post-training / fine-tuning going on in models like o1 *is* feature engineering.

tippytippytango, 4 months ago
The bitter lesson is a powerful heuristic, but its adherents treat it as dogma and wield it as a club to win arguments.

crawshaw, 4 months ago
The bitter lesson is a wonderful essay and well worth a read: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

I also believe the original bitter-lesson essay is correct: general methods on more powerful compute will supersede anything specific built today.

However, I believe the lesson in this blog post is an incorrect conclusion. It is roughly analogous to "all people die, so do not bother having children because they will die one day." It is true that what you build today will be replaced in the future. That is OK. Don't get caught out when it happens. But building specific systems today is the only way to make progress with products.

Waymo started some 12 years ago as Google Chauffeur, built on all sorts of image/object-recognition systems that I am sure have fallen to the bitter lesson. Should Google have refused to start a self-driving-car project 12 years ago because generalized transformers would replace a lot of that work in the future? No. You have to build products with what you've got, where you are, and adopt better tools when they come along.

2pk03, 4 months ago
Right. It boils down to federated learning at the edge to get access to relevant data.

kubb, 4 months ago
How exactly will they learn their lesson? By getting money for free from VCs?

Remember, most of these startups are grifters. Only a few of them really believe in their products.

Devasta, 4 months ago
This assumes that the AI bros care about building useful products. For the most part they don't; they care only about building a startup that can plausibly get VC funding. So what if in five years your LLM-on-the-blockchain app fails? You got to blow through a few million dollars over a few years, going around to conferences and living the high life.

That's a success in anyone's book.

CaptainFever, 4 months ago
Cool article, a lot more technical and informational than I thought from the headline.

The article gave the TL;DR below, for those who skip to the comments:

    Historically, general approaches always win in AI.
    Founders in AI application space now repeat the mistakes AI researchers made in the past.
    Better AI models will enable general purpose AI applications. At the same time, the added value of the software around the AI model will diminish.

quantumgarbage, 4 months ago
The mistake this post makes is thinking that engineering effort alone is what makes a startup win.

Most of the things you build will end up in the trash can anyway. But what matters is what you gain while building them: domain knowledge, improved processes, actual customers...

If you are OK with scrapping your code in order to integrate a better model, then fine. Since the new model is better and does not need all the software around it, it will take 10x less time to redo what you have, leaving you more time for the things that also matter, like sales.

danielovichdk, 4 months ago
AI founders will also make sure that users - the ones who cannot tell that you're lying to them while taking their money and energy - learn the bitter lesson of taking it right up the arse. Again. Just like a repetition of history. Humanity is, as ever, caught up in serving the individuals that made us fuck ourselves over, and the only thing you do is thank them for it by adopting their tactics. Middle-class humanity and above is morally bankrupt, and AI is the next thing they will fight over and with, but of course without adding anything positive to the world.

mana7272, 4 months ago
love this

xchip, 4 months ago
thanks for the tl;dr

darepublic, 4 months ago
Dunno why anyone in the startup space would be excited about new, more powerful AI models, unless they are just throwing a mask over their existential fears.