The Knowledge Economy Is Over. Welcome to the Allocation Economy

80 points by dshipper over 1 year ago

30 comments

armchairhacker over 1 year ago
ChatGPT didn't end the knowledge economy, because it doesn't possess reliability, niche knowledge, and experience that is hard to put into words. Reliability, niche knowledge, and experience (pick two) are what get people high-paying, stable, knowledge-based jobs, like legacy system engineer, sysadmin, and <thing> advisor.

Surface-level knowledge (which ChatGPT is good at) was already accessible to anyone who can do basic research, albeit slower. It wasn't a gated skill, so I can imagine some people achieved decent jobs from it, but most managers probably either did it themselves or assigned it to someone with a different job.
logicprog over 1 year ago
I think the fundamental premise of this article is wrong, because I don't think the kind of thing ChatGPT does is the same kind of thing that knowledge workers do. It isn't just a difference in reliability, in talent for knowing what knowledge is relevant, and in understanding of niche knowledge, although those are also important factors that he doesn't weigh highly enough; there is a fundamental difference in kind between what's being done, making his comparison essentially a category error. Even if we assume that ChatGPT doesn't hallucinate, it is only an information retrieval system with some simple synthesis capabilities, whereas knowledge work is not just having the knowledge but having a full conceptual understanding of it and solving problems with creative application of that knowledge and conceptual understanding. It's not just about being able to regurgitate a couple paragraphs of synthesized text or take a couple snippets from Stack Overflow and put them together. It's about actually understanding the meaning and concepts of things and the principles behind them, having a good understanding of the methods of reasoning and problem solving that you can self-critically apply to your own thought processes in a self-correcting manner (which is something the structure of ChatGPT precludes), and being able to creatively apply the knowledge you have, through the semantic lens of the things I just listed, in order to creatively solve a specific problem within a specific context.
JackMorgan over 1 year ago
The author talks of "summarizing" but I think there's a deeper concept at play: compression.

Human thoughts are compressed into speech. Many people probably speak only a tiny fraction of their thoughts. Writing is compressed speech; most people write only a fraction of what they say (unless you're a professional content producer/author/developer/etc.). Entire worlds of thought are regularly compressed into writing. Poetry is often further compressed: it's trying to fit vast complexity into as few words as possible.

Think how many words have been written trying to accurately decompress the full meanings in Hamlet, the Odyssey, or Beowulf.

Tweets are compressed writing. Memes are compressed tweets.

An LLM can compress huge amounts of text into a few words. This is pretty remarkable, but it is only ever as good as the input. It's never creating; it's simply compressing concepts it is trained on. You can unlock parts of that compressed data with prompts.

If my friend and I both shared the same LLM, I could send them a few words to use as a prompt, knowing that it will "expand" into paragraphs or even chapters of meaning already pre-compressed inside the LLM.

I think this is possibly a new thing. Imagine something like HugeLol, Reddit, or X.com, but instead of tweets and memes it's LLM prompts. We're able now more than ever to transmit complex concepts to each other with the smallest possible bits.

I've seen this a bit already in some of the LocalLlama online groups. They'll post prompts to each other to produce a "personality" they can interact with. I suspect this will be more and more used to compress and send data to each other.
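A minimal sketch of that "shared model as codebook" idea, assuming a hypothetical expand() stand-in for a deterministic local model both parties hold (it is not a real API); only the mechanics of sending a short prompt and measuring the expansion are illustrated.

```python
# Sketch: a short prompt travels between two people who share the same model;
# expand() is a hypothetical stand-in for a deterministic local model call
# (e.g. temperature 0), faked here so the example runs without any model.

def expand(prompt: str) -> str:
    """Hypothetical deterministic expansion by a model both parties share."""
    return ("(imagine several paragraphs of text the shared model "
            "reliably regenerates from this prompt) " * 20)

message = "stoic take on losing a job, in the voice of Seneca, 3 paragraphs"
expanded = expand(message)

# Only the short prompt needs to be transmitted; the shared model supplies
# the rest, so the effective "compression ratio" can be large.
ratio = len(expanded) / len(message)
print(f"sent {len(message)} bytes, decoded {len(expanded)} bytes (~{ratio:.0f}x)")
```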
unraveller over 1 year ago
> You won't be judged on how much you know, but instead on how well you can allocate and manage the resources to get work done.

Why would someone waste time judging me at all if such a mass levelling off of useful skill happens? It seems to suggest that in the future there will only be ultra-managers and those yet to take the two-day manager course.

It's very wishful thinking to hope all former work achievements as markers for future success go by the wayside. I agree that no one really cares what constitutes "work", only "doneness". And that's why when someone wants a hole in the wall they are not going to judge you by how well you hold a drill in your hand for 5 seconds if better markers exist.

It's still the Knowledge Economy. Knowledge already propels the economy as much as it does because it can't be monopolized like minerals etc. I'd only expect to see more of this heaving force. Humans will always take credit for everything they can (even for not taking credit), so it will still be called "his knowledge" the moment it touches him. The stigma will disappear soon, just like it always does with tech.
deepsquirrelnet over 1 year ago
Somewhere between the hype and the doom I think there's a much simpler answer. We have always needed to use language to interface with computers. In the early days, we spent more time learning to talk the language of a computer. As computers became more powerful, we made their language more like ours. This is just the next "higher level".

In the near future, I think the programming language will be natural language, and LLMs will be a translator to lower-level code. Why should we have an LLM program in Python, when it could probably just write low-level instructions only meant to be tested and verified but not read? Translation is what LLMs are good at, and summarization is fundamentally a translation task from verbose text into key information. Code is a translation of our ideas into machine language.

The reasoning aspects are not the strength of an LLM. Without detailed instructions to translate, they are not good at writing code.
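A rough sketch of that "spec in, tested-but-unread code out" loop, under stated assumptions: llm_translate() is a hypothetical stand-in for whatever model endpoint would be used (hard-coded here so the example runs), and the generated source is treated as an opaque artifact that only has to pass its tests.

```python
# Sketch: natural-language spec -> (hypothetical) model -> code we never read,
# accepted or rejected purely on the basis of automated verification.

def llm_translate(spec: str) -> str:
    """Hypothetical: ask a model to turn a prose spec into Python source."""
    # Hard-coded so the sketch runs without any model at all.
    return (
        "def median(xs):\n"
        "    s = sorted(xs)\n"
        "    n = len(s)\n"
        "    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2\n"
    )

def verify(source: str) -> bool:
    """Run the generated code against tests; no human needs to read it."""
    namespace = {}
    exec(source, namespace)      # acceptable only because this is a toy sketch
    median = namespace["median"]
    return median([1, 3, 2]) == 2 and median([1, 2, 3, 4]) == 2.5

spec = "Write a function median(xs) returning the median of a list of numbers."
code = llm_translate(spec)
print("accepted" if verify(code) else "rejected, re-prompt the model")
```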
hyperthesis over 1 year ago
Isn't this the standard automation trajectory, of lower-level work being done by machines, leaving humans to do the higher-level work? Summarizing is a step before generalizing/inferring/what-if/creating. Even for knowledge workers, wasn't the spreadsheet a similar automation?

Historically, automation creates more work than it replaces; but why should that continue? One reason is that *automation is a commodity*, so competition shifts to non-automated functions. Human demand continues.

Of course, this doesn't address the non-economic concerns of the Luddites: that the *human* investment in skills is lost, and the derived sense of human value and dignity. Unfortunately, in the long term: whatever your labour, AI is coming for you.
pyrophane over 1 year ago
The author is also selling this: https://www.maxyourmind.xyz/
satisfice over 1 year ago
Does no one notice that ChatGPT sucks at summarizing? Am I the only person in the whole fucking world who looks at LLMs without rose-colored glasses? Does no one else test these things before declaring how great they are?

Good luck outsourcing anything interesting to ChatGPT; I think all you are getting is mediocrity minus minus.

It's one thing to want flying cars, jet packs, hoverboards, etc. It's another thing to pretend we have them when we don't. Sober up, boys.
JKCalhoun over 1 year ago
> We live in a knowledge economy. What you know—and your ability to bring it to bear in any given circumstance—is what creates economic value for you.

And for the first time the thought occurred to me that AI might in fact make actual artists even more in demand. To be sure, it may well end up only being wealthy patrons, but we may come to prize a thing demonstrably created by a human as more "genuine" and therefore more valuable.
jensensbutton over 1 year ago
> choosing which work to be done, deciding whether work is good enough

The author elides the fact that in knowledge work it is typically the manager's experience of having developed competency themselves that makes them effective at deciding which work to do and evaluating that work.

I wonder if the author would bet on a tech company that decided to fill their management ranks with new grads.
vages over 1 year ago
"New technology is about to change the landscape completely." Sure, it will. I'm not so sure that making an effort directed at being a "model manager" is the best way to prepare yourself for the possible changes.

In my opinion, education (for all ages) wastes a lot of time chasing "digital literacy", with few results to show for it. I think most people will have a greater return on investment from simply writing, practicing "math", and organising their personal work and knowledge. These skills are surprisingly hard to get good at, and will probably keep you in demand as long as human labor is.

Edit: ... however, this is an interesting thought experiment. It just reads like career development advice.
jacknews over 1 year ago
The argument makes no sense: if ChatGPT is capable enough to perform the work, it's certainly capable enough to schedule it.
bitwize over 1 year ago
I work at a company which performs analytics on medical data.

To test our software, and to allow our clients to test the kinds of analyses they perform, we have "synthetic" datasets available to our software suite. These datasets statistically *look* like real medical data, but they're all fake. This means we can use them and get realistic results provided we're not concerned about accuracy (testing a new feature, for example), and we don't have to worry about things like HIPAA compliance because no actual medical data is being used.

To me that's where things like LLMs come in. They produce statistically plausible, fake data. They're not reliable as a ground source of truth, even for summarizing, because accuracy isn't their job; statistical plausibility with respect to their model is. And note that unless I'm running the model on my own, on a PC much beefier than any I actually own, I control neither the corpus used to train the model nor what counts as "statistical plausibility" -- someone at Microsoft does that. Which means I can't trust ChatGPT's output to produce an accurate summary or handle information I provide it in a responsible way. It may well decide to fool me with garbage, because that's what it's designed to do: produce convincing output, not accurate output. Where it might come in handy is providing a starting point for something like a fictional story, or an essay. Fake data can be profoundly useful in certain contexts, but it's still fake.
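A toy illustration of the synthetic-dataset idea described above: records drawn from plausible distributions that correspond to no real patient, usable for testing without touching protected data. The field names and distributions are invented for the example and have nothing to do with the commenter's actual product.

```python
# Sketch: generate statistically plausible but entirely fake patient records.
import random

random.seed(42)  # reproducible fake data

def synthetic_patient(i: int) -> dict:
    age = max(0, int(random.gauss(52, 18)))            # roughly adult-skewed
    systolic = int(random.gauss(120 + 0.3 * age, 12))  # loosely age-correlated
    return {
        "patient_id": f"SYN-{i:06d}",                  # obviously not a real MRN
        "age": age,
        "systolic_bp": systolic,
        "diabetic": random.random() < 0.10,
    }

cohort = [synthetic_patient(i) for i in range(1000)]
mean_bp = sum(p["systolic_bp"] for p in cohort) / len(cohort)
print(f"{len(cohort)} fake records, mean systolic BP ~{mean_bp:.0f}")
```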
alex_young over 1 year ago
Humans reason and abstract too, making for some of the most compelling writing. How are language models going to crack those problems?

Keep your fountain pen. You're still going to need it.
hyperthesis over 1 year ago
AI improves faster than economies adapt. A singularity?

Present LLMs are glorified search. As they approach real-time training, Google will go away, finally disrupted. (Google's 20% time was meant to avoid this very thing, but you need a whole organization behind an idea to disrupt, however small: e.g., IBM's PC was developed in a startup-like separate business unit.)
fiala__ over 1 year ago
> once I made that connection, I started to see summarizing everywhere

One of the most powerful (and dangerous) aspects of dogma is the tendency of its followers to promote it to a universal pattern.

I, for one, am horrified at the prospect of a future where any kind of non-managerial labour is viewed as "summarising" and automated out of our collective skillset. GPT output may often be equivalent to human writing/thinking as a commodity, but human writing & thinking is not a commodity in its essence.

To me this is not the end of the knowledge economy. This is a metastasis of the same capitalist disease that attacked the traditional crafts sector more than 100 years ago, attempting to replace it with a mix of industrially exploited labour in the Global North, colonial/slave labour in the Global South, and eventually mechanisation + automation. This brought about fantastic levels of productivity and wealth, along with insane amounts of pollution, the climate crisis, and growing inequality. In sectors such as fashion the market is flooded with low-quality goods with a lifetime of a few months, which has led to astronomical amounts of waste.

The difference with AI is, now the Western creative middle class is affected, and due to the shadowy nature of the industry, it is not yet completely clear who is getting exploited (though we are starting to find out [1]). The good thing is, traditional crafts have not disappeared; in fact, their products are increasingly prized and appreciated. I firmly believe generative AI's onslaught can also be withstood, and a better world is still possible: one where artisan labour, attention, and connectedness prevail over whatever hellish future generative AI would create.

(Side note: IMO high-quality code is much, much more than a StackOverflow summary.)

[1] https://time.com/6247678/openai-chatgpt-kenya-workers/
gnarlouse over 1 year ago
Proposing we call it “the Zig economy” then, because it’s all about managing allocators now
m0llusk over 1 year ago
Tech developers keep getting things wrong. Now we have self-driving cars that won't obey authorities telling them to stop, one of the most foundational driving skills.

I do property maintenance and there is a lot of similar difficulty. It will take a lot of development before robots can do cleaning, dusting, and bed-making well and efficiently. But the larger task is negotiating what exactly should be done in the available time and how exactly, such as using low-phosphate soap with a soft cloth or bleach soap with a tough sponge.

What I see happening is that a huge amount of subtle know-how is being lost as people retire without training replacements, let alone AI replacements.
jvans over 1 year ago
Advances in knowledge and technology will always come from people who understand the nitty-gritty details very well and recognize patterns between abstract ideas.

I don't think current-generation AIs are very good at this. A nearest-neighbor search returns similar concepts but misses the very highest-level abstractions. An NN search on a piece of text won't return text that talks about completely different ideas but uses similar sentence structure and argument patterns.
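The nearest-neighbor retrieval the comment describes, in miniature. The "embeddings" below are tiny hand-made vectors standing in for whatever a real embedding model would produce; only the ranking mechanics (cosine similarity, sort by score) are real.

```python
# Sketch: cosine-similarity nearest-neighbor search over toy "embeddings".
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Toy 3-d vectors; dimensions loosely mean (databases, cooking, argument-by-analogy).
corpus = {
    "tuning postgres indexes":          (0.9, 0.1, 0.1),
    "slow queries and missing stats":   (0.8, 0.0, 0.2),
    "a recipe structured like a proof": (0.1, 0.8, 0.9),
}
query = (0.85, 0.05, 0.15)  # "my database reads are slow"

for text, vec in sorted(corpus.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{cosine(query, vec):.2f}  {text}")
# Topical neighbors come back first; the structurally/abstractly similar
# document ranks last, which is the limitation the comment points at.
```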
lifeisstillgood over 1 year ago
Summarisation will have impacts.

"OK, ChatGPT, summarise all emails sent in the company and determine who is duplicating work."

"OK, summarise the minutes of every meeting."

"Hell, stop there: *record* every meeting in the company, summarise the discussion, and determine who is working on what project. Is there duplication?"

"OK, now we record every meeting, and ChatGPT can put people working on related projects in touch with each other. Tell me why we need a layer of management."
ilaksh over 1 year ago
I think a few of the main points are good, but part of the worldview here is pretentious and classist to an enraging degree.

Managers aren't in the position they are in because they have special skills that non-managers don't. They are often in those positions because of social class, i.e. their rich parents paid for a better college, or were role models for management tracks, or were connections for landing executive roles, etc.

Or they were promoted because they had very effective technical skills.

But putting aside all of that, the idea that AI seems like it should make most if not all of us into managers is something I have been thinking about and trying to accelerate in my own life as much as possible.

The closest I have come to making this a practical reality so far is the 'aider' programming tool. I have also started on my own agent framework. It seems like the ability to put these things in a loop over a period of time with direct feedback, such as executing the scripts they are writing for you, is where we are headed. We can already do that, of course, but I mean that the effectiveness will likely continue to increase as the models and agent systems are refined.

I think there is huge potential for more specialized models that can run locally and continuously without racking up OpenAI bills. The theory is that if the models don't need to know how to do literally everything, they can be smaller and still effective enough at narrow tasks.

To make this convenient we need an easy way to share more specific models and ideally a way to automatically discover and load them on the fly. So my goal is to build the WordPress of agent frameworks, at least as far as the ability to very easily install plugins and agents.
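A bare-bones sketch of the "model in a loop with script-execution feedback" workflow the comment describes (in the spirit of tools like aider, not a reproduction of it). propose_step() is a hypothetical stand-in for a model call and just returns a canned plan here; the loop structure, where each command's output is fed back for the next turn, is the point. Assumes a `python` executable is on PATH.

```python
# Sketch: propose a command, run it, feed the output back, repeat until done.
import subprocess

def propose_step(goal, history):
    """Hypothetical model call: given the goal and past (command, output)
    pairs, return the next shell command to try, or None when satisfied."""
    plan = ['python -c "print(2 + 2)"']          # canned single-step plan
    return plan[len(history)] if len(history) < len(plan) else None

goal = "check that the interpreter can do arithmetic"
history = []

while True:
    cmd = propose_step(goal, history)
    if cmd is None:
        break
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    history.append((cmd, result.stdout + result.stderr))  # feedback for the next turn
    print(f"$ {cmd}\n{history[-1][1]}")
```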
quantum_state over 1 year ago
Allocation can be, as it already is at small scale, algorithmic. Then what next?
WalterBright over 1 year ago
AI cannot create new knowledge, nor can it make value judgments. It can only regurgitate existing knowledge. And, of course, garbage in garbage out.
bvoq over 1 year ago
Management thinking they are again better than ChatGPT lol. News flash: ChatGPT is also good at high level thinking.
hyperthesis over 1 year ago
I like the idea that *iteration is faster* for people learning to manage AI vs people.
nxobject over 1 year ago
I'm a little suspicious about the author's characterization of using a wide variety of LLM workflows as "management": it misses things like training and supporting your supervisees, and generally contributing to a healthy organizational ecosystem. I think a better analogy would be "specification writing".
igor47 over 1 year ago
From TFA: > there are only about 1 million managers in the U.S., or about 12% of the workforce

Ummm. Either there are way fewer people in the US workforce than I thought (1 million at 12% would put it around 8 million people), or we still need humans to do some reasoning.
streetcat1 over 1 year ago
I wonder, has the author ever worked on a commercial software project?
amelius over 1 year ago
Who says computers can't do allocation?
abathologist over 1 year ago
This is an interesting reflection, and I'm glad to have read it.

A few things came to mind:

The view of programming as "summarizing what's on StackOverflow" is really alien to me. I suspect this is indicative of a particular approach to programming, and perhaps to working in general, which I don't share. The author's view seems to be that knowledge exists "out there" and the role of the "knowledge worker" is to accumulate, internalize, and reshape it into products derived by summarization. Compare this with another view on "knowledge work", taken from https://en.wikipedia.org/wiki/Knowledge_worker:

> Nonaka described knowledge as the fuel for innovation, but was concerned that many managers failed to understand how knowledge could be leveraged. Companies are more like living organisms than machines, he argued, and most viewed knowledge as a static input to the corporate machine. Nonaka advocated a view of knowledge as renewable and changing, and that knowledge workers were the agents for that change. Knowledge-creating companies, he believed, should be focused primarily on the task of innovation.

High-value knowledge work involves creating and transforming knowledge, not just compressing or reconfiguring it.

To expand on this in abstract terms: knowledge work is fundamentally *cognitive* and it gets its higher-order purpose and potential from the application of *reason*; i.e., it is concerned with rational cognition. Rational cognition is mainly about synthesizing new, higher-order concepts that direct and evolve given concepts into more general and potent structures. This work involves *re-cognition* as a necessary component, but if it were only recognitive -- as it would be if it were only concerned with recollection and summarization -- it would not have the creative dynamic which it does.

To expand in more specific terms: programming work involves problem solving, but it is not mainly about reassembling existing solutions to solve known problems. The most valuable aspects of this work come from *problem discovery*, *root cause analysis*, and *solution invention*. (Programming work that consists in StackOverflow copy pasta is probably best viewed as the production of tech debt :) This is not to say resources like StackOverflow aren't useful; they definitely are!)

I suspect it says more about the author's career aspirations and the reigning interests of the political-economic system that they envision a future where everyone is a manager. First, only if you have "manager brain" can you look at what's happening in tech and see a future where everyone is a manager as a positive development, compared with a future where everyone is a researcher, artist, artisan, inventor, etc. Second, the managerialization of work actually describes an idealized view of the present situation, and if it is looming in the future it is only as an intensification of the current dynamics. The rise of [the Professional Managerial Class was heralded in the 70s][1], and most tech workers are in the PMC:

> Who are these Americans working in the upper echelons of the knowledge economy, exactly? ... the Professional Managerial Class. The PMC, as they are now often called, came into existence in the late nineteenth and early twentieth centuries. They were not the old petty bourgeoisie of small-business proprietors and independent farmers, but a new class whose expertise was required to make an industrial economy function: engineers, scientists, teachers, doctors, social workers, functionaries, bureaucrats, and other professionals and managers who had the know-how to create and control the levers of the modern capitalist world [0].

The managerialization of everything does seem very likely, because that is basically what our current economic regime has been trying to achieve since the advent of the "digital revolution" (and maybe since Hobbes: a pyramid scheme of nested managers, where every managed worker is actually a manager automating its own autonomous subordinates).

Against the view that computers are a tool for rendering every "maker" into a "manager", I propose meditating on the view propounded by Conal Elliot that computers should be "telescopes for meaning".

[0]: https://pages.cs.wisc.edu/~remzi/Naur.pdf

[1]: https://www.michaeljkramer.net/ideas-of-the-pmc/