This Time It's Different

38 points by justusw, about 2 years ago

22 comments

TrackerFF, about 2 years ago
What puzzles me about all the skeptics (deniers, even) is that they don't seem to be too forward thinking. Sure, these LLMs will not replace anyone right now, but damn, look at the progression. Going back 5-10 years, we're doing stuff that was seemingly impossible back then. Just imagine what things will look like in another 10-15-20 years.

If you're a 14-15 year old computer enthusiast, planning on getting a BS or MS in CS, you'll likely first enter the workforce in 7-10 years. The time you spend in university is a lifetime for certain fields of machine learning / AI, and who knows which entry-level jobs will have been completely automated by then.

Personally, I think this will be the death of entry-level jobs.
pontus, about 2 years ago
One (perhaps silly) way I like to frame this stuff is by imagining that I'm some special purpose Turing machine designed specifically for some task. Sure, sometimes other Turing machines come along that appear to infringe upon my skill set, but they ultimately only perform a small subtask better than I am able to (e.g. calculator, spell check, word processor, IDE, code completion, ...). So, I incorporate it into my routine, effectively boosting my own performance.

Now, what would happen if all of a sudden a universal Turing machine came along? Well, by virtue of being universal, that means that it can emulate me and all other Turing machines. This time around things are different. Even if I can find a way to incorporate it into my workflow, it can still emulate that more sophisticated version of me by virtue of being universal. So it then comes down to whether or not I can incorporate the latest version of this universal Turing machine faster than its own design is improved. If not, I will be replaced. Since in our instantiation I am made from biological material, it's in my mind only a matter of time before the universal Turing machine starts outpacing me.

So, I guess the question is then if these GPT models (or their descendants) are universal (in my hand-wavy definition of the term).
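A minimal sketch of what "universal" means in that framing: one simulator that takes any special-purpose machine's transition table as data and reproduces its behavior. The `run` helper and the unary-incrementer example below are illustrative assumptions, not anything from the comment itself.

```python
# A tiny "universal machine": it takes any transition table as data and runs it.
# The specific machine below (a unary incrementer) is just an illustrative example.

def run(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate an arbitrary Turing machine given its transition table.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no transition applies.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example special-purpose machine: append one '1' to a unary number.
incrementer = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("halt", "1", +1),    # write a 1 at the end, then halt
}

print(run(incrementer, "111"))  # -> "1111"
```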
1attice, about 2 years ago
I'm about to step out, and I will have to write my own essay in reply, but frankly, this time *is* different.

- OP's family of arguments, which I'll call BAU (Business as Usual, i.e. the claim that there is nothing fundamentally different about *this* disruption), depends on historical induction
- Historical induction is unreliable
- Sometimes things *really are* different, for example, the discovery of germ theory, or the invention of nuclear weapons
- The example given, e.g. farrier, is nothing like the present situation
- The fundamental difference between the coming disruption and previous disruptions is the scale. (Just as the difference between TNT and nukes was, again, scale.) Scale matters. Differences in quantity become differences in quality.
- By my read, transformer-based AI obviates the need for *most cognitive work*.
- That will upend the 'merit' part of our supposed meritocracy. We'll either have to become egalitarians (unlikely anytime soon, esp in USA) or we'll fall back on some other, worse metric for deciding who serves and who eats at the restaurant of life.
- I'd put my money on a resurgence in terrible ideas from the past, because they are so hot right now. Stuff like racism, title, caste, what-have-you.
- All of the abovegoing is Bad, and we should feel bad, because things are about to get bad.
- A better way to model this is as a reduction in habitat -- whereas the introduction of the ICE increased 'habitat' for minds desiring useful employment (engineer, what-have-you) while marginalizing a profession or two (farrier), the introduction of GPT seems poised to reduce habitat at a scale we have not seen before, and the 'new, better jobs' that Sam Altman alluded to, for example, seem beyond naming. Like, what is there left to do? Think it through. Where is your mind going to go? Knitting?
- Again, proper essay forthcoming; first, brunch
carapace, about 2 years ago
It definitely *feels* different.

I saw the output of a GPT4 code assistant the other day and my immediate reaction was, "well, my career as a programmer is over." I can still do valuable things (gosh I sure hope so!) but the stuff I've been doing for the last twenty years or so is over. And good riddance! Software is buggy crap. The machines will do a better job.

The main issues are:

Who gets to decide the boundaries of publicly acceptable thought?

Who gets to reap the economic windfall?

How do we educate ourselves in a world that contains talking machines that can answer any (permitted) question?
ftio, about 2 years ago
Really difficult to agree with this given the pace at which LLMs are improving.

LLMs are disruptive in that they enable a form of outsourcing. Outsourcing to the lowest-cost region in the world, inside a computer. Outsourcing to tireless, ever-improving, highly intelligent machine workers. Workers that will eventually have a variety of specialized and/or general skills, depending on what they're trained for.

Imagine "offshoring" (AIshoring?) for 1% of the cost of a human employee to a machine with zero time off, zero time zone separation, zero cultural or communication barriers, and with 100% access to all of your corporate documentation, goals, and other context.

Imagine that these "offshore" AI workers only improve every year.

This time, it really is different.
nkozyra, about 2 years ago
It's extremely uncommon that everything changes suddenly and forever.

Most technological progress is a continuum, with little step functions of "this is it."

The AI/ML progress in the last ten years is a big local maximum, though, and that's enough to drastically change things for a lot of people.
pjdemers, about 2 years ago
The price of things that AI can do well is quickly going to fall to the cost of the electricity to run the model. The price of things that AI can do, but can't do well, will fall some, but not nearly as much. The price of things that AI can't do at all will go up, because there will be fewer people working in knowledge jobs, and therefore they will be able to command higher wages.
spywaregorilla, about 2 years ago
> What about the printing press or any other of our advances in writing, such as the typewriter, spellcheckers, emails, grammar checkers, and so on? While the work of a classical secretary became unnecessary in many ways, the fact that the individual increase in productivity meant an overall growth of the economy meant, in turn, that those who would have lost their current role or even employment to technological advancement would find new occupations within this growth.

Oh please. That is not at all what happened. Technology that makes people more productive largely does so by lowering the required skill to do something. A shop clerk used to be a pretty difficult job. Now it's done by people with mental disabilities.

The pay goes down. The work becomes less meaningful. We foster a system where the supply of humans willing to do menial labor tasks grows so that those with capital have access to uber drivers and other permanent servant-class workers.

The growth of the economy is not a tide that lifts all ships.
pupppet, about 2 years ago
The author suggests GPT-4 is analogous to tools like the spellchecker, but really the roles get flipped. AI does the work, and we take the role of the spellchecker.

Not dissimilar to the one employee who hangs around the 10 self-checkouts ensuring they work properly. Can't say I look forward to that career change.
comment_ran, about 2 years ago
GPT4: The author expresses skepticism about the idea of an imminent Singularity, a point where artificial intelligence surpasses human intelligence. They argue that LLMs are more likely to be force multipliers, improving productivity and automating routine tasks, rather than replacing human workers.

I would say the problem is that we are multiplying something, and as a result, we don't know the outcome of the multiplication. Right now, the multiplication is only intended to improve productivity for some people. However, if this multiplication were to occur on a global scale or on a societal level, the true impact is unknown.
low_tech_love, about 2 years ago
The one thing that surprises/scares me the most in the current state of affairs is not so much the technology itself (after all, technology needs to be absorbed by society, and that is surely a bottleneck here), but how fast the *research* is going. There is something about machine learning / AI research that makes it both faster to do than other types of research and also super motivating. The people who do ML research are basically just working 24/7 and doing that with a smile on their faces. Most other types of research require you to spend a lot of extra time doing boring, bureaucratic stuff (like handling human subjects, complex protocols, lab equipment, endless meetings, etc.) but in ML you just ideate, program, run the tests, write, publish, repeat. (Sure, of course you need to do it right, but that is not the point here.)

And for those who are researchers at heart, that is the best thing in the world: to be able to do your research and push your results out as quickly and efficiently as possible. So nowadays a paper comes out with some new and interesting development, then two months later there are 30 other papers with significant improvements over that first one. (Yes, there is a lot of junk, but again that's not the point here: the point is that those who are doing it right can do it efficiently.) This is the most incredible thing about this whole situation, and maybe the most scary: there is no way to stop this avalanche of research, because it's not a centralized thing: it's just a bunch of human beings doing what they love, with motivation (both financial and personal). Nobody can stop this. If someone happens to press the doom button in the middle of this, well... that's it!
seydor, about 2 years ago
> knowledge workers

Aka Thinkers. I don't think the author considered the full extent of automating thinking. They are underestimating what it is. This technology can only get better and is probably already superhuman. This time it is different indeed, and it makes a lot of knowledge work not just obsolete, but also inferior.
QuiEgo, about 2 years ago
AI has to be trained, so it's good at doing things people have already done many times. It's not awesome at designing novel things.

So, I may get an AI that's like a new-college-grad level of coder.

As a senior, I'm mostly giving work to my team anyway and then reviewing what I get back, so it would not be so different.

As a junior, you have to be scared you're gonna be replaced, and question coming into the industry at all.

So, fewer juniors will come into the industry, which will create a shortage of seniors in the coming years, but they will be all the more needed to direct the AI.

If you're experienced, you're gonna be in an amazing place during this transition phase. If you're a junior, there's gonna be a huge hump.

When the current crop of seniors gets old and retires, there's gonna be a shortage like none the industry has ever seen. So the smaller class of juniors that ride out the revolution are going to have it best of all.
streetcat1, about 2 years ago
The problem with AI such as GPT4 is training data. As its usage increases, most of its training data will be data generated by GPT4 itself, creating a positive feedback loop. Since fresh human-generated data stays scarce, this would actually increase the value of human knowledge workers.
ergonaught, about 2 years ago
This is the point where "it all" goes sideways. Not toward a grand and transformative singularity, but sideways into some variation of a dystopian hellscape.

Being unable to recognize why this is all very clearly going to go wrong requires a great deal of ignorance or a wildly unrealistic faith in humanity, which is sort of the same thing.

There's nothing necessarily wrong with ignorance or delusional faith in humanity, per se, but the people qualified to assess this seem almost universally negative regarding the most likely outcome.
mettamage, about 2 years ago
So I've studied CS and have been professionally programming for 3 years. Considering the advances of AI, what (somewhat) programming-related job would be the safest / the best bet going forward? I don't have a full picture. A few fields I see:

Specialized fields such as:

* FPGA programming

* AI itself

* Red teaming / blue teaming

I think these fields will have a tougher time:

* Web dev

* Game dev (they do now as well)
teucris, about 2 years ago
Putting aside fears of AGI for a moment, seeing the comments here I'm coming back to the same idea I come to every time new tech comes along: complaints about AI and automation are actually complaints about capitalism. The increase in productivity from AI could in fact enrich our lives, but given the never-ending hunger for growth from late-stage capitalism, the average person will see a drop in wages, harder jobs, and more inequality.

Can we break the cycle on this? Is there a way to drive innovation while valuing humans?
Animats, about 2 years ago
There will be denial that it's different until large language models start replacing CEOs. Then it will be a crisis.

Here's a way to approach CEO automation. Collect up business cases, as used in business schools, and add them to the training set. Harvard and Stanford have huge collections of business cases.

Then try management in-basket tests. [1] Work on prompts that get large language models to pass those.

Then shadow some high-level executives. Intercept all their incoming and outgoing communications (which some companies already do) and have the system respond to the same inputs the executives do. Speech-to-text is good enough now for this.

A good exercise for YC would be to keep all the inputs from new company pitches, and use those, plus the results two years later, as a training set for selecting new companies.

Once ML systems are outperforming humans, the fundamental goals of corporate capitalism require that they be in charge.

[1] https://en.wikipedia.org/wiki/In-basket_test
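A rough sketch of what that in-basket step might look like in code. Here `call_llm` is a placeholder for whatever completion API is actually used, and the items and rubric are invented for illustration; this is an assumption about the setup, not anything the comment specifies.

```python
# Hypothetical sketch of running a management in-basket test against an LLM.
# call_llm() is a placeholder for any chat/completion API; the items and
# rubric shown are illustrative, not a real test battery.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API of choice here")

IN_BASKET_ITEMS = [
    "Memo: a key supplier will miss next quarter's delivery. Draft a response.",
    "Email: two VPs disagree on the launch date. Decide and justify.",
]

RUBRIC = "Score 1-5 on prioritization, delegation, and clarity of decision."

def run_in_basket(items, rubric):
    results = []
    for item in items:
        # Ask the model to handle the item as an executive would.
        answer = call_llm(f"You are an executive. Handle this item:\n\n{item}")
        # A second pass grades the answer against the rubric (LLM-as-judge).
        score = call_llm(f"{rubric}\n\nItem:\n{item}\n\nResponse:\n{answer}")
        results.append((item, answer, score))
    return results
```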
qz_kb, about 2 years ago
No one ever considers the equally likely scenario of a technological plateau instead of a singularity. Complexity/entropy always forces things to level off. There's a plausible scenario where GPT# "replaces" all knowledge work but cannot move anything forward. All humans become comfortable, and the skills/knowledge/tools required to improve anything are lost to time as the systems producing capable humans erode and we gain an overreliance on GPT# to solve every knowledge problem. Meanwhile, the knowledge problems that both we and GPT# care to solve plateau, because we're all synchronized to the same crystallized state of the world that the final GPT# model was trained on and "cares" about.

Maybe at some point we only act as meat-robots shoveling coal into the machine, but a lack of redundancy in GPT#, due to its own human-like blind spots, means it shuts down. Humans can no longer get it running again because they can't query it properly to help fix the complicated problems. The ability to do the tasks, or design the systems, required to keep the modern world robust to unknown future disasters or breakdowns does not and will not exist in any of the training data. If we get rid of all knowledge work, we can no longer bootstrap things back to a working state should everything go wrong.

Maybe the current instantiation of GPT#/SD etc. pollutes the training data with plausible but subtly flawed software, text, images, etc., halting improvement around here. Maybe the ability to evaluate whether the model improved becomes more noise than signal because it gets too vague what improvement even means. RLHF will already have this problem, as 100 people will have 100 slightly different biases about what constitutes the "best" next token.

No matter how hard it tries, I think we can say GPT will not solve NP-hard problems magically, it will not somehow find global optima in non-linear optimizations, it will not break the laws of physics, and it will not make inherently serial problems embarrassingly parallel. It will probably not be more energy efficient at attempting to solve these problems, maybe just faster at setting up systems to try solving them.

Another trap: as it becomes more human-like in its reasoning and problem-solving capabilities, it starts to gain the same blind spots as us, and also gains stochastic behavior which may cause it to argue with other instances of itself. I'm not convinced an AGI innovates at an unfathomable rate or even supersedes humans in all contexts. I'm especially not convinced that a world filled with AGIs indistinguishable from a very intelligent human or corporation does any better at anything than the 9 billion embodied AGI agents that currently populate the earth.
pryelluw, about 2 years ago
When the WWW came about, I embraced it. Why? It's a new form of communication, like TV or radio. I, like others, recognized the writing on the wall. Embracing it early turned out to be a good decision.

ChatGPT, or well-tuned LLMs, are not quite a new form of communication. They're a new way to enhance thought. Call it Thought++. I'm fully embracing it as my personal pseudo-assistant. Why? The writing on the wall is clear enough to understand the following: I don't know what the future will be. I know ChatGPT and similar have the potential to shape it in unimaginable ways. Learning the fundamentals of this technology is a safe bet. It's also an investment in the future.

I'm already using it as a thought lubricant. Can't wait until I can have it do things for me.
awinter-py, about 2 years ago
The obvious missing piece here is we're not the farrier, we're the horse. Horses ended up as glue. ('Glue code' pun intended, mostly.)

ATMs didn't kill the bank branch, exactly, but crappy banks have survived competition just by having access to cheap credit + yield.

Key thinker on this topic is Piketty, because his lens is perfect: when can machines do what people do, economically. He is agnostic to technology in that he doesn't care about computers vs. steampunk, and he is open to the market + political dynamics of labor as factors.
amelius, about 2 years ago
It's often said that technology progresses exponentially.

So I'm wondering, is anyone measuring the performance of AI in some way that allows us to check whether it's an exponential curve?
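One simple way to check, sketched below with made-up numbers (the metric values are placeholders, not real measurements): fit a line to the log of the metric over time. If the fit is good and the residuals look unstructured, the trend is consistent with exponential growth over that window.

```python
# Sketch: test whether a capability metric looks exponential by fitting a line
# to log(metric) vs. time. The numbers below are placeholders, not real data.
import numpy as np

years  = np.array([2012, 2014, 2016, 2018, 2020, 2022], dtype=float)
metric = np.array([1.0, 3.0, 10.0, 40.0, 120.0, 400.0])  # illustrative only

# Linear fit in log-space: log(metric) ~ slope * year + intercept
slope, intercept = np.polyfit(years, np.log(metric), 1)
doubling_time = np.log(2) / slope

print(f"fit: metric ~ exp({slope:.2f} * year), doubling every {doubling_time:.2f} years")

# Small, unstructured residuals in log-space => consistent with an exponential.
residuals = np.log(metric) - (slope * years + intercept)
print("residuals:", np.round(residuals, 3))
```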