LLMs and Programming in the first days of 2024

461 points | by nalgeon | over 1 year ago

47 comments

kevindamm, over 1 year ago

Salient point:

> Would I have been able to do it without ChatGPT? Certainly yes, but the most interesting thing is not the fact that it would have taken me longer: the truth is that I wouldn't even have tried, because it wouldn't have been worth it.

This is the true enabling power of LLMs for code assistance -- reducing the activation energy of new tasks enough that they are tackled (and finished) when they otherwise would have been left on the pile of future projects indefinitely.

I think the internet and the open source movement had a similar effect, in that if you did not attempt a project you had some small interest in, it was only a matter of time before someone else solved enough of a similar problem for you to reuse or repurpose their work, and this led to an explosion of (often useful, or at least usable) applications and libraries.

I agree with the author that LLMs are not by themselves very capable, but they provide a force multiplier for those with the basic skills and motivation.

netcraft, over 1 year ago

When it comes to programming, I agree completely. The sweet spot for any use of LLMs is when you already know enough about the subject to verify the work - at least the output - and know how to describe in detail (ideally only the salient details) what you want. Huge +1 to it helping me do things faster, do things that I wouldn't have otherwise done, or using it for throwaway, mostly inconsequential yet valuable programs.

But another area I have found it extremely helpful in is exploring a new topic entirely, programming or otherwise: telling it that I don't know what I'm talking about, that I don't necessarily need specifics, but that here is what I want to talk about, and asking it to help me think it through.

Especially if you are the kind of person who is willing to take what you hear and do more research or ask more questions. The entrance to so many fields and subjects is just understanding the basic jargon, listening for the distinctions being made and understanding why, and knowing who the authorities on the subject are.

miki123211, over 1 year ago

I think the most under-appreciated aspect of LLMs, one the article touched on but didn't directly address, is the "developer that knows everything" aspect.

No matter how senior a programmer you are, you're eventually going to encounter a technology you know very little about. You're always going to be a junior at something. Maybe you're the God of Win32, C++ and COM, but you get stuck on obscure NSIS scripts when packaging your software. Maybe you've been writing web apps for the last 25 years and sit on the PHP language committee, but then you're asked to implement some obscure ISO standard for communicating with credit card networks, and you've never communicated with credit card networks on that level before. Maybe you've been writing iOS apps since the first iPhone and Mac apps before that, spent a few years at Apple, know most iOS APIs by heart and designed quite a few yourself, but then you're asked to implement CalDAV support in your app and you don't know what CalDAV is, much less how to use it. An LLM can help you out in these situations. Maybe it won't write all the code for you, but it'll at least put you on the right track.

anotherpaulg, over 1 year ago

> The code was written mostly by doing cut & paste on ChatGPT…

I am constantly shocked by how many people put up with such a painful workflow. OP is clearly an experienced engineer, not a novice using GPT to code above their knowledge. I assume OP usually cares about ergonomics and efficiency in their coding workflow and tools. But so many folks put up with cutting and pasting code back and forth between GPT and their local files.

This frustrating workflow was what initially led me to create aider. It lets you share your local git repo with GPT, so that new code and edits are applied directly to your files. Aider also shares related code context with GPT, so that it can write code that is integrated with your project. This lets GPT make more sophisticated contributions, not just isolated code that is easy to copy & paste.

The result is a seamless "pair programming" workflow, where you and GPT edit the files together as you chat.

https://github.com/paul-gauthier/aider

Wowfunhappy, over 1 year ago

For the past few days, I have been trying to fix a bug in a closed-source Mac app. I otherwise love the app, but this bug has been driving me crazy for years.

I was pretty sure I knew which Objective-C method was broadly responsible for the bug, but I didn't know what that method *did*, and the decompiled version was a nonsensical mess. I felt like I'd hit a wall.

Then I thought to feed the decompiler babble to GPT-4 and ask for a clean version. The result wasn't perfect, but I was able to clean it up. I swizzled the result into the app, and I'm *pretty sure* the bug is gone. (I never found reproduction steps, but the problem would usually have occurred by now.)

I *never* could have done this without GPT-4.

couchand, over 1 year ago

This post is absolutely devastating to me. Salvatore is surely one of the most capable software engineers working today. He can lucidly see that this supposed tool is completely useless to him within his area of expertise. Then, rather than cast it off as the ill-fitting, bent screwdriver that it is, he accepts the boosters' premise that he must find some use for it.

Just as any introductory macroeconomics class teaches, if one island has superior skill at producing widget A, it doesn't matter how terrible the other island's skill at producing B is; we'll still see specialization, where island A leverages island B. So of course antirez's relative ability in systems programming would relegate the LLM to other programming tasks.

However! We do not exist in isolation. There is a multitude of human beings around us, hungry for technical challenges and food. Many of them have, or could obtain, skills complementary to our own. In working together, our cooperative efforts could be more than the sum of their parts.

Perhaps the LLM is better at writing PyTorch code than antirez. Just because we have an old bent screwdriver in the garage doesn't mean we should try to use it. Perhaps we'd be better off heading to the hardware store today.

sevagh, over 1 year ago

There is an impedance problem when starting a new project.

At the beginning, when 0% of the task is done and you need to start _somewhere_ - with a hello world, or a CMakeLists file, or a Python script, or whatever - it takes effort. Before ChatGPT/LLMs, I had to pull that effort out from within myself, with my fingertips. Now, I can farm it out to ChatGPT.

It's less efficient, not as powerful as if I truly "sat down and did it myself," but it removes the cost of "deciding to sit down and do it myself." And even then, I'm cribbing and mashing together copy-pasted fragments from GitHub code search, Stack Overflow, random blog posts, reading docs, Discord, etc. After several attempts and retries, I have the "5% beginning" of a project, when it finally takes form and I can truly work on it.

I sort of transition from copy-pasting ChatGPT crap to quickly create a bunch of shallow, bullshit proofs-of-concept, eventually gathering enough momentum to dive into it myself.

So, yes, it's slower, and more inefficient, and ChatGPT can't do it better than I can. But it's easier, and I don't have to dig as deep. The end result is that I have much more endurance for the actual important parts of the project (the middle and end), instead of burning myself out on the beginning.

madeofpalk, over 1 year ago

> I have a problem, I need to quickly know something that *I can verify* if the LLM is feeding me nonsense. Well, in such cases, I use the LLM to speed up my need for knowledge.

This is the key insight from using LLMs, in my opinion. One thing that makes programming especially well suited to LLMs is that it's often trivial to verify correctness.

I've been toying with this concept for evaluating whether an LLM is the right tool for the job: graph "how important is it that the output is correct" against "how easy is it to verify that the output is correct". Using ChatGPT to make a list of songs featuring female artists who have won an Emmy is time-consuming to verify, but it's also not very important, and it's okay if it contains some errors.

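As an aside, that kind of verification is often cheap to automate in code. Below is a minimal sketch (not from the comment): it assumes a pair of hypothetical LLM-suggested run-length encoding functions, treats them as untrusted, and checks a round-trip property on random inputs.

```python
import random

def rle_encode(s):
    """Hypothetical LLM-suggested run-length encoder: 'aaab' -> [('a', 3), ('b', 1)]."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    """Hypothetical LLM-suggested decoder."""
    return "".join(ch * count for ch, count in runs)

# Property check: decoding an encoding must reproduce the input exactly,
# and no two adjacent runs may share the same character.
for _ in range(10_000):
    s = "".join(random.choices("abcd", k=random.randint(0, 30)))
    runs = rle_encode(s)
    assert rle_decode(runs) == s
    assert all(a[0] != b[0] for a, b in zip(runs, runs[1:]))
print("round-trip property holds on 10,000 random inputs")
```
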
apwell23, over 1 year ago

I use ChatGPT as my thinking partner when writing code. I chat with it all day, every day, to finish work.

My company has approved Copilot, but the Copilot autocomplete has been an awful experience, and the company hasn't approved Copilot Chat (which is what I need).

I would love something similar that can run on my laptop against my code to generate unit tests, code comments, etc. (of course with my input and guidance).

kromem, over 1 year ago

Perhaps the most important point in the piece, and one that can't be repeated enough or understood enough as we head into what 2024 has in store:

> And then, do LLMs have some reasoning abilities, or is it all a bluff? Perhaps at times, they seem to reason only because, as semioticians would say, the "signifier" gives the impression of a meaning that actually does not exist. Those who have worked enough with LLMs, while accepting their limits, know for sure that it cannot be so: their ability to blend what they have seen before goes well beyond randomly regurgitating words. As much as their training was mostly carried out during pre-training, in predicting the next token, this goal forces the model to create some form of abstract model. This model is weak, patchy, and imperfect, but it must exist if we observe what we observe. If our mathematical certainties are doubtful and the greatest experts are often on opposing positions, believing what one sees with their own eyes seems a wise approach.

ahgamut, over 1 year ago

> Instead, many have deeply underestimated LLMs, saying that after all they were nothing more than somewhat advanced Markov chains, capable, at most, of regurgitating extremely limited variations of what they had seen in the training set. Then this notion of the parrot, in the face of evidence, was almost universally retracted.

I'd like to see this evidence, and by that I don't mean someone just writing a blog post or tweeting "hey I asked an LLM to do this, and wow". Is there a numerical measurement, like training loss or perplexity, that quantifies "outside the training set"? Otherwise, I find it difficult to take statements like the above seriously.

LLMs can do some interesting things with text, no doubt. But these models are trained on terabytes of data. Can you really guarantee "there is no part of my query that is in the training set, not even reworded"? Perhaps we could grep through the training set every time one of these claims is made.

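For what it's worth, perplexity itself is straightforward to measure for any open-weights model; the hard part is tying it to "outside the training set". A minimal sketch, assuming the Hugging Face transformers library, PyTorch, and GPT-2 as a stand-in model:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in open-weights model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Exponential of the mean next-token cross-entropy the model assigns to `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

# Text the model has likely memorized tends to score far lower than novel text,
# which is one (imperfect) signal of "seen during training".
print(perplexity("To be, or not to be, that is the question."))
print(perplexity("Purple hedgehogs negotiate quarterly tax treaties underwater."))
```
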
tipsytoad, over 1 year ago

The most useful feature of LLMs is how much output you get from so little signal. Just yesterday I created a fairly advanced script from my phone, on the bus ride home, with ChatGPT, and it was an absolute pleasure. I think multi-prompt conversations don't get nearly as much attention as they should in LLM evaluations.

andyjohnson0, over 1 year ago

> These are all things I do not want to do, especially now, with Google having become a sea of spam in which to hunt for a few useful things.

Seriously, just don't use Google for search. Google search is just a way to get you to look at their ads.

Use a search engine that is aligned with your best interests, suppresses spammy sites, and lets you customise what you want it to surface.

I've used ChatGPT as a coding assistant, with varying results. But my experience is that better search is orders of magnitude more useful.

kvz, over 1 year ago

antirez, thank you for talking some sense. I've seen skilled devs discard LLMs entirely after seeing one hallucination too many, then proclaim they are inferior and throw the baby out with the bathwater. There is still plenty of use to be had from them even if they are imperfect.

drubio, over 1 year ago

What an ending...

> I have never loved learning the details of an obscure communication protocol or the convoluted methods of a library written by someone who wants to show how good they are. It seems like "junk knowledge" to me. LLMs save me from all this more and more every day.

This is either depressing or tongue-in-cheek considering who he is -- the creator of Redis, with an older post titled 'In defense of linked lists' -- so talking about linked lists in Rust is hardly "junk knowledge", nor something an LLM can analyze circles around any human on.

It's the best coding-nihilism-as-a-profession post I have read, though.

jgalt212, over 1 year ago

> this erudite fool is at our disposal and answers all the questions asked of them,

Yes, but I have to double-check every answer. And that, for me, greatly mitigates or entirely negates their utility. Of what value is a pocket calculator that only gets the right answer 75% of the time, when you don't know ex ante which 75%?

fallingknife, over 1 year ago

I have found only a few cases where ChatGPT has been very useful to me, e.g. writing long SQL queries and certain mathematical functions like finding the area of intersection of two rectangles. And it hallucinates enough that a lot of the time I can't use it, because I know it would take more time to check the result for correctness and edge cases than it would to just write it in the first place. Maybe I am using it wrong, but so far the results for me have been extremely impressive, yet not very useful.

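For context, the rectangle function mentioned above is the kind of small, self-contained problem where an LLM's answer is easy to check by hand. A minimal sketch of such a function (my own illustration), assuming axis-aligned rectangles given as (x1, y1, x2, y2):

```python
def rect_intersection_area(a, b):
    """Area of overlap of two axis-aligned rectangles given as (x1, y1, x2, y2)."""
    # Overlap on each axis is the gap between the rightmost left edge
    # and the leftmost right edge; negative gaps mean no overlap.
    overlap_w = min(a[2], b[2]) - max(a[0], b[0])
    overlap_h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, overlap_w) * max(0, overlap_h)

# Unit squares offset by half a unit overlap in a 0.5 x 1.0 strip.
print(rect_intersection_area((0, 0, 1, 1), (0.5, 0, 1.5, 1)))  # 0.5
print(rect_intersection_area((0, 0, 1, 1), (2, 2, 3, 3)))      # 0 (disjoint)
```
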
abhinavstarts, over 1 year ago

> High levels of reasoning are not required. LLMs are quite good at doing this, although they remain strongly limited by the maximum size of their context. This should really make programmers think. Is it worth writing programs of this kind? Sure, you get paid, and quite handsomely, but if an LLM can do part of it, maybe it's not the best place to be in five or ten years

I appreciate the author writing this article. Whenever I read about the future of the field, I get anxious and confused, but then again the other options that were available to me interested me less.

I am now at a point where I still have the opportunity to pivot and focus on pure/applied mathematics rather than staying in the software field.

Honestly, I wanted to make money through this career, but I don't know what career to choose now.

I keep working on myself and don't compare myself to others, but if the argument is that only the top 1% of programmers will be needed in the future, then I doubt myself, because I still have a lot of things to learn, and then there is competing with people who are both experienced and knowledgeable.

I was thinking about pinpointing a target and then becoming an expert at it (by the 10,000-hour rule).

I'm sorry to ask, but today, and in general, I am very confused about which path/career to target related to computing and mathematics. Please give me your valuable advice. Thank you.

block_dagger, over 1 year ago

One of the areas that has sped up the most for me while using ChatGPT to code is having it write test cases. Paste it a class and it can write a pretty good set of specs if you iterate with it. Literally 10x faster than doing it myself. This speed-up also carries over to languages/frameworks I'm not familiar with.

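A minimal sketch of that test-generation loop, assuming the OpenAI Python client with an API key in the environment; the `Stack` class, prompt, and model name are illustrative, not taken from the comment:

```python
import inspect
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

class Stack:
    """Hypothetical class we want specs for."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

client = OpenAI()
prompt = (
    "Write pytest test cases for the following class. "
    "Cover normal use and edge cases, one behaviour per test:\n\n"
    + inspect.getsource(Stack)
)
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
# Review the generated specs before committing them; iterate by replying
# with failures or missing cases, as the comment above describes.
print(response.choices[0].message.content)
```
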
cies, over 1 year ago

> This should really make programmers think. Is it worth writing programs of this kind? Sure, you get paid, and quite handsomely, but if an LLM can do part of it, maybe it's not the best place to be in five or ten years.

Someone, a person with a sense of responsibility, has to sign off on changes to the code. LLMs have been shown to come up with answers that make no sense or contain bugs. A person (for now) needs to decide whether the LLM's suggestion is acceptable, whether we need more tests, and whether we want to maintain it.

I think programmers will still be needed for that; they will just be made more productive (as happened with the introduction of garbage collection, strongly typed languages, powerful IDEs, StackExchange, ...).

kibibu, over 1 year ago

The deep coder example doesn't appear to actually do what the comments or the article say it does.

It appears no better than the Mixtral example that it's supposedly an improvement on.

082349872349872, over 1 year ago

> At the same time, however, my experience over the past few months suggests that for system programming, LLMs almost never provide acceptable solutions if you are already an experienced programmer.

Hmm, this suggests to me that in a better world, the systems problems would have been solved with code, and the sorts of one-off problems which current LLMs do handle well would have been solved with formulae in a (shell-like? not necessarily Turing-complete?) DSL.

andai, over 1 year ago

I find that I am very unproductive if I am connected to the internet. The only way for me to get any real work done is to turn off the router.

At the same time, GPT apparently doubles programming productivity (though obviously this depends on the task).

I've long wished to have the best of both worlds. It seems I may soon get my wish: local LLMs will probably catch up with GPT-4 this year, or even outpace it!

gumballindie, over 1 year ago

For me, LLMs revealed how easy it is to manipulate the masses with well-executed marketing. Despite these tools being obviously unreliable, tens of people on here report how well they work and how much they have changed their lives. It shows that with sufficient propaganda you can make people see and feel things that are not there - not a new concept. But what's new to me is just how easy it is.

amclennon, over 1 year ago

> At the same time, however, my experience over the past few months suggests that for system programming, LLMs almost never provide acceptable solutions if you are already an experienced programmer.

In one-off tasks where someone is not enough of an expert to know its flaws, and such expertise is not required, "the marvel is not that the bear dances well, but that the bear dances at all".

esafak, over 1 year ago

LLMs are going to have to get much cheaper to train to be useful in corporations, where the questions you want to ask will depend on proprietary code. You can't ask "What does subsystem FooBar do, and where does it fit in the overall architecture?" You'd want to be able to continuously retrain the model as the code base evolves.

eminence32, over 1 year ago

> Since the advent of ChatGPT, and later by using LLMs that operate locally

Does HN have any favorite local LLMs for coding-related tasks?

sebringj, over 1 year ago

Currently, what I get out of it is a good quick overview with some hallucinations. You have to actually know what you're doing to check the code. However, this is a fast-moving target and will in no time be doing that part as well.

I think it's worth stepping back and thinking: maybe this thing is just giving us more and more agency, and what can we do with that? We need to adapt and not constrain ourselves to just being programmers. We are humans with agency, and if we can adapt to this, we can become more and more powerful, using the technical insight we've gained over the years to do some really cool things. I have a startup, and with ChatGPT I've managed to handle all parts of the stack with confidence, and used it for all sorts of business-related things outside of coding that have really helped move the business forward quickly.

SZJX, over 1 year ago

> I regret to say it, but it's true: most of today's programming consists of regurgitating the same things in slightly different forms. High levels of reasoning are not required.

But that's basically what engineering (and medicine, and law, and all sorts of professions out there) has always been about. Engineers build railways and bridges based on the same proven principles in slightly different forms, adapting to the specific needs of each project. Their job is not to come up with groundbreaking inventions every day.

IKantRead, over 1 year ago

This quote in particular struck me as relevant:

> And now Google is unusable: using LLMs even just as a compressed form of documentation is a good idea.

Beyond all the hype, it's undeniable that LLMs *are* good at matching your query about a programming problem to an answer without inundating you with ads and blog spam. LLMs are, at the very least, just better at answering your questions than putting your question into Google and searching Stack Overflow.

About two years ago I got so sick of how awful Google was for any serious technical question that I started building up a collection of reference books again, just because it was quickly becoming the only way to get answers about many topics I cared about. I still find these helpful, since even GPT-4 struggles with more nuanced topics, but at least I have a fantastic solution for all those mundane problems that come up.

Thinking about it, it's *not* surprising that Google completely dropped the ball on AI, since their business model has become *bad* search (i.e. they derive all their profit from adding things you don't want to your search experience). At their most basic, LLMs are just really powerful search engines; it would take some cleverness to make them *bad* in the way Google benefits from.

habibur, over 1 year ago

How many of us remember that at the beginning of last year the fear was that programming by programmers would be obsolete by 2024, and LLMs would be doing all the work?

How much has changed?

boulos, over 1 year ago

I really like the argument about misinformation vs testing. I'm not totally sold on "you can just see it", but I do think something like TDD could suddenly be really productive in this world.

I've found autocomplete via these systems to be improving rapidly. For some work it's already a big boost, and it's close to a difference in kind from the original IntelliSense. Amusingly though, I primarily write in an editor without any autocomplete, so I don't experience this often. But I do, *precisely* for the throwaway code and lower-value changes.

Finally, it's not clear to me that the distinction is between systems programming and scripting. My sense is that ChatGPT and similar are (a) heavily influenced by the large corpus of Python, so they're better at it than C, and (b) the examples here involved more clever bit manipulation than *most* software engineers ever interact with.

pknerd, over 1 year ago

Besides using ChatGPT for certain pieces of code that use a third-party library, I have successfully used it as a "code reviewer". I recently copied the functions of a Symfony PHP controller and asked for a code review and suggestions for refactoring, with code and reasons. Surprisingly, it worked very well and I was able to refactor a good amount of code.

CaptainFever, over 1 year ago

I feel that I'm being too conservative with how I use AI. Currently I use Copilot autocomplete with a bit of Copilot Chat, which is great and almost always gets small snippets correct, but I sometimes worry that I'm not using it to its full potential -- so I could be faster with my side projects -- for example, by generating entire classes.

legendofbrando, over 1 year ago

This is one of the best pieces I've read that articulates what it's like to work closely with LLMs as creative partners.

Havoc, over 1 year ago

I definitely mostly use it in the same way - generating discrete snippets.

I haven't had much luck with code completion thus far.

kaycey2022, over 1 year ago

The article says that the earlier assumption of LLMs being parrots has been universally retracted in the face of the evidence. But if you look at the NYT case against OpenAI, being uncontrolled parrots is exactly what ChatGPT is being accused of. Which is the truth?

bbor, over 1 year ago

> LLMs are like stupid savants who know a lot of things.

Leaving the requisite "no, that's not what language models are, you're misunderstanding what's important here; the best knowledge model already exists and it's called Wikipedia".

dmezzetti, over 1 year ago

While there clearly was a lot of hype, retrieval-augmented generation (RAG) proved to be an effective technique with LLMs. Using RAG with project documentation and/or code can be useful.

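A minimal sketch of the retrieval half of that idea over documentation snippets, using TF-IDF from scikit-learn purely as an illustration (the doc strings are made up); the assembled prompt is what would be sent to the LLM:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for chunks of project documentation or source files.
docs = [
    "redis_connect(host, port) opens a connection and returns a client handle.",
    "The cache layer evicts entries with an LRU policy after max_memory is hit.",
    "Deployment is done via `make deploy`, which builds and pushes the image.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k doc chunks most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]

question = "How do I open a connection to the cache?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this string is what you would send to the LLM
```
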
mercurialsolo, over 1 year ago

Honestly speaking, code generation is a form of augmented retrieval. And going further back, I would say human memory is generated from context rather than retrieved (which is why it's often fallible - we hallucinate details).

LLMs today are, for me, the equivalent of a large-scale human memory for code, or faster augmented retrieval. Do they hallucinate details? Quite often. But do I find them more useful than dragging myself through documentation details? More often than not.

tmaly, over 1 year ago

I think LLMs are good for quickly prototyping first drafts of small functions or simple systems.

For me they help when time is short and when I want to maximize creative exploration.

antirez, over 1 year ago

@dang something is wrong with the ranking of this post.

voidhorse, over 1 year ago

I enjoy antirez's work, and I enjoyed this essay, but I disagree with many of its conclusions. In particular:

> this goal forces the model to create some form of abstract model. This model is weak, patchy, and imperfect, but it must exist if we observe what we observe.

is a completely fallacious line of reasoning, and I'm surprised that he draws this conclusion. The whole reason the "problem of other minds" is still a problem in philosophy is precisely because we cannot be certain that some "abstract model" exists in someone's head (man or machine - do you argue it does? show it to me) simply because an output meeting certain constraints exists. This is exactly the problem of education. A student who studies to answer questions correctly on a test may not have an abstract model of the subject area at all. They may not even be conducting what we call reasoning. If a student aces a test, can you confidently say they *actually* understand the domain? Or did they simply ace a test?

Furthermore, LLMs' lack of consistency, their inability to answer basic mathematical questions, and their limitation to purely text-based areas of concern and representation are all much stronger arguments for the notion that they really are just sophisticated, stochastic machines, incapable of what we'd normally call reason in a human context. If LLMs "reason", it is a much different form of reasoning than the one human beings are capable of, and I'm highly skeptical that any such network will achieve parity with human reason until it can "grow up" and learn embodied in a rich, multisensory environment, just like human beings. For machines to achieve reason, they will need to break out of the text-only/digital-only box first.

slowmovintarget, over 1 year ago

The TLS cert has expired on the antirez site.

renonce, over 1 year ago

> Homo sapiens invented neural networks

Is it just me, or did anyone else smile at this sentence? The first paragraph sounds like the academic way of saying "we invented huge neural networks but we couldn't understand them".

4ad, over 1 year ago

This might be a good article; I wouldn't know, because I can't read this monospace atrocity.

Reader view in Safari preserves the monospace font... /facepalm

pluc, over 1 year ago

bro it's 2024, get SSL