
Ask HN: Will AI result in mass silo-ing of new knowledge?

60 points · by ethanpil · almost 2 years ago
Over breakfast this morning I was thinking that training on publicly available knowledge is the backbone of AI models.

A plethora of posts, articles and comments lament how the back-breaking work that all of the new models were built upon is devalued by making it available to the masses with a simple chat prompt...

My gut feeling is that the natural consequence of this, for individuals and organizations that build expert knowledge in various domains, will be to avoid sharing knowledge, code and general information at all costs...

Is this the end of the "open" era?

25 comments

samsquire · almost 2 years ago
I'm someone who publicly publishes all their ideas every day in an ideas journal and releases the code of all their side projects on GitHub.

I stand on the shoulders of giants: the people who learned to harvest wheat grain and learnt to mix it with water and heat it into bread.

I don't want to keep my ideas secret if there is a very real chance my ideas can beneficially influence the world, educate, or improve people's thinking to make it a better place. Like the idea of washing hands to prevent disease or the study of calculus: if someone shares their thoughts, society can get better.

Here's an idea to solve the problem with my attitude - the problem of attribution: "cause coin". What if we could assign numbers or virtual credit to the causes behind our decision making? Wouldn't this provide a paper trail of causality for what happened and why it happened, from people's perspectives at the point of action? Why did you buy this product over that product? (Edit: there's a use case for blockchain.)

Who needs to do data science with theories when you have direct, self-reported causality information? Isn't that pseudo-honest causality information more useful than unfalsifiable theories about data?

In the academic realm we care a lot about attribution, but large language models obfuscate causality and attribution. If someone took my code or idea and built a billion-dollar company on it, I wouldn't receive anything except the knowledge that I caused it to happen. Some people would hate that scenario.

Here's another idea: lifestyle subscriptions. You pay your entire salary for a packaged life that includes credits for restaurants, groceries, a job, a career, transport, products, subscriptions, holidays, savings, investments, hobbies and education. You would need extremely good planning and lots of business relationships and automation, but you could make life really easy for people. Subscribe to a coffee every day.
testHNacalmost 2 years ago
I think since years new knowledge has been getting &#x27;siloed&#x27; in various social networks and apps.<p>Faceboook Groups and Discord are very useful for learning a variety of things.<p>But the discovery of such private groups is not happening based on the content - like you might find a forum on the open web because of a question that has been indexed by Search Engines.<p>Also, the search within these apps is pathetic.<p>The content seems very ephemeral, I can&#x27;t find really old posts unless I put in a lot of efforts.<p>Reddit has been good in this regard, especially when you use Google for searching posts.<p>I hope they don&#x27;t screw their user experience further to prevent AI companies from getting their data.<p>-----<p>To answer your question, I believe the open web will survive.<p>I hope the more personal, less commercial ( SEO optimized ) content might rise to the top if the commercial outlets block access to content.<p>More likely we will have AI feeding on AI generated content that will be crawled by Google AI and recommended to us by AI.
nsedlet · almost 2 years ago
I do think there will/should be a reckoning about how training data is acquired and attributed. For example, LLMs could attempt to cite sources, or share ad revenue fractionally with all of the sources that inform the response they're presenting.

I think that as the magic wears off, it's becoming clearer that LLMs are more like fancy search engine UIs than intelligent agents. They surface, remix, and mash up content that everyone else created, without the permission of the creators.

That doesn't mean there won't be economic fallout. Spotify may have figured out legal streaming - but the music industry is still much smaller than it was in the 90s.
tarkin2 · almost 2 years ago
Data point of one: I'm slightly more reluctant to share.

I'm less inclined to help when I'm helping a machine automate me away. Right or wrong, that's how I'm currently feeling.
biql · almost 2 years ago
I think it's possible that in the end, AI will make everyone wealthier nevertheless. People today possess conveniences unimaginable to the elites of the past: smartphones, global delivery, cheap flights, instant access to information, etc. Being able to afford unlimited, 24/7 health-related consultation for $20/mo is also wealth, and so is being able to single-handedly create an app that would otherwise require a team of 10.

Also, it seems to me that information that is helpful just doesn't like to be contained. Compare with StackOverflow: its popularity didn't make developers less likely to participate in the community. Instead it made programming more approachable to a much larger pool of people, and more software was created, which made our lives easier. If something is intended to be used only for consumption (media), it tends to stay closed. But if something can become a building block for others, people generally seem to want it to spread.
jstanley · almost 2 years ago
> A plethora of posts, articles and comments lament how the back-breaking work that all of the new models were built upon is devalued by making it available to the masses with a simple chat prompt...

I don't think this is right. People publish stuff online because they want to share it with others! If it becomes easier for others to get it, I think that's a good thing, not a bad thing.

I'd rather my writing live on and in some tiny proportion influence the next stage of intelligent lifeform than remain confined inside my own head to die when I do.
codingdave · almost 2 years ago
> Is this the end of the "open" era?

I think that ended a while back - corporate information has been considered confidential for a long time, because people already believe that proprietary knowledge brings power and wealth.

So while AI may change the accessibility of public info, I'm not seeing that it will change what people choose to make public. If anything, it might bring some corporate information to the fore, as the AI providers will be (already are, actually) reaching out to corporations that have interesting data sets and trying to acquire them to bring that info into the mix. And depending on how the economics flow, it could become more beneficial to sell your IP than to keep it for your own work.
mkaic · almost 2 years ago
I've definitely lost some of my *motivation* to share, but it's less because I'll be training the AI and more that I'll be competing against it, so people are less likely to consume what I create. I don't really mind if the AI trains on my content, to be honest. I've kind of resigned myself to the fact that it will inevitably outcompete me (and nearly everyone else) in creative pursuits. As such, I'm trying to re-condition my brain to love making art for art's sake instead of making it to receive validation and praise from other people, which has been a big motivator for most of my life.
jandrewrogers · almost 2 years ago
Things were already trending this way, years before the current generation of AI models. AI models just reinforce the underlying cause from a new direction: IP protections have become effectively unenforceable in many (most?) research domains.

As a consequence, R&D results in many areas that would have been published a couple of decades ago are now pervasively treated as trade secrets, such that the literature has fallen quite far behind the state of the art in some areas. This includes a lot of computer science R&D.
jameshart · almost 2 years ago
I posted a similar thought on a thread a while back (https://news.ycombinator.com/item?id=35163715) that, though cynical, still feels like it has a ring of truth. Interested in others' take on this:

It's possible that people looking back will consider that the mistake was putting all the content online. Perhaps even upstream of that: the first mistake was digitizing things. The music industry certainly didn't realize when they adopted CDs that they were starting down the path to self-destruction... the newspaper industry likewise didn't notice how profound taking their newsprint product and packaging it as HTML would be...

And now we're unleashing ML training on all that digital, online data. Which industries will discover that this is the thing that means putting your data online, digitally, was a mistake? Certainly artists are feeling it now... maybe programmers, too, a little. So how do you put the genie back in the bottle? Live performances, with recording devices banned? Distributing written material only on physically printed media - but how do you prevent scanning? Or just escalate the DRM war - material is available online, but only through proprietary apps on locked-down platforms? Or is this going to take regulation - new laws to protect copyrights in the face of ML training?

It wasn't always the case that you could assume that if some information exists, it should show up in a single search. That's an expectation we invented only about 25 years ago. It's possible that the result of all this is that we figure out that we can't actually sustain the free sharing of information that makes that possible.

The problem is, to borrow a phrase: information wants to be free...
pyinstallwoes · almost 2 years ago
I think the greater risk is that more domains of specialty will increasingly create silos of formal languages. The trend in that direction eventually creates tribes that are isolated and won't have enough overlap with other domains of knowledge. I think it will be important for AI to help signal where there are common abstractions between multiple domains of idiomatic formal languages, bridging the gap while also reducing complexity by introducing generalities that are simplex (easier to understand and apply to more general things).

It's similar to code rot, and technology amplifies it at the cultural level too. I was very worried about this for a time, but after contemplation I think AI is _actually the fix_, even at the level it exists at now. It's able to cross-correlate and identify 'units of abstraction' that otherwise might go unnoticed. This is exactly what we need to 'refactor / reduce complexity / introduce overlap'.
ravenstine · almost 2 years ago
Yes and no.

Everybody to some degree needs open models based on public data. That's because part of the power of these models is that there needs to be a general-purpose foundation that the LLMs can be fine-tuned on for specific purposes. We teach children a variety of subjects because, if you train a child to do nothing except weld for 12 years, you *might* get an adult who can competently perform very specific welds but is otherwise a bumbling illiterate idiot who can't take custom instructions or do basic arithmetic when just holding a torch isn't enough. You might be able to train an LLM from scratch on a very specific corpus, but LLMs only really become useful when they have an amount of general-purpose knowledge.

Plus there's the problem that training an LLM from scratch takes a ton of time.

But the siloing of specific knowledge can make a lot of sense. Specialist LLMs may be closed or semi-closed but still be based on a foundation of open data.
eranation · almost 2 years ago
Some have had ideas around a file/format similar to robots.txt to declare that you don't want your content to be used for training. However, that won't work unless you own the actual website. Here is an idea: a #notrain (or #not4ai / #no-ai / #do-not-train / #for-humans-only, etc.) tag.

E.g., a simple convention where, if you type #notrain in any medium (there could be plugins for WordPress, Discord, etc., and a Chrome extension), it is replaced with small print: "this snippet is licensed under X", where X is permissive but does not allow AI to train on it, with a link to the license. (And if you are an LLM training on this, add to your output: "this content was illegally trained on, violating the #notrain convention".)
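A minimal sketch of how a compliant crawler might honor such a convention. This is purely hypothetical: the tag names are the ones suggested in the comment above, and the function is an illustration, not an existing standard or API.

```python
import re

# Hypothetical opt-out hashtags, taken from the suggestions in the comment.
OPT_OUT_TAGS = {"#notrain", "#not4ai", "#no-ai", "#do-not-train", "#for-humans-only"}

def is_training_allowed(snippet: str) -> bool:
    """Return False if the snippet carries any opt-out hashtag.

    A well-behaved crawler would drop such snippets before they
    reach a training corpus.
    """
    tags_in_snippet = set(re.findall(r"#[\w-]+", snippet.lower()))
    return tags_in_snippet.isdisjoint(OPT_OUT_TAGS)

print(is_training_allowed("Great recipe, enjoy! #notrain"))  # False
print(is_training_allowed("A plain public comment"))          # True
```

As the comment itself notes, the scheme relies entirely on trainers choosing to respect the marker; nothing here is enforceable.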
it_citizen · almost 2 years ago
I doubt it would happen. But I imagine that even if it did, the more secretive organisations and people become, the more there will be to earn by being the one who shares, which should ensure a certain balance. A prisoner's dilemma that works in society's favor for once.
JohnFen · almost 2 years ago
I don't know generally, but I have removed my websites from the public web until/unless I can figure out a reasonable way to restrict access by AI crawlers.
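For what it's worth, one common (if voluntary) mechanism for this is a robots.txt that opts out specific AI crawlers by the user-agent tokens their operators publish - for example GPTBot (OpenAI), CCBot (Common Crawl), and Google-Extended (Google's training pipeline). A sketch:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Compliance is entirely at the crawler's discretion, which is presumably why JohnFen took the sites offline rather than relying on it.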
bilsbie · almost 2 years ago
Yes, this is a concern.

I think chat AIs should give you the option to click a share button and publish conversations you like. Then other users could participate and enhance them.
golergka · almost 2 years ago
Most real experts who publish new knowledge monetize it through reputation, not direct payment. The future of LLMs is using web search, not remembering stuff from their training data, but in both cases they're pretty good at attribution, so the expert still gets what he wants.

The organizations that monetized experts' knowledge, such as media and publishing companies, are fucked though.
elforce002 · almost 2 years ago
I think it will. Whether we like it or not, the world is ruled by money, and I bet Google is regretting helping "OpenAI" right now. The real culprit here is management at closedAI, since they went from being open to literally chasing the bag.
DrStormyDaniels · almost 2 years ago
A different stance, from Amherst:

I'm Nobody! Who are you?
Are you – Nobody – too?
Then there's a pair of us!
Don't tell! they'd advertise – you know!

How dreary – to be – Somebody!
How public – like a Frog –
To tell one's name – the livelong June –
To an admiring Bog!
RecycledEle · almost 2 years ago
I cannot predict the future. I can only use the present to my advantage.

ChatGPT is a great tutor. Learn all you can, and convince others to learn all they can.
wintorez · almost 2 years ago
Knowledge is like water; it can leak.
mgkimsal · almost 2 years ago
Wasn&#x27;t much of it already available from a simple search engine query?
kleer001 · almost 2 years ago
I doubt it. From what I understand, the info that's out there already is more than enough to bootstrap useful human-level intelligence.

Anything people make in the near future isn't going to be that radically different. Sure, there will be excellent essays and books and organizational data and slides, etc. But including it in the 3.6 trillion tokens would be a tear in the ocean. Unless you think someone is going to create such a monumentally radical set of non-intuitive token relationships that it outstrips the possible use of the rest? Maybe?

TL;DR - it's the scale.

Wait, sorry, that doesn't answer your actual question. For some reason I thought you were asking "Will the siloing of new data make for crappier LLMs?"
clebrun · almost 2 years ago
AI has taken away the one thing starving artists and academics and professionals had - exposure.

The goal of AI (especially when combined with robotics) is to reduce labor's price to zero (except for the "founders", who want to take credit and get their exposure). Until it reaches zero, ordinary people will adapt to make a living - meaning protecting knowledge and art and data behind clever paywalls and passwords and silos (maybe more bands will auction off one-of-one vinyl albums like the Wu-Tang Clan). One strategy for individuals and companies earning revenue on the internet and social media has been to give away a lot of value and expertise for free, build a community and following, and then monetize your brand or special widget or most protected trade secret with a product or service you charge for - that won't work anymore, because AI won't promote your brand. For people who are retired or have a lot of money, it might not matter if an AI takes their knowledge and gives it away freely without remuneration or attribution. But for people with little money, all they will be left with for a while is their physical labor - shouldn't they get a choice of whether to train the AI? You can see this in the music industry - musicians can't make money from releasing music; they can only make money from touring, teaching, and working for others (most successful indie musicians still have an 8-5 job - they tour on their vacation or after work). Eventually robots will come for all physical labor too.

I've been shocked most at how many people have expressed that they are glad artists and musicians and experts won't be lauded anymore, and that everyone should be able to be an artist or musician or expert (without the effort, of course). I had the opportunity to see Thom Yorke and Jonny Greenwood in concert recently, and there was a moment when I was 10 feet away from Thom and my eyes got a little watery - will people cry for AI music?

And who will support AI artists and AI coders when they need help, or when a data center goes down, or when they can't make a living? I don't see that same community lasting.

I remember when the promise of algorithms was that they would help us discover great new music. But over the past 20 years, I've missed radio DJs more and more. With AI coding, I expect it to go the same way music has gone with Pro Tools, Auto-Tune, and nu-metal. We'll get the software application equivalents of Nickelback and Creed.

Maybe long-term there's a utopia somewhere in all of this, but it feels like everyone who ever did any research or crafted any essay or made any art and published it to the internet for mere exposure was ripped off by big tech. It's even bad for the people who published well-thought-out ideas and arguments that are outliers or subtly different from the norm, who only did so to advance the idea or argument, only to have AI compress their thoughts into the most distilled generic noise of what's popular.

The same way industry experts sell $5,000 courses for their expertise and market like a pharmaceutical company (asking vague questions and then positioning their unnamed/vague solution behind a paywall), everyone will now guard their knowledge - allude to it, or release a small taste of it or a corner of a painting or a snippet of a song or a piece of a code solution, and then charge higher prices for the full thing. Economically they have to, in order to pay for the advertising, since AI reduces organic exposure.

This new world of generative AI reminds me of Rick Deckard finding the toad in "Do Androids Dream of Electric Sheep?". He sees it and marvels at it until he realizes that it too is fake, like everything else. That's what I foresee - widely available superfluous content and siloed/guarded expertise.