Ask HN: Can we create a new internet where search engines are irrelevant?

374 points by subhrm almost 6 years ago
If we were to design a brand new internet for today's world, can we develop it in such a way that:

1. Finding information is trivial

2. You don't need services indexing billions of pages to find any relevant document

In our current internet, we need a big brother like Google or Bing to effectively find any relevant information, in exchange for sharing with them our search history, browsing habits, etc. Can we design a hypothetical alternate internet where search engines are not required?

106 comments

adrianmonk almost 6 years ago
I think it would be helpful to distinguish two separate search engine concepts here: indexing and ranking.

Indexing isn't the source of problems. You can index in an objective manner. A new architecture for the web doesn't need to eliminate indexing.

Ranking is where it gets controversial. When you rank, you pick winners and losers. Hopefully based on some useful metric, but the devil is in the details on that.

*The thing is, I don't think you can eliminate ranking.* Whatever kind of site(s) you're seeking, you are starting with some information that identifies the set of sites that might be what you're looking for. That set might contain 10,000 sites, so you need a way to push the "best" ones to the top of the list.

Even if you go with a different model than keywords, you still need ranking. Suppose you create a browsable hierarchy of categories instead. Within each category, there are still going to be multiple sites.

So it seems to me the key issue isn't ranking and indexing, it's who controls the ranking and how it's defined. Any improved system is going to need an answer for how to do it.
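A minimal sketch of that split, with the index built objectively and the ranking policy passed in as a swappable function (all documents and policy names below are illustrative):

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of doc ids containing it (the objective part)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, docs, query, rank):
    """Intersect candidate sets, then delegate ordering to the rank policy."""
    candidates = None
    for term in query.lower().split():
        hits = index.get(term, set())
        candidates = hits if candidates is None else candidates & hits
    return sorted(candidates or (), key=lambda d: rank(d, docs[d], query), reverse=True)

docs = {
    "a": "open distributed search index",
    "b": "search engines rank search results",
    "c": "gardening tips for spring",
}
index = build_index(docs)

# Two ranking policies over the same index: term frequency vs. shortest page.
by_tf = lambda d, text, q: sum(text.split().count(t) for t in q.split())
by_brevity = lambda d, text, q: -len(text.split())

print(search(index, docs, "search", by_tf))       # ['b', 'a']
print(search(index, docs, "search", by_brevity))  # ['a', 'b']
```

The same candidate set comes back in a different order under each policy, which is the whole controversy in miniature.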
iblaine almost 6 years ago
Yes, it was called Yahoo, and it did a good job of cataloging the internet when hundreds of sites were added per week: https://web.archive.org/web/19961227005023/http://www2.yahoo.com/

I'm old enough to remember sorting sites by new to see what new URLs were being created, and getting to the bottom of that list within a few minutes. Google and search were a natural response to that problem as the number of sites added to the internet grew exponentially... meaning we need search.
ovi256 almost 6 years ago
Everyone has missed the most important aspect of search engines, from the point of view of their core function of information retrieval: they're the internet equivalent of a library index.

Either you find a way to make information findable in a library without an index (how?!?) or you find a novel way to make a neutral search engine - one that provides as much value as Google but whose costs are paid in a different way, so that it does not have Google's incentives.
neoteo almost 6 years ago
I think Apple's current approach, where all the smarts (machine learning, differential privacy, Secure Enclave, etc.) reside on your device, not in the cloud, is the most promising. As imagined in so much sci-fi (e.g. the Hosaka in Neuromancer), you build a relationship with your device, which gets to know you, your habits and, most importantly in regard to search, what you mean when you search for something and what results are most likely to be relevant to you. An on-device search agent could potentially be the best solution, because this very personal and, crucially, private device will know much more about you than you are (or should be) willing to forfeit to the cloud providers whose business is, ultimately, to make money off your data.
alfanick almost 6 years ago
I see a lot of good comments here; I got inspired to write this:

What if this new internet, instead of using URIs based on ownership (domains that belong to someone), relied on topic?

For example:

netv2://speakers/reviews/BW
netv2://news/anti-trump
netv2://news/pro-trump
netv2://computer/engineering/react/i-like-it
netv2://computer/engineering/electron/i-dont-like-it

A publisher of a webpage (same HTML/HTTP) would push their content to these new domains(?) and people could easily access a list of resources (pub/sub like). Advertisements drive the internet nowadays, so to keep everyone happy, what if netv2 is neutral but web browsers are not (which is the case now anyway)? You can imagine that some browsers would prioritise some entries in a given topic, while others would be neutral but make it harder to retrieve the data you want.

Second thought: guess what, I'm reinventing NNTP :)
codeulike almost 6 years ago
That was what the early internet was like (I was there). People built indexes by hand, lists of pages on certain topics. There was the Gopher protocol that was supposed to help with finding things. But this was all top-down stuff; the first indexing/crawling search engines were bottom-up, and they worked so much better. And for a while we had an ecosystem of different search engines, until Google came along, was genuinely miles better than everything else, and wiped everything else out. Really, search isn't the problem; it's the way that search has become tied to advertising and tracking that's the problem. But then DuckDuckGo is there if you want to avoid all that.
davidy123 almost 6 years ago
I think in one sense the answer is that it always depends who or what you are asking for your answers.

The early Web wrestled with this; early on it was going to be directories and meta keywords. But that quickly broke down (information isn't hierarchical, meta keywords can be gamed). Google rose up because they use a sort of reputation-based index. In between, there was a company called RealNames that tried to replace domains and search with their authoritative naming of things, but that is obviously too centralized.

But back to Google: they now promote using schema.org descriptions of pages, over page text, as do other major search engines. This has tremendous implications for precise content definition (a page that is "not about fish" won't show up in a search result for fish). Google layers it with their reputation system, but these schemas are an important, open feature available to anyone to more accurately map the web. Schema.org is based on Linked Data, its principle being that each piece of data can be precisely "followed." Each schema definition is crafted with participation from industry and interest groups to generally reflect its domain. This open-world model is much more suitable to the Web than the closed world of a particular database (but some companies, like Amazon and Facebook, don't adhere to it, since apparently they would rather their worlds have control; witness Facebook's Open Graph degeneration into something that is purely self-serving).
_nalply almost 6 years ago
The deeper problem is advertising. It is sort of a prisoner's dilemma: all commercial entities are in a shouting contest to attract customer attention. It's expensive for everybody.

If we could kill advertising permanently, we could have an internet as described in the question. It would almost be an emergent feature of the internet.
quelsolaar almost 6 years ago
Yes, we need search engines, but they don't need to be monolithic. Imagine that indexing the text of your average web page takes up 10k. Then you get 100,000 pages per gig. It means that if you spend ~270 USD on a consumer 10-terabyte drive, you can index a billion webpages. Google no longer says how many pages it indexes, but it's estimated to be within one order of magnitude of that.

This means that in terms of hardware, you can build your own Google. Then you get to decide how it rates things, you don't have to worry about ads, and SEO becomes much harder because there is no longer one target to SEO. Google obviously doesn't want you to do this (and in fairness, Google indexes a lot of stuff beyond keywords from web pages), but it would be very possible to build an open-source, configurable search engine that anyone could install, run, and get good results out of.

(Example: the Pirate Bay database, which arguably indexes the vast majority of available music / TV / film / software, was / is small enough to be downloaded and cloned by users.)
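The arithmetic in this comment is easy to sanity-check; a quick back-of-envelope using the commenter's assumed figures:

```python
# All figures below are the commenter's assumptions, not measurements.
bytes_per_page = 10 * 1024            # ~10 KB of index data per page
drive_bytes = 10 * 10**12             # a 10 TB consumer drive

pages_per_gib = (1024**3) // bytes_per_page
print(pages_per_gib)                   # ~104,857 pages per GiB
print(drive_bytes // bytes_per_page)   # ~976 million pages on one drive
```

So one consumer drive does land within shouting distance of a billion pages, as claimed.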
theon144 almost 6 years ago
Almost definitely not.

Search engines are there to find and extract information in an unstructured trove of webpages; there is no other way to process this than with something akin to a search engine.

So either you've got an unstructured web (the hint is in the name) and Google/Bing/Yandex, or a somehow structured web.

The latter has been found to be not scalable or flexible enough to accommodate unanticipated needs, and not for a lack of trying! This was the default mode of the web until Google came about. Turns out it's damn near impossible to construct a structure for information that won't become instantly obsolete.
swalsh almost 6 years ago
I've had this idea floating in my head for a while: one thing that might make the world better is some kind of distributed database, and a gravitation back to open protocols (though instead of RFCs... maybe we could maintain an open source library for the important bits). I was thinking the architecture of DNS is a good starting point. From there we can create public indexes of data. This includes searchable data, but also private data you want to share (which could be encrypted and controlled by you, think PGP). I'd modify browsers so that I don't have to trust a 3rd-party service.

Centralization happens because the company owns the data, which becomes aggregated under one roof. If you distribute the data, it will remove the walled gardens, and multiple competitors should be able to pop up. Whole ecosystems could be built to give us 100 Googles... or 100 Facebooks, where YOU control your data, and they may never even see your data. And because we're moving back to a world of open protocols, they all work with each other.

These companies aren't going to be worth billions of dollars any more... but the world would be better.
alangibson almost 6 years ago
The two core flaws of the Internet (more precisely, the World Wide Web) are the lack of native search and native payments. Cryptocurrencies have started to address the second issue, but no one that I know of is seriously working on the first.

Fast information retrieval requires an index. A better formulation of the question might be: how do we maintain a shared, distributed index that won't be destroyed by bad actors?

I wonder if the two might have parts of the solution in common. Maybe using proof of work to impose a cost on adding something to the index. Or maybe a proof-of-work problem that is actually maintaining the index or executing searches on it.
lefstathiou almost 6 years ago
My approach to answering this would entail determining what percentage of search engine use is driven by:

1) The need for a shortcut to information you know exists but don't feel like accessing the hard way

2) Information you are actually seeking

My initial reaction is that making search engines irrelevant is a stretch. Here is why:

Regarding #1, the vast majority of my search activity involves information I know how and where to find but seek the path of least resistance to access. I can type in "the smith, flat iron nyc" and know I will get the hours, cross street and phone number for the Smith restaurant. Why would I not do this instead of visiting the Yelp website, searching for the Smith, setting my location in NYC, filtering results, etc.? Maybe I am not being open-minded enough, but I don't see how this can be replaced short of reading my mind and injecting that information into it. There needs to be a system to type a request and retrieve the result you're looking for. Another example: when I am looking for someone on LinkedIn, I always google the person instead of utilizing LinkedIn's god-awful search. Never fails me.

Regarding #2, in the minority of cases where I am actually seeking something, I have found that Google's results have gotten worse and worse over the years. It will still be my primary port of call, and I think this is the workflow with potential for disruption. Other than an index, I don't know what better alternatives you could offer.
peteyPete almost 6 years ago
You'd still want to be able to retrieve "useful" information which can't be tampered with easily, which I think is the biggest issue.

You can't curate manually... that just doesn't scale. You also can't let just anyone add to the index as they wish, or any/every business will just flood the index with their products... there wouldn't be any difference between whitehat/blackhat marketing.

You also need to be able to discover new content when you seek it, based on relevancy and quality of content.

At the end of the day, people won't be storing the index of the net locally, and you also can't realistically query the entire net on demand. That would be an absolutely insane amount of wasted resources.

It all comes back to some middleman taking on the responsibility (Google, DuckDuckGo, etc.).

Maybe the solution is an organization funded by all governments, completely transparent, where people who wish to can vote on decisions/direction. So non-profit? Not driven by marketing?

But since when has government led with innovation, and done so at a good pace? Money drives everything... and without a "useful" amount of marketing/ads etc., the whole web wouldn't be as it is.

So yes, you can... but you won't have access to the same amount of data as easily, and will likely have a harder time finding relevant information (especially if it's quite new) without having to parse through a lot of crap.
kyberias almost 6 years ago
If we were to design a brand new DATABASE ENGINE for today's world, can we develop it in such a way that:

1. Finding information is trivial

2. You don't need services indexing billions of rows to find any relevant document
fghtr almost 6 years ago
> In our current internet, we need a big brother like Google or Bing to effectively find any relevant information in exchange for sharing with them our search history, browsing habits etc.

The evil big brothers may not be necessary. We just need to expand alternative search engines like YaCy.
azangru almost 6 years ago
I can't imagine how this is possible. Imagine I have a string of words (a quote from a book or an article, a fragment of an error message, etc.), and I want to find the full text where it appears (or pages discussing it). How would you do that without a search engine?
lxn almost 6 years ago
Most search engines nowadays have the advantage of being closed source (you don't know how their algorithms actually work). This makes the fight against unethical SEO practices easier.

With a distributed open search alternative, the algorithm is more susceptible to exploits by malicious actors.

Having it manually curated is too much of a task for any organization. If you let users vote on the results... well, that can be exploited as well.

The information available on the internet is too big to make directories effective (like they were 20 years ago).

I still have hope this will get solved one day, but directories and open-source distributed search engines are not the solution, in my opinion, unless there is a way to make them resistant to exploitation.
VvR-Ox almost 6 years ago
This would be the internet used in Star Trek, I think. The computers they use can just be asked about something and the whole system is searched for it, so the service to find things is inherent to the system itself. In our world, things like that are done by entities who try to maximize their profits (like the Ferengi) without thinking too much about effectiveness or ethics.

This phenomenon can be seen throughout many systems we have built, e.g. use of the internet, communication, access to electricity or water. We have to pay the profit-maximizing entities for all of this, though it could be covered by global cooperatives who manage this stuff in a good way.
blue_devil almost 6 years ago
I think "search engines" is misleading. These are relevance engines. And relevance sells - the higher the relevance, the better.

https://www.nytimes.com/2019/06/19/opinion/facebook-google-privacy.html
Ultramanoid almost 6 years ago
This is what we had in the early internet days: directories of links. Early Yahoo was the perfect example of this. You jumped from one site to another, you asked other people, you discovered things by chance. You went straight to a source, instead of reading a summary post about a site that, after 20 redirections loaded with advertising and tracking, gets you to the intended and actually useful destination.

Most web sites then also had a healthy, sometimes surprising links section, which has all but disappeared these days.
d-sc almost 6 years ago
Indexing information is a political problem as much as a technical one. Ultimately there will always be people who put more effort into getting their information known than others. These people would game whatever technical solution exists.
vbsteven almost 6 years ago
I was recently thinking about an open search protocol with some federation elements, in two parts: a frontend and an indexer. The idea is that anyone can run their own search frontend or use a community-hosted one (like Matrix). And then each frontend has X amount of indexers configured.

Each indexer is responsible for a small part of the web, and by adding indexers you can increase your personal search area. And there is some web of trust going on.

Entities like Stack Overflow and Wikipedia and Reddit could host their own domain-specific indexers. Others could be crowdsourced with browser extensions or custom crawlers, and maybe some people want to have their own indexer that they curate and want to share with the world.

It will never cover the utility and breadth of Google Search, but with enough adoption this could be a nice first search engine. With DDG-inspired bang commands in the frontend you could easily retry a search on Google.

With another set of colon commands you could limit a search to one specific indexer.

The big part I am unsure about in this setup is how a frontend would choose which indexers to use for a specific query. Obviously sending each query to each indexer will not scale very well.
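One way to picture the proposed frontend/indexer split; the Indexer class, its query() method, and the merge step below are hypothetical stand-ins for what a real protocol would define over HTTP:

```python
class Indexer:
    """One indexer covers a small slice of the web (hypothetical shape)."""
    def __init__(self, name, scope, postings):
        self.name = name          # short handle, usable in colon commands
        self.scope = scope        # the slice of the web it covers
        self.postings = postings  # term -> [(url, score), ...]

    def query(self, term):
        return self.postings.get(term, [])

class Frontend:
    """Fans a query out to every configured indexer and merges by score."""
    def __init__(self, indexers):
        self.indexers = indexers

    def search(self, term, only=None):
        results = []
        for ix in self.indexers:
            if only and ix.name != only:   # colon-command style restriction
                continue
            results.extend(ix.query(term))
        return sorted(results, key=lambda r: r[1], reverse=True)

so = Indexer("so", "stackoverflow.com", {"python": [("so.example/q/1", 0.9)]})
wiki = Indexer("wiki", "wikipedia.org", {"python": [("wiki.example/Python", 0.7)]})
print(Frontend([so, wiki]).search("python"))                # merged, by score
print(Frontend([so, wiki]).search("python", only="wiki"))   # one indexer only
```

The open question the comment raises, indexer selection per query, corresponds to replacing the loop over all indexers with something smarter.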
dalbasal almost 6 years ago
Just as a suggestion, this question might be rephrased as: can we have an internet that doesn't require search companies, or at least not massive search monopolies?

I'm not sure what the answer is re: search. But an easier example to chew on might be social media. It doesn't take a Facebook to make one. There are lots of different social networking sites (including this one) that are orders of magnitude smaller in terms of resources/people involved, even adjusting for the size of the userbase.

It doesn't take a Facebook (company) to make Facebook (site). Facebook just turned out to be the prize they got for it. These things are just decided as races. FB got enough users early enough. But if they went away tomorrow... users would not lack for social network experiences. Where they get those experiences is basically determined by network effects, not the product itself.

For search, it doesn't take a Google either. DDG make a search engine, and they're way smaller. With search, though, it does seem that being a Google helps. They *have* been "winning" convincingly, even without the network effects and moat that make FB win.
zzbzq almost 6 years ago
https://medium.com/@Gramshackle/the-web-of-native-apps-ii-google-and-facebook-ed2ee497302d

Cliff's notes:

- Apps should run not in a browser, but in sandboxed app containers loaded from the network, somewhere between mobile apps and Flash/Silverlight. Mobile apps that you don't 'install' from a store, but navigate to freely like the web. Apps have full access to the OS-level APIs (for which there is a new cross-platform standard), but are containerized in a chroot jail.

- App privileges ("this wants to access your files") should be a prominent feature of the system, and ad networks would be required to build on top of this system to make trade-offs clear to the consumer.

- Search should be a functionality owned and operated by the ISPs for profit, and should be a low-level internet feature seen as an extension of DNS.

- Google basically IS the web and would never allow such a system to grow. Some of their competitors have already tried to subvert the web by the way they approached mobile.
btbuildem almost 6 years ago
You don't remember how it was before search engines, do you?

It was like a dark maze, and sometimes you'd find a piece of the map.

Search coming online was a watershed moment -- like, "before search" and "after search".
chriswwweb almost 6 years ago
Sorry, but this was too tempting: https://imgur.com/a/6UcAOnF

But seriously, I'm not sure it is feasible. I wish the internet could auto-index itself and still be decentralized, where any type of content can be "discovered" as soon as it is connected to the "grid".

The advantage would be that users could search any content without filters, without AI tampering with the order based on some rules... BUT on the other hand, people use search engines because their results are relevant (whatever that means these days), so an internet that is searchable by default would probably never be a good UX and hence would not replace existing search engines. It's not just about the internet being searchable; it would have to solve all the problems search engines have solved in the last ten years too.
mhandley almost 6 years ago
We could always ask a different question: what would it take for everyone to have a copy of the index? Humans can only produce new text-based content at a linear rate. If storage continues to grow at an exponential rate, eventually it becomes relatively cheap to hold a copy of the index.

Of course those assumptions may not be valid. Content may grow faster than linearly. Content may not all be produced by humans. Storage won't grow exponentially forever. But good content probably grows linearly at most, and maybe even slower if old good content is more accessible. Already it's feasible to hold all of English Wikipedia on a phone. Doing the same for internet content is certainly going to remain non-trivial for a while yet. But sometimes you have to ask the dumb questions...
tooop almost 6 years ago
The question should be how we can create a new internet where we don't need a centralized third-party search engine, not a new internet where there is no search engine. You can't find anything if there is no search (engine), and you can't change that.
GistNoesis almost 6 years ago
Yes: download the data, create indices on your data yourself as you see fit, and execute SQL queries.

If you don't have the resources to do so yourself, then you'll have to trust something in order to share the burden.

If you trust money, then gather enough interested people to share the cost of construction of the index; at the end, everyone who trusts you can enjoy the benefits of the whole for himself, and you are now a search engine service provider :)

Alternatively, if you can't get people to part with their money, you can get by needing only their computation, by building the index in a decentralized fashion. The distributed index can then be trusted, at a small computation cost, by anyone who believes that at least x% of the actors constructing it are honest.

For example, if you trust your own computation and you trust that x% of actors are honest:

You gather 1000 actors and have each one compute the index of a 1000th of the data, and publish their results.

Then you have each actor redo the computation on the data of another actor picked at random, as many times as necessary.

An honest actor will report any disagreement between computations, and then you will be able to tell who the bad actor is (whom you won't ever trust again) by checking the computation yourself.

The probability that there is still a bad actor lying is (1-x)^(x*n), with n the number of times you have repeated the verification process. So it can be made as small as desired, even if x is small, by increasing n. (There is no need for a majority or super-majority here, as in Byzantine algorithms, because you are doing the verification yourself, which is doable because a 1000th of the data is small enough.)

Actors don't have an incentive to lie, because if they do, they will be provably exposed as liars forever.

Economically, with the decreasing cost of computation (and therefore decreasing cost of index construction), public collections of indices are inevitable. They will be quite hard to game, because as soon as enough interest is gathered, a new index can be created to fix what was gamed.
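Plugging numbers into that (1-x)^(x*n) bound shows how quickly repeated verification shrinks the chance of an undetected liar, even when few actors are honest:

```python
# The comment's bound: x is the fraction of honest actors, n the number of
# re-check rounds. Taking the formula at face value:
def p_undetected(x, n):
    return (1 - x) ** (x * n)

for n in (10, 100, 500):
    print(n, p_undetected(0.1, n))
# 10  -> 0.9      (one effective check: barely any protection)
# 100 -> ~0.349
# 500 -> ~0.005   (even with only 10% honest actors, liars get caught)
```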
cf141q5325 almost 6 years ago
There is an even deeper problem than surveillance: the results of search engines get more and more censored, with more and more governments putting pressure on them to censor results according to their individual wishes.
wlesieutre almost 6 years ago
Taking a step back to before search engines were the main driver for finding content online: who remembers webrings?

Is there a way to update that idea of websites deliberately recommending each other, but without having it be an upvote/like-based popularity contest driven by an enormous anonymous mob? It needs to avoid both easy-to-manipulate crowd voting like Reddit and the SEO spam attacks that PageRank has been targeted by.

Some way to say "I value recommendations by X person," or even give individual people weight in particular types of content and not others?
topmonk almost 6 years ago
What we should have is an open, freely accessible meta-information database of things like whether user X liked/disliked a page, what other pages/sites linked to this site, what their admins ranked this site as (if they did), etc.

Then we have individual engines that take this data and choose what to display for that user only. So if the user is unhappy with what they are seeing, they simply plug in another engine.

Probably a blockchain would be good for storing such a thing.
jonathanstrange almost 6 years ago
There is still YaCy [1]. I'm not sure whether it's this one or another distributed search engine I tried 10 years ago, but the results were not very convincing. I believe that's to some extent because of a lack of critical mass; if more people used these engines, they could improve their rankings and indexing based on usage.

[1] https://yacy.net/en/index.html
_Nat_ almost 6 years ago
> In our current internet, we need a big brother like Google or Bing to effectively find any relevant information in exchange for sharing with them our search history, browsing habits etc.

Seems like you could access Google/Bing/etc. (or DuckDuckGo, which would probably be a better start here) through an anonymizing service.

But, no, going without search engines entirely doesn't make much sense.

I suspect that what you'd really want is more control over what your computer shares about you and how you interact with services that attempt to track you. For example, you'd probably like DuckDuckGo more than Google. And you'd probably like Firefox more than Chrome.

---

With respect to the future internet...

I suspect that our connection protocols will get more dynamic and sophisticated. Then you might have an AI agent try to perform a low-profile search for you.

For example, say that you want to know something about a sensitive matter in real life. You can start asking around without telling everyone precisely what you're looking for, right?

Likewise, once we have some smarter autonomous assistants, we can ask them to perform a similar sort of search, where they might try to look around for something online on your behalf without directly telling online services precisely what you're after.
gesman almost 6 years ago
I think there is a grain of a good idea here.

As I see it, a new "free search" internet would mean specially formatted content for each published page that makes its content easily searchable. Likely some tags within existing HTML content to comply with the new "free search" standard.

Open-source, distributed agents would receive notifications about new, properly formatted "free search" pages and then index each such page into a public indexed DB.

Any publisher could release content and notify the closest "free search" agent.

Then, just like a blockchain, anyone could download such an indexed DB to do instant local searches.

There would be multiple variations of such DBs, from small ones (<1 TB) that satisfy small users with just "titles" and "extracts", to large ones for those who need detailed search abilities (multi-TB capacity).

"Free search" distributed agents would provide a clutter-free interface for detailed search for anyone.

I think this idea could easily be picked up by pretty much everyone; everyone would be interested in submitting their content to be easily searchable and escaping any middleman monopoly that tries to control aspects of searching and indexing.
hokus almost 6 years ago
https://tools.ietf.org/html/rfc1436
salawat almost 6 years ago
The problem isn't search engines per se.

The problem is closed algorithms, SEO, and advertising/marketing.

Think about it for a minute. Imagine a search engine that generates the same results for everyone. Since it gives the same results for everyone, the burden of looking for exactly what you're looking for is put back exactly where it needs to be: on the user.

The problem, though, is that you'll still get networks of "sink pages" that are optimized to show up in every conceivable search, that don't have anything to do with what you're searching for, but are just landing pages for links/ads.

Personally, I liked a more Yellow-Pages-ish net. After you got a knack for picking out the SEO link sinks and filtering them out yourself, you were fine. I prefer this to a search provider doing it for you, because it teaches you, the user, how to retrieve information better. It meant you were no longer dependent on someone else slurping up info on your browsing habits to try to make a guess at what you were looking for.
tablethnuser almost 6 years ago
One way to replace search is to return to curation by trusted parties. Rather than anyone putting a web page up and then a passive crawler finding it and telling everyone about it (why should I trust any search engine crawler?), we could "load" our search engine with lists of websites. These lists are published and maintained by curators that we have explicitly chosen to trust. When we type into the search box, it can only return results from sites present on our personal lists.

For example, someone's list of installed lists might look like:

- New York Public Library reference list

- Good Housekeeping list of consumer goods

- YCombinator list of tech news

- California education system approved sources

- Joe Internet's surprisingly popular list of JavaScript news and resources

How do you find out about these lists and add them? Word of mouth and advertising, the old-fashioned way. Marketplaces created specifically to be "curators of curators". Premium payments for things like Amazing Black Friday Deals 2019 which, if you liked it, you'll buy again in 2020 and tell your friends.

There are two points to this. First, new websites only enter your search graph when you make a trust decision about a curator - trust you can revoke or redistribute whenever you want. Second, your list-of-lists serves as an overview of your own biases. You can't read conspiracy theory websites without first trusting "Insane Jake's Real Truth the Govt Won't Tell You". Which is your call to make! But at least you made a call, rather than some outrage-optimizing algorithm making it for you.

I guess this would start as a browser plugin. If there's interest, let's build it FOSS.

Edit: Or maybe it starts as a layer on top of an existing search engine. Are you hiring, DDG? :P
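The core filtering step of such a list-scoped search is small; a sketch with made-up list names and domains:

```python
# Results only come from domains on lists the user has chosen to trust.
lists = {
    "nypl-reference": {"britannica.com", "loc.gov"},
    "yc-tech-news": {"news.ycombinator.com", "techcrunch.com"},
}
installed = ["nypl-reference", "yc-tech-news"]  # the user's trust decisions

def allowed(url, installed_lists):
    domain = url.split("/")[2]  # crude scheme://domain/... parsing
    return any(domain in lists[name] for name in installed_lists)

def filter_results(raw_results, installed_lists):
    return [u for u in raw_results if allowed(u, installed_lists)]

raw = ["https://loc.gov/item/1", "https://spam.example/buy-now"]
print(filter_results(raw, installed))  # only the trusted domain survives
```

Revoking trust is just removing a name from `installed`, which is the property the comment is after: every site in your results traces back to an explicit decision you made.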
dpacmittal almost 6 years ago
Why don't we get rid of tracking instead of getting rid of search engines? Why can't I just have my ad settings set by myself? I should be able to say: I'm interested in tech, fashion, watches, online backup solutions, etc. Show me only these ads. It would get rid of all kinds of tracking.

Can anyone tell me why such an approach wouldn't work?
8bitsrule almost 6 years ago
IME, searching by collections of keywords has become a good strategy: avoiding vague/topical keywords ('music', 'chemical'), and instead asking for specific words that should/must be found in the search results. If the results start to exclude an important keyword (e.g. '1872' or 'giant' or 'legend'), put a plus sign in front of it and resubmit.

I regularly use DDG (which claims privacy) for this, and requests can be quite specific. E.g. a quotation "these words in this order" may result in -no result at all-, which is preferable to being second-guessed by the engine.

I wonder how 'search engines are not required' would work without expecting the searcher to acquire expertise in drilling down through topical categories, as attempts like http://www.odp.org/ did.
gexla almost 6 years ago
Good question. I'm going to run an experiment.

First "go-to" for search will be my browser history.

As long as the site I know I'm looking for is in my browser history, I'll go there and use the site's own search feature to find other items from it.

Bookmark all the advanced search pages I can find for sites I find myself searching regularly.

Resist mindless searching for crap content, which usually just takes up time while my brain is decompressing from other tasks.

For searches which are more valuable to me, try starting my search from communities such as Reddit or Twitter, or by following links from other points in my history.

Maybe if it's not worth going through the above steps, it's not valuable enough to look up?

NOTE: Sites such as Twitter may not be much better than Google, but I can at least see who is pushing the link. I can determine if this person is someone I would trust for recommendations.

I bet if I did all of the above, I could put a massive dent in the number of search engine queries I do.

Any other suggestions?
ex3xu almost 6 years ago
Like others here, I don't have too much of a problem with indexing.

What I would like to see is a human layer of infrastructure on top of algorithmic search, one that leverages the fact that there are billions of people who could be helping others find what they need. That critical mass wasn't available at the beginning of the internet, but it certainly is now.

You kind of have attempts at this function in efforts like the Stack Exchange network, Yahoo Answers, Ask Reddit, tech forums, etc., but I'd like to see more active empowerment and incentivization of giving humans the capacity to help other humans find what they need, in a way that would be free from commercial incentives. I envision things like maintaining absolutely impartial focus groups, and for commercial search it would be nice to see companies incentivized to game search by providing better-quality goods rather than better SEO.
ntnlabs almost 6 years ago
How about this: Internet as a service. Instead of looking for answers, you would "broadcast" your needs, like "I need a study about cancer", and you would receive a list of sources that answered your question, maybe with some sort of decentralized rating, and maybe country and author. How about that?
desc almost 6 years ago
As others have commented, the problem here is the ranking algorithm and how it can be gamed. Essentially, trust.

'Web of trust' has its flaws too: a sufficiently large number of malicious nodes cooperating can subvert the network.

However, maybe we can exploit locality in the graph. If the user has an easy way to indicate the quality of results, and we cluster the graph of relevance sources, the barrier to subverting the network can be raised significantly.

Let's say that each ranking server indicates 'neighbours' which it considers relatively trustworthy. When a user first performs a search, their client will pick a small number of servers at random and generate results based on them.

* If the results are good, those servers get a bit more weight in future. We can assume that the results are good if the user finds what they're looking for in the top 5 or so hits (varying depending on how specific their query is; this would need some extra smarts).

* If the results are poor (the user indicates such, or tries many pages with no luck), those servers get downweighted.

* If the results are actively malicious (indicated by the user), then this gets recorded too...

There would need to be some way of distributing the weightings based on what the servers supplied, too. If someone's shovelling high weightings at us for utter crap, they need to get the brunt of the downweighting/malice markers.

Servers would gain or lose weighting and malice based on their advertised neighbours too. Something like PageRank? The idea is to hammer the *trusting* server more than the *trusted*, to encourage some degree of self-policing.

Users could also choose to trust others' clients, and import their weighting graph (but with a multiplier).

Every search still includes random servers, to try to avoid getting stuck in an echo chamber. The overall server graph could be examined for clustering, and a special effort made to avoid selecting more than X servers in a given cluster. This might help deal with malicious groups of servers, which would eventually get isolated. It would be necessary to compromise a lot of established servers in order to get enough connections.

Of course, then we have the question of who is going to run all these servers, how the search algorithm is going to shard efficiently and securely, etc. etc.

Anyone up for a weekend project? >_>
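The weighting feedback loop at the heart of this proposal can be sketched in a few lines; the constants and update rule below are invented for illustration:

```python
# Per-server trust weights: rise on good results, fall on bad ones, and
# collapse on malicious ones ("hammer the trusting server", per the comment).
weights = {"server-a": 1.0, "server-b": 1.0, "server-c": 1.0}

def record_feedback(server, outcome):
    if outcome == "good":
        weights[server] *= 1.1   # found in the top hits
    elif outcome == "poor":
        weights[server] *= 0.9   # user paged through with no luck
    elif outcome == "malicious":
        weights[server] *= 0.1   # explicit user flag: near-total demotion

record_feedback("server-a", "good")
record_feedback("server-c", "malicious")
print(weights)  # server-c now contributes almost nothing to future rankings
```

The hard parts the comment flags (propagating malice markers to advertised neighbours, keeping random servers in every query) would sit on top of exactly this kind of per-server state.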
gist almost 6 years ago
This is too broad a question to answer. There are really too many different uses of the Internet to try and fashion a solution that works in all areas. Not to mention the fact that it's too academic to begin with. How do you get such a large group of people to change a behavior that already works for them? And very generally, most people are not bothered by the privacy aspect as much as tech people (always whining about things) are, or even the media. People generally like that they can get things at no cost, and don't (en masse) care anywhere near as much about being tracked as you have been led to believe. And that's when the tracking is not even benefiting them, which it often is. This is not 'how can we eliminate robocalls'. It's not even 'how can we eliminate spam'.
Havoc almost 6 years ago
Seems unlikely. Search engines solve a key problem.

To me they are conceptually not the problem. Nor is advertising.

This new wave of track-you-everywhere-with-AI search engines is an issue, though. They've taken it too far, essentially.

Instead of respectable fishing they've gone for kilometer-long trawling nets that leave nothing in their wake.
hideo almost 6 years ago
This isn't an entire solution, but Van Jacobson's Content-Centric Networking concept is fascinating, especially when you consider its potential social impact compared to the way the internet exists today.

https://www.cs.tufts.edu/comp/150IDS/final_papers/ccasey01.2/FinalReport/FinalReport.html
http://conferences.sigcomm.org/co-next/2009/papers/Jacobson.pdf
munchausen42 almost 6 years ago
To get rid of search engines like Google and Bing, we don't need to build a new internet; we just need to build new search engines.

E.g., how about an open-source spider/crawler that anyone can run on their own machine, continuously contributing towards a distributed index that can be queried in a p2p fashion? (Kind of like SETI@home, but for stealing back the internet.)

Just think about all the great things that researchers and data scientists could do if they had access to every single public Facebook/Twitter/Instagram post.

Okay, okay... also think about what Google and FB could do if they could access any data visible to anyone (but let's just ignore that for a moment ;)
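In outline, each volunteer node might do something like the following; this sketch stands in a plain dict for the distributed store that a real system would implement as a DHT or similar p2p structure:

```python
import urllib.request
from collections import defaultdict

shared_index = defaultdict(set)   # term -> set of URLs (stand-in for the DHT)

def crawl_and_contribute(url):
    """Fetch one page and publish its term postings to the shared store."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="ignore")
    # Naive tokenization over raw markup; a real crawler would parse HTML.
    for term in set(text.lower().split()):
        shared_index[term].add(url)

# Each volunteer node would call this on its own slice of the web, e.g.:
# crawl_and_contribute("https://example.com/")
# print(sorted(shared_index["example"]))
```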
nonwifehaver3 almost 6 years ago
Yes, out of sheer necessity. Search results have become either a crapshoot when looking for commercially adjacent content, due to SEO, or "gentrified" when looking for anything even remotely political, obscure, or controversial. Google used to feel like doing a text search of the internet, but it sometimes acts like an apathetic airport newsstand shopkeeper now (and with access to only the same books and magazines).

Due to this, I think people will have to use site-specific searches, directories, friend recommendations, and personal knowledge bases to discover and connect things instead of search engines.
cy6erlion almost 6 years ago
I think there are only two options:

1) Have an index created by a centralized entity like Google

2) Have the nodes in the network create the index

The first option is the easiest, but can be biased as to who gets on the index and their position in it.

Option two is hard because we need some mechanism to generate the index from the subjective view of the nodes in the network and sync it to everyone in the network.

The core problem here is not really the indexing but the structure of the internet. Domains/websites are relatively dumb; they cannot see the network topology. Indexing is basically trying to recreate this topology.
JD557 almost 6 years ago
You could use something like Gnutella [1], where you flood the network with your query request and that request is then passed along nodes.

Unfortunately (IIRC and IIUC how Gnutella works), malicious actors can easily break that query scheme: just reply to all query requests with your malicious link. I believe this is why pretty much every query in old Gnutella clients returned a bunch of fake results that were simply `search_query + ".mp3"`.

1: https://en.wikipedia.org/wiki/Gnutella
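That failure mode is easy to reproduce in a toy model of TTL-limited query flooding; everything below (node shapes, responder behavior) is invented for illustration:

```python
def flood(node, query, ttl, seen=None):
    """Forward the query to peers until the TTL runs out, collecting replies."""
    seen = seen or set()
    if node["id"] in seen or ttl == 0:
        return []
    seen.add(node["id"])
    results = node["respond"](query)
    for peer in node["peers"]:
        results += flood(peer, query, ttl - 1, seen)
    return results

honest = {"id": "h", "peers": [],
          "respond": lambda q: ["real-" + q] if q == "song" else []}
evil = {"id": "e", "peers": [],
        "respond": lambda q: [q + ".mp3"]}  # "matches" every query
origin = {"id": "o", "peers": [honest, evil], "respond": lambda q: []}

print(flood(origin, "song", ttl=2))      # ['real-song', 'song.mp3']
print(flood(origin, "anything", ttl=2))  # ['anything.mp3'] -- pure junk
```

Nothing in the protocol itself lets the origin tell the honest reply from the fabricated one, which is exactly the weakness the comment describes.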
quickthrower2 almost 6 years ago
Search engines are not required: there are directories out there with tonnes of links. It is just that search engines are damn convenient. And Google's search is light-years ahead of any website's own search.
oever almost 6 years ago
The EU has a funding call open for Search and Discovery on the Next Generation Internet.

https://nlnet.nl/discovery/
inputcoffee almost 6 years ago
It was thought that one way of finding information is to ask your network (Facebook and Twitter would be examples), and then they would pass on the message and a chain of trusted sources would get the information back to you.

I am being purposefully vague because I don't think people know what an effective version of that would look like, but it's worth exploring.

If you have some data, you might ask questions like:

1. Can this network reveal obscure information?

2. When -- if ever -- is it more effective than indexing by words?
ninju almost 6 years ago
I find myself not needing to do a 'generic' internet search that much anymore.

For long-term facts and knowledge lookup: Wikipedia pages (with proper annotation)

For real-time world happenings: a mix of direct news websites

For random 'social' news: the only time I use a direct Google/Bing/DDG search

The results from the search engines nowadays are so filled with (labeled) promoted results and (un-labeled) SEO results that I have become cynical and jaded about the value of the results.
jka almost 6 years ago
There'd be a feedback loop problem, but are DNS query logs a potential source of ranking/priority?

Over time, the domains that users genuinely organically visit (potentially geo-localized based on client location) should rise in query volume.

Caveats would include DNS record cache times, lookups from robots/automated services, and no doubt a multitude of inconsistent client behavior oddities.

A similar approach could arguably be applied even at the network connection log level.
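A sketch of the tallying step, with an invented log format; counting distinct clients per domain is one crude guard against the bot-traffic caveat mentioned:

```python
from collections import Counter

# Hypothetical (client_ip, queried_domain) pairs from a resolver log.
log = [
    ("192.0.2.1", "news.example.org"),
    ("192.0.2.2", "news.example.org"),
    ("192.0.2.1", "shop.example.com"),
]

def rank_domains(entries):
    """Rank domains by how many distinct clients looked them up."""
    clients = Counter()
    seen = set()
    for client, domain in entries:
        if (client, domain) not in seen:
            seen.add((client, domain))
            clients[domain] += 1
    return clients.most_common()

print(rank_domains(log))  # [('news.example.org', 2), ('shop.example.com', 1)]
```

The TTL-caching caveat would still skew this: a domain with a long TTL generates far fewer lookups per actual visit than one with a short TTL.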
mahnouel almost 6 years ago
Maybe I'm missing the point. But Instagram, Facebook, Twitter - all of them are mainly experienced not through search but through a feed of endless content, curated by an algorithm. Most regular users don't even search that often; they consume. Maybe there could be a decentralized internet where you follow specific handles and they bring their content into your main "internet", aka feed (= user-friendlier RSS).
z3t4 almost 6 years ago
An idea I've had for a long time is a .well-known/search standard (REST) endpoint, where your browser, or a search aggregator, combines results from many sites like Stack Overflow, MDN, news sites, individual blogs, etc. That way search engines don't have to create an index; it would be up to the sites to create the search results. This means searching would be parallel and distributed.
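A sketch of how an aggregator might consume such endpoints; note that the /.well-known/search path and the JSON shape are this comment's proposal, not a published standard:

```python
import json
import urllib.parse
import urllib.request

def site_search(domain, query):
    """Query one site's hypothetical self-hosted search endpoint."""
    url = f"https://{domain}/.well-known/search?q={urllib.parse.quote(query)}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp).get("results", [])

def federated_search(domains, query):
    """Fan the query out; each site ranks its own results."""
    results = []
    for domain in domains:
        try:
            results += site_search(domain, query)
        except OSError:
            continue  # skip unreachable or non-participating sites
    return results

# Usage, once sites actually served the endpoint:
# federated_search(["stackoverflow.com", "developer.mozilla.org"], "closures")
```

The open problem mirrors vbsteven's comment above: without an index, the client still has to know which domains are worth asking.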
epynonymous almost 6 years ago
My ideal internet would be more like a set of concentric rings per user. A ring would represent different preferences, filters, and data; I could give certain users access to parts of my rings, and I could access parts of other users' rings. Obviously there should be an open ring that every user can access, which would need a search engine run by a company or set of companies; this would be like today's internet, but it would not be the same ring, and I could switch between rings with ease. I think this may be somewhat what Tim Berners-Lee is doing with the decentralized web, or perhaps bits of darknet interwoven with the internet.

An example use case would be a set of apps that my family could use for photo sharing, messaging, sending data, links to websites, etc.; perhaps another set of apps for my friends, another for my company or school. The protocols would not require public infrastructure, DNS, etc.; perhaps tethering of devices would be enough. There would be a need for indexing and search, email, etc.
sktrdie almost 6 years ago
I feel like Linked Data Fragments provides a solution to this: http://linkeddatafragments.org/

You're effectively crawling portions of the web based on your query, at runtime! It's a pretty neat technique. But you obviously have to trust the sources and the links to provide you with relevant data.
Johny4414 almost 6 years ago
What about Xanadu? The internet is very broken, but almost no one seems to care (for a reason). The idea of a more p2p web has been around for a while, but at the end of the day users don't care too much about anything, so it will probably never happen.

https://en.wikipedia.org/wiki/Project_Xanadu
CapitalistCartr almost 6 years ago
I've said this before: I dearly miss AltaVista. It indexed, but the user had to provide the ranking, which required actually thinking about what was wanted. I would construct searches of the pattern (word OR word) AND (word NEAR word) with great success. Naturally Google, requiring far less thinking to use, won.
politician almost 6 years ago
Lately, I've been turning over an idea that in order to advance, the next generation of the Internet should be designed so that third-party advertising is impossible to implement. I believe that, as a consequence, this requirement will prevent crawler-based search engines from operating, which presents a source discovery problem.

Discovering new sources of information in this kind of environment is difficult, and basically boils down to another instance of the classic key distribution problem: out-of-band, word-of-mouth, and QR codes.

Search engines like Google and Bing solve the source discovery problem by presenting themselves as a single source, aggregating every other source through a combination of widespread copyright infringement and an opaque ranking algorithm.

Google and Bing used to do a great job of source discovery, but the quality of their results has deteriorated under relentless assaults from SEO and Wall Street.

I think it's time for another version of the Internet where Google is not the way that you reach the Internet (Chrome), or find what you're looking for on the Internet (Search), or how you pay for your web presence (AdSense).
BerislavLopac almost 6 years ago
We already have it, and it's called BitTorrent. DNS as well.

What you call the Internet is actually the World Wide Web, just another protocol (HTTP) on top of the Internet (TCP/IP), which was designed to be decentralised but lacked any worthwhile discovery mechanism before two students designed the BackRub protocol.
wsy almost 6 years ago
To everybody who wants to tackle this challenge: start by considering how you would protect your 'new internet' against SPAM and SEO attacks.

For example, if you build on a decentralized network, ask yourself how you can prevent SEO companies from adding a huge number of nodes to promote certain sites.
rayrrr almost 6 years ago
There have been a few mentions of the PageRank algorithm already... FWIW, Google's patent just expired: https://patents.google.com/patent/US6285999B1/en
qazpot almost 6 years ago
See Ted Nelson's Xanadu Project: https://en.wikipedia.org/wiki/Project_Xanadu#Original_17_rules

Point 4 allows a user to search and retrieve documents on the network.
hayksaakian almost 6 years ago
If you look at usage patterns, social media has replaced search engines for many use cases.

For example, if you want to know where to eat tonight, instead of searching "restaurants near me" you might ask your friends "where should I eat tonight?" and get personalized suggestions.
weliketocode almost 6 years ago
Your two points really don't fit with your follow-up explanation.

If you don't believe finding information is currently trivial using Google, that's going to be a tough nut to crack.

What would you use for information retrieval that doesn't involve indexing or a search engine?
garypoc almost 6 years ago
We would still need search engines, but we could change the business model. For example, we could make a protocol to associate URLs with content and search keywords; something similar to DNS, associated with distributed Elasticsearch servers.
fooker almost 6 years ago
I'll be pessimistic here and say no, that is an impossible pipe dream. For any such system design you can come up with, a centralized big-brother-controlled system will be more efficient and have a better user experience.
siliconc0w almost 6 years ago
You could make a browser plugin that effectively turned everyone into a spider that sent new chunks of the index to some decentralized blockchain-esque storage system for all to query, with its own blockchain-esque micropayments.
tmaly almost 6 years ago
I think once really good AI becomes a commodity and can fit in your phone, AND once we have really fast 5G networks, there is a good possibility that some type of distributed mesh search solution could replace the big players.
Advaith almost 6 years ago
I think this is the long game with respect to blockchains and establishing trust in general.

You will be able to trust data and sources instantly. There will be no intermediaries, and trust will be bootstrapped into each system.
blackflame7000 almost 6 years ago
What if we just made a program that googles a bunch of random stuff constantly, so that there is so much garbage in their algorithms that they can't effectively distinguish real from synthetic searches?
nobodyandproud almost 6 years ago
We need an alternative internet where anonymity between two parties is impossible.

Not a place for entertainment, but where government or business transactions can be safely conducted.

A search engine would be of secondary importance.
reshiealmost 6 years ago
I guess if we had a highly regulated internet with one site for each type of service, it would be possible, but I would not really want that. You could have an algorithm that parses your query and sends you directly to a site; of course, it could get it wrong, and you might need to refine your query, just like now. Of course, that's still a search engine, just a more direct one. Bookmarks are already a form of using the web without re-searching.

It sounds like what you really want is a decentralized search engine that is anonymous by default, as opposed to no search engine at all.
paparushalmost 6 years ago
We could go back to Gopher.
Papirolaalmost 6 years ago
I still remember Gopher: https://en.wikipedia.org/wiki/Gopher_(protocol)
Isamualmost 6 years ago
That was the original Internet. Search engines evolved to make finding things possible.

Another original intent: that URLs would not need to be user-visible, and you wouldn't need to type them in.
truckerbillalmost 6 years ago
We could try to revive and improve the web-ring concept. Or, more simply, convince the community to dedicate a page of each site to linking to other related/relevant sites.
thedevindevopsalmost 6 years ago
You want to create another https://en.wikipedia.org/wiki/Deep_web ?
kenalmost 6 years ago
Is this the same as asking if we can create a telephone system with no phone books, or a city with no maps? Where is our shared understanding of the system's state?
kazinatoralmost 6 years ago
Can you walk through a complete use case?

A user wants to find a "relevant document".

What is that? What information does the user provide to specify the document?

Why does the user trust the result?
bitLalmost 6 years ago
How can I help? I dumped most centralized solutions in favor of self-hosted (mostly ActivityPub-based) services and still can't get rid of search.
comboyalmost 6 years ago
I'm too late, but yes, it is not easy but it definitely seems doable: http://comboy.pl/wot.html

I'm sorry it's a bit long. TL;DR: you need to be explicit about the people you trust. Those people do the same, and then, thanks to the small-world effect, you can establish your trust in any entity that is already trusted by some people.

No global ranking is the key. How good some information is, is relative, and depends on whom you trust (which is basically a form of encoding your beliefs). And yes, you can avoid the information bubble much better than now, but writing more when I'm so late to the thread seems a bit pointless.
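A minimal sketch of the propagation idea, assuming a hypothetical trust graph and an arbitrary decay factor. The point is that scores radiate out from *you* through explicit trust edges, so there is no global rank:

```python
# Relative trust via best-path propagation from a personal root node.
# Edge weights and DECAY are illustrative assumptions, not a spec.
DECAY = 0.5

trust_edges = {
    "me": {"alice": 0.9, "bob": 0.7},
    "alice": {"carol": 0.8},
    "bob": {"carol": 0.4, "dave": 0.9},
}

def trust_from(root: str) -> dict[str, float]:
    scores = {root: 1.0}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for peer, weight in trust_edges.get(node, {}).items():
            candidate = scores[node] * weight * DECAY  # trust fades with distance
            if candidate > scores.get(peer, 0.0):
                scores[peer] = candidate
                frontier.append(peer)
    return scores

print(trust_from("me"))  # carol's score depends on *your* vantage point
```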
FPurchessalmost 6 years ago
I wonder if we could rearrange the internet as decentralized nodes exchanging topic maps, which could then be queried in a p2p fashion.
otabdeveloper4almost 6 years ago
Yes, it's called "Facebook", and it already exists.

Probably not what you had in mind, though. Be careful what you wish for.
xorandalmost 6 years ago
Two-way links would help. I can't locate the information now, but it seems that it was proposed initially.
robotalmost 6 years ago
It is a huge problem. It's not possible to fix it some other way without putting in the same effort.
buboardalmost 6 years ago
Didn't we? It's called "ask your friends". It's a great way to turn your friends into enemies.
ISNITalmost 6 years ago
Maybe we should all just learn a graph query language and live on WikiData ;)
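Half-joking or not, the data is there. A hedged sketch of querying WikiData's public SPARQL endpoint from Python; the endpoint URL is real, and the query is a deliberately trivial demo (five items that are instances of human, P31/Q5):

```python
# Query WikiData's public SPARQL endpoint and print a few results.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q5 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "techecho-demo/0.1"},  # Wikimedia asks clients to identify themselves
)
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row.get("itemLabel", {}).get("value", ""))
```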
ameliusalmost 6 years ago
Are any academic groups still researching search engines?
ptahalmost 6 years ago
I guess nowadays the web IS the internet
sys_64738almost 6 years ago
Yes, because nobody will be using it.
peterwwillisalmost 6 years ago
tl;dr: the problems are 1) relevancy, 2) integrity, 3) content management/curation.

If you've ever tried to maintain a large corpus of documentation, you realize how incredibly difficult it is to find "information". Even if I know exactly what I want... where is it? With a directory, if I've "been to" the content before, I can usually remember the path back there... assuming nothing has changed. (The Web changes all the time.) Then if you have new content... where does it go in the index? What if it relates to multiple categories of content? An appendix by keyword would get big, fast. And with regular change, indexes become stale quickly.

OTOH, a search engine is often used for documentation. You index it regularly so it's up to date, and to search you put in your terms and it brings up pages. The problem is, it usually works poorly, because it's a simple search engine without advanced heuristics or PageRank-like algorithms. So it's often a difficult slog to find documentation (in a large corpus), because managing information is hard.

But if what you actually want is just a way to look up domains, you still need to either curate an index or provide an "app store" of domains (basically a search engine for domain names and network services). You'd still need some curation to weed out spammers/phishers/porn, and it would be difficult to find the "most relevant" result without a PageRank-style ordering based on most-linked-to hosts.

What we have today is probably the best technical solution. I think the problem is how it's funded, and who controls it.
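The "simple search engine without advanced heuristics" described above is essentially a bare inverted index. A toy version over a hypothetical mini-corpus shows both how easy the lookup is and how little it does about relevancy:

```python
# Bare-bones inverted index over a made-up documentation corpus.
# Exact-token lookup only: no stemming, no synonyms, no ranking.
import re
from collections import defaultdict

docs = {
    "install.md": "How to install the service and configure the daemon",
    "config.md": "Configuration reference for the service daemon",
    "faq.md": "Frequently asked questions about installing",
}

index = defaultdict(set)
for name, text in docs.items():
    for token in re.findall(r"[a-z]+", text.lower()):
        index[token].add(name)

print(sorted(index["daemon"]))   # ['config.md', 'install.md']
print(sorted(index["install"]))  # ['install.md'] -- misses faq.md ("installing")
```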
fergiealmost 6 years ago
Author of the npm module search-index here.

"1- Finding information is trivial"

The web already consists, for the most part, of marked-up text. If speed were not a constraint, we could already search through the entire web on demand. However, given that we don't want to spend 5 years on every search we carry out, what we really need is a SEARCH INDEX.

Given that we want to avoid Big Brother-like entities such as Google, Microsoft and Amazon, and also given, although this is certainly debatable, that government should stay out of the business of search, what we need is a DECENTRALISED SEARCH INDEX.

To do this you are going to need AT THE VERY LEAST a gigantic reverse index that contains every searchable token (word) on the web. That index should ideally include some kind of scoring so that the very best documents for, say, "banana" come at the top of the list for searches for "banana". (You also need a query pipeline and an indexing pipeline, but for the sake of simplicity, let's leave those out for now.)

In theory a search index is very shardable. You can easily host an index that is in fact made up of lots of little indexes, so a READABLE DECENTRALISED SEARCH INDEX is feasible, with the caveat that relevancy would suffer, since relevancy algorithms such as TF-IDF and PageRank generally rely on an awareness of the whole index, not just an individual shard, in order to calculate a score.

Therefore a READABLE DECENTRALISED SEARCH INDEX WITH BAD RELEVANCY is certainly doable, although it would have Lycos-grade performance circa 1999.

CHALLENGES:

1) Populating the search index will be problematic. Who does it, how they get incentivized/paid, and how they are kept honest is a pretty tricky question.

2) Indexing pipelines are very tricky and require a lot of work to do well. There is a whole industry built around feeding data into search indexes. That said, this is certainly an area that is improving all the time.

3) How the whole business of querying a distributed search index would actually work is an open question. You would need to query many shards and then do a map-reduce operation that glues together the responses (see the sketch below). It may be possible to do this on users' devices somehow, but that would create a lot of network traffic.

4) All of the nice, fancy-schmancy latest Google functionality unrelated to pure text lookup would not be available.

"2- You don't need services indexing billions of pages to find any relevant document"

You need to create some kind of index, but there is a tiny sliver of hope that this could be done in a decentralized way, without the need for half a handful of giant corporations. Therefore many entities could be responsible for their own little piece of the index.
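A toy illustration of challenge 3, assuming a hypothetical two-shard index: fan the query out, then map-reduce the partial postings back together. The scores are shard-local term counts, which is exactly the bad-relevancy caveat above; no shard sees global document frequencies:

```python
# Fan a query out to index shards, then merge the partial results.
from collections import Counter

# each shard maps term -> {url: shard-local score}
shards = [
    {"banana": {"a.example": 3, "b.example": 1}},
    {"banana": {"c.example": 2}, "fruit": {"a.example": 1}},
]

def map_phase(term: str) -> list[dict]:
    return [shard.get(term, {}) for shard in shards]  # one lookup per shard

def reduce_phase(partials: list[dict]) -> list[tuple[str, int]]:
    merged = Counter()
    for partial in partials:
        merged.update(partial)  # sum scores for urls seen in multiple shards
    return merged.most_common()

print(reduce_phase(map_phase("banana")))
# [('a.example', 3), ('c.example', 2), ('b.example', 1)]
```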
sonnyblarneyalmost 6 years ago
Google is apparently losing a lot of product-related search to Amazon. I suggest that the 'siloing' of the web, for better or worse, might yield some progress here.

i.e., when you search, you start in a relevant domain instead of Google: Amazon for products, Stack Exchange for CS questions.

Obviously not ideal either.
diminotenalmost 6 years ago
No. Search is a consequence of data volume.
wfbarksalmost 6 years ago
a New New Internet
codegladiatoralmost 6 years ago
No
drenvukalmost 6 years ago
Finding information has never been trivial, and until you can read people's minds to see what they really mean when they search for 'cookies' when they actually mean "how to clear my internet browsing history for the past hour", it will continue to be non-trivial. The work Google has done in the search space is damn near magical. Your question belittles the literal billions of dollars and millions of man-hours that have gone into making the current and previous implementations of Google's search engine *almost good enough*.

This is not simple, and your Ask HN reeks of ideology and contempt, without so much as an inkling of the technical realities that would have to be overcome for such a thing to happen. That goes for both old and new internet.

/rant