HTML ain't the problem; you can build websites without tracking. If you somehow managed to pull enough users to Gopher, they'd just write Gopher Chrome, start adding new features that conveniently allow tracking, and gradually kill off the original protocol (see EEE: embrace, extend, extinguish). The problem is economic, and the solution must be too.
How about reviving the “blogosphere” instead? Does it even need reviving? Most of the personal or tech blogs I visit do not have heavy ads or tracking on them, still offer full RSS articles and so on. People who care still have a lot of nice websites to go to.

Maybe what we need is a search engine that penalises JS and tracker use.
Why not just serve static text over HTTP? At least then you'd have the ability to inline images. This problem (the use of JavaScript and other technology for tracking purposes) isn't one for Gopher to solve. It's a problem for web content creators.
Gopher is a really fun (and constrained) protocol. I’ve experimented a bit with interactive gopher servers in the past.

A cool thing is that you can build a server in an afternoon starting with nothing more than your favorite programming language, some TCP server docs, and the Wikipedia page.

I’d love to see people build some gopher sites to do stupid and crazy things. Interactive fiction over gopher? Sure! SQL-to-gopher gateway with ASCII viz? Awesome!

Everyone should have a gopher hole... probably firewalled off of any production networks.
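To show how little is involved, here's a minimal sketch in Python (hostname, port, and paths are placeholders; the standard gopher port is 70):

    # Minimal RFC 1436 gopher server sketch: serves a root menu and one
    # text file. A request is just a selector line terminated by CRLF;
    # menus are tab-separated lines ending with a line holding a lone dot.
    import socketserver

    HOST, PORT = "localhost", 7070  # placeholder; real gopher listens on 70

    MENU = (
        f"0About this server\t/about.txt\t{HOST}\t{PORT}\r\n"
        f"1Floodgap\t\tgopher.floodgap.com\t70\r\n"
        ".\r\n"
    )
    ABOUT = "Built in an afternoon, as promised.\r\n.\r\n"

    class GopherHandler(socketserver.StreamRequestHandler):
        def handle(self):
            selector = self.rfile.readline().strip().decode("ascii", "replace")
            if selector == "/about.txt":
                self.wfile.write(ABOUT.encode("ascii"))
            else:  # an empty selector asks for the root menu
                self.wfile.write(MENU.encode("ascii"))

    if __name__ == "__main__":
        with socketserver.TCPServer((HOST, PORT), GopherHandler) as srv:
            srv.serve_forever()

Point lynx at it with lynx gopher://localhost:7070 and you have a gopher hole.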
I always like to refer to Ian Hickson's Requirements for Replacing the Web [0] when this topic comes up. They seem to encapsulate well the social, technological and economic dynamics required when discussing replacing the Web. However, few attempts (Crockford's Seif Project [1][2], MS's Project Atlantis [3] and Project Gazelle [4][5]) seem to have heeded this wisdom.

[0] https://webcache.googleusercontent.com/search?q=cache:8zGGJQ5VxwEJ:https://plus.google.com/%2BIanHickson/posts/SiLdNL9MsFw+&cd=1&hl=en&ct=clnk&gl=us
[1] http://seif.place/
[2] https://youtu.be/1uflg7LDmzI
[3] https://mickens.seas.harvard.edu/publications/atlantis-robust-extensible-execution-environments-forweb-applications
[4] https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/gazelle.pdf
[5] https://www.microsoft.com/en-us/research/blog/browser-not-browser/
I've half-seriously evangelized a few times here for a .text TLD.

It wouldn't solve everything, but it would make a nice playground that might be taken interesting places.
The article discusses reviving gopher, but doesn't mention how to access it (sure, I could invest a bit of time and effort googling how to do that, but that seems beside the point for an article evangelising its revival).
I was toying with an idea a while back of making sites just for non-visual browsers.

There was basically just a piece of CSS blocking the visualization of content and letting users know: "This is a web 0.5 website. This site is best viewed in a terminal."

The enforced rules were a kind of gentleman's (gentleperson's) code of no CSS, no JS.

The conclusions I drew were that the thing had crazy-fast loading (it's even weird when you can no longer distinguish local from server), that it would actually be quite an enjoyable coding experience since it's suddenly just 50% of the work, and that the rendering of web pages in terminal browsers is actually really nice.
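The stylesheet in question might look something like this (a sketch; graphical browsers hide the page, while terminal browsers ignore CSS entirely and render the content as usual):

    /* Hide all page content in graphical (CSS-capable) browsers. */
    body > * { display: none; }

    /* Replace it with the notice. */
    body::before {
      content: "This is a web 0.5 website. This site is best viewed in a terminal.";
    }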
> Gopher is not HTML

Gopher can easily serve HTML content (and any other content type, too).

I made a Gopher HackerNews proxy a few years ago; you can see it in action by running

    lynx gopher://hn.irth.pl

and check out the source at https://github.com/irth/gophernews
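For reference, a gopher menu links to HTML with item type h; by a common client convention, external web links use a URL: pseudo-selector. A couple of illustrative menu lines (host and port are placeholders; <TAB> stands for the literal tab character):

    hHacker News (web)<TAB>URL:https://news.ycombinator.com<TAB>example.org<TAB>70
    hLocal HTML page<TAB>/page.html<TAB>example.org<TAB>70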
We need a new mode for Firefox, an extremely restricted form of HTML5 without JavaScript; call it html0.

<doctype html0>

No JS, no third-party content; only html5+, css3+, text, images, videos, audio and other stuff.
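No browser ships such a mode, but a site can approximate the restrictions today with a Content-Security-Policy response header, for example:

    Content-Security-Policy: default-src 'self'; script-src 'none'; object-src 'none'

That blocks all scripts and plugins and restricts every other resource to the first-party origin; what it can't do is shrink the HTML parser itself, which is the harder part of the proposal.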
How about not developing sites that break when JS is turned off? Why has it become a standard to make websites completely in JavaScript when it brings nothing positive to the table whatsoever? Who came up with this idiocy?
As someone who grew up with a 1200 baud modem and never used gopher: why would I start using gopher? What even is it? Can I use it to host webpages? It sounds like, if "tracking is impossible", it probably can't use HTML+JavaScript? Why would I want to use that?
I did a hackday project where I wrote a script that converted our intranet at work into a gopher site. Before I started, I was really enthused about it, but once I started, it just became evident how much of a kludge these early protocols were.

It's the COBOL of page description languages. It's truly horrible; it's not like HTML was just some minor improvement, it's a complete conceptual shift. Gopher is just a tab-delimited file, so Excel is the best editor for it.

The first character is the type of thing: it can be a submenu (1), a text doc (0), a GIF (g), an image (I), a binary file (9), a BinHex file (4), or it can tell you the name of a mirror server so you can load balance?? (+).

How do you take form input, like a street address? Apart from the single-line type 7 search item, you can't; it's one-way data transfer.
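Concretely, a menu file is lines of the form type+display, selector, host, and port separated by literal tabs (shown here as <TAB>; the hostname is a placeholder), terminated by a line with a single dot:

    1Subdirectory<TAB>/sub<TAB>gopher.example.org<TAB>70
    0A text document<TAB>/readme.txt<TAB>gopher.example.org<TAB>70
    gA picture<TAB>/logo.gif<TAB>gopher.example.org<TAB>70
    7Search the archive<TAB>/search<TAB>gopher.example.org<TAB>70
    .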
I'd rather have a conservative version of our current web standards: strip things back to a sensible subset of what we have now, and possibly consider putting some kind of heavy rate-limiting or quotas on any client-side code that's run.

The web is no longer open if you need the funds and backing of a megacorp in order to implement a renderer that covers the whole standard.
Blast from the past. If anyone wants to download NCSA Mosaic and load the author's gopher site with it, here ya go:

https://github.com/alandipert/ncsa-mosaic (binaries in Ubuntu's Snap Store, probably in other distros too)

Find out more at: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
For those looking in the comments for places to explore in gopherspace, I would recommend starting here (use lynx):

    gopher://sdf.org                    # large community
    gopher://floodgap.com               # a venerable gopher presence
    gopher://bitreich.org               # small but very active community
    gopher://gopher.black/1/moku-pona   # my phlog listing aggregator
There are around 281 Gopher servers active on the Internet at the moment:

https://www.shodan.io/report/jhkXWTvL

Will be interesting to see whether that number shifts in the near future.
My memory is getting kind of blurry on this, but wasn't Gopher heading in a direction where someone wanted to extract licensing fees from it?

I do remember discovering how WWW had made some leaps forward and promptly abandoning my project to write a Gopher+ server, instead turning what I was working on into an HTTP server. Sadly, I never bothered publishing the code, since interesting things were happening with the NCSA httpd code at the time (something which eventually turned into Apache).
The nature of this problem (that companies are able to track you) is not so much a technological problem as an economic one. Even if gopher were the only alternative back then, it would have evolved just as HTML/HTTP did to support ads and tracking.

All a content provider that doesn't want to serve ads and tracking has to do is not implement them. While content creators are still bound to whatever their publishing platform chooses to do (e.g. any content on Medium is subject to Medium's tracking practices), using an inferior technology is simply not a realistic solution. This is essentially a human issue; technology has little to do with it.

You want to enable ad-free, tracking-free mass publishing? Provide a free publishing platform. The catch? Someone has to pay for it.

You don't want to be tracked? Disable JavaScript. Some sites stopped working? Oh yeah, tracking you is how they pay the cost (nominal or economic) of serving you content.

I would see merit, however, in a search engine that allowed filtering for non-JavaScript-friendly content.
There is a public gopher proxy you can use, e.g. http://gopher.floodgap.com/gopher/gw?gopher://box.matto.nl:70/0/revivegopher.txt
Once again, an internet user decides that in order to solve a social problem, we must move people to an ancient internet protocol for serving web pages rather than *actually address* the social problem by dealing with the real-world entities performing the tracking.
As others are saying, HTML isn't the problem and Gopher isn't the solution: any bidirectional request-response protocol can be used to track clients, because there's a record of interactions the server can save. Client-side scripting as now commonly used on the Web increases the likelihood that some of these events occur despite the user's intent, but hosts can track and profile you by IP just fine, and if this hypothetical Gopher revival came to pass, it would also revive an interest in the server-side ad serving and log mining that dynamic ads have long made obsolete.

The two solutions are to: (a) not interact with hosts who track you, which is hard to know ahead of time, or (b) use a one-way broadcast protocol that leaves hosts no ability to collect an interaction stream. And this exists too, from over-the-air television and radio, to teletext [1] and datacasting [2]. Compare the business models: unencrypted broadcast streams are full of ads too, but you don't get tracked. Or, the services are encrypted and the key exchange is moved out of band; you trade a bit of your privacy to establish an ongoing customer relationship to access gated content.

Of course, broadcast on public airwaves is heavily regulated, and broadcast on unlicensed spectrum is sufficiently intertwined with and streamlined into wireless internet to be hidden in plain sight. Despite its technical merits, a broadcast 'renaissance' of sorts isn't likely to attract a discretionary audience without a real integrated commercial offering raising awareness: amateur radio and tech demos don't have universal appeal, but a sleek device that accesses compelling first-party content in a privacy-preserving way might. But it's also a technical gamble when more proven solutions are less risky, and the kinds of players who deliver integrated offerings can deliver their service over IP with less fuss.

[1] https://en.wikipedia.org/wiki/Teletext
[2] https://en.wikipedia.org/wiki/Datacasting
I don't know if I necessarily want Gopher back, but I often dream of returning to the days when "the Internet" was primarily Usenet, IRC, Telnet, and email.
The article makes the incorrect assumption that tracking depends on HTML and/or JS/images. If we managed to revive Gopher, browser makers would soon build tracking into browsers, and publishers would simply track on the server side like Cloudflare already does (https://www.cloudflare.com/analytics/).
I remember during my undergrad days, my university (McGill) used to have its classified ads accessible via gopher. It was pretty popular and fairly easy to use. Surprisingly, there were quite a few non-technical people on there, e.g., posting apartments to sublet. This was in the days of Windows 2k/ME, so people had lower expectations for user interfaces back then.
The article should at least have a small guide on how to get started: server software, client software, how to make a "page"?

You could block all IP ranges for known trackers via firewall, and also disable JavaScript, cookies, and media content. Or just surf the web using an old browser. Serious webmasters still make sure their web pages work in more than just Chrome.
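A sketch of the firewall route (the CIDR and hostnames below are placeholders, not real trackers):

    # Reject outbound traffic to a tracker's address range
    iptables -A OUTPUT -d 198.51.100.0/24 -j REJECT

    # Or blackhole tracker hostnames in /etc/hosts
    0.0.0.0 tracker.example.com
    0.0.0.0 analytics.example.net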
My biggest issue with "gopher" is that I don't know how secure it is: how do I know the connection I'm using is secure and hasn't been intercepted? The current clients don't show that at all, if it's even possible. I couldn't care less about tracking when the content isn't trustworthy.
What do Gopher pages look like? Are they mostly ASCII, or is the format weird? Why did HTML/HTTP become the standard over Gopher? It seems like Gopher could be capable of doing similar things to the web; it's just that nobody bothered to expand on it, or the standard is frozen in time.
When they talk about “putting content on gopher”, what do they mean? Gopher is basically FTP with the ability to link to other sites. Other than text blogs or videos, what sort of content would we put on there?
> If you build it, they will come.

People already built it, and I'm not even talking about old gopher. Adblockers are that now.

People who are technical enough see the benefit and swear by them. We just need to make them easier to use. Maybe an adblocker add-on with live support and constant monitoring (and tweaking of the rules) is a product that you can sell by the millions?

Canvas fingerprinting? Gone. Third-party cookies? Gone. Auto-play media? Gone. Etc. Everyone says that privacy is the most expensive luxury nowadays. Maybe we need to commoditize it?
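Much of the heavy lifting is already expressible as ordinary filter-list rules; a couple of illustrative entries in Adblock-style syntax (hostnames are hypothetical):

    ! Block requests to hypothetical tracker hosts
    ||tracker.example.com^
    ||ads.example.net^$third-party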
*Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.*

Bold of the author to openly admit this.