Something I've been thinking about lately is how browsers have essentially become a dependency for any sort of auth on the internet. Pretty much everything uses OAuth2, which requires you to be able to render HTML and CSS, and in many implementations, JavaScript.

That's ~20M (Firefox) to ~30M (Chromium) lines of code as a dependency for your application, just for auth. This applies even if you have a slick CLI app like rclone: if you want to connect it to Google Drive, you still need a browser to do the OAuth2 flow. All of this just so we have a safe, known location to stash auth cookies.

It would be sweet if there were a lightweight protocol where you could lay out a basic consent UI (maybe with a simple JSON format) that can be rendered outside the browser. Then you need a way to connect to a central trusted cookie store. You could still redirect to a separate app, but it wouldn't need to be nearly as complicated as a browser.
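A minimal sketch of what a non-browser client for such a protocol might look like; the JSON descriptor format and its field names are invented here for illustration, not part of OAuth2 or any existing standard:

    # Hypothetical consent-UI descriptor; the schema is invented for
    # illustration and is not part of OAuth2 or any real standard.
    import json

    descriptor = json.loads("""
    {
      "title": "rclone wants access to your Google Drive",
      "scopes": ["drive.readonly"],
      "actions": ["approve", "deny"]
    }
    """)

    # A terminal client could render this with no HTML/CSS/JS at all.
    print(descriptor["title"])
    for scope in descriptor["scopes"]:
        print("  -", scope)
    choice = input("(%s)? " % "/".join(descriptor["actions"]))
    print("You chose:", choice)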
I like the idea of this. There's so much *information* on the web, but we still need a way to bring that information to other applications without being tied to a particular source. That was really the dream of the semantic web, after all.

This kind of idea would pair really nicely with good Microformats [1] support, which continues to be a very good idea. That way we can find, say, a recipe or an address on a web page in a reusable way, without needing magical heuristics.

(Of course, "reusable" in theory, with the caveat that everybody forgot about microformats around the time Google decided they could machine-learn their way out of everything.)

[1] http://microformats.org
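As a rough illustration of how reusable that markup is, here is a sketch using the mf2py microformats2 parser; the URL is just an example, and any page with h-card markup would do:

    # Extract microformats2 h-cards from a page, no heuristics needed.
    # Requires: pip install mf2py
    import mf2py

    parsed = mf2py.parse(url="https://microformats.org")
    for item in parsed["items"]:
        if "h-card" in item["type"]:
            props = item["properties"]
            print(props.get("name"), props.get("url"))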
Wow! I actually love the idea of being able to interact with websites via a standard API rather than being forced to use the web-based UI they provide. It opens up a whole lot of possibilities for things like alternate clients, standard UIs for interacting across multiple sites, etc. It also eliminates the possibility of sites engaging in annoying or abusive behavior, by putting users in full control of the client rather than the site operator. Obviously it can't work for *every* site, but it's quite an interesting concept.
Believe it or not, around the turn of the century there were many thick-client apps. But back then it was a challenge to ship and update these applications. This pain, along with the continued rollout of broadband, led many to advocate for creating applications that would run in a web browser while being controlled from centralized servers: in practice, turning a platform designed to render marked-up text into an application host. This would allow applications to be shipped and updated with little interaction from the user.

However, right about the same time web apps were taking over the world, there were thick-client apps that were solving the problems of installation and updates. Two of the prominent thick-client applications doing this were iTunes and the browsers themselves.

Now fast-forward a decade to the early teens and the ubiquitous use of smartphones. What is the single largest determining factor of platform success? Is it the ability of web apps to render in your platform's web browser, or is it the breadth and depth of your platform's app store?

My rant is over: I wish web apps would die, and I've wished that for most of the 21st century.
This is... bizarre. And I like it?

At first I thought this was an API for integrating web content into your own apps. But now it looks more like groupware, in the sense that Woob is actually your user interface and there are just modules to consume content from random websites.

It goes back to the old idea of having one dedicated desktop application for each thing you wanted to do on the internet (read news, send mail, listen to music, view a calendar), turning your computer into a utilitarian appliance, rather than a portal for businesses to spend a lot of time and money building their own dedicated user interfaces to lock you in. The latter has made life more difficult: we have to constantly learn every business's new interface, every competing interface is missing different features, and the dedicated UI (or platform) becomes a way for the business to squeeze more out of the user.

And there are no ads. I just realized there's an entire generation who have never seen technology without advertisements. I wonder what they'd make of this.
Interesting to see Woob here. Most of the modules are for the French ecosystem (banks, dating websites, job boards...). I always liked the irreverence of the modules' names and logos (which are authentic MS Paint pieces of work).
This has Bloomberg terminal / Minitel vibes. I think there's definitely a space for an alternative browser that can render GUIs with visually consistent widgets.
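As a toy illustration of that idea (native, visually consistent widgets driven by structured data instead of per-site HTML/CSS), here is a sketch with Tkinter; the headline data is hard-coded stand-in content:

    # Render structured data in consistent native widgets, the way a
    # terminal-style alternative browser might. Data is hard-coded here.
    import tkinter as tk
    from tkinter import ttk

    items = [("Woob 3.0 released", "woob.tech"),
             ("Z39.50 turns 30", "loc.gov")]

    root = tk.Tk()
    root.title("headlines")
    tree = ttk.Treeview(root, columns=("source",), show="tree headings")
    tree.heading("source", text="Source")
    for title, source in items:
        tree.insert("", "end", text=title, values=(source,))
    tree.pack(fill="both", expand=True)
    root.mainloop()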
This is so cool. A custom client for websites. Essentially a web scraper with a GUI on top. You can define your own user experience instead of accepting what they designed for you.
A while back I heard about Z39.50 [0], a protocol that libraries use for their catalogs. In the '90s it seems there were native clients for the protocol, so that one could interact with the library catalog without using a web interface. A lot of the current web interfaces are terribly slow JS monstrosities now, so I'd like to try something faster.

I never did figure out whether any of the GUI clients [1] are still actively developed, and I'd appreciate it if anyone who knows about this could point me towards a good client.

[0] https://en.wikipedia.org/wiki/Z39.50

[1] Some software listed here: http://www.loc.gov/z3950/agency/resources/software.html
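For anyone curious, querying a Z39.50 server is pleasantly simple. A sketch using the old PyZ3950 library (long unmaintained and Python 2 era, so treat it as illustrative; the Library of Congress endpoint and database name come from its documentation):

    # Search the Library of Congress catalog over Z39.50.
    # PyZ3950 is ancient and unmaintained; this mirrors its docs.
    from PyZ3950 import zoom

    conn = zoom.Connection('z3950.loc.gov', 7090)
    conn.databaseName = 'VOYAGER'
    conn.preferredRecordSyntax = 'USMARC'

    query = zoom.Query('CCL', 'ti="design patterns"')
    results = conn.search(query)
    for i in range(min(5, len(results))):
        print(str(results[i]))
    conn.close()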
I've been playing with it, but I keep running into errors. E.g.:

In woob-weather, with the weather.com backend, I've been getting "Error(weather): 401 Client Error: Unauthorized".

In woob-gallery, with the imgur backend, when I attempt to download an image the module crashes with "FileNotFoundError: [Errno 2] No such file or directory: ''".

I like the idea though and I'll keep trying further.

---

Update: I resolved the image-gallery problem by specifying the folder name (so: using "download ID 1 foldername" instead of "download ID"). BUT: it looks like I'm unable to download the text descriptions that sometimes accompany the images.
I think a version of this is what the internet needs, but using headless browsers on the client, with a somewhat centrally curated set of scraper "recipes", if you will. Basically a community-curated/updated set of scraper logic per site (yes, some trust is required) that essentially provides JSON data and/or APIs based on the site; see the sketch below. Even just a neutered HTML equivalent of sites (e.g. AMP without the Google and ads stuff) would be good.

Since it is all client-side, it can be dubbed a "browser" rather than a "scraper", and one might hope popularity gets high enough that actively blocking it is blatantly user-hostile. Granted, one hopes that, as EasyList, uBO, and others have shown, the community can outpace site owners. Not appearing headless (tunneling captchas, firing mousemove events in pseudo-random human-like ways, etc.) should be doable.

It's something I have thought about and once dubbed "recapitate" (https://github.com/cretz/software-ideas/issues/82) and plan to revisit. I have seen many versions of this attempted. We need to encourage shared data-extraction tools.
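A toy sketch of what one of those recipes could look like, assuming Playwright as the headless client; the recipe format, selectors, and site are all invented for illustration:

    # A community "recipe": a declarative map from CSS selectors to
    # JSON fields. Format and selectors are made up for this sketch.
    # Requires: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    recipe = {
        "url": "https://example.com/products",
        "item": "div.product",
        "fields": {"name": "h2", "price": "span.price"},
    }

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(recipe["url"])
        items = [
            {field: el.query_selector(sel).inner_text()
             for field, sel in recipe["fields"].items()}
            for el in page.query_selector_all(recipe["item"])
        ]
        browser.close()

    print(items)  # structured, JSON-ready data instead of a rendered page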
Been using the web outside the browser for the last 10+ years. I am addicted to it, having written many scripts and programs to fetch and process HTML and other filetypes outside the browser. I can retrieve and extract data/information much faster and more efficiently using simpler programs that are small and work together. For viewing HTML, I prefer my text-only browser; HTML looks much better, less variable, and more uniform than in a graphical browser.
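In the same spirit, a small self-contained example of that style, using only the Python standard library (the URL is just a placeholder):

    # Fetch a page and extract its <title> with only the stdlib.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class TitleGrabber(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    html = urlopen("https://example.com").read().decode("utf-8", "replace")
    grabber = TitleGrabber()
    grabber.feed(html)
    print(grabber.title.strip())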
The only reason these methods appeal to me is that the alternatives are so unappealing. They let me avoid the downsides of "modern" web browsers and the annoyances of trying to view the web through them.
Reading the linked site and some of the discussion, I highly recommend finding your nearest Chinese friend and getting authorized on WeChat. It's a whole parallel internet! It's a bit like how a private set of Facebook pages is inaccessible without an account, except in a parallel reality where people use them for all business and have no other internet presence.

Yes, as a user another government gets to read your posts, but I mean yet another.

To get on, I literally just knocked on a few doors in San Francisco and got authorized, so many people here can too. You could probably do it at a park.

Note: Hong Kong citizens cannot do it for US citizens, even though I was trying; it has to be a mainland Chinese person.
This is clever and fantastic. I have been pondering a similar concept recently, and I think I would like to contribute. I'm curious as to why LGPL-3 was chosen as the license, though; not that the license is a showstopper.
Also: Woob - 1994 is one of the best ambient albums ever made.

https://www.youtube.com/watch?v=0S3owK3pN64
Eons ago there used to be a type of service on the web that allowed you to send a URL to a mail server. It would fetch the content at that URL and email the article to you.

I can't for the life of me remember what that type of service was called. It was back in the era of anonymous remailers... any ideas?
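Whatever it was called, the mechanics were simple enough; a minimal sketch of such a gateway (host names, account, and password are placeholders):

    # Toy web-to-email gateway: read a URL from each unread message's
    # subject, fetch it, and mail the contents back to the sender.
    # Servers and credentials below are placeholders.
    import email, imaplib, smtplib
    from email.message import EmailMessage
    from urllib.request import urlopen

    imap = imaplib.IMAP4_SSL("imap.example.org")
    imap.login("gateway@example.org", "password")
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        url = msg["Subject"].strip()
        page = urlopen(url).read().decode("utf-8", "replace")

        reply = EmailMessage()
        reply["From"] = "gateway@example.org"
        reply["To"] = msg["From"]
        reply["Subject"] = "Contents of " + url
        reply.set_content(page)
        with smtplib.SMTP("smtp.example.org") as smtp:
            smtp.send_message(reply)
    imap.logout()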
I'm mainly interested in the YouTube functionality, and wanted to check whether it was well maintained.

The developer was listed as Laurent Bachelier (https://github.com/laurentb). Searching for him, he unfortunately seems to have committed suicide a year ago. Bizarrely, with some links to right-wing political groups? (First result on Google for "Laurent Bachelier".)

My deepest condolences to the programmers of this project, who have lost someone who I assume was a close friend and co-worker.
Not a single comparison to Sherlock/Watson? I always thought it was a great idea, because it allowed a consistent interface for theoretically finding anything on the internet.