Something I don't see anyone else saying: the World Wide Web's primary "language", HTML, being so incredibly un-powerful and reliant on browsers for new features, is an <i>intentional, FOUNDATIONAL design principle</i>, called the Principle of Least Power: <a href="http://www.w3.org/DesignIssues/Principles.html" rel="nofollow">http://www.w3.org/DesignIssues/Principles.html</a><p>I'm not saying I completely agree with it, but the rationale behind the principle, and its incredible success, are compelling arguments against what the article proposes, and the article doesn't address them whatsoever.<p>Also, a nitpick I don't see anyone else making: the author doesn't seem to know the difference between the Internet and the World Wide Web, which is acceptable for a mediocre Web developer, but pretty embarrassing for someone proposing a foundational change to the Web (one that, contrary to the provocative "new Internet" gospel in the first paragraph, doesn't affect the Internet Protocol Suite whatsoever).<p>(And of course, as pointed out by a plethora of other comments, the proposed foundational change is essentially the Java VM.)
I like this idea! It reminds me of this page <a href="http://mozilla.github.com/pdf.js/web/viewer.html" rel="nofollow">http://mozilla.github.com/pdf.js/web/viewer.html</a> which loads a JS-based PDF reader and then uses it to render a PDF in your browser. It could be expanded to do a lot more than static documents - in fact it seems to echo the ideas that Alan Kay has been working on for decades: <a href="http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442?pgno=2" rel="nofollow">http://www.drdobbs.com/architecture-and-design/interview-wit...</a><p>"Binstock: Still, you can't argue with the Web's success.<p>Kay: I think you can... go to the article on Logo, can you write and execute Logo programs? Are there examples? No. The Wikipedia people didn't even imagine that, in spite of the fact that they're on a computer.... Go to a blog, go to any Wiki, and find one that's WYSIWYG like Microsoft Word is. Word was done in 1974. HyperCard was 1989. Find me Web pages that are even as good as HyperCard. The Web was done after that, but it was done by people who had no imagination. They were just trying to satisfy an immediate need.... what you definitely don't want in a Web browser is any features.<p>Binstock: "Any features?"<p>Kay: Yeah. You want to get those from the objects. You want it to be a mini-operating system, and the people who did the browser mistook it as an application. They flunked Operating Systems 101.... I mean, look at it: The job of an operating system is to run arbitrary code safely. It's not there to tell you what kind of code you can run."
No links, no bookmarks, no standard UI conventions and components, no history, no back and forward buttons that work consistently, no mental model of how it all clicks together (page model), no accessibility, no search, no semantic content, no pesky users controlling stuff. Sounds awesome, sign me up.<p>Sarcasm aside, we're moving in that direction already.
I think the internet is just trying to find the right abstraction level. And with html5/css3/js/webgl and the newest generation of standards, we almost found it.<p>We used to download random binaries from the internet and run them. It took too much effort to get up and running, and there were compatibility issues between systems, not to mention dependencies (java / c++ runtimes etc.), dll hell, and having to support myriad different OSes and OS versions. When it broke, it broke badly, and usually you had zero chance of debugging, save sending a core dump back over the internet, if the user let you.<p>Meanwhile, standards came and improved, and finally we realized that we already had a UI toolkit that ran reasonably across all platforms, didn't require any dependency installations, and ran "programs" instantly without any download and installation procedure. This alone is as large a marketing win as it is a tech win. Remind you of something? It's the web we know and love today. And you get logs when it breaks. And you can watch your users using it, page by page, and get stats. And the limited nature of the abstraction layer means that if it breaks, it doesn't fail as catastrophically: there is less chance of a big fat ugly core dump message, and more of a chance that you know almost instantly. And the hard abstraction layer means that there is a clearer understanding of security implications. Web apps, generally speaking, can't access my files. Can Java applets or ActiveX components? A bit more hazy. Can executables? Yeah. Can this proposed system? Maybe?<p>The modern web (browser and standards together) <i>is</i> a universal VM and an SDK rolled into one. It's the only one we have that works reasonably well and strikes a decent balance between capability and ease of development.<p>The selling point of this person's system, the proposed freedom from browser restrictions and incompatibilities, is a complete fallacy. Is Microsoft going to recreate this new system for every platform under the sun? No. People (and companies like Apple and Google) will have to create their own, and thus incompatibilities will prevail. Open standards are good. Established open standards are better.<p>Many people have tried to create alternatives, and failed miserably. It works. Don't fuck with it.
This post received a lot more attention than I thought it would; if I had known, I might have explained a few things more carefully. If you didn't immediately dismiss it as stupid and found it at least a little thought-provoking, I highly recommend checking out the technical report <a href="http://research.microsoft.com/pubs/173709/submitted-nsdi13.pdf" rel="nofollow">http://research.microsoft.com/pubs/173709/submitted-nsdi13.p...</a> (PDF), especially Figure 2 (which lays out the entirety of the syscall interface), Section 3 (which explains that interface), and Section 4 (which talks about how to reimplement traditional browser user-interface paradigms in this new world). It's all quite interesting, and at the very least, might say something about how we should take mobile native apps forward.
Seems like a bad idea to me. How will my browser do things such as autofill usernames and passwords, allow links to be opened in new tabs, block third-parties from tracking me with cookies, or allow me to globally change my font preferences if each web page actually uses a different "browser" that it downloaded, and whose only connection to the outside is IP and a framebuffer? Or how would a browser be able to give me crisper text on a Retina display with no change to web pages, or provide hardware-accelerated threaded scrolling, or let me play video with hardware-accelerated decoding?<p>(These are a few of the things that, as a browser engineer, I expect could not be done in this model.)<p>Also as a Mac user, I expect this would result in many forms in web pages giving me controls that look and act like Windows.<p>Against these likely disadvantages, the advantages seem slim. We already have good ways to distribute sandboxed native apps. The Web doesn't need to become one of them.
No, no, no, a thousand times no. The problem with this scheme is that we lose all that juicy parse-ability and addressability that make the internet so useful and powerful.<p>Let's face it: the open web is a terrible environment to program in. It is messy and arcane and difficult. But some of this difficulty is <i>not</i> accidental complexity. Some of it is the price we pay for writing apps whose outputs and interfaces are themselves addressable, mutable, and usable.<p>If apps were to become big opaque blobs of binaries again, yes, they would be easier to program. But is that good for us, or good for the world? What is the long-term cost of making the web opaque?<p>The promise of real, globally shared coding environments is only just starting to get some legs, with tools like jsFiddle revolutionizing the way we solve programming problems together. I <i>can't</i> really share Java code of any complexity with people and be assured that it will run in a suitable context. The path to creating an environment, compiling, and executing Java is intense, and jsFiddle literally <i>eliminates all of that work for the people I want to see my code</i>. Learning the quirks of the open web is a small price to pay for that superpower.<p>And let me also say that there is still plenty of accidental complexity to eliminate! In particular, servers have been doing far too much for far too long, and once they start doing what they are good at and <i>only</i> what they are good at (which is partitioning the sum total of your information into discrete URL-addressable resources), we will all breathe a huge sigh of relief and focus on the real problem. (Which is, of course, the monstrosity that is CSS.)
So instead of writing Javascript, we would build our own browsers to download alongside the HTML? Or, failing that, would have to offer some prepackaged browser.<p>And every device, from a smartphone to a 30" screen running on an octo-Xeon, would have to be able to execute the same native code.<p>And each vendor of a browser kit would try to establish their own web standards to go with their home-grown engine.<p>I am pretty sure this is not the future of the web, and that seems like a good thing.<p>And it's not like this is a completely novel idea. None of the proprietary sandboxed browsing environments of the past (Flash, Silverlight...) made anything really easier. They sometimes enabled us to do things that were impossible with HTML+JS, but they were all both a pain in the lower back for developers and users, and had glaring security holes.
So the internet should just be a simplified app store and hypervisor/VM?<p>I like the text-based internet! The fact that I can go to a website, right-click and view source is the greatest part of the internet!
This is called "sandboxing" and, contrary to being a "radical departure from the original Internet", the Java VM has been doing this in the applet engine since the mid-'90s. It's a complicated platform to work with, though some applications do rely upon it, and it's the source of the most high-profile security holes found in the Java platform overall (<a href="https://www.google.com/search?q=java+sandbox+vulnerabilities" rel="nofollow">https://www.google.com/search?q=java+sandbox+vulnerabilities</a>).
I'm sorry, but the suggestion is plain retarded. The author is probably too young and/or clueless to understand what kind of hell the Internet was 15 years ago.<p>If he wants to experience how it feels, he should just start building Flash-based websites.
Isn't this already invented and called an iPad or Android? (Or, more seriously, this is called NaCl.)<p>But you want to have OpenGL (with shader-compiler security bugs you could drive a truck through!). You want to read from cameras and disks, and post notifications, and go fullscreen, and read/write audio samples, and do all the other unsafe things that browsers are currently implementing.<p>The syscall interface that's explored in the paper doesn't do any of those things; software rendering only, etc.<p>The challenge in this work is exposing the advanced high-end functionality (GL, media, window system, etc.) in a way that's secure and can't be abused. The challenge <i>isn't</i> in defining a syscall interface that lets you run all-software apps (NaCl has solved that problem, but so did the JVM and qemu...). I feel like this paper completely ignores the real issues, and focuses on problems that have been solved well enough or are already understood...
He says that a minimal environment is required:<p>"...all you need is a native execution environment, a minimal interface for persistent state, an interface for external network communication and an interface for drawing pixels on the screen..."<p>Compared to a modern browser, you've lost the ability to upload and download files, the ability to open URLs in other apps (OAuth, anyone?), the ability to play sound, the ability to use hardware acceleration for video or 3D graphics....<p>He goes on to say: "This is an interface that is small enough that we would have a hope of making sure that it is bug free." I'm guessing that once you add all the extra stuff you need to reach feature-parity with current browsers, you're going to reach bug-parity with them too.
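For concreteness, here's roughly what that tiny surface amounts to, sketched as a TypeScript interface purely for readability (the names are my own invention, not the paper's actual ABI):<p><pre><code>// A hypothetical sketch of the "minimal environment" quoted above.
// Invented names for illustration; the paper defines its own low-level ABI.
interface MinimalClientEnvironment {
  // Persistent state: an opaque blob store, not a filesystem.
  stateRead(key: string): Uint8Array | null;
  stateWrite(key: string, value: Uint8Array): void;

  // External network communication: raw packets in and out; no HTTP,
  // no cookies, no caching. The app brings its own protocol stack.
  netSend(packet: Uint8Array): void;
  netReceive(): Uint8Array | null;

  // Drawing pixels: one framebuffer blit; no fonts, no widgets, no
  // text selection. The app brings its own rendering engine.
  blitPixels(x: number, y: number, w: number, h: number,
             rgba: Uint8Array): void;
}</code></pre><p>Everything missing from that list (file dialogs, sound, GPU access) either gets bolted on, growing the supposedly bug-free surface, or gets reimplemented on top of it, which just reintroduces the bugs one layer up.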
I like the idea. I think something similar is already in place: there are JS libraries (eg. jquery) which are required for a website to be rendered. If the library is loaded from a central place (eg. google.com, or the creator of the library), then it can be cached and it doesn't have to be downloaded every time. In the same way, there could be different downloadable browsers, and you could require one for your website.<p>As a developer, it would be amazing if I could make every user run the browser I used to develop and test my website.
Isn't this what Java was about? You were supposed to write code that would run on any platform, and that code would download data from the web and display it in any way it wanted. The fact that it didn't work just shows that it is much easier to create UIs that use a common set of standards, instead of creating your own ad-hoc UI for every new web site.
The biggest problem with this is that it eliminates OS-specific facilities, such as the clipboard, OS-style text selection, right click menus, window styling, etc. Yes, these features could be reimplemented, but they are <i>very</i> hard to get right, and near impossible if you are trying to match OS behavior on many platforms.<p>This suggestion is fundamentally no different than exposing a <canvas> element to a page, and saying "OK, implement your own rendering engine and draw whatever you want in this box." Let's say you wanted to implement a modern browser engine in the canvas, like WebKit. Well, you'd need a huge number of OS specific hooks to get things like the ones I listed above. Do you expose those via an OS API? Well, you just opened a huge number of vectors for an attack. Do you replicate these hooks in your own code? Well, you just created a crappy UX since you're never going to match the OS behavior 100%.
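To make the analogy concrete, here is the whole proposal in miniature, as a deliberately minimal sketch:<p><pre><code>// The <canvas> analogy: the page gets raw pixels and nothing else.
// The "text" below is just painted pixels, so selection, copy/paste,
// right-click menus, find-in-page, and screen readers all stop working.
const canvas = document.createElement("canvas");
canvas.width = 640;
canvas.height = 60;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.font = "16px sans-serif";
  ctx.fillText("You can read this, but you can't select or copy it.", 10, 30);
}</code></pre><p>Every OS facility you want back from there, you rebuild yourself, imperfectly, on every platform.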
Nothing to see here. Noob who hasn't really felt the pain of Flash or Shockwave reinvents it. Seems like HN is more and more full of people who don't really have a clue... :-/
Web-apps (and the WWW browsers making them possible) were supposed to be the solution to the portability problem. They're not and layering more crap on top of them is not the solution either.<p>Leaving other (e.g., security) issues aside, this plan would make things far worse. Would new processors be second-class citizens? Would WWW browsers have to (incompatibly) transform themselves into emulators/VMs for the different kinds of "native code" found on the Wild Wild Web?<p>Most importantly: Would this be the first step toward a closed/proprietary WWW? "View Source" is still useful in this regard, despite the amount of useless JavaScript obfuscation we're seeing today.
It's disappointing that Microsoft Research is spending resources trying to solve a problem that Microsoft's Internet Explorer is creating: a frustrating cross-browser development process due to longstanding legacy clients.
The point of our current ecosystem is that you can write a moderately portable app (HTML, CSS, JS), and it is able to run on an x86 OSX browser, or an ARM-based Android phone, etc.<p>Either you're going to write native machine code, in which case you lose portability, or you're going to be writing intermediary byte code. That approach has been tried before and failed as a ubiquitous open web standard, e.g. the JVM and the .NET CLR.<p>Today's situation is messy, but it works. Every once in a while there is a crazy idea that is actually pretty decent... <i>'downloading /browser.exe'</i> is not one of them.
I worked on Microsoft's Windows 2000 team back in the late 1990s. There was an experimental project to sandbox native code using just NT's ACLs. Fortunately, I don't think anything became of the project because it sounds like an infinite game of Whac-a-Mole.
Back in the day, this was the original promise of Java. Content would be rendered inside of applets. ISTR a startup called Texture that tried to do this. They provided pixel-level design tools, and the result would compile to a .java file.
>Howell knows how to do this<p>Well, why doesn't he explain how we safely present a system API to arbitrary code?<p>The whole point of the browser is to provide a method for cross-platform UI, abstracting the display of an application from the underlying OS. If you wanted to support more than one operating system, you'd have to recompile your binaries against all the system APIs that you wanted to support: SysV, WinNT, Darwin, etc.<p>And if you were to present an abstraction which would transparently handle the system API differences, then you're looking at a JVM, which has been painfully demonstrated, time and time again, to be extremely hard to protect against privilege escalations.
This is crazy. So crazy and ambitious. Running a full software stack on a very limited syscall set? You're basically re-defining an OS abstraction, much more so than Firefox OS or Chrome OS. It's the logical continuation of Android and iOS, at the browser level. So now your browser/OS is only comprised of $(curl | exec -)?<p>But there are so many problems to solve. Skimming through the paper shows OP has thought of more problems than me, but there are so many more (yet unforeseen) to solve before this could become even a bit usable. But if you succeed, Microsoft should be worried (and kill your project).
Hmmm, an article that seems to pine for the days of true client/server applications...<p>Wait, what's this?
<a href="http://research.microsoft.com/en-us/people/howell/" rel="nofollow">http://research.microsoft.com/en-us/people/howell/</a><p>> microsoft.com<p>...oh.
Yes, this might make securing your device easier. But more and more, what I value is on the network, not my local hard drive. Be it a local network git repository, my ISP's email system, or my cloud-based apps (including my bank's internet portal).<p>So this hypothetical browser.exe with "only" network connectivity still needs just as good a security model as today's browser. And how do I know the new browser.exe I downloaded today doesn't have a poisoned security model that allows it to sniff my internet banking credentials and send them to the bad guys?
So, does the web developer have to develop a browser for each platform then? Does each web dev have to write their own image library, etc.? Interesting idea, but there have to be more effective ways of solving this problem.
.EXEs are for Windows.<p>How exactly is this supposed to work for clients on different platforms, like 32-bit Windows, 64-bit OS X, Linux, ARM, etc.?
I can see this fixing some problems I've noticed with browser-based apps in the enterprise.<p>It could also be the next IE6, lying in wait for an unwary developer.
Dream on, there is and forever will be only one way: the js/html way. Because it is a privilege to execute one's code on a client machine, not a right. And I want to know what code you are executing. So the future is transparent programming. If you want more capabilities, ask browser vendors for more APIs. That doesn't make html/js a great solution; both are just horrible and broken, but that's what we have to work with.