I'm really glad to see someone scrutinizing the use of JavaScript this way. JS is great for a lot of things - but part of the price of using it the way we are, using it to do more and more things formerly done by native code, is that it opens up more and more vulnerabilities that used to exist only in native code. Progressive enhancement is the way to go - and I think there are far too many egotistical developers out there who assume that their web app is the special one that really, really, really needs JavaScript. It probably isn't. Every time I see that, it reminds me of Raymond Chen's stories about one poorly designed Windows app after another whose developers were convinced that their app was the special snowflake that should be allowed to break the rules.

Get over yourself - and more importantly, stop putting barriers between customers and your bank account. Virtually everything you'd do to make a site progressively enhance with JavaScript is something that will make the site more robust and tolerant of unexpected visitors and their environments. It's a bad idea to reject those visitors just because you didn't expect them.
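To make that concrete, here's a minimal sketch of the progressive-enhancement pattern. Everything in it (the form id, the /search endpoint, the response shape) is made up for illustration: the plain HTML form works on its own, and the script only upgrades it to an in-page fetch when JavaScript is actually running.

    // Assumes the page already contains a working, plain HTML search form:
    //   <form id="search" action="/search" method="get">
    //     <input name="q"> <button>Search</button>
    //   </form>
    //   <div id="results"></div>
    // Without this script, submitting the form still does a normal page load.
    var form = document.getElementById('search');
    if (form && window.fetch) {
      form.addEventListener('submit', function (e) {
        e.preventDefault();
        var q = encodeURIComponent(form.elements.q.value);
        fetch('/search?q=' + q, { headers: { 'Accept': 'application/json' } })
          .then(function (res) { return res.json(); })
          .then(function (data) {
            // Hypothetical response shape: { html: "<li>...</li>" }
            document.getElementById('results').innerHTML = data.html;
          })
          .catch(function () {
            // On any failure, fall back to the non-JS behaviour.
            form.submit();
          });
      });
    }

If the script never loads, fails, or is blocked, nothing breaks - the form just submits the old-fashioned way.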
The author of this piece seems to be glossing over something rather important: if you need to provide both a JavaScript-enabled and a no-JavaScript way of doing the same work, you're effectively writing duplicated code. Whether that's something you actually want to take on should depend on your target audience, and on how much you really care about chasing after the folks who disable JavaScript or otherwise can't use the JavaScript-dependent version. There are probably accessibility concerns here too (how well do JavaScript-heavy apps work with screen readers?). But none of this means you're a bad person if you write a web app that requires JavaScript to function.
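For what it's worth, the two paths don't have to be written twice from scratch. A rough sketch (plain Node, no framework), reusing the hypothetical /search endpoint from the snippet above: one handler does the real work once and only the final render step differs, chosen by the Accept header. All names here are illustrative.

    var http = require('http');
    var url = require('url');

    function searchProducts(q) {            // stand-in for the real business logic
      return ['first hit for "' + (q || '') + '"', 'second hit'];
    }

    function renderItems(results) {         // shared fragment used by both paths
      return results.map(function (r) { return '<li>' + r + '</li>'; }).join('');
    }

    http.createServer(function (req, res) {
      var parsed = url.parse(req.url, true);
      if (parsed.pathname !== '/search') { res.statusCode = 404; return res.end(); }

      var results = searchProducts(parsed.query.q);   // the work happens once

      if ((req.headers.accept || '').indexOf('application/json') !== -1) {
        // JavaScript-enabled client (the fetch() path): return just the fragment.
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify({ html: renderItems(results) }));
      } else {
        // No-JS client (plain form submission): return a complete page.
        res.setHeader('Content-Type', 'text/html');
        res.end('<!doctype html><ul>' + renderItems(results) + '</ul>');
      }
    }).listen(3000);

There's still duplication in the rendering, which is the real cost the comment is pointing at - but the core logic doesn't have to fork.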