The "everybody has bugs" response is intellectually dishonest. Yes, everybody has bugs, but most people's bugs aren't an intentional feature that a trained monkey ought to have known was a bad idea.<p>- Someone implemented a YAML parser that executed code. This should have been obviously wrong to them, but it wasn't.<p>- Thousands of ostensible developers used this parser, saw the fact that it could deserialize more than just data, and never said "Oh dear, that's a <i>massive</i> red flag".<p>- The bug in the YAML parser <i>was reported</i> and the author of the YAML library genuinely couldn't figure out why this mattered or how it could be bad.<p>- The issue was reported to RubyGems multiple times and they <i>did nothing</i>.<p>This isn't the same thing as a complex and accidental bug that even careful engineers have difficulty avoiding, after they've already taken steps to reduce the failure surface of their code through privilege separation, high-level languages/libraries, etc.<p><i>This is systemic engineering incompetence that apparently pervades an entire language community, and this is the tipping point where other people start looking for these issues.</i>
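To make the first point concrete: Psych's `YAML.load` will instantiate any class named in the document, not just build plain data. A minimal sketch; the `AuditLog` class and payload here are mine, and real exploits used gadget classes already on Rails' load path:

```ruby
require "yaml"

# A stand-in for any class that happens to be on the load path.
class AuditLog
  attr_reader :command
end

# Plain data comes back as plain data, as you'd expect:
YAML.load("--- {name: demo}")  # => {"name"=>"demo"}

# But a !ruby/object tag tells the parser to allocate an arbitrary
# class and populate its instance variables from attacker input:
payload = "--- !ruby/object:AuditLog\ncommand: rm -rf /\n"

# Ruby 3.1 later made YAML.load safe by default; on such versions,
# unsafe_load reproduces the old behaviour this thread is about.
loader = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
obj = YAML.send(loader, payload)
obj.class    # => AuditLog
obj.command  # => "rm -rf /"
```

Deserialization alone isn't code execution yet, but once an attacker can conjure arbitrary objects with arbitrary instance variables, finding one class whose methods do something dangerous with those ivars is usually only a matter of time.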
This quote caught my attention:<p><pre><code> There are many developers who are not presently active on a Ruby on Rails
project who nonetheless have a vulnerable Rails application running on
localhost:3000. If they do, eventually, their local machine will be
compromised. (Any page on the Internet which serves Javascript can, currently,
root your Macbook if it is running an out-of-date Rails on it. No, it
does not matter that the Internet can’t connect to your
localhost:3000, because your browser can, and your browser will follow
the attacker’s instructions to do so. It will probably be possible to
eventually do this with an IMG tag, which means any webpage that can
contain a user-supplied cat photo could ALSO contain a user-supplied
remote code execution.)
</code></pre>
That reminded me of an <i>incredible</i> presentation WhiteHat did back in 2007 on cracking intranets. Slides[1] are still around, though I couldn't readily find the video.<p>[1]: <a href="https://www.whitehatsec.com/assets/presentations/blackhatusa07/0807blackhat_hacking.pdf" rel="nofollow">https://www.whitehatsec.com/assets/presentations/blackhatusa...</a>
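To make the quoted browser-to-localhost scenario concrete: same-origin policy restricts *reading* cross-origin responses, not *sending* requests, so any page your browser renders can fire a POST at localhost:3000. The rough shape of the request is below (route and class name are placeholders; this sketch builds the request object but deliberately never sends it):

```ruby
require "net/http"

# The CVE-2013-0156 vector: an XML body whose node is typed "yaml",
# which vulnerable Rails parameter parsing handed straight to YAML.load.
body = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <exploit type="yaml">--- !ruby/object:SomeGadgetClass {}</exploit>
XML

req = Net::HTTP::Post.new("/any/route")
req["Content-Type"] = "application/xml"
req.body = body

# A browser can be induced to send the equivalent via a hidden form or
# script; the attacker never needs to read the response, only to send.
# Net::HTTP.new("localhost", 3000).request(req)  # don't actually fire this
```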
When I look at the Ruby/Rails community, the word that comes to my mind more than any other is <i>hubris</i>.<p>You see this in things such as security issues being marked as wontfix until they are actively exploited (e.g. the Homakov/GitHub incident), in the attitude that developer cycles are more expensive than CPU cycles, and on a more puerile level in the tendency towards swearing in presentations.<p>I've always had the impression that the Rails ecosystem favours convenience over security, in an Agile Manifesto kind of way (yes, we value the stuff on the right, but we value the stuff on the left even more). One of the attractions of Rails is that it is very easy to get stuff up and running, but some of the security exploits I've seen cropping up recently make me pretty worried about it. I get especially concerned when I see SQL injection vulnerabilities in a framework based on an O/R mapper, for instance.
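On the SQL injection point: the classic failure mode is string interpolation into the query, which an O/R mapper makes easy to avoid but doesn't forbid. A self-contained sketch (the helper name is mine):

```ruby
# Naive query building by interpolation, the root of SQL injection:
def naive_where(name)
  "SELECT * FROM users WHERE name = '#{name}'"
end

naive_where("alice")
# => "SELECT * FROM users WHERE name = 'alice'"

# A crafted value rewrites the query's logic instead of matching a name:
naive_where("' OR '1'='1")
# => "SELECT * FROM users WHERE name = '' OR '1'='1'"

# ActiveRecord's parameterized forms avoid this, but only if you use them:
#   User.where("name = ?", params[:name])
#   User.where(name: params[:name])
# The injectable form is still a one-liner away:
#   User.where("name = '#{params[:name]}'")
```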
I think the recent Rails, Java, RubyGems and other vulnerability issues have been an absolute boon to the industry. And not just for the increased business I think most security consultants are going to be seeing.<p>The exploits have exposed and hammered home the myriad places where applications present unexpected side channels and larger attack surfaces than you'd think. These issues have left a broader range of people vulnerable, and I think opened a lot of eyes to the need for a sense of security and what that really means.<p>Combine that with the level of explanation we've seen in at least the Rails and Ruby exploits, and it's been a tremendous educational opportunity for a lot of people who will benefit greatly from it, and by proxy their users.<p>When the idea of a "SQL Injection" first became really prevalent, we saw an uptick in concern for security amongst framework developers, as far as I could tell. I think this will help get some momentum going again.<p>Speaking as a non-expert on the subject, security is all about a healthy sense of paranoia, across the board :)
It would be interesting if someone wrote a worm that just took all the vulnerable Rails apps offline. That way we would have less to worry about from a million compromised databases. It could be launched from a bookmarklet run from the Tor browser, and would probably exhaust every IP address in a few days. It would also land whoever did it in jail for a really long time.
This is a pretty good example of why I hate big frameworks. They are simply too big to prevent stupid issues like YAML parsing buried inside JSON and XML handling.<p>If you are like me, you would expect that YAML was used in the configuration files and <i>nowhere</i> else. A small framework like Sinatra wouldn't have been big enough to hide an issue like this.
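For the record, the stopgap the Rails security advisory circulated (as I recall it) was to reach into exactly these hidden corners and switch them off, which rather proves the point about surface area:

```ruby
# In an initializer: strip the dangerous type coercions out of the
# XML parameter parser (per the CVE-2013-0156 advisory, as I recall):
ActiveSupport::XmlMini::PARSING.delete("symbol")
ActiveSupport::XmlMini::PARSING.delete("yaml")

# Or, if the app never accepts XML parameters, disable that parser outright:
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)
```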
> The recent bugs were, contrary to some reporting, not particularly trivial to spot. They’re being found at breakneck pace right now precisely because they required substantial new security technology to actually exploit, and that new technology has unlocked an exciting new frontier in vulnerability research.<p>What technology is he talking about here?
This was a hugely helpful big-picture overview of the recent vulnerabilities. Everyone, please go read it.<p>I had been meaning to get some context for the recent spate of security problems and this provided that in spades. Thanks for taking the time to write it up and post it.
> The first reported compromise of a production system was in an industry which hit the trifecta of amateurs-at-the-helm, seedy-industry-by-nature, and under-constant-attack. It is imperative that you understand that all Rails applications will eventually be targeted by this and similar attacks, and any vulnerable applications will be owned, regardless of absence of these risk factors.<p>Who was the first reported compromise of a production system?
All the RubyGems stuff is happening at a high rate and I understand that over 90% of the Gems are now verified and it looks like nothing was backdoored but I couldn't find a good summary of the current situation so I have a couple of questions.<p>1) Is it currently safe to "bundle update" and be confident that only verified Gems will be provided? I don't mind errors on any unverified ones but don't want to download them.<p>2) Is there a drop in replacement for RubyGems? The problems that have occurred this month would have been multiplied if RubyGems was unavailable at the time Rails had an apocalyptic bug.
To me it seems that all of this is due to the obsession with implicit behaviour in Rails, and to some extent Ruby.<p>I hope they learn from this and stop chanting "convention over configuration" when told that explicit is better than implicit.
Is this YAML vulnerability something that can be patched in relatively short order without Rails itself having to be completely rewritten?<p>Or should I basically just not run Rails on any machine ever anymore, get a different web server, and start implementing my own request routing and ORM without any sort of YAML-parsing magic?<p>>One of my friends who is an actual security researcher has deleted all of his accounts on Internet services which he knows to use Ruby on Rails. That’s not an insane measure.<p>So anyone who uses Twitter, for example, could have their passwords and other data stolen through this exploit?
So I went through my heroku closet and cleaned everything up (pulling the plug on unneeded apps and making sure needed apps were up to date).<p>My question: do these security issues affect Sinatra apps?
"Any page on the Internet which serves Javascript can, currently, root your Macbook if it is running an out-of-date Rails on it."<p>Why are you running Rails as the root user? This is a bad idea.<p>EDIT: I'm not really into client-side JavaScript these days, but when did browsers start allowing JavaScript to connect to anything except the server from which it came? That would be yet another Bad Idea.
Is Rails moving to a YAML (or almost-YAML) parser that does <i>not</i> execute code for future major releases? I find it hard to believe that such functionality is used often. Until then, as the article says, people will just keep finding zero-days. This seems like the only logical choice for the Rails core team.
New internet law: any sufficiently popular platform or framework will attract ever more compromise and malware attacks. Anyone still running Wordpress knows this all too well.
Some time ago I asked certain Ruby people how to dynamically load Ruby code (for configs). They told me it's Wrong. Seems that in practice the idea wasn't much worse than YAML after all.<p>I am still convinced that configs and templates should be treated as executable code and are best implemented in the same language they're used from. At least it makes certain things blatantly obvious. (It also makes a lot of other things possible without any extra coding/learning.)
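For what it's worth, the config-as-code approach described above can be this small (all names here are mine); the point is that evaluation is explicit and visible rather than smuggled in by a parser:

```ruby
# A config file that is openly Ruby, e.g. config.rb containing:
#   host "localhost"
#   port 3000
class AppConfig
  def self.load(source)
    cfg = new
    cfg.instance_eval(source)  # explicitly executes the config; no surprises
    cfg
  end

  # Each setting doubles as setter (with an argument) and getter (without).
  %i[host port].each do |key|
    define_method(key) do |value = nil|
      if value.nil?
        instance_variable_get("@#{key}")
      else
        instance_variable_set("@#{key}", value)
      end
    end
  end
end

cfg = AppConfig.load("host 'localhost'\nport 3000")
cfg.host  # => "localhost"
cfg.port  # => 3000
```

Whether you consider the explicit `instance_eval` a feature or a liability is exactly the debate in this thread; at least nobody can claim they didn't know the config was code.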
Is there no hardened version of Psych which lets you either disable object deserialization, or whitelist classes? That would seem like the safest option right now to guard against coming vulnerabilities in Rails in this regard.
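Later Psych versions grew exactly this: safe_load refuses anything beyond plain data types unless the class is explicitly whitelisted. (The `permitted_classes:` keyword is from newer Psych releases; earlier versions took positional whitelist arguments.) A sketch:

```ruby
require "yaml"
require "date"

# safe_load only builds plain data types by default:
YAML.safe_load("--- {a: 1}")  # => {"a"=>1}

# Anything tagged with an arbitrary class is rejected outright:
begin
  YAML.safe_load("--- !ruby/object:OpenStruct {}")
rescue Psych::DisallowedClass
  rejected = true
end

# An explicit whitelist opts classes back in, one by one:
YAML.safe_load("--- 2013-01-28", permitted_classes: [Date])  # => a Date instance
```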
Why do people write things like "We Should Avoid #’#(ing Their #()#% Up" instead of "We Should Avoid Fucking Their Shit Up"?<p><a href="http://www.youtube.com/watch?v=dF1NUposXVQ" rel="nofollow">http://www.youtube.com/watch?v=dF1NUposXVQ</a>
How feasible would it be to have a gem that sits in the middleware stack, checks for possible attacks before the request gets any further, and blocks/shares the IPs of people fishing for exploits?<p>I could see it as a service company that shares blacklist info between sites and can even discover new exploits from the "bad" requests.
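A minimal Rack-style sketch of the filtering half (the class name, pattern, and everything else here are my assumptions; real products in this space do far more, and naive regex filters are easy to bypass):

```ruby
require "stringio"

# Rejects request bodies that smell like YAML object-deserialization
# payloads before they reach the framework's parameter parsing.
class YamlPayloadFilter
  SUSPICIOUS = /!ruby\//

  def initialize(app)
    @app = app
  end

  def call(env)
    body = env["rack.input"].read
    env["rack.input"].rewind
    if body =~ SUSPICIOUS
      [403, { "Content-Type" => "text/plain" }, ["Forbidden\n"]]
    else
      @app.call(env)
    end
  end
end

# Exercising it without a real server:
inner   = ->(env) { [200, {}, ["ok"]] }
filter  = YamlPayloadFilter.new(inner)
benign  = { "rack.input" => StringIO.new('{"name":"demo"}') }
hostile = { "rack.input" => StringIO.new('<a type="yaml">--- !ruby/object:X {}</a>') }

filter.call(benign).first   # => 200
filter.call(hostile).first  # => 403
```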
Why don't we have "building codes" for software?<p>There was a time when anyone who claimed to have the ability could design and build things like bridges and buildings. After enough of them collapsed due to repeated, avoidable mistakes, we said no, you can't do that anymore, you need to be licensed to design and build buildings, and furthermore you have to follow some basic minimum conventions that are proven to work. And you and your firm have to take on personal liability when you certify that your design and construction follow those basic best practices.
As much as I was a fan of developing Ruby apps, I was constantly shocked by the lack of engineering rigour within the community: little security concern, unstable APIs, basically an absence of serious software engineering.<p>It would be good if all this served as a clarion call to the Ruby community to improve things holistically, rather than continuing the current trend of band-aid fixes.
Code execution while deserializing/parsing data is my first and utmost concern. Nowadays I'm in Clojure land and it's still not entirely clear to me what I can and cannot do, and what the language has to offer so that data doesn't contain rogue code that is going to be executed.<p>In Common Lisp, for example, as far as I know you can set a flag so that the reader does "no evaluation ever" (if I understand things correctly) and, hence, if you're not using eval yourself, nothing is ever going to be evaluated.<p>But how would that work in Clojure? And what about other languages? Ruby? Haskell? Java? C#?<p>I think the ability to execute code has become <i>the</i> most important security issue (more than buffer overflows/overruns, which can now be prevented, sometimes even provably ruled out thanks to theorem provers).<p>More thought should be put into explaining how/when a language/API can execute code and how it should/can be used to prevent such a thing from happening.
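In Ruby's case, the practical dividing line looks like this: parsers whose grammar can only build plain data versus loaders that can reconstruct arbitrary classes (Marshal is just the best-known example of the latter). As far as I know, Clojure has the same split: clojure.core/read-string honours *read-eval*, while clojure.edn/read-string only builds data.

```ruby
require "json"

# JSON.parse can only ever build hashes, arrays, strings, numbers,
# booleans and nil; its grammar has no way to name a Ruby class.
JSON.parse('{"role": "admin"}')  # => {"role"=>"admin"}

# Marshal (like pre-fix YAML.load) reconstructs arbitrary objects from
# the byte stream, so it must never see untrusted input:
blob = Marshal.dump([1, :two, "three"])
Marshal.load(blob)  # => [1, :two, "three"]; fine for trusted data only
```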