This should also be a reminder to everyone that you shouldn't rely on a single point of failure for your deploys. It's something we in the Python community have already encountered (and hopefully learned from), thanks to the historical unreliability of our equivalent package repo, PyPI.<p>Have an internal repo that's accessible by your deploy servers and that locally caches anything you might previously have needed to fetch externally.
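A minimal sketch of that setup: point your Gemfile's source at an internal caching gem server instead of rubygems.org directly. The hostname here is an assumption — substitute whatever caching proxy you run internally.

```ruby
# Gemfile — hypothetical internal gem mirror that caches upstream fetches
# (gems.internal.example is a placeholder, not a real service)
source "http://gems.internal.example"

gem "rails"
gem "pg"
```

With that in place, a rubygems.org outage only blocks gems your proxy has never seen before.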
I'm surprised that rubygems.org of all places did not see fit to patch a vulnerability that has been known for multiple weeks, that was declared incredibly dangerous, and for which ready-made exploit kits exist.<p>rubygems.org is a central distribution platform trusted by tons and tons of projects. As such, it is the one site you probably do not ever want compromised. Imagine the damage an attacker could do by uploading backdoored versions of various popular gems.<p>I know - applying security patches is time-consuming and we are all afraid of breakage. But the moment rubygems.org stepped up to be a semi-official central distribution point for gems, I would have hoped they also took on the responsibility that goes along with that.<p>If this were some new, unknown 0day exploit, I would be much more understanding, but this vulnerability was known to exist, known to be dangerous, and known to be exploited.
The DAY I manage to convince the big wigs where I work that we should switch from a typical shared environment to Heroku, this happens.<p>Talk about luck. :(<p>Hopefully I can spin this and not leave a bad taste in their mouths. We (engineers) understand what's happening, management doesn't and they don't give a shit.
I always assumed that Heroku would have an internal proxy for gems. Seems like 80% of users would probably be fetching the same gems that another user might have just fetched. Perhaps something like this could be versioned or snapshotted so that in the event of something like this, you could roll your cache back to that snapshot and let people deploy who had gems in that cache.<p>I'm just thinking out loud.
Heroku engineer here: Ruby deploys are back online if you don't require any new gems, i.e. you can deploy from the existing cache. We're still working on resolving the larger problem with RubyGems.
This is why you vendor gems; the currently accepted practice of not vendoring gems is dead wrong.<p>I've been chastised before in the Rails IRC channel for this. I strongly believe the source of every dependency possible should be in your repository.
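For reference, Bundler already supports this workflow out of the box — a sketch, assuming a standard Gemfile:

```shell
# Copy every resolved gem into vendor/cache (commit that directory)
bundle package

# On the deploy server, install only from the vendored copies,
# with no network call to rubygems.org
bundle install --local
```

Deploys then depend only on what's in your repository, at the cost of a larger checkout.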
Between this hack and the recent Rails vulnerabilities, it seems like a perfect storm. I wonder whether the attacker tried to tamper with the Rails gems to catch late updaters, or to knock out RubyGems so that vulnerable sites couldn't update to the latest versions and would stay vulnerable.
Survey: Do you depend on RubyGems for every deploy? Or do you have your own gem server? Or cache them at some point earlier in your pipeline?<p>We rely on RubyGems and had a meeting yesterday about changing that when one of the gems we use had a version just disappear.
This is the responsible thing to do. Going through the gems and verifying they aren't compromised is a lot of work. We should be thankful for all the effort the rubygems maintainers and other volunteers are putting into cleaning up this mess.
A tangent, but I always thought "YAML" was pronounced /'jæm.ḷ/, however the post's use of "an YAML" suggests it's actually pronounced /waɪ.eɪ.ɛm.ɛl/. Weird.
If you are not updating gems, does it hurt to continue deploying with a custom buildpack? Heroku shouldn't re-pull gems if the gemspec hasn't been altered. Is that logic correct?<p>Does anybody have another suggestion for safely working around this issue? I don't have a clear sense of how long this will take to resolve and don't want to slow down our release pace too much.
I thought that at one point RubyGems had a system in place to sign the contents of a gem? If not, this might be an interesting addition. You could store a digest alongside every gem file, allowing you to validate the authenticity of the gem file... I'm sure others have something to add to this idea.
Being totally new to RoR (trying to learn it), I'm trying to get my head around the scope of this.<p>When did the compromise happen? Was it compromised yesterday or only found out yesterday?<p>I have default gems installed on my system and haven't updated anything since the big Rails security issue that was reported a bit ago.<p>It'd be great to get some guidance on what to do.
Seems odd, however, that status.heroku.com lists the issue only on the development side, suggesting production apps are not affected? <a href="http://screencast.com/t/L36Hpx5dx" rel="nofollow">http://screencast.com/t/L36Hpx5dx</a>
lmao, Ruby and RoR put PHP to shame in terms of security flaws: a framework that makes you unknowingly write security holes, ridiculous flaws discovered on a daily/weekly basis, package management hacked, etc. I have never seen holes as ridiculous as RoR's, even in the CodeIgniter framework. Where are the RoR-haters when we need them?