It turns out I was wrong about log-runtime-metrics! Please see the followup post: http://siliconisland.ca/2013/04/26/beware-of-the-zombie-process-apocalypse/
Thanks for blogging this. Today I had my account manager and another support person from Heroku pushing me to enable that experimental feature so I could see the memory usage of our dynos. I had reported on Twitter and in a support ticket that the memory reported by New Relic didn't match the values our account manager sent me. He sent me this because we were trying to run 4 Unicorn processes on a 1 GB dyno, and New Relic was reporting that our app never went over 256 MB, while our account manager sent a different trace of memory usage.

I have lost all faith in the values that New Relic reports from Heroku.
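For context, running four Unicorn workers on a single dyno typically looked something like the sketch below at the time. This is a minimal, hypothetical config/unicorn.rb following the standard Heroku/Unicorn guidance of that era; the file name, environment variable, and values are assumptions for illustration, not the commenter's actual settings.

```ruby
# config/unicorn.rb -- hypothetical sketch: four Unicorn workers sharing one dyno.
# With ~256 MB per worker, four workers roughly fill a 1 GB dyno.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 4)
timeout 30
preload_app true

before_fork do |server, worker|
  # Disconnect shared connections before forking so each worker
  # re-establishes its own afterwards (standard preload_app practice).
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```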
In addition to the actual topic of the post (issues with a Heroku beta feature), I find it pretty interesting for a number of reasons.

It seems that blog posts are the preferred method of reporting deeper issues. I guess it's a way of publicizing the work of tracking down the problem and earning developer cred. (Do people write blog posts for pull requests too?)
Also, it's impressive to see how fast Heroku's Ryan Daigle replied. Hopefully this issue will be short-lived as the runtime metrics are pretty handy.
No. Freaking. Way. I noticed this exact same issue pop up in my New Relic interface. Multiple times a day our app would get H13s, crash, and then restart. I have been trying to get to the bottom of this ever since. Awesome detective work on this. Our app (and tech team) thank you very much!