Here is a profiler developed by Dropbox using the same technique described in the article:

- https://blogs.dropbox.com/tech/2012/07/plop-low-overhead-profiling-for-python/

- https://github.com/bdarnell/plop
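For reference, the core of the technique is just a timer signal that periodically interrupts the process and records the current stack, so overhead is bounded by the sampling rate. A minimal sketch of that idea (not Plop's actual code; the names and the 10ms interval are my own choices, Unix only):

    import collections
    import signal
    import traceback

    samples = collections.Counter()

    def _handler(signum, frame):
        # Summarize the interrupted call stack as a hashable key.
        stack = tuple(
            (f.filename, f.lineno, f.name)
            for f in traceback.extract_stack(frame)
        )
        samples[stack] += 1

    def start(interval=0.01):
        signal.signal(signal.SIGPROF, _handler)
        # ITIMER_PROF fires based on CPU time the process consumes,
        # so an idle process isn't sampled at all.
        signal.setitimer(signal.ITIMER_PROF, interval, interval)

    def stop():
        signal.setitimer(signal.ITIMER_PROF, 0.0)

    if __name__ == "__main__":
        start()
        total = sum(i * i for i in range(5_000_000))  # CPU-bound work to sample
        stop()
        for stack, count in samples.most_common(5):
            filename, lineno, name = stack[-1]
            print(f"{count:5d}  {name} ({filename}:{lineno})")

The hot functions show up simply as the stacks with the most samples, which is why this can run on a live process: the cost is a few microseconds per tick instead of instrumenting every call like cProfile does.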
Reducing CPU load also:

* reduces power usage and wear and tear on hardware
* gives more capacity for traffic surges
* gives more headroom for new features
* enables running on a smaller instance

This isn't an argument against slow code, more a suggestion to tune out unnecessary work.
Here is another Python profiler intended for live use: https://github.com/what-studio/profiling
This question might be a bit naive, but how is this approach any better than or different from monitoring tools like New Relic, which do the profiling for you?
> It’s a large Python application (~30k LOC) which handles syncing via IMAP, SMTP, ActiveSync, and other protocols.

In what context is 30k LOC a large application? 30k LOC is small enough that one programmer can write it and easily keep an overview of the entire codebase. Maybe it's a typo for 300k LOC.