<i>It's a really powerful way to debug as you can automate much of the debugging process and do things that aren't practical without scripting.</i><p>IMHO when debugging software needs to be automated to the point that a library needs to be written (and itself debugged), there's an underlying problem which can't be solved by adding more layers of complexity - as that will only introduce <i>more</i> bugs. When those bugs end up in the software you're using to debug, things can quickly take a turn for the worse.<p>Is that code really taking the user's input (a string), getting object addresses and offsets (numbers) from that, then converting those back into strings to build a command string, which then gets parsed back into numbers for the debugger to ultimately use to create a watchpoint? I think that is itself a good example of how the "more code, more bugs" principle can apply: all this superfluous conversion code has introduced a bug.<p>Here's a good article about that, although it doesn't mention the situation where the bugs you introduce end up being in the software you need to use to remove bugs...<p><a href="http://blog.codinghorror.com/the-best-code-is-no-code-at-all/" rel="nofollow">http://blog.codinghorror.com/the-best-code-is-no-code-at-all...</a>
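To make the round-trip concrete, here's a minimal Python sketch of the conversion pattern the comment describes - numbers formatted into a command string that the debugger must then re-parse. The names and command text are hypothetical, not Chisel's actual code; the point is that each format/parse hop is an opportunity for the two sides to disagree:

```python
# Hypothetical sketch of the number -> string -> number round-trip
# described above (names and command format invented for illustration).

addr = 0x7F8A2C40   # object address, already a number
offset = 16         # field offset, already a number

# Step 1: numbers -> string, to build a debugger command.
cmd = "watchpoint set expression -- %s + %s" % (hex(addr), hex(offset))

# Step 2: the debugger parses that string back into numbers.
# If step 1 formats in one base and step 2 parses in another,
# a bug appears that neither side has on its own.
```

If the scripting API accepted the numbers directly, both conversion steps - and the bug surface they create - would disappear.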
He identified a bug in the function `evaluateIntegerExpression`, which parses an integer literal from LLDB but erroneously always parses it as base-16.<p>But the pull request he created (<a href="https://github.com/facebook/chisel/pull/117" rel="nofollow">https://github.com/facebook/chisel/pull/117</a>) didn't fix that; instead it just replaced that function call with manual parsing of the literal in the one specific place he was having problems with.<p>Did I miss something there? That seems like a really weird "solution". Why not just fix the original function?
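For anyone unfamiliar with this class of bug, here's a hedged Python illustration (not Chisel's actual code) of what "always parsing in base-16" does to decimal input, and the standard one-line fix using base 0, which makes `int()` infer the base from the literal's prefix:

```python
# Hypothetical illustration of the bug class described above;
# function names are invented, this is not Chisel's actual code.

def evaluate_integer_buggy(literal):
    # Always assumes base-16: the decimal literal "10" comes back as 16.
    return int(literal, 16)

def evaluate_integer_fixed(literal):
    # Base 0 tells Python to infer the base from the prefix:
    # "10" -> 10, "0x10" -> 16, "0o10" -> 8.
    return int(literal, 0)
```

With the buggy version, `evaluate_integer_buggy("10")` returns 16; the fixed version returns 10 for `"10"` and 16 for `"0x10"`. Fixing it at this one point would repair every caller, which is presumably why just patching one call site seems odd.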