Whenever this comes up, I think of this quote from <i>The Practice of Programming</i> by Brian W. Kernighan and Rob Pike [0]:<p>> As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.<p>I found this lines up with my personal experience. I used to lean on interactive debuggers a lot, and still enjoy using them. They're fun and make for good exploring. But the act of figuring out where you want to print really makes you think in ways that interactive debugging cannot. I find the two forms really complement each other.<p>[0] <a href="https://www.cs.princeton.edu/~bwk/tpop.webpage/" rel="nofollow">https://www.cs.princeton.edu/~bwk/tpop.webpage/</a>
Personally, I think my biggest reason for using print debugging is... it works.<p>In C++ I often find the debugger doesn't find symbols on projects built with configure/Make. If I have a Java Gradle project I have no idea how to get it into a debugger. Python debuggers always seem fragile. Rust requires I install and use a "rust-gdb" script -- except on my current machine that doesn't work and I don't know why.<p>I'm sure in each case I could get a debugger working given enough time, but the only error I've ever had with print debugging is failing to flush output before a crash, and it's never been hard to search for "how to flush output" in whatever language I'm currently using.
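For what it's worth, the flush fix is usually a one-liner; a minimal Python sketch:<p><pre><code>import sys

# Make sure debug output survives an imminent crash.
print("about to call the suspect function", flush=True)

# Or flush everything explicitly before the risky call:
sys.stdout.flush()
sys.stderr.flush()

# Running with PYTHONUNBUFFERED=1 (or python -u) avoids the problem entirely.
</code></pre>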
I think the point about seeing the state over time is a great one.<p>But also I want to nitpick, because the title is one of my “favorite” pet peeves: the “The Unreasonable Effectiveness of ...” trope is now used (as in this article) by people who are trying to say that something is remarkably or surprisingly effective, but that’s not what the original essay was about at all!<p>“The Unreasonable Effectiveness of Mathematics in the Natural Sciences” was a philosophy-of-science piece by Eugene Wigner whose thesis was that there is no reasonable (rational, provable) basis for the degree to which our math abstractions and syllogisms happen to correspond to the physical universe.<p>It is self-evident that they do in fact correspond super well, but the original piece was about how weird and spooky that actually is, if you think about it at all. Math is super effective, and there is no reasonable basis that we yet know of for why it should be so effective. It’s unreasonably effective.<p>It’s such a perfect title for that piece, and it feels dirty or diluting when it’s just used to mean “remarkably effective.”
IDE vs Text editor. OOP vs Functional. Logger vs debugger. The holy wars that shouldn't be. Why can't we all be friends and accept that Vim is better than emacs.
Speed of iteration beats quality of iteration.<p>You can step through the program, reason about what's going on, tracking values as they change. But if you missed the moment, you have to start again from the beginning (time-traveling debuggers being rare). Or maybe you're looking at the wrong part entirely at this stage, and are just wasting time.<p>With print debugging you write a bit of code to test a hypothesis. Then you run it, and you keep running it, and especially if it's a UI program you play with the UI and see how the values change during that run. Ideally the loop of change the code -> see the result should be a few seconds.<p>You can then git commit or stash your prints, switch branches and compare behavior with the same changes applied. And at the end of the day if you walk away, your prints will still be there the next morning. The debugger doesn't produce any comparable tangible artifacts.<p>Once you do know where the problem is, and if it's not apparent what the problem is (most problems are pretty trivial once located), that's IMO the time to break out the debugger and slowly step through it. But the vast majority of problems are faster to solve through rapid iterative exploration with prints, in my experience (C, C++ for over a decade, Python, now JS/TS).
Print debugging is the only way in a distributed system the way we are building micro services these days. We just call it logging.<p>Edit: ..and do it in production
Print debugging is not that different from setting the logging level to DEBUG, and those logging calls should already be there in the code, giving meaningful insight, so I don't get why printing is so often ridiculed.<p>In over ten years of commercial work I used a debugger only a couple of times, and in most cases it was against someone else's code: usually when things were completely broken and I needed to get backtraces from multiple deadlocked threads, or lacked debugging symbols and things like radare were also required. There were also times when I manually called a syscall using gdb.<p>My opinion is that if you can't reason about the code helping yourself with just a couple of additional messages, the code is probably broken/too complicated to begin with and requires serious refactoring. I've never understood people stepping through a program hoping to find some mysterious creature somewhere along a huge stack of calls. In my career I have often seen people always debugging an application as a whole instead of separated modules. Dividing a problem is the key. The same key that allows me to still program using vim without autocompletion, keep APIs sane and coherent, and avoid dead code.<p>One really useful exception is when dealing with electronics. My friends programming hardware use debuggers all the time, and in this case it actually makes perfect sense because there is no way to print anything and things like hardware interrupts come into play.
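For example, in Python the whole setup is one call, and the debug statements stay in the code permanently (a minimal sketch):<p><pre><code>import logging

# Flip DEBUG to INFO (or WARNING) to silence the detail without touching call sites.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger(__name__)

key = "user:42"
log.debug("cache miss for key=%s", key)  # emitted only when level <= DEBUG
</code></pre>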
I've never understood print debugging, at least in a web dev/nodejs context.<p>I don't begrudge people having their own approach to things, but almost universally when I see people use print debugging they seem to take quite a bit longer than just breakpointing the problem area.<p>If your code is in an unexpected state, it's much easier to hit a breakpoint, examine local values, and then step back through the call stack to see what went wrong. I dare say that in a single-threaded context, it's almost objectively more effective.<p>With print lines, by contrast, you basically need to map/model the state flow out in your head, which is prone to error (limited capacity of human working memory).<p>Is it not easier to directly see the problem rather than doing mental math to make assumptions about the problem? I can't see a case for that being more effective.<p>Most of the time I see people print debugging it seems to be because they haven't used the debugger much... either they aren't comfortable with it, or didn't bother to set it up, or see the mental mapping approach as more "mathematical/logical"... or something. Takes you back to the school days of solving algorithms on paper :)<p>That being said, for simple problems I've used print debugging myself (again, usually because I'm too lazy to set up the full debugger). Or for multithreaded contexts etc., where thinking it through can actually be more effective than looking directly at the problem (multiple contexts).
Probably the most interesting thing about development as a discipline is the near radio silence on how to debug.<p>There is a decided lack of academic success in engaging with debugging as an object that can be studied. There are few channels for learning about debugging as a stand-alone topic, and programmers don't often talk about debugging techniques in my experience.<p>For something that takes up the overwhelming bulk of a developer's time, the silence is in many ways deafening. It may be that nobody has a method superior to print debugging.
ITT: a non-controversial opinion shared by most programmers.<p>Print debugging is fast in many cases and requires little mental overhead to get going.<p>But for some/many systems, there's a huge startup and cooldown time for their applications - and compiling in a print, deploying the service, and then running through the steps necessary to recreate a bug is a non-trivial exercise. Think remote debugging of a deployed system with a bug that requires select network and data states that are hard or impossible to replicate in local/dev.<p>For things like this, being able to isolate the exact point of breakage by stepping through deployed code, and doing immediate evaluation at various points to interrogate state can't be beat.<p>This post strikes me as either (a) a younger programmer who still thinks that tool choice is a war rather than different tools for different jobs (b) someone making a limp effort at stoking controversy for attention.
Almost all the reasons people use print debugging can be overcome by improving debuggers --- and to some extent <i>already have been</i> (in the words of William Gibson, the future is already here, it's just not evenly distributed yet). I think it's important for people to understand that the superiority of print debugging is <i>contingent</i> and, for many developers, will not persist.<p>Record-and-replay debuggers like rr [0] (disclaimer: I started and help maintain it), Undo, TTD, replay.io, etc address one set of problems. You don't have to stop the program; you can examine history without rerunning the program.<p>Pernosco [1] (disclaimer: also my baby) goes much further. Complaints about step debuggers (even record-and-replay debuggers) only showing you one point in time are absolutely right, so Pernosco implements <i>omniscient debugging</i>: we precompute all program states and implement some novel visualizations of how program state changes over time. One of our primary goals (mostly achieved, I think) is that developers should <i>never</i> feel the need to "step" to build up a mental picture of state evolution. One way we do this is by supporting a form of "interactive print debugging" [2].<p>Once you buy into omniscient debugging a world of riches opens to you. For example omniscient debuggers like Pernosco let you track dataflow backwards in time [3], a debugging superpower print debugging can't touch.<p>rr, Pernosco and similar tools can't be used by everyone yet. A lot of engineering work is required to support more languages and operating systems, lower overhead, etc. But it's important to keep in mind that the level of investment in these tools to date has been incredibly low, basically just a handful of startups and destitute open source projects. If the software industry took debugging seriously --- instead of just grumbling about the tools and reverting to print debugging (or, at best, building a polished implementation of the features debuggers have had since the 1980s) --- and invested accordingly we could make enormous strides.<p>[0] <a href="https://rr-project.org" rel="nofollow">https://rr-project.org</a><p>[1] <a href="https://pernos.co/about/overview" rel="nofollow">https://pernos.co/about/overview</a><p>[2] <a href="https://pernos.co/about/expressions" rel="nofollow">https://pernos.co/about/expressions</a><p>[3] <a href="https://pernos.co/about/dataflow" rel="nofollow">https://pernos.co/about/dataflow</a>
I recently discovered a Linux debugger & tool which allowed me to solve problems 10x faster than print statements: pernos.co (which is layered over Mozilla's rr time-traveling debugger).<p>Pernosco's tool is described pretty well on their website, but basically it allows you to view a program inside and out, forwards /and/ backwards, with zero replay lag. Everything from stack traces to variable displays (at any point in time in your code's execution) is extremely easy to view and understand. The best part is the lightning-fast search functionality (again: zero lag).<p>On top of this: extraordinary customer service if anything breaks (in my experience, they fix bugs within 24 hours and are highly communicative).<p>If you value your time I <i>highly</i> recommend you check out this tool.
I had a crazy idea the other day that perhaps there could be something like "CSS for program execution traces". If you think of function identifiers as XML/HTML tags and arguments for individual function activations as element attributes, then perhaps something similar to CSS selectors but acting on the tree representation of a program's execution could trigger at certain clearly defined points during the execution and format some human-readable output of what the program was actually doing, or a "cross-section" of it at least.
The most underappreciated aspect of proper debuggers is not the code line of interest but the context they give you about the whole application, i.e. the stack frames and their state. When handed a new codebase I often fire up the debugger, attach, set breakpoints in various interesting places, and then exercise the application to see where and when they get hit. It's a great way to learn a codebase: things that are hard to discover ("when is the database driver created, and how does it know its password?") just pop out, where you might otherwise spend ages working them out by examining the source tree.
In my experience, people who downplay debuggers don’t have the option to use effective debuggers. Debugging C++ and especially C# in Visual Studio is wonderful. Debugging Java in Eclipse can be great. Meanwhile GDB and most other language debuggers are painful, and every IDE integration I’ve seen of them has been horribly unreliable.<p>I’ve heard there’s a culture in parts of Google where kids go through uni using GDB because “Woo Linux!” then go straight into Google where everyone is “Woo Linux!” (I do like Linux, btw) so they are either still using GDB, or more likely have given up on it and reverted to printf. So, everything takes <i>forever</i> to figure out and that’s just “normal”. This was coming from a console gamedev who was shocked by the transition after moving to Google.<p>Meanwhile, I’ve spent a good part of the past couple decades debugging large volumes of code that I will literally only see once ever. With a good debugger, that can be done effectively, because watching and even modifying the code’s behavior can be done at a glance rather than with a re-compile.<p>I’ve also worked on a very big project that used extensive logging because they had a very bad debugger setup, and productivity was in the toilet compared to every other job I’ve had. The only way I could stay productive was to take the time to break out systems into small independent programs in my own environment so that I could use a debugger on those rather than run the code where it is.
I feel like the author gets close to the point but fails to drive it home: step-through debugging is <i>unbelievably</i> cumbersome. During a typical step-through debugging session, 90% of the time is spent on lines you are completely not interested in. Oh, did you accidentally skip the important point because of how tedious it was to keep spamming step-over/step-in? Better start over again. With print debugging, you set up your print statements strategically and -zing-, you get your results back. Feedback loop shorter. 100% of the lines are the ones you are interested in, because you put the print statements there.<p>I'm still waiting for the feature where you can conditionally stop at some breakpoint -only- if some other breakpoint/watchpoint was crossed over. It's not a conditional breakpoint, because conditional breakpoints can only watch variables, not other breakpoints. You could of course set some variable depending on whether some section was entered and then conditionally break based on that variable. But then you're back in print debugging land, having to manually insert code in order to debug the program.<p>Debuggers are superior when it comes to interrogating the exact state of some variables, as well as the decision paths the program takes. For anything simpler, print debugging simply offers the better developer experience.
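A sketch of that flag-variable workaround in Python (hypothetical names; this is exactly the manual instrumentation being complained about):<p><pre><code># A global flag records that the "other breakpoint" location was crossed;
# the real stop is conditional on it.
suspect_path_entered = False

def suspect_path():
    global suspect_path_entered
    suspect_path_entered = True  # stands in for the watchpoint being hit

def critical_section(x):
    if suspect_path_entered and x < 0:
        breakpoint()  # drops into pdb only after suspect_path() has run
    return abs(x)
</code></pre>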
Another aspect where printf debugging can be better than a debugger is use cases where timing is relevant. Some bugs don't occur when breakpoints stop the program at certain points in time. For completeness it should be added that there are also cases where the printf can change the performance and make it impossible to find a bug.<p>I think the two methods are complementary and should be used in combination.<p>However, the big issue is that basic printf debugging is very simple to use, while debuggers have a steeper learning curve in the beginning. Therefore, people start with printf debugging and don't invest in learning how to use debuggers. And when developers don't invest in learning how to use debuggers properly, they lack the skills to utilize them and still use printf debugging in cases where debuggers are clearly superior.
There are two separate questions: whether you want to see some kind of trace of the program or you want to step around in its state, and whether to use a "real" debugger or not.<p>In most cases I prefer to do something trace-based, and in the IDEs I've used the debuggers have much weaker support for that than they do for stepping around.<p>In particular, setting up tracepoints tends to involve fiddly dialog boxes which are much less convenient than using the main text-editor interface to say what you want.<p>I think there's plenty of scope for debuggers to provide a better interface for trace-style debugging. For example I'd like to be able to toggle a tracepoint after capturing the run, and have the lines it created appear or disappear, or add a filter expression or additional information to display without having to rerun the program.
The Qt Creator debugger fails on me constantly; it's 2021 and the debugger on the leading C++ platform is still unreliable in so many cases.<p>That's why 'I must' use print debugging: the 'powers that be' still provide a broken, half-baked solution 30 years in.<p>Print debugging is, however, so powerful that I think there almost should be a mechanism built into languages and tooling around it, so that it becomes part of the process instead of a 'kind of workaround'. It's something we all do, constantly, and yet you'll never hear about it when people are arguing about Rust or Go.
I think it has most to do with the way the user thinks.<p>I need to see the big picture, the whole state, all the stuff, and rapidly jump back and forth. I also, supposedly, have the ability to keep a lot of state / scope / abstraction in my head. So I find print debugging sufficient and fast, and rarely encounter a situation where I feel the need for a "stronger" tool.<p>Other people focus on one thing, and all that simultaneous output is just noise and distraction to them. And based on the continued use and popularity of step-based debuggers, these people are much more productive (and happier) using those types of tools.<p>It's very important to understand that neither system is inherently superior, although one or the other is superior for each individual. (Btw, over 35 years in the tech industry / software development I've found this true of many subjects, all the ones that spark internal debates in techdom: tools and paradigms are not universally superior, only superior for a given individual.)
printf debugging always has a place, but for some reason I find the debugging experience worse than 20 years ago. Tools like Visual Studio still have great debuggers, but I haven't noticed significant improvement since the early days, and newer toolchains are worse.<p>A couple of years ago, I had to maintain a bit of Java code using Eclipse. That is, the old IDE everyone loves to hate. And while some of that hate is well deserved, for debugging it was the most pleasant experience I'd had in a long time. Nice object inspector, edit-and-continue, conditional breakpoints, and step-by-step that works. Much better than fumbling around with GDB or one of its less-than-perfect UIs.<p>Also note that printf debugging and the step-by-step-and-breakpoint kind are not mutually exclusive. With an edit-and-continue feature, you can get the best of both worlds, but that's not something common these days, unfortunately.
The beauty of printf debugging for a novice C programmer is that recompiling with printfs changes the memory layout, so your buffer overflow no longer segfaults.<p>Alternatively, your printf can use the wrong format string and cause unrelated crashes. Such joy!<p>Makes me nostalgic for the good old days.
I agree about being able to see the whole program execution. This is particularly useful for multithreaded code since it provides a linear view into how the program actually executed. How are you supposed to figure out that A happened before B in a multithreaded program using only a debugger? With adequate logging, even if you don't log the precise times for A and B, you can often infer the ordering of these events based on other logged data.<p>For a lot of glue type code, I don't actually care about stepping through something line by line. I really want to see how components interact, not each step of execution. Though I do wish languages had better support for doing something like printing out all local variables in the current function along with the stack trace, sort of like a very shallow, low-cost dump.<p>Another big advantage is that logging is usually much easier to turn on (or even keep on by default) for production scenarios. Good luck getting some bank to let you run a debugger or even get a dump for anything.
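For the shallow dump wished for here, a small hypothetical Python helper (dump_frame and handle_order are made-up names) gets close:<p><pre><code>import inspect
import traceback

def dump_frame(tag=""):
    """Shallow, low-cost dump: the caller's locals plus the current stack."""
    frame = inspect.currentframe().f_back  # the caller's frame
    print(f"--- {tag}: locals={frame.f_locals}")
    traceback.print_stack(frame)

def handle_order(order_id):
    total = 42
    dump_frame("before commit")  # prints {'order_id': 7, 'total': 42} and the stack

handle_order(7)
</code></pre>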
Both suck. With a debugger, you need to set up a debugger and step through (and often, they don't work quite as well as you hope). With print debugging, you need to add the print statements.<p>In both, you can't retroactively debug already executed code.<p>This is one of the areas where I'm really proud of what we did in Dark. In Dark (<a href="https://darklang.com" rel="nofollow">https://darklang.com</a>), all execution is traced and you can see the value of any expression on any trace by putting your cursor in the expression. Advantages:<p>- no struggle to reproduce the error<p>- no need to set up a debugger<p>- no need to add print statements<p>When I write Dark, I can debug in seconds. When I work on the Dark implementation (F# or ReScript), I spend at least minutes on each bug because I need to do a bunch of setup to find enough information to diagnose the error.
A few more reasons why print debugging is used. If you are debugging multiple things at once, you’ll have breakpoints set that aren’t necessarily needed at the moment, meaning you have to continue a bunch of times to get to the right spot. Or your breakpoint needs to be in a loop that is executed many times, and conditional breakpoints are a pain and subject to code errors in the condition itself. Many debuggers are not great at examining the state of objects, for instance a deeply nested object where you want array index 42 within a dictionary inside an object. Or you need to see a value that is calculated rather than just present in the current state.
The idea that print debugging is about being able to understand the time dimension of your code resonates, definitely. It reminded me of how the redux dev tools browser plug-in is an interesting pointer to a better kind of debugging. And essentially all that is is a rich UI around printing out the entire redux state after each operation. But because the redux state advances in discrete steps it’s very easy to express exactly what happened, and explore precisely what state change happened in response to each action. I do find myself wondering whether there’s a much richer debugging capability along those lines that could be applied more generally.
I have never spent much time learning debuggers honestly. I'm not sure if what I want exists:<p>I would love to have a debugger that offers a partial text editor experience, eg. it shows my code, I move the cursor to some statement, then I press some key binding and the debugger starts printing (in another window) all the state changes in that statement. Another key binding prints all the state changes in the entire function, etc. All of this while the program is running.<p>Are there debuggers that can do this? I have used gdb in the past, but having to set up breakpoints by hand and remembering names makes it too tedious.
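Not quite that, but Python's sys.settrace can approximate the "print every state change in this function" part; a crude sketch:<p><pre><code>import sys

def print_locals(frame, event, arg):
    # Print the locals after each executed line of the watched function.
    if event == "line" and frame.f_code.co_name == "watched":
        print(f"line {frame.f_lineno}: {frame.f_locals}")
    return print_locals

def watched(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(print_locals)
watched(3)
sys.settrace(None)
</code></pre>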
I think there's a conflation of processes and tools which leads to the false comparison. Print debugging is a process, which uses a tool called print statements. Stepping through code is a process, which uses a tool called the debugger.<p>Print debugging excels at triaging the problem. And every language has print statements. Ubiquitous first tier support. They help you narrow down where your assumptions about the program behavior may be wrong.<p>Once you know what area to focus on, you pull out the debugger and step thru the code.
For Python I specifically recommend <a href="https://github.com/zestyping/q" rel="nofollow">https://github.com/zestyping/q</a> a lot, which is like print debugging on steroids:<p>> All output goes to /tmp/q (or on Windows, to $HOME/tmp/q). You can watch the output with this shell command while your program is running:<p><pre><code>tail -f /tmp/q</code></pre>
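Typical usage, as I understand the project README (q logs values and passes them through, and also works as a decorator):<p><pre><code>import q

@q  # logs the arguments and return value of every call to /tmp/q
def add(a, b):
    return a + b

result = q(add(1, 2))  # q(expr) logs the value and returns it unchanged
</code></pre>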
I find that the great majority of the time there are better tools to solve a problem than print statements, even considering that a project may need to be refactored to be debuggable.<p>If I have a bug I can reproduce, I can write a unit or integration test, try narrowing down the issue, and use a debugger on the test itself for further help. IntelliJ has great support here, VS as well, and there are plenty of others.<p>If the bug exists in production only, using a debugger I can connect to it remotely and dump the state (thread dumps in Java or core dumps with Delve for Go). If there's an option of using a profiler it makes the experience even better, especially for diagnosing performance issues.<p>For distributed systems, monitoring libraries and log aggregators are much more useful than raw logs. Proper metrics allow fast pinpointing of issues, and log aggregators give me an option to look for rare/common errors easily.<p>The only case I'd resort to prints nowadays is as a last resort, if there are no better options.
I stopped doing step debugging at all many years ago. For me it looks the same as visual vs. text programming. Text and text search tools are just miles ahead of clicking buttons.
The Python package PySnooper is pretty good for "fancy" print debug statements: <a href="https://github.com/cool-RR/pysnooper" rel="nofollow">https://github.com/cool-RR/pysnooper</a><p>I've caught quite a few bugs using this show-me-all-locals() approach...
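Basic usage is a one-line decorator; this is essentially the example from the project README:<p><pre><code>import pysnooper

@pysnooper.snoop()  # logs each executed line and every local-variable change
def number_to_bits(number):
    if number:
        bits = []
        while number:
            number, remainder = divmod(number, 2)
            bits.insert(0, remainder)
        return bits
    return [0]

number_to_bits(6)
</code></pre>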
Personally, the biggest obstacle to using a debugger is that it cannot be automated easily. You have to be present when it's triggered. You have to navigate it manually. When the program crashes again, you have to repeat the process. I know some debuggers can be automated, but then you'll have to debug the debugger script.<p>Logging is much nicer because you can turn the exploration process into a text-analysis problem. Logs can be searched, stored and compared. For me, sifting the log is much easier.<p>Whenever I write a medium-sized program for a serious purpose, the first thing I do is set up a nice and reliable logging system. This is a decision you won't regret for the rest of development.<p>I would argue that the use case of a debugger is much narrower than that of logging/printf debugging.
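In Python that setup can be tiny (a sketch; the point is timestamps plus file output, so runs can be grepped, stored and compared later):<p><pre><code>import logging
from logging.handlers import RotatingFileHandler

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)-7s %(name)s:%(lineno)d %(message)s",
    handlers=[RotatingFileHandler("app.log", maxBytes=10_000_000, backupCount=5)],
)

logging.getLogger("worker").debug("state transition %s -> %s", "IDLE", "BUSY")
</code></pre>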
I usually only do print debugging when I encounter a Heisenbug. I mainly develop in Java, maybe my choice is related to its really really great debugging tooling.
I find it mildly disturbing that so many comments are saying "But breakpoints!"<p>One would assume that anybody who used a debugger for more than a day knows about breakpoints. TFA isn't saying you <i>have</i> to step through every line in a debugger.<p>It's saying that, even if you employ your amazing debugging skill to find exactly the point you want to look at, you will only be looking at <i>that</i> exact point in execution, and not other points at the same time. Sure, it will be a very detailed representation of that particular point, which can be extremely handy, but sometimes you want to look at a hundred different points in execution, <i>at once</i>. That's when printf comes handy - you just need a large monitor (or a small font and good eyes).
Interesting hypothesis.<p>I think a big part of the issue is that printf debugging has always been "good enough" for me. I have used gdb in the past, but I've never felt the incentive to become good at it, so my knowledge of it atrophies and it has become a less interesting option over time. On the other hand, my knowledge of how to printf messages and extract them from the running process never atrophy because I do exactly that every day.<p>So maybe the situation changes if ever I come across a bug that's so mindbogglingly convoluted that printf debugging is not viable. Then I'll be forced to learn to use a step debugger well, and that could change my choice of tools going forward.
Personally, having a good debugger and knowing how to use it has been more useful to me than anything else. I mainly code in C and C++, and the Visual Studio integrated debugger and GDB are my main debuggers (depending on what I'm doing).<p>For me it is faster to double-click the border of a line in VS, or write "break 123" or "break fooFunction" in GDB, and step and watch how some values change, than to add and remove "printf" lines.<p>Adding some asserts is another thing. They are always good, and often necessary to find some "Heisenbugs".<p>In other languages I probably wouldn't think the same, but I haven't done anything big enough outside C or C++ to give a proper opinion.
> I do want to point out that print debugging has one critical feature that most step-based debuggers don’t have: you can see program state from multiple time steps all at once.<p>At Google, we have time-traveling debuggers neatly integrated into our cloud IDE: You can step forwards and backwards, you can inspect variables for all the values they've had or will have until program termination, and you can also see all invocations of methods (along with their parameters) that have happened or will happen.<p>I still use logging for debugging. Cool tech aside, I think what you really need, above everything else, is the fastest possible iteration cycles.
I disagree slightly with the emphasis on "print debugging". I think what is missing is a body of theory around logging as a methodology. When I write code, I like to be able to look at the log file and "see" what the code is doing, when on DEBUG or higher. I think logging is a difficult but very important skill, and one which we are losing over time. If anyone is aware of any good books on logging (even if very old), do let me know. Seems like "logging theory" is a missing subject in Software Engineering.<p>I also don't see any contradiction between liking good logs and using the debugger when needed.
I'd say that except for some heavily multithreaded cases, the print approach may be due to a lack of mature tooling.<p>I can't understand why anyone would prefer to write some print when you can have Visual Studio's<p>* breakpoints<p>* conditional breakpoints<p>* ability to place more breakpoints while you're already stopped on another<p>* expression evaluation on the fly!!<p>* decent ability to modify code on the fly<p>I still remember a case where I modified a function containing a line with bad SQL (breakpoint after executing this SQL), added a call to the same function with the same parameters after this breakpoint, let it execute again, caught the breakpoint once again, and removed the call to itself<p>and all of that without recompiling the program! It felt like magic
Debuggers are next to useless when dealing with today's distributed systems, all operating asynchronously in parallel. For the kind of bugs (race conditions, corner cases) that aren't easily caught by compilers, linters, unit tests or code review (in other words, the "Heisenbugs" that can stop a release in its tracks), aggressive logging is the only tool I've ever seen that is useful in-the-wild.<p>I would put forward that proficiency with this style of debugging (closely related to useful performance profiling) is a major factor separating mediocre programmers from the quasi-mythical 10X rockstars.
Print debugging is useful for the same reason backtraces are useful: both allow you to see what happened in the past, which is usually where the problem you're trying to fix actually happened.
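In Python, for instance, the traceback module preserves that view of the past without stopping anything (a small sketch):<p><pre><code>import traceback

try:
    value = {}["missing"]  # the failure is already history by the time we look
except KeyError:
    print(traceback.format_exc())  # the full backtrace as a string; log it anywhere
</code></pre>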
This is pretty much the only way I debug:<p>- cross language/platform<p>- forces you to come up with hypotheses up front, then test them very systematically<p>- you know what you are doing<p>- the debugger doesn't interfere<p>- works across threads and processes
Besides "behaviour in time", print debugging is effective because it's typically an extract of the most interesting for programmer things. I have a debugger window open this very moment and I can see about a hundred lines with various information about one structure, but I'm interested only in two (and I have to rerun this several times, because state got corrupted somewhere earlier).
One thing I haven’t seen mentioned here yet: I use print debugging all the time in Haskell, and find it works really well there compared to other languages. There’s a couple of reasons for this, I think:<p>• Nearly everything is immutable, so once I print the value of an expression I know it won’t change in the future. This is not the case in other programming languages, where a variable can be mutated after I print it.<p>• The base library provides a really nice range of functions for print debugging [0] — so I can just wrap any expression I want printed in ‘traceShowId’, and it’ll get printed. (Yes, these functions break purity; that’s why the module is marked ‘Debug’!)<p>Of course, sometimes print debugging isn’t sufficient, in which case I fire up the GHCi stepper debugger. But for the vast majority of cases print debugging works well.<p>[0] <a href="https://hackage.haskell.org/package/base-4.15.0.0/docs/Debug-Trace.html" rel="nofollow">https://hackage.haskell.org/package/base-4.15.0.0/docs/Debug...</a>
There is a dimension that gets overlooked in these discussions: tests. Every bug should start with a mindset of creating a new test: unit, integration, or end-to-end. These are regression tests. Now, whether the test is needed or not is a decision that will fall out of the bug fix. There is a distinct difference between the skill of debugging and the skill of writing tests. I focus most of my efforts on writing test code. Someday perhaps IDEs will be the test platform for all the test types. That's not today, though. The question in my mind is not print debugging versus IDE, but test-code debugging versus ad-hoc debugging. IDEs encourage ad-hoc debugging, because once a bug is fixed the test code needs to be written from the ground up, a step that is often left out due to time limits. I debug in test code, and when the debugging is done the test is written. This applies to new code as well, and mirrors the paired-programming notion of starting new development with test code.
The limitations of print incentivize me to write smaller functions and code that is generally free of mutations, so traces don't go stale fast.<p>Debugging, on the other hand... well, I've just been told by my senior to write bigger functions, because the line-by-line debugging tool jumps around too much when moving from function to function.
(I have already replied to another comment with the same suggestion.)<p>Pernosco offers the best of both worlds (debugger, print), along with a few magical features.<p><a href="https://www.pernos.co" rel="nofollow">https://www.pernos.co</a><p>With it you can print anything present in your recording, step, and do anything you'd do in a regular debugging session.
A lot of debuggers will also print/log and can even inject those statements into a running app where hot reloading manual print statements would otherwise not work.<p>From there there are situations where a debugger will save a LOT of time. I'm thinking of trying to figure out what's causing a behavior in a large dependency injected application with plugins when you have little to no familiarity with all the code involved. And then of course all the other things a debugger can do for you.<p>> Clearly Real Debuggers offer a superior experience to print debugging in so many ways. But print debugging is just easier to get started with, and it reliably works anywhere, so that’s why we use print debugging so much.<p>I think the tone of the first sentence and the word "superior" unnecessarily creates a strawman.
I've not figured out a way to effectively debug a distributed system except via printf. Debuggers are basically a nonstarter, because stopping one component to inspect it almost always triggers knock-on effects in other components that change the overall state of the system.
I'm working on a Swift interpreter and the codebase is fairly difficult to debug. There are a lot of reused bits. So if you put a breakpoint somewhere trying to capture one behavior, odds are that that line will run 10 times for other work before the relevant part uses it.<p>So I tend to write a LOT of print statements that flush debug variables right before where I want to debug. Then I set a conditional breakpoint so that the logs "stop" right where I want the program to. Example:<p><pre><code>// debug print
let someValueICareAbout = variable...
print(someValueICareAbout)
print("")  // <- conditional breakpoint here: "if someValueICareAbout == 3"
</code></pre><p>I think it's technically still "print debugging", because I'm only using the debugger to stop the program so I get a chance to read my output.
I hate coding in an environment that does not easily support step-wise debugging. And yet, I use printf 10x-100x more frequently. Printf actually causes you to do some thinking: you write a little bit of code to conduct an experiment that hopefully will tell you, in one shot, what the problem is on a single run. Step-wise debugging instead forces you to think about the problem as you go, carefully carrying a lot of mental load at each push of "next step" to figure it out.<p>That being said, there's almost no good reason for a platform to not support step-wise debugging, so it's a big code smell that you're going to have a bad time in general there (even if in practice you'd largely use printf anyway).
If you think print debugging is "unreasonably effective", it's probably because you have a shitty debugger.<p>Try Visual Studio under Windows. Go on, try it. You'll be surprised at just how stone-knives-and-bearskins the standard tools on Linux really are.
For me this isn't an either/or.<p>I constantly use both together. For problems that are quickly and reliably reproducible I'll often just use the debugger (if rr is suitable, even better).<p>But there are plenty of problems that take a while to reproduce, involve many threads/processes, etc., where the initial set of potential issues is too wide to easily target with a debugger. There, sprinkling printfs around can provide data at a lower overhead than is doable with a debugger.<p>Just yesterday I was debugging something where rr didn't finish replaying, within an hour, a workload that originally takes 10s (loads of IO). Switching to print debugging, I pinpointed the issue in < 10 min.
This is why products like OzCode for Visual Studio [0] are interesting. With the ability to put in a breakpoint and see multiple variables' values instantly, plus "time travel" (i.e. limited step-back through the logic), it gives you the print-debugging benefits in regular debugging.<p>I've not seen anyone else try anything like this. There's a YouTube demo here:<p><a href="https://youtu.be/82jq5cvl67E?t=1561" rel="nofollow">https://youtu.be/82jq5cvl67E?t=1561</a><p>[0] <a href="https://oz-code.com/ozcode-production-debugger" rel="nofollow">https://oz-code.com/ozcode-production-debugger</a>
IDE debugging effort is thrown away, trapped in amber. All the breakpoints, data integration and whatnot are discarded after the bug is fixed. Further, this IDE effort is not shared between developers. Log debugging is reusable out of the gate, and a debug log statement can easily be promoted to a production statement if deemed important. It seems to me IDE developers need to work on how all the time and energy developers spend on a bug can be generalized to the point where these things can be shipped with the code itself. Until then it is throwaway work trapped in the amber of the IDE.
Recently I tried out rr, the time-travelling debugger. It blew my mind. I never imagined you could just run until an assertion fails, set a watchpoint on the variable the assertion checks, and then run backwards until the last time the variable was modified.<p>Shameless plug: if you're writing Rust, I wrote a tiny wrapper that finds the appropriate binaries and provides the right config to make it as easy as `cargo rr test my_test`. <a href="https://crates.io/crates/cargo-rr" rel="nofollow">https://crates.io/crates/cargo-rr</a>
I don't get the usefulness/effectiveness of print debugging. I work in Ruby and JavaScript, and I find it much more efficient to know the whole state of the world and the objects in it at a certain place, because I generally know where the problem may be. For example, I use pry in Ruby and the debugger; statement in JS.<p>Maybe it is just the way my brain works? I'd rather stop and see what I need behind a condition than have to filter through a lot of possibly unformatted console output.
I learned early on with an expensive microprocessor emulator: just have the code raise the voltage on an IO pin as a print debugger rather than spend days debugging the emulator.
Print debugging is basically variable watch points, but IMHO easier.<p>The only time to really beware is embedded and real time systems where printing can throw timing way off or cause other side effects.<p>I heard of a case once where printing via JTAG caused an issue due to the power draw of sending all the extra data out. But that was trying to debug a novel board design and its software at once.<p>You won’t hit that kind of thing on normal computers like desktop, mobile, or cloud unless you are writing drivers.
This should be a non-debate.<p>A debugger is for when you want to inspect local state in detail. That can indeed often be very useful, and they are sophisticated technology.<p>However, the people who think that a debugger is the only way to debug just aren't good programmers: often you want a picture of the overall behavior of your program. As has been said by someone other than me, a debugger allows you to fix a bug; print statements allow you to think about the right fix for a bug.
So I'd be curious. I usually work in scripted languages: Bash, Ruby, JS (bleh), a bit of Python.<p>Sometimes I do some Java work, though, and I usually end up going back to print debugging, because otherwise I'm trying to figure out yet another Java logging framework, or I end up 40 layers deep in some magic framework dependency that is intercepting my code, which is what always happens when I use a debugger.<p>That being said, do those who work in compiled languages make heavier use of debuggers?
You can already do print debugging at runtime using tools such as Google's Stackdriver, Lightrun and Rookout (probably others too). These tools let you inject new logs into your running server, so you can do print debugging without redeploying a cluster. Pretty darn cool.<p>They also let you place a breakpoint which doesn't stop execution, so you can get a stack trace, variable states, etc. without the pain.
With the JetBrains products breakpoint debugging is so easy that I use it all the time during development. Evaluating expressions inside a breakpoint while developing provides many answers in a much tighter feedback loop, even with Go or something equally fast. If I don't have the JetBrains tools I default to print debugging, because everything else is too much of a hassle.
Maybe we need a debug-oriented programming language? Say `{ ... }?` denotes a scope we want to inspect: a debugger gets launched and we get a generic reification of the tree path at that point, with the ability to tweak parameters up that path and see multiple new trees rapidly (think Bret Victor live coding).<p>Honestly I think printf debugging is a pity. I do it... but it feels like processing XML with sed.
For print debugging in Python I recently discovered a nice little time-saver: the icecream package [1]. Rather than having to type "print("x: ", x)", you can instead type "ic(x)".<p>[1] <a href="https://github.com/gruns/icecream" rel="nofollow">https://github.com/gruns/icecream</a>
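For example (the output format is the one shown in the project docs):<p><pre><code>from icecream import ic

x = 42
ic(x)          # prints: ic| x: 42   (the expression text comes for free)
ic(x * 2 + 1)  # prints: ic| x * 2 + 1: 85
</code></pre>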
This feels like an instance of "worse is better". It works well enough, it is easy to start, it is robust, and it is naturally integrated into your workflow (which is, run the code you wrote.) Debuggers are like a perfectionist approach, and still lacks things like timeline-like view that the articles mentions.
I love tracepoints, which are basically print statements set dynamically from the debugger. Sadly I almost always end up having performance problems, so I still need to add an if or so to the code for the tracepoint to perform well. And then we're back at printf debugging again...
Yeah, it works great until you install 30 frameworks and they all tell you so much useless crap that you can't see your own messages. Why do they log these useless messages? Because they're bad programmers who are 1000 AU from being able to realize it.
The best way of finding faults, for me, is writing a test that fails under the problematic condition, and then using prints in all the parts that I think are being executed and may hold key information for solving the mystery.<p>I tried using debuggers, but it was always too much hassle.
Here's a hack I do when I'm running a tight loop. In something like a video game at 60 fps, print is useless cause it spams so much in the terminal it's unreadable. So I use my hack:
<pre><code>if math.random() > 0.99 then print(debug_msg) end</code></pre>
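A deterministic Python variant of the same hack (hypothetical frame loop): print every Nth frame so the output rate is predictable instead of random:<p><pre><code>FRAME_SKIP = 60  # at 60 fps, roughly one line of output per second

for frame in range(10_000):
    debug_msg = f"frame={frame} t={frame / 60:.2f}s"
    if frame % FRAME_SKIP == 0:
        print(debug_msg)
</code></pre>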
Stupid question: why don't more programming languages and/or compilers natively support the alternative to print debugging, which is (afaik) tracing? I guess some languages have it, but some don't, or they are onerous add-ons?
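Some do ship it in the standard library; Python's trace module, for example, can print or count every executed line from the command line:<p><pre><code>python -m trace --trace myscript.py   # print each source line as it executes
python -m trace --count myscript.py   # write per-line execution counts to .cover files
</code></pre>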
It works, it's convenient, it's easier to learn, easier to set up, and it has fewer side effects in multithreaded programs, meaning you can debug those too. You can even log to a file and then get these logs from your end users... The article does make a good point: errors that only cause a failure a few hundred calls after the originating problem are easier to find this way too. Every few years I make an effort to learn whatever the most popular debuggers currently are, but at the end of the day it's really just very specific kinds of errors that the debugging tools are better at finding, and I generally go back to debug output soon enough.
Print debugging is a tool in the toolkit. It’s good enough for many scenarios, and much easier to deploy most of the time. I still recommend setting up and familiarizing yourself with a step-through debugger, but use both.
I wrote my own print-debugging tracer. It’s now my go-to for debugging most things.<p><a href="https://github.com/elonvolo/logitall" rel="nofollow">https://github.com/elonvolo/logitall</a>
Have never found writing to logs to be "effective". More a necessary evil ;)<p>What is really effective is "visual debugging". Say for example you are testing for bias in an RNG. Rendering a large format image of random rgb values will immediately show any cycles, even to the untrained eye.<p>Consider GPGPU workloads, for ML or ray tracing for example. There are myriad levels of variables to track: resources, allocations, command buffer state, synchronization, compute kernel per vector, and so on. All primitives that very much lend themselves to graphical representations!<p>Right now editing live code in a profiler usually involves textual editing of the graphical shaders. But it's easy to see how this evolves to a purely visual shader editor, not unlike those found in Unreal or Godot.
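A sketch of that RNG check in Python (assuming Pillow for the image output; any raster library would do):<p><pre><code>import random
from PIL import Image  # pillow

W, H = 512, 512
img = Image.new("RGB", (W, H))
img.putdata([tuple(random.randrange(256) for _ in range(3))
             for _ in range(W * H)])
img.save("rng.png")  # bias or short cycles in the generator show up as visible patterns
</code></pre>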
I can just write print(whatever) and get the job done; I don't want to set breakpoints and hunt for the data structure I need. Why can't I write something like this?<p><pre><code>breakpoint {
    debug(var)
}</code></pre>
I agree with the point about time travel debugging. I find it so intriguing that I've been playing with it for a little tool to make VR games. Anecdotally, it has helped me a lot with debugging.
I don't usually resort to a debugger to hunt for bugs, but I use them a lot to explore APIs in "real time". I find them much more convenient than the likes of Postman.
I have been developing large-scale django apps on ec2 for a while and the solution that has been working best for me is a lot of logger.** statements sent to papertrail.
Why "unreasonable"? There's nothing unreasonable nor wrong about print debugging. Moreover, it's a great first step towards logging and testing.
I always feel like I should get better with a debugger, but whenever I'm debugging I always fallback on printf and crashing. It just feels so immediate