When I'm starting a new project, I've learned to identify the unknowns.<p>"I don't know how to do X, Y, and Z."<p>Before I begin working on the real solutions for X, Y, and Z, I start by making a test program that does X. It's OK if it requires a ton of scaffolding or canned data... the important thing is to do X. Then I do the same for Y and Z.<p>Now that I understand the problem, I do a software design. I do this on paper, and I purposely do it super quick without caring about how neat it is... and I throw away designs rapidly as I iterate and improve. Eventually the paper crumpling slows down and I approach a real design.<p>This is finally the point where I can write "real" code. The unknowns are gone, and all the software design iteration happened on paper.
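Here's roughly what such a throwaway spike can look like, sketched in Python. The vendor-export task and every name in it are invented for illustration; the data is deliberately canned, because that's the point:

    # spike_x.py -- throwaway program whose only job is to prove "X" is doable.
    # Everything is canned: hardcoded sample data, no real inputs, no error handling.
    import csv
    import io

    CANNED = "id,name,qty\n1,widget,3\n2,sprocket,7\n"

    def parse_export(text: str) -> list[dict]:
        # The one unknown being de-risked: can we parse this format at all?
        return list(csv.DictReader(io.StringIO(text)))

    if __name__ == "__main__":
        for row in parse_export(CANNED):
            print(row)

Once this runs, the unknown is understood and the spike has done its job; it gets thrown away along with the paper designs.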
Other good nothings:<p>Can you check in even a tiny change to the source-code repository? A new project's first commit often triggers a slew of politics: wait, you're building <i>what</i>? Where's your design? Who approved this? Why are you naming it this and not that? We already built this; why are you building a second one?<p>What's your installer story? For installable software, do you know how to produce the installer for it? Too often the installer work is left until just before launch, at which point it's way too late. For a service, do you know how to deploy anywhere but localhost so your team can start doing internal demos?<p>Can anyone besides you build your hello world code? The release team often has their way of doing things, and it's not what Visual Studio spits out from its new-project template. Might as well integrate early, rather than wasting time solving problems for a build system you were never going to use.
I feel like this is one of the things beginner programmers eventually figure out on their own without having to be told about it; otherwise they'd never be able to complete anything but fairly basic programs, let alone large programs which touch a lot of systems.<p>You'd have to be <i>really</i> new to write a bunch of code without ever testing it incrementally, likely because you've never encountered the consequences before. A couple of painful debugging sessions will quickly make you realize you need to test your assumptions one by one, in the right order, if you're ever to have a chance of making an unknown program (i.e. a program you are writing for the first time) work. <i>All</i> the stars have to align, because one error will throw everything off. Debugging is the act of visiting each possible source of error and validating that it works. The best way to do this is to build the program incrementally (which, again, I expect programmers to quickly realize).
Getting the "Hello World" to compile and run is often the most difficult part of a project. After that, it's just easy incremental improvements.
Related truisms I've come to:<p>* Watch the test fail first. Write a test FooMethod_Should_Do_X. Confirm it fails; if it passes, you're testing the wrong thing. Once it fails, make it pass. Once it passes, make it pretty while still passing.<p>* The simplest thing you can do is nothing. As a developer, your first job is to talk customers down from that feature request without annoying them too badly: "What bad things happen if the app doesn't do X?" "Explain the business value of X to me. About how much time or money will this save?"<p>* This applies not just in code. In personal interactions, saying nothing or doing nothing is often better than prematurely doing the wrong thing. There is a time to say and do something, but it is usually later than we think, and not on social media.
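For the first bullet, a minimal sketch of the fail-first loop. This assumes pytest, and slugify is an invented stand-in for FooMethod, not anything from the article:

    # test_slugify.py -- watch the test fail first.
    # slugify() starts as a deliberate stub so the test MUST fail.

    def slugify(title: str) -> str:
        raise NotImplementedError  # stub: not written yet

    def test_slugify_should_lowercase_and_hyphenate():
        # Run `pytest` now and confirm this fails. If it somehow passes
        # against the stub, the test is testing the wrong thing.
        assert slugify("Hello World") == "hello-world"

    # Once it fails for the right reason, write the simplest body that passes:
    #     def slugify(title): return "-".join(title.lower().split())
    # ...and only then make it pretty, re-running the test to keep it green.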
> The program hasn’t even gotten to the point where it can comprehend the possibility of executing that line of code. I mutter to myself, “How did you let it get this bad?”<p>I support a lot of scientists-who-sometimes-code, and I get this all the time. They bring me in and show me the 5000-line Python (or MATLAB, if I'm unlucky) script they've written. "It doesn't work."<p>Turns out they've written 5000 lines of code without having ever once tried to run a smaller piece while they were writing it. Not even once. They write the whole thing and THEN, when it doesn't run on the first try, they don't know what to do.
This can also be articulated as a sort of incremental TDD (sketched below).<p>- Write the scaffolding test to call the new "thing"<p>- Add a basic assert to check your thing was called with the right input<p>- Add more logic to the "thing"<p>- Incrementally add more asserts to check for output, side effects, etc.
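A rough sketch of that progression (pytest assumed; apply_discount and all its behavior are invented for illustration):

    # Stage 1: scaffolding -- just prove the new "thing" can be called at all.
    def apply_discount(order: dict, code: str) -> dict:
        return order  # do-nothing stub for now

    def test_apply_discount_can_be_called():
        apply_discount({"total": 100}, "SAVE10")  # no asserts yet; just wiring

    # Stage 2: add a basic assert on the output.
    # (This fails against the stub, by design, until stage 3.)
    def test_apply_discount_reduces_total():
        assert apply_discount({"total": 100}, "SAVE10")["total"] == 90

    # Stage 3: add logic to apply_discount() until stage 2 passes.
    # Stage 4: incrementally assert on side effects and edge cases:
    # unknown code, empty order, totals that would go below zero, ...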
I wish people who wrote these things would give a bit more context. Seems like the great temptation when writing about programming is to assume that your kind of work is the only kind of work any programmer ever does.<p>I write tools that help me test things. There are a lot of people like me who don’t work on big systems or write production code. Our priorities are different.<p>I need to duct-tape a thing to another thing quickly. I happily accrue tech debt, because getting my testing done today (not regression testing, but primary testing) is paramount. Later I will declare bankruptcy (abandon the code).<p>I work alone, mostly.<p>The dynamics change when I work with other people, or when my software has to serve as a persistent framework into the future.<p>I’m just saying there are a lot of programmers like me. We are also interested in good craftsmanship, but we operate in a “battlefield” context, not a drawing-room context.
That's usually how I go about things too. It's annoying that a lot of the new crop of YouTube programming videos skip these steps. I've seen a few "How to write X using Y" videos that basically write the whole thing from top to bottom, one line at a time.<p><a href="https://www.youtube.com/@JustForFunc">https://www.youtube.com/@JustForFunc</a> and <a href="https://www.youtube.com/@fasterthanlime">https://www.youtube.com/@fasterthanlime</a> are good exceptions to this. I want to see the process of building out something complicated one bit at a time, including all the mistakes and dead ends.
Embedded robotics firmware version: Show me that you can deploy an update to BlinkyLED, via the cloud manager, over LTE, to the robot with rock-solid reliability every damn time. Step two: update the bootloader over the wire without bricking the robot.
"Bringup" of a complicated build or deployment system can consume quite a lot of work to "do nothing". Not unusual with a new FPGA system to spend days trying to blink an LED, or a new 3D system to get a single triangle to display.
I think this is a good reframing of Gall's Law for junior devs. The problem is, it's hard to put into practice if you don't know what the shape of the thing you're building should be. That may actually be a non-trivial problem for a junior dev.
Reminds me of an article from Mitchell Hashimoto: <a href="https://mitchellh.com/writing/building-large-technical-projects" rel="nofollow noreferrer">https://mitchellh.com/writing/building-large-technical-projects</a>
How many proofs of concept must a project knock down before you start to integrate? (with profuse apologies to Bob Dylan)<p>This is a very good thing to keep in mind when estimating.
Almost boils down to my approach: write a few lines, test. Never write more than a few lines without making sure it still compiles, runs, and does what you intended. I almost never use a debugger. Just a terminal or log file output, an editor, and a Makefile.
Speaking directly to the title: it's true. Once you start something, it becomes the most important thing. You can drop it at any moment if you want, but the optimal path is toward completion, and that means not doing something else. Committing to one task is, in effect, planning to do nothing else. You have to make sure you can focus on your task to completion; otherwise you will end up alternating tasks. And surprisingly, we alternate tasks constantly, mainly because of boredom, time constraints, or lack of capability (knowledge, power, resources, etc.).
> <i>Start with something that does nothing. Make sure you can do nothing successfully. Only then should you start making changes so it starts doing something.</i><p>Excellent advice!<p>Just last week I did JSON transformations in a succession of little Node JS programs chained together by pipes. Each program does one step of the transformation; it starts its life by outputting exactly what it receives, then evolves to actually manipulate the data.<p>It's an approach that's quite safe and rarely breaks, and when it does break, it's easy to find where the problem is.
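The do-nothing starting point of each stage is just an identity filter. In Node that's essentially process.stdin.pipe(process.stdout); sketched in Python instead, and assuming one JSON record per line, the shape is:

    # step1.py -- one stage of a pipeline: cat in.jsonl | step1.py | step2.py | ...
    # It starts life as a pure pass-through; the transform is added later.
    import json
    import sys

    def transform(record: dict) -> dict:
        return record  # identity for now: do nothing, successfully

    for line in sys.stdin:
        line = line.strip()
        if line:
            # Later, transform() evolves to actually manipulate the data.
            print(json.dumps(transform(json.loads(line))))

When something does break, you can cut the pipeline short at any stage and inspect that stage's output directly, which is why the problem is easy to localize.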
Especially true when working on an unknown code base.<p>I frame up small projects that replicate the key parts of the software I need to interact with.<p>So often you find yourself trying to integrate with outdated code bases full of unknown dependencies and customisations that look nothing like the docs or sample code.<p>Even the smartest people can feel overwhelmed!<p>It gives you confidence when you can get a feature working from a fresh project!
There's a time and a place for hackathon-style coding.<p>But I find that I take a similar approach to Chen's. Maybe not quite so rigorous.<p>- Start with a console app (I actually always keep a console app in the solution for isolating bits of code I want to debug; see the sketch below)<p>- Scaffold the part where said code needs to go<p>- Once the code from step 1 is ready, stick it in the scaffold and debug until it looks good<p>- Work on further tests (load test, code coverage, variety of inputs)
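A rough Python analog of that scratch harness (the commenter's lives as a console app in a Visual Studio solution; normalize_phone is an invented example):

    # scratch.py -- throwaway driver kept alongside the real project,
    # for isolating one piece of code before wiring it into the scaffold.

    def normalize_phone(raw: str) -> str:
        """The bit of code being isolated and debugged."""
        digits = "".join(ch for ch in raw if ch.isdigit())
        return "+1" + digits[-10:] if len(digits) >= 10 else digits

    if __name__ == "__main__":
        # Eyeball a variety of inputs before promoting the function
        # into its real home in the codebase.
        for sample in ["(555) 867-5309", "555.867.5309", "867-5309"]:
            print(f"{sample!r} -> {normalize_phone(sample)!r}")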
Back when I was doing a lot of full-time dev, my approach was to figure out how to build Hello World (usually the device-driver equivalent), then debug it, and only then start for real. It always began with how to build it and how to debug it.
I like how people take this in different ways. For me, it's simply: compile often. Although sometimes, if I'm feeling greedy, I won't compile for quite a long time.
baby steps, baby!<p>This is a great strategy for debugging, and it means code reviewers have an easier time reviewing your incremental changes.<p>With newer teammates, I usually reject code reviews if they don't follow these practices (if the company's engineering culture is compatible with it).
The blog post illustrates a failure of mentoring junior (or even mid-career) engineers (EDIT2: but doesn't admit it; for shame!). EDIT: no one (regardless of their tenure or pedigree) should be left to their own devices to paint themselves into a corner.
I have several friends, programmers, who brag about their powers of concentration.<p>But can you turn it off?<p>To concentrate on one thing is to ignore 1000 things. If that can't be turned off, if it becomes habitual and permanent and fades to invisibility as constants and habits generally do, one might be suffering from a massive distortion of perspective and not even know it.
This Raymond Chen guy should probably look into what debuggers are available on his platform of choice, he could save time by avoiding all this printf() style debugging.