Short background: I’ve been a developer for 3 years now. Full-stack, but my current job involves a lot more JS than backend work.<p>I’ve read a lot about the importance of tests. You can assume I have a very abstract idea of topics like TDD, testable code, design for testability & the importance of tests in an evolving codebase.<p>But I’m still very much lacking the motivation to actually spend time writing tests.<p>#1. A lot of it is just procrastination: I’ve never written any useful tests before & it feels like I need to know a lot more before I can write any useful tests.<p>#2. When I do force myself to sit down and write tests, this is my thought sequence:<p>“The fake data that I’m going to pass as input for this function is a static, dead, useless, hardcoded JSON blob.”<p>“For any useful feedback from the tests, I need to be feeding in fake data that varies wildly and also closely resembles real-world, live production data.”<p>“And I have to test the same function multiple times, with multiple types of fake data (dirty testing and such).”<p>“And the module I’m gonna write tests for has, like, 148 functions!”<p>“And I have to mock this long list of interfaces & external calls.”<p>“Jeez, that’s a lot of work!”<p>“Continue with #1.”<p>I’m convinced that I have to change my perspective entirely to overcome the procrastination.<p>Developers who’ve made the transition: how did you make it? What tricks helped you change your perspective? Advice, please.
The cool thing about tests is that they let you write code bottom-up instead of top-down. Instead of having to first build a UI just to see whether your algorithm does what you need it to, you can implement the algorithm and write a little code to exercise it and make sure the results are what you expect. Then you can copy-paste that little bit of code and adjust it a bit to make sure edge-case behavior is sane.<p>Things that helped me make the adjustment:<p>- a good test harness that made writing and running tests relatively effortless
- working for a few months on a team that used a methodology that wasn't strictly TDD but had a rule that pull requests would not be accepted without test coverage. (If your own team doesn't have this requirement, you may be able to find an open source project to contribute to that does.)
- working for a different few months on a codebase that was already well tested and thus a) already had the tooling necessary to make writing tests easy, and b) contained plenty of examples of sensible tests.
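The bottom-up workflow from the first paragraph can be sketched like this (Python, with a hypothetical `median` function standing in for "your algorithm" and plain `assert`s as the "little code to exercise it"):

```python
# A small algorithm implemented bottom-up: no UI needed to verify it works.
def median(values):
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median of empty list")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# "A little code to exercise it" -- then copy-paste and tweak for edge cases.
assert median([3, 1, 2]) == 2
assert median([4, 1, 2, 3]) == 2.5
assert median([7]) == 7            # single element
assert median([-1, -5, -3]) == -3  # negatives
```

Each copy-pasted assert is already a test; dropping them into a test runner later is mostly a matter of renaming.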
It's not going to produce "perfect" tests according to the test pyramid folks, but a good way to write your first tests is this:<p>* Consider: when you make a change, how do you verify that it worked?<p>* Write a test that does THAT and verifies the result. Don't stress out about DRYing out setup yet; write a test that WORKS first.<p>* Repeat. You will start to spot places where you can apply patterns like factories etc.<p>* The tests you write this way will pretty much be "integration tests", with all the baggage that carries (brittle, slow, heavy). You'll want to keep an eye out for seams where you can add more focused testing - for instance, can you isolate the part that needs to be tested with "multiple types of fake data" and test the output of <i>that</i>?<p>You may also want to try this out on a side project first; adding tests to an existing codebase that doesn't have any is pretty much "hard mode".<p>I'd also recommend checking out talks on testing from Justin Searls (@searls).
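A first "do THAT" test can be as literal as this sketch (hypothetical `format_greeting` function; standard-library `unittest`, with no DRYed-out setup yet):

```python
import unittest

# Hypothetical function under test: the thing you would normally verify
# by loading the page and eyeballing the output.
def format_greeting(user):
    name = user.get("name") or "there"
    return f"Hello, {name}!"

class TestFormatGreeting(unittest.TestCase):
    # One test method per manual check you used to do by hand.
    def test_named_user(self):
        self.assertEqual(format_greeting({"name": "Ada"}), "Hello, Ada!")

    def test_missing_name_falls_back(self):
        self.assertEqual(format_greeting({}), "Hello, there!")
```

Run it with `python -m unittest <file>`. It's not elegant, but it captures exactly the check you were doing manually, and that's the point.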
For me, I started the practice of writing tests when I got to the point where I was applying for senior-level positions and failing to get past code reviews with feedback like "looking for someone with more experience". This motivation could be described as external.<p>My internal motivation for writing tests came from doing a lot of parallel/distributed computing. I <i>had</i> to write tests just so I could know what the heck was going on in my data transformations. I couldn't run my code on the full dataset while developing, but with tests and small subsets of data I could develop with more confidence that my code would run efficiently and correctly on the full-size set.<p>Though it took some time to really sink in, the end result was that I naturally began to write code that, in addition to being easier to write tests for, was clearer and more easily understood by colleagues, or even my future self.
1. Tools like Hypothesis (<a href="https://hypothesis.readthedocs.io" rel="nofollow">https://hypothesis.readthedocs.io</a>) and other QuickCheck-inspired libraries are really good for generating lots of fake, realistic data.<p>2. Only test public interfaces.<p>3. Only test things you <i>want to be stable</i>.<p>4. Most code has two parts: purely functional calculation, and calls to the outside world. Depending on where the tricky bits are, focus on the first part.<p>5. If testing is hard, cheat. Write one big-picture, covers-everything test. Then, once you have some coverage, you can refactor and see if you can make the code more testable.<p>More ideas on what to do when testing is a pain: <a href="https://codewithoutrules.com/2016/02/21/painfultesting/" rel="nofollow">https://codewithoutrules.com/2016/02/21/painfultesting/</a>
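Point 4, splitting the pure calculation from the outside-world calls, might look like this sketch (hypothetical names throughout; the pure part is trivially testable with plain data and no mocks):

```python
# Pure calculation: all the tricky logic, no I/O. Easy to test.
def summarize_orders(orders):
    """Given a list of {'total': float, 'status': str} dicts,
    return (count, revenue) for completed orders."""
    completed = [o for o in orders if o["status"] == "completed"]
    return len(completed), sum(o["total"] for o in completed)

# Outside-world shell: fetching and reporting live here. Cover this with
# one big-picture test (point 5), or pass in fakes as shown below.
def report(fetch_orders, send_report):
    count, revenue = summarize_orders(fetch_orders())
    send_report(f"{count} orders, ${revenue:.2f} revenue")

# The pure part needs only plain data, no mocking:
assert summarize_orders([]) == (0, 0)
assert summarize_orders([
    {"total": 10.0, "status": "completed"},
    {"total": 5.0, "status": "refunded"},
]) == (1, 10.0)
```

Because `report` takes its collaborators as parameters, even the shell can be exercised with a lambda and a list instead of a mocking library.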