Do you have tests that run every time you push and ensure that the functionality on your site works?<p>There's always a lot of debate around testing and I'm interested to see how much people do and how satisfied they are with it<p>IF YOU'D LIKE TO ENCOURAGE OTHERS TO ANSWER, PLEASE UPVOTE - TY
I answered "a few critical things" ... but, for the most part, testing is tedious, frustrating, and a time-sink for me. I recently paid someone $100+ an hour for some remote TDD coaching. It's helping a bit but hasn't really change my attitude towards testing (yet).<p>What bugs me:<p>- Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]<p>- Most bugs/edge cases I encounter in our production apps are things <i>I'd never</i> think to write a test for ...<p>- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite<p>- More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful<p>Feel free to try and convince me otherwise, but I don't ever see myself in the "test ALL the things" camp.
I test things that seem like they're important to test. I also do a lot of manual checking which boils down to "does it work?" When the manual checking is too tedious I'll write code to help. I don't do unit tests (but I don't think most people who think they're doing unit tests are, either). In general I have three big problems with the philosophy of testing, especially test-first. (Though I don't feel incredibly strongly about these--software is a big field of possibilities, to suggest One Way is the Only Way is pretty crazy.)<p>The biggest is that it encourages carelessness. I want to grow more careful and work with careful people, not the other way around. Tests don't seem to make people better at doing science--that is, people test the happy-case and don't try to falsify. Testing doesn't seem to make people better at writing code, and may even be harmful. Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down. Sure, I guess your entire project might depend on that one innocent-looking line of code you just changed, but if that's true, you have some serious design problems and testing is going to make it hard to fix those. Because, thirdly, it hinders design: it's very easy to code yourself into a corner in the name of passing a test-suite.<p>Related to the design issue is a simple fact of laziness. Your code makes a test fail. Is your code wrong? Or is the test wrong? Or are both wrong? If just the code is wrong, the correct action is to fix your code to fit the test. (Which may have serious ramifications anyway.) If just the test is wrong, the correct action is to change the test. (How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.) If both are wrong, you have to change both. Obviously people will be motivated to assume that only one is wrong rather than both, because both means more work.
I'm pretty much the one-man code shop for our startup and I still write a lot of tests. The way I think of it is this: if something is tricky enough that I need to verify it in the repl, may as well capture that validation in an automated test. The trickier, more painful tests to set up are integration tests that make sure everything is hooked up correctly, from the datastore layer to the handler to the template arguments, etc. I went through the pain to set this up so that we at least have smoke tests, e.g. every page is visited with some data populated to make sure nothing blows up.<p>A good reason to write tests beyond QA is to verify your code is at least somewhat modular - being able to get code under test ensures at least one additional use beyond being hooked into your application. For that reason, I would recommend having at least one test for every module in your code. It also makes it easy to write a test to reproduce a bug without having to refactor your code to be testable after the fact.
Early on, I asked most YC founders I met whether they did testing in the early days, and almost all of them said "no". I've also not written tests in the past simply because it's a time investment--why test if you could be working on something entirely different in a few weeks? Code can be very volatile in an early stage startup.<p>I think it makes more sense the later-stage your startup is, when you're more certain of what exactly it is you're building.
These options are flawed. I am somewhere in the middle of the first two: mostly integration tests, with critical domain logic unit tested. Certainly not 100% of the app's functionality, closer to 80%.
I was in the 'testing is too much overhead' crowd for years until one day I finally got it. I realized that as I code, I'm always testing. Who doesn't make a change and then test it? So, you consider writing a test too much overhead? How much overhead is it to manually test? How much overhead is it to fill out that registration form you're testing? Maybe there are two or three steps to it. How much time does that take each and every time you test? Being one that enjoys automating repetitive tasks, writing that test _once_ suddenly became a no-brainer.<p>This realization only made all the other arguments for testing that much stronger.
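A rough sketch of what capturing that manual registration check as an automated test might look like (Python with the requests library; the URL and form field names are hypothetical):

    import requests
    import uuid

    def test_registration_form():
        # Use a throwaway email so the test can be re-run without collisions.
        email = "test+%s@example.com" % uuid.uuid4().hex
        resp = requests.post("http://localhost:3000/users", data={
            "user[email]": email,
            "user[password]": "secret123",
            "user[password_confirmation]": "secret123",
        }, allow_redirects=False)
        # A successful signup should redirect rather than re-render the form.
        assert resp.status_code in (302, 303), resp.status_code

Write it once, and the two or three manual steps stop costing anything on every subsequent check.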
I don't believe anybody that says they test <i>all</i> functionality. Most? Sure. All? No way. Not in a non-trivial codebase.<p>Article about the group that writes the space shuttle software, sort of relevant?: <a href="http://www.fastcompany.com/magazine/06/writestuff.html" rel="nofollow">http://www.fastcompany.com/magazine/06/writestuff.html</a>
Unfortunately our sales people are obsessed with agreeing to whatever customers dictate in order to make a sale. The customer wants a full featured, fully customized, fully automated E-commerce solution and they want it for a flat $5000? Sold. Customer says "What is this 'testing' sh*t on the quote? It should just work the first time, or do you only have a Jr developer on staff who needs everything double checked for them? We can go some place more professional" and sales person replies "Oh yeah, that - you're right. Our developer is a wizard and I forgot to take that off."<p>No matter how many times I explain or quote higher or tell them the feature creep is becoming unreasonable (oh by the way, we have 18 products with complicated interactions, not the 3 we asked for on the quote, but we expect to still pay the same), such that I can't possibly write it all and test it all, they just don't listen and they leave me holding the bag. So, while I'd like to do testing, just getting the thing kind-of working isn't in the budget, never mind getting it working well.<p>Sorry for the rant and... come to think of it, it may be time for a new job.
We actually made a company to do other people's testing: <a href="http://CircleCI.com" rel="nofollow">http://CircleCI.com</a>. Really easy Continuous Integration for web apps. Email paul@circleci.com for a beta invite.<p>That said, I subscribe to the philosophy that testing is only there to support the business, not an end in itself. We often prototype features with no testing at all, because they get rewritten 3 times anyway. Often, writing the tests is what highlights flaws in our logic, so without them we would often be flying blind.<p>Testing slows down coding by about 135% (yes, more than twice as slow), but makes that time back in spades when you have to work on the same code again, or when changing lower layers (models, libraries, etc).
I think the response anyone is likely to give to this poll depends a lot on the kind of work they do.<p>When I write a software package/library, I'll usually test the hell out of it for the very same reason so many others have given: if you're testing in a REPL anyway, why not just turn those snippets into unit tests? Hardly any effort.<p>But I usually don't bother with too much automated testing for websites or web apps, because (1) it's more difficult to actually catch the errors you care about, have good test coverage and keep tests up to date than it is for back-end stuff and (2) I actually like clicking through my app for a while after I've implemented a new feature or changed an existing one.<p>Manually testing a web app allows you to catch many different kinds of mistakes at the same time. Almost like an artist looking at an unfinished painting. Does the UI look off? Does X get annoying after doing it ten times in a row? Does everything flow nicely? What is this page missing? Did that customer's feature request you got three days ago actually make sense? Questions you should be asking anyway, even with automated tests. And basic functionality is tested because the underlying packages are tested.<p>... but then again, if I was writing a website backed by a RESTful API, testing that API is as easy as doing a couple of HTTP requests and checking the responses, so you'd be stupid not to go for that quick win.<p>So my answer is "We have a test suite that tests all functionality" and "Tests? We don't need no stinking tests." at the same time.
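That quick win can be as small as this (a sketch in Python with the requests library; the endpoint and payload are made up):

    import requests

    BASE = "http://localhost:8000/api"

    def test_create_and_fetch_widget():
        # Create a resource, then read it back and check the round trip.
        created = requests.post(BASE + "/widgets", json={"name": "demo"})
        assert created.status_code == 201
        widget_id = created.json()["id"]

        fetched = requests.get("%s/widgets/%s" % (BASE, widget_id))
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "demo"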
People ... don't have tests? o_O In 2012?<p>I am seriously considering putting together a "Software Engineering for Small Teams" course or set of articles. With a little bit of expertise, you can inject testing into most projects, use the minimum of Agile that'll help, and generally massively raise your game - and by that I mean code faster, better, and more reliably, with considerably less stress.<p>(edited: turns out I forgot which year we're in :-P)
I've never done automated testing, but as I've grown as a developer and started dealing with more complicated codebases, I have come to see the importance of testing in a huge way.<p>With a small codebase that you know every inch of, it's easy to test most of your interactions before you push something live, but when you get just one order of magnitude higher you start seeing how easy it is to write code in one section of your app, test it rigorously, but not catch some subtle breakage in another (seemingly unrelated) section of your app.<p>In production software, especially if you have paying clients, this is simply unacceptable; which is why I've recently been boning up on BDD, TDD, and continuous integration and am trying very hard to slowly integrate them into my development process.<p>To one of the comments before, in my experience, automated testing should actually make you bolder with code, not more fearful. We have this codebase where I work that is a frickin mammoth of interrelated modules and it's so scary to go in there and add or change something, because I just know something else is going to break and I'm going to be stuck fixing it for days after I made the first edit.<p>This is the other reason I started exploring automated tests ... because I realized that if I had a test suite that could catch regressions when I refactor code, then I could actually spend more time whipping old code into shape instead of patching it up until such a time when I'd be able to just rewrite the whole thing.
I test almost everything in my apps and I can't imagine writing my software without it nowadays. I test my Ruby code in the backend, the CoffeeScript code in the frontend and I have integration tests to verify that the whole stack works fine.<p>It took me a lot of effort to learn it properly, I have read many books about testing, have read the tests of plenty of open source software to see how others do it and I wrote thousands of wrong tests until I got to a stage where I can say I have mastered testing.<p>I was always fascinated by test-driven development, but to be honest, it does not work for me and I seldom do it. In most cases I normally write new functionality, then I describe the behavior of it and finally do some refactoring until the code quality meets my needs. When you can refactor a class without breaking a single test, you know you've done it right.<p>It's important that you find your way and don't try to follow the rules from others. Take your time, mastering software testing is a complex discipline and it won't happen overnight.<p>Even with a high level of test coverage, I always encounter new bugs when using the software. But after fixing them and adding some tests, I know at least that I will not see the exact same bug again.<p>I believe that writing tests speeds up my development. This may seem illogical at first, but without the tests my development would slow down with increasing complexity (Lehman's Law), and instead of adding new functionality I'd find myself fixing old stuff. So testing allows me to manage a large and complex codebase, it allows me to do a complicated architectural refactoring and I know everything important still works as expected.
I write test cases based on where the project is at that point in time.
Here are three stages that can help you decide how much testing needs to be there.<p>[1] Initial stage, where we are trying to make things work. At this stage the code base is very small (< 1,000 lines). This is like prototyping. It works with limited functionality. No tests needed at this time.<p>[2] Heavy development phase. At this stage, we have proved the concept. Now we are adding a lot of new features. We have identified some features as must-haves. Also, code is getting re-factored/re-designed based on what we learn. At this stage, we add tests for the must-have functionality. Thus, we can ensure that important features are not broken by newer code.<p>[3] Mature phase. The code is mature. Most of the features are working fine. The code base may be large, 100,000+ lines. At this stage re-factoring/re-designing is not easy. Mostly incremental changes are happening. At this point, we should have upwards of 70% code coverage. Typically, the test code will be larger than the application code at 70%+ coverage. But it is very important to have tests, since they ensure that all features are tested even when a minor code change is made.
WOW! I must say that I am actually surprised how many people have replied that they do little or no testing.<p>Perhaps this is because I am in the enterprise development world as opposed to the start-up world.<p>The cost and frustration involved in delivering a critical bug into a QA or production environment is much higher than the cost and frustration of writing and maintaining tests.<p>Every action in business has a cost associated with it. The more people involved (customers, UAT, Managers, etc.) the higher the cost. The sooner you can discover the bugs and fix them the less people are impacted the lower the cost.<p>This is how you make yourself as a developer more valuable and justify your high salary/rate by ingraining habits into your daily routine that reduce costs for the business.<p>In this I also imply non monetary costs, like the personal costs involved in asking a VP to sign off on an off-cycle production release due to a bug that could have been identified by a test prior to the integration build.
In my experience, on projects with often-run automated unit test suites with good coverage, development goes faster. Part of this might be because for code to be highly testable, it usually also has to be well-designed and architecturally sound.
I used to get code back from developers EACH AND EVERY TIME with massive bugs like: unable to register, unable to login, unable to add content. I wrongly assumed that they at least ran through and checked for any bugs they introduced before sending me the new code. So each and every time I got code back I had to go through manually and check it, sign in, log out, register, add content, delete content, edit content, add category, etc...<p>I wish someone could make a simple service that allows me to set up my web app, set up test parameters that it tests each and every time, and tell me if it failed or not. I want to automate my babysitting.
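Even without a hosted service, a cron-able smoke script covers most of that babysitting (a sketch in Python with the requests library; the URLs and credentials are placeholders):

    import sys
    import requests

    BASE = "http://staging.example.com"

    def check(name, ok):
        print("%s %s" % ("PASS" if ok else "FAIL", name))
        return ok

    def main():
        session = requests.Session()
        results = []

        # Can we register?
        r = session.post(BASE + "/register",
                         data={"email": "smoke@example.com", "password": "pw"})
        results.append(check("register", r.status_code < 400))

        # Can we log in and reach a page that needs auth?
        r = session.post(BASE + "/login",
                         data={"email": "smoke@example.com", "password": "pw"})
        results.append(check("login", r.status_code < 400))

        r = session.get(BASE + "/dashboard")
        results.append(check("add/view content", r.status_code == 200))

        # Non-zero exit code so CI (or cron email) flags the failure.
        sys.exit(0 if all(results) else 1)

    if __name__ == "__main__":
        main()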
We would like to test a lot more but I really don't know how to test some of the critical stuff.<p>Just as an example, how do you test a parser that processes large amounts of sometimes sloppy semi structured text? Whether a particular defect should be classified as a bug in my parser or as a rare glitch in the source data is undecidable until I know how often the defect occurs.<p>What I need is a kind of heuristic test framework that makes sure the parser doesn't miss any large chunks that I only find out about weeks later if at all. I cannot supply individual test cases for everything that could possibly be found in the source data.
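One heuristic that works reasonably well here is to assert coverage rather than exact output: feed the parser a fixed sample of realistic input and check that it accounts for nearly all of the text, so a silently-dropped chunk fails loudly (a sketch; the `parse` function and its record format are hypothetical):

    def test_parser_does_not_drop_large_chunks():
        with open("tests/fixtures/sample_feed.txt") as f:
            raw = f.read()

        records = parse(raw)  # hypothetical parser under test

        # Sum the raw text each record claims to cover.
        covered = sum(len(r.source_text) for r in records)

        # Tolerate some noise, but losing more than 5% of the input
        # almost certainly means a whole section was skipped.
        assert covered >= 0.95 * len(raw), (
            "parser only accounted for %.1f%% of the input"
            % (100.0 * covered / len(raw)))

It doesn't decide whether an individual defect is a parser bug or a data glitch, but it catches the "missed a large chunk and found out weeks later" case early.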
I don't test as much as I probably should, because it seems cumbersome since I am mostly dealing with APIs like Facebook. For example, if a user revokes their Facebook OAuth app token they get an email notification about that from me, informing them that the app will no longer be able to function because of the expired token.<p>I am not automatically testing that, perhaps I am missing something, but automating the steps to log in to Facebook and revoke the token and then also making sure that SendGrid sent the email correctly just seem impractical.
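One way to make that practical is to not exercise Facebook or SendGrid at all and only test your own reaction to a revoked token (a sketch with Python's unittest.mock; the module paths and helper names are hypothetical):

    from unittest import mock

    def test_revoked_token_triggers_notification_email():
        user = make_user(fb_token="expired-token")  # hypothetical fixture

        # Pretend the Graph API said the token is no longer valid,
        # and capture whatever gets handed to the mailer.
        with mock.patch("app.facebook.token_is_valid", return_value=False), \
             mock.patch("app.mailer.send") as send:
            check_facebook_tokens([user])  # hypothetical job under test

        send.assert_called_once()
        (to, subject), _ = send.call_args
        assert to == user.email
        assert "no longer" in subject.lower()

Actually revoking a real token and verifying SendGrid delivery stays a manual, occasional check; the automated test just pins down your side of the behavior.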
On my latest project (Rails 3.1) I test the back-end code thoroughly, but the CoffeeScript front-end code only in a limited way. I'm using Jasmine there, but <i>that</i> is a lot of overhead.
We run tests at Absio (the place I work). Everything is supposed to have full unit test coverage, but with ship-it mode that has slipped a little lately.<p>When you commit to a personal clone of mainline and push it up to the server, Jenkins picks it up, builds it, and runs the tests, and if there are any failures notifies you over Jabber and/or email to let you know it is broken and for you to go look at it.<p>We also integrate Jenkins with JIRA, so as soon as Jenkins builds something, pass or fail, if there is a JIRA ID in the commit message a comment is automatically added there as well, which notifies anyone watching the bug.<p>This effectively allows people to see how they are coming along in terms of their progress and lets them see when stuff is broken almost instantly. Automated builds are nice because we can distribute the builds across a variety of different environments at the same time to catch cases where something worked on Mac OS X but doesn't build on Linux - that needs to be fixed.<p>It definitely has made me code more defensively, nobody wants to have their Jenkins build show up as red on the status board, and nobody wants the extra scrutiny on code review when asking to merge something back into mainline. So far it has worked fairly well with most developers doing testing.
Testing? Shoot, we sometimes code in prod!<p><a href="http://www.bnj.com/cowboy-coding-pink-sombrero/" rel="nofollow">http://www.bnj.com/cowboy-coding-pink-sombrero/</a><p>(article's not mine, but might as well be)
I do automated testing as much as I can, the main thing standing in my way is the problem of testing GUIs. GUI testing frameworks are inevitably painfully slow, don't test the appearance of a GUI and don't test things like responsiveness and the varieties of behaviors of the user message loop. I'd like to have the ability to test even more.<p>That said, I think TDD is trendy-consultant-crap. Writing a test before you write the code only works for simplistic code that doesn't need much testing and probably won't produce the right test for your code once you have written the code.<p>Also, for code I've just written, an ad-hoc manual test using the GUI is often much faster than writing a full test and I likely wouldn't ever need to run those tests again. The test suite takes quite a while to complete and if I could add every manual test I've ever run, it would take absolutely forever.<p>Something like "Zen Test", which runs the relevant tests in the background on code being changed, sounds good but I don't think there's anything like it for c++. I'm a bit doubtful it could work on complex code in any language. A lot of R-and-R magic sounds like its creators never wrote code involving more than one model method, one controller method and one view method.
Testing isn't easy, but it's also a skill you get better at over time. You get a feel for what you should and should not test. You get quick at writing units. Toughen up. Learn to test, noobs.<p>How can you refactor safely without tests? You can't.
How can you safely upgrade your tools (which often change in subtle ways), without tests? You can't.<p>"Every programmer knows they should write tests for their code. Few do. The universal response to "Why not?" is "I'm in too much of a hurry." This quickly becomes a vicious cycle- the more pressure you feel, the fewer tests you write. The fewer tests you write, the less productive you are and the less stable your code becomes. The less productive and accurate you are, the more pressure you feel. Programmers burn out from just such cycles. Breaking out requires an outside influence. We found the outside influence we needed in a simple testing framework that lets us do a little testing that makes a big difference."<p>Quote from: <a href="http://junit.sourceforge.net/doc/testinfected/testing.htm" rel="nofollow">http://junit.sourceforge.net/doc/testinfected/testing.htm</a>
I love testing, I just find using it in the right way can be very tricky sometimes. Especially in a team setting where some members are weaker than others.<p>When you work with a team of people that didn't understand what to test you end up with really bad tests that add very little value. Do you delete those tests? Write sane ones?<p>When you end up with a legacy code base where doing something like functional UI testing is easy but doing unit testing on the actual code is almost impossible, do you even attempt to unit test it?<p>If you see a piece of code that must be rewritten, but unit testing it costs too much time, do you simply start writing tests for what you think the assumptions were and then just go ahead with the rewrite?<p>In the end I see a huge value in testing what you write, and being automated is preferred. My problem becomes picking up something else that was clearly done in a misguided fashion and reliably rewriting or refactoring it. I know there are probably some guides/books out there that demonstrate it so any suggestions are welcome.
For projects I am doing on my own (albeit quite simple, just fun ones) I take a TDD approach.<p>It is really natural to me, because part of developing an idea is research. It often includes testing how a particular library works, what format of data is expected, etc. I always found myself writing small isolated programs/scripts to answer a specific question I had about it. So it was natural to start using a TDD approach.<p>As other commenters noted, with much of the logic migrating to the JS (client) side, testing it together with the server app can be a challenge. For my particular case I "solved" it by using a V8 library. I develop in Perl and there are great libraries available on CPAN, both V8 bindings and pure Perl ones. I am using V8 for performance reasons (doing encryption), but before that I used a pure Perl JS library and it worked perfectly too.<p>So if your language of choice has libraries to hook into a JS engine, I would highly recommend trying to include JS tests in your application test kit.
I work in science. We agree that testing would be beneficial, but nobody codes well enough to actually get it done.<p>To all language designers, there is a HUGE space for a better scientific language. Make it easy for Matlab users to understand, but include better encapsulation and library support. Tie in testing and proving from the core.
Testing is really the last stage. Few software suites get there. Yes, yes, I know you're supposed to build with it in mind from day 0. And if you do that, you may never get to the finish line. You exert every ounce of energy you have to making a viable product. You worry about everything else afterward.
One other beneficial side effect of having an automated test suite is that it comes in handy during any profiling one needs to do against the code base. Trigger the automated test suite from the profiler and analyze its output to find any performance bottlenecks in the codebase.<p>Also, the practice of triggering the automated tests regularly (continuous integration) and tracking the time it takes for the tests to run helps to detect early in the development cycle if any of the changes made were suboptimal. All environmental factors being equal, a newly implemented small feature shouldn't drastically increase the time it takes to run the test suite.
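A very small version of that tracking is to time the suite in CI and warn when it drifts well past a stored baseline (a sketch; the file path, test command, and tolerance are assumptions):

    import json
    import subprocess
    import time

    BASELINE_FILE = "ci/suite_timing.json"
    TOLERANCE = 1.25  # warn if the suite gets 25% slower

    start = time.time()
    subprocess.check_call(["python", "-m", "pytest", "-q"])
    elapsed = time.time() - start

    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["seconds"]
    except IOError:
        baseline = None  # first run, nothing to compare against

    if baseline and elapsed > baseline * TOLERANCE:
        print("WARNING: suite took %.1fs (baseline %.1fs)" % (elapsed, baseline))

    # Record the latest run as the new baseline.
    with open(BASELINE_FILE, "w") as f:
        json.dump({"seconds": elapsed}, f)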
For LedgerSMB, one of the really critical problems we run into is that of the legacy codebase. We test some critical things, but the legacy codebase has scoping issues that don't impact normal use in a CGI environment but impact test cases. It's one reason we are getting rid of it.<p>90% of the testing we do is actually on the stored procedures and the general framework. The reasoning here is that these areas have to work right and therefore we have to get this right all the time. Workflow and the like is more fluid and less easily spec'd out. Test cases aren't as meaningful there but we do have some.
I write tests today to make sure it works tomorrow. As projects progress, inevitably no matter what kind of ninja coder you are, a requirement that is beyond what you could have imagined will pop up. You can either say, no we can't do that... losing a competitive advantage. Or you can code without fear. Because when you're done, you have a full suite of sanity tests waiting to make sure you didn't mess things up. Unit tests can make an average guy like myself appear to be that Ninja coder those job ads are always asking for, the guy with the Oakley glasses.
In production; it's good to involve your customers in the development process so they feel included.<p>Seriously though, for small web projects I usually aim for 100% unit-test coverage on the models, 70-80% on controllers, and then, depending on the application, Jasmine or Selenium to verify the UI components are happy.<p>For larger projects, add in more integration tests (models -> controllers, controllers -> views) and something like Mechanize to do full-stack tests (models -> views).<p>Additionally, for either small or large, running some sort of lint/static analysis at the CI can be beneficial.
I don't have a problem with the idea of tests, as such, but I couldn't use them since I write webapps and what gives us issues isn't the Javascript code (about 80% of the time).<p>It is the CSS, or failing that, the interaction between Javascript and CSS, which I haven't seen any way to test automatically (such a test would be able to answer 'given this code, does the resulting DOM look like picture $N').<p>Usually when there is something wrong with the Javascript it blows up in our faces.<p>So if anybody knows of a testing framework that can do this, please tell me about it.
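One approximate answer to "does it look like picture N" is screenshot diffing: render the page in a real browser, capture an image, and compare it pixel-wise against a stored reference (a sketch assuming Selenium and Pillow are available; the URL and reference image path are placeholders):

    from selenium import webdriver
    from PIL import Image, ImageChops

    def test_homepage_looks_like_reference():
        driver = webdriver.Firefox()
        try:
            # Fixed window size so the layout is reproducible.
            driver.set_window_size(1280, 800)
            driver.get("http://localhost:3000/")
            driver.save_screenshot("current.png")
        finally:
            driver.quit()

        reference = Image.open("tests/screenshots/homepage.png").convert("RGB")
        current = Image.open("current.png").convert("RGB")

        diff = ImageChops.difference(reference, current)
        bbox = diff.getbbox()  # None means the images are identical
        assert bbox is None, "rendering changed in region %s" % (bbox,)

It is brittle across browsers and font rendering, but it does catch CSS/JS interaction breakage that pure DOM assertions miss.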
The best type of test depends on the type of software being developed. For the sort of statistical software that I have been involved with, I think that system level tests (with synthetic and/or real data) give tremendous bang for the buck. This is particularly true if the data is high volume, relatively homogeneous (in some sense), and most of the top-level interfaces are fixed fairly early on. Many other projects are not like this, and so may benefit more (proportionally) from different approaches to testing.
the answer to "how much" is aways "it depends".<p>* tech stacks evolve changing the amount of testing that is needed => most of the stacks allow to only focus on the "meat" of the logic, rather than things like integration (Spring Integration / Camel), network (Netty), cache (Redis) or even data structures (various language built ins).<p>* human is getting better with years of coding => I spot flaws and mistakes during code reviews N times faster than I did 10 years ago. I code in little pieces (usually functions), which "talk" back to me immediately even before they are finished.<p>* REPL is getting really good => Clojure, Scala, Ruby, Groovy, etc.. REPLs save lots of time and prevents mistakes: where a 5 minutes REPLay session reveals a nice and polished approach a lot quicker than a "let's try this / now rerun the test" formula.<p>* Domain knowledge and "'ve done this exact thing before" greatly impact amount of testing needed => e.g. deeper domain knowledge allows for [better] tests, while no domain knowledge requires lots of prototyping (even if you think it is the "real thing" at first, it is not, it's a prototype), and would greatly suffer from a large number of tests, as most of the time will be spent rewriting test instead of learning the domain.<p>In the end, the rule of thumb I always use is "do whatever makes sense". I don't buy TDD, ADD and other DDs. They are fun to read about, but they are too removed from the "real thing". If any DD term is needed, what I use is MSDD => "Making Sense Driven Development"
Another interesting question: how often do your tests run? Most folks probably run unit tests with continuous integration but what about functional and performance driven tests?
Unit tests. Tick.<p>Integration tests. Tick.<p>Automated acceptance tests. No Tick.<p>Tried to sell Concordion as a framework to support BDD - but that is a hard cross-discipline change which would have required more effort to push through. So as a short-term measure we have started to write/express unit tests using a standard BDD style - GIVEN x WHEN y SHOULD z. This has helped to assign value to each unit test. There is now a direct connection between the test name and the acceptance criteria specified in a user story.
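Concretely, the style is just embedding the acceptance criterion in the test name and mirroring it in the test body (sketched in Python for brevity; the invoice domain objects are invented):

    def test_given_an_overdue_invoice_when_reminders_run_should_email_the_customer():
        # GIVEN an overdue invoice (hypothetical domain object)
        invoice = Invoice(due_days_ago=10, customer_email="c@example.com")

        # WHEN the reminder job runs (hypothetical function under test)
        sent = run_reminder_job([invoice])

        # SHOULD email the customer
        assert sent == ["c@example.com"]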
You need an option for...<p>We have a test framework and a devoted team of people dedicated to encouraging the use of said framework but the rest of our engineering staff don't get it.
I just want to say, it's always a nice feeling when I get all-green output from RSpec and Jenkins. The problem is that tests, like your code, are subject to the laws of entropy that come from bit-rot.<p>So, I test things that matter and don't change <i>too</i> often - <i>core</i> business logic.<p>100% test coverage is just a goal, a bar to aim for.<p>And I'm totally with Zed Shaw when it comes to TDD - not worth it when you're still trying to get a full understanding of your problem domain.
We need to test more. I've run projects before that had over 1500 automated tests... mostly written by myself, it was beautiful and so simple to make invasive changes.<p>We have a lot of catching up to do right now, but I think that's what "startups" often do. We will catch up with the tests in the next month or so; at the end of the day I know perfectly well that without them, pivoting and making invasive changes will simply be next to impossible.
Not only does testing help with managing large codebases by being able to make actual assertions about certain parts of the code (to be able to <i>prove</i> correctness) but it also improves the <i>quality</i> of your code. If you're writing code that must pass certain tests, you inherently start to think about making that code more modular and de-coupled, i.e. injecting dependencies rather than creating them, for a start.
I don't find an applicable selection for my company.<p>We write and run so many tests that it is a full time job curating the test suites that should be run prior to code delivery. Basically, if you don't like writing tests you will be miserable at our shop.<p>The tricky part is keeping testing standards consistent when you get beyond 30 or 40 developers.<p>Developers tend to be more opinionated about testing practices than even editor selection and curly brace placement.
I really wish automated testing was significantly better for Java and its ilk. To steal from the Haskell world, I want to augment JUnit/TestNG with SmallCheck and QuickCheck.<p>The tests would go something like this:
1: SmallCheck exhaustively tests the small cases
2: JUnit/TestNG tests the main use-cases.
3: QuickCheck produces a lot of random tests and hammers the APIs.<p>Sadly (for Java at least) this appears to be a rather difficult ask.
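You can get a crude approximation of steps 1 and 3 in any xUnit framework by generating the inputs yourself: exhaustively for the small cases, randomly for the hammering (sketched here in Python rather than Java; the encode/decode pair is an invented property under test):

    import random
    import string

    def roundtrips(s):
        return decode(encode(s)) == s  # hypothetical functions under test

    def test_small_cases_exhaustively():
        # "SmallCheck" style: every string up to length 2 over a tiny alphabet.
        alphabet = ["", "a", "b", "\n"]
        for x in alphabet:
            for y in alphabet:
                assert roundtrips(x + y)

    def test_random_cases():
        # "QuickCheck" style: hammer the API with a few hundred random inputs.
        rng = random.Random(42)  # fixed seed so failures are reproducible
        for _ in range(500):
            s = "".join(rng.choice(string.printable)
                        for _ in range(rng.randint(0, 40)))
            assert roundtrips(s), repr(s)

It lacks the shrinking of failing cases that real QuickCheck gives you, but it is a cheap way to get broader input coverage than hand-picked examples.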
Bottom line is: test code is code and you have to maintain it. If you write code that does not give you anything in return, or gives you more headaches than anything else, you wasted your time when you wrote it.<p>When you are writing any code, you should try to predict if what you will gain out of it will be worth it. In other words: evaluate the risks of anything you do in your life.
One thing that automated tests do well is repeating bugs that your user finds.<p>Sometimes it can be tricky (replicating the conditions of their data set comes to mind), but it's quite good for preventing regressions.<p>That said, they can give you a false sense of security. If your test is wrong, it can allow bugs to slip through the net until your user picks them up at the worst possible time.
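The pattern is simply: reproduce the user's conditions in a fixture, assert the bad behaviour, fix the code, and keep the test around as a tripwire (a sketch; the fixture and function names are invented):

    def test_totals_handle_customers_with_no_orders():
        # Regression test for a user-reported crash: monthly totals blew up
        # for accounts that had never placed an order.
        customer = make_customer(orders=[])  # hypothetical fixture

        report = monthly_totals(customer)  # used to raise ZeroDivisionError

        assert report.order_count == 0
        assert report.average_order_value == 0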
I write web apps and I don't do any testing at all. I am also a unit testing newbie. I just run the app and make sure whatever change I made works. No automated testing whatsoever. It just works, and I believe it would be unnecessary overhead. Is this bad? If yes, how can I unit test my JavaScript? Plus I always thought unit testing is for code that compiles, right?
Here's what I clicked:
We have a test suite that tests a few critical things
We are happy with the amount of testing we do<p>Here's what I would have clicked, if present:
We have a test suite that tests a lot of things, but probably only represents 75% coverage at best.
We'd like to do more testing, and we're continually adding more, but the biggest barrier is cultural.
I think TDD at times is overkill, but the core components of any app that others stand on MUST BE TESTED. The deeper your component is, the more critical tests are. Because if code a few levels deep breaks, it is much harder to fix/detect than something on the surface, which is usually immediately visible, immediately obvious, and low risk fixing.
Isn't actually running your program and checking if it works a form of testing?<p>The term "test suite" seems to refer to formal testing techniques like creating unit tests and the like. I don't do that. But I do test my program on every functionality by running and checking if it does what it's supposed to do. Does that qualify as testing?
Testing is a pain and it takes time. Plus, we don't always even find the bugs. But if you write unit tests or want to start unit testing, Typemock's newest release (released last Monday) makes unit testing easy and finds the bugs for you. <a href="http://www.typemock.com" rel="nofollow">http://www.typemock.com</a>
I do simple output and performance testing. I md5 the output produced by our programs and test their run time, memory use, etc. so that when we change the code, we can verify the output is the same and performance is still OK. I try to do some unit testing too, but do not have time to do that as much as I'd like.
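A stripped-down version of that output-and-runtime check might look like this (assuming the program is invoked as a CLI; the command, input file, golden hash, and time limit are all placeholders):

    import hashlib
    import subprocess
    import time

    EXPECTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"  # placeholder golden hash
    MAX_SECONDS = 30  # placeholder performance budget

    def test_output_and_runtime_unchanged():
        start = time.time()
        output = subprocess.check_output(
            ["./run_pipeline", "tests/fixtures/input.dat"])
        elapsed = time.time() - start

        # Same bytes out as the last known-good run, within the time budget.
        assert hashlib.md5(output).hexdigest() == EXPECTED_MD5
        assert elapsed < MAX_SECONDS, "pipeline took %.1fs" % elapsed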
I used "and also..." as my second answer, because none of those others applied. My real second answer is "We'd liked to do more testing and we're working on it as fast as we can consistent with producing the new features and products demanded". There's a decade of code that has very little testing, still...
I highly recommend this book "xUnit Test Patterns: Refactoring Test Code" (<a href="http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054" rel="nofollow">http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/d...</a>) to anyone who wants to start to use tests on daily basis.
Currently working on a webapp with a legacy (2002-ish) Java code base with a fair amount of testing, but it's not even close to full coverage. JMockit has gone a long way towards making it easier to expand the test coverage but it's difficult to find time to make a significant impact.
I have some projects with extremely thorough test suites, and some projects with no automated testing at all.<p>I find my desire to work on those projects directly proportional to test suite coverage. Once you start writing against automated tests, there's no going back...
I would have liked an intermediate option between "test all" and "test a few critical things". Pretty much we follow the 80/20 rule with unit and integration tests, and it's served me and different teammates well over years of software development.
It's scary that so many programmers don't write automated tests when their entire profession is about abstraction and automation. If you want it to work, test it. If you want to maintain sanity, automate it. It's not particularly complicated.
I don't usually test my code, but when I do... I do it in production.
<a href="https://www.google.com/search?q=I+dont+usually+test+my+code" rel="nofollow">https://www.google.com/search?q=I+dont+usually+test+my+code</a><p>(Sorry for the obligatory meme reference)
I think your definition of testing is flawed. The majority of "testing" is just using your site to see if it is broken. You can write automation to pinpoint errors faster but it is not the way the majority of the world tests software.
Yes, we have integration + unit tests for everything that we do. Before we release any code the entire suite gets executed, which helps catch bugs quickly.<p>We even use a subset of those tests in production to make sure all sub-systems are working, not just a ping to the api.getsocialize.com domain, which isn't sufficient.<p>When I first started testing I was rather skeptical. But now that I do it, I wouldn't code without it. There are so many other benefits of testing, like cleaner code, incremental builds for native apps, and just general confidence in your deployments that allows us to deploy any time of the day without worry.
I always test my code. It saves time and money.
<a href="http://hustletips.tumblr.com/post/19348536703/vet-your-work-you-will-save-time-money" rel="nofollow">http://hustletips.tumblr.com/post/19348536703/vet-your-work-...</a>
I Don't Always Test My Code, but when I Do, I Do It in Production.<p><a href="http://troll.me/i-dont-always-test-my-code-but-when-i-do-i-do-it-in-production/" rel="nofollow">http://troll.me/i-dont-always-test-my-code-but-when-i-do-i-d...</a>
After reading this thread, I've realized I have to make sure my next employer actually believes in testing.<p>I don't know how anyone can move forward in a long-term application without having regression checks done for you in the form of tests.
I answered "We'd like to do more testing but it's too much overhead" but it's not really true. The true answer would be "We'd like to do more testing but it's hard to convince people to really write tests."
My company has a pretty complex system for testing. But almost all code that makes it into production is done with ridiculous deadlines that force us to skip all of our testing. Ain't that life?
yes yes yes and yes again
We're not obsessed with code coverage
- but we get disappointed when it falls below about 75%<p>We have a codebase which is a mixture of JavaScript MVC (Backbone) and PHP (Zend)<p>A healthy attitude to unit testing and dev-ops has saved our backs more times than I care to mention.<p>Also - it's a very useful way to "train" new developers. Spending 2-3 weeks writing tests is a great way to get a feel for
a) the code
b) house style
where new developers can be immediately productive - without risking touching production code on day 1
We have a test suite that checks for as much stuff as we can but our application is really rather complicated, and to test it fully we would need a full time test engineer.
It's never possible to remember to test for everything. And according to Murphy's Law, what will break is the one thing you forgot to test for. So then, why test at all?
A good dashboard is better than testing.<p>Whenever I deploy new code, I make sure I pass all the unit tests, but then I monitor the dashboard and incoming requests to confirm.
Testing is ideal for an established business.
Testing is inefficient for some early stage startups which very often change the way their product works (features/ui).
I would need some sort of spec or at least a vague understanding of intended behavior before testing. A good day is a day my lead dev doesn't bork the repository.
Yes I test drive my code to describe behavior and relationships between collaborators. I use my tests to validate my code and design, not to "catch bugs".
Everything through unit tests<p>All important features through functional tests<p>All critical path features through endurance tests<p>Ad Hoc + user feedback for the rest