We write a lot of what we call "smoke tests," which are essentially integration tests in most people's parlance. It's a long story, but we have a custom widget-based web framework, and one thing that enables is writing UI-level tests in-process, directly against the widget model, rather than driving the HTML output through an external client — so they can be written and run like unit tests. Our internal language has an open type system, which lets us do type-safe metaprogramming over the set of pages and widgets: if someone changes a page (say, by removing a button), the test ceases to compile. Maintenance is generally a huge problem for UI-level tests, so that's been a big win for us.<p>To test the framework itself, we mainly use Selenium against the actual HTML and JavaScript output. For things Selenium can't easily handle, we've also used the Watir library (<a href="http://wtr.rubyforge.org/" rel="nofollow">http://wtr.rubyforge.org/</a>).<p>Our real unit tests are built on top of JUnit, with our own platform-specific extensions.<p>To run the tests and report results, we have a custom application built on our web framework and the rest of our software stack. We originally used CruiseControl, but we outgrew it fairly quickly.
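To make the compile-time-safety idea concrete, here's a minimal sketch in plain Java (our actual framework and language are different; every name here — CheckoutPage, Button, click — is hypothetical). The point is that tests reference widgets through a typed page model, so deleting a widget from the page deletes the field, and the test stops compiling instead of failing at runtime:

```java
public class WidgetModelSketch {

    // Typed model of a page: one field per widget. If someone removes
    // submitButton from the page, this field disappears from the model
    // and the test below stops compiling.
    static class CheckoutPage {
        final Button submitButton = new Button("Submit");
    }

    static class Button {
        final String label;
        boolean clicked;
        Button(String label) { this.label = label; }
        void click() { clicked = true; }
    }

    public static void main(String[] args) {
        CheckoutPage page = new CheckoutPage();  // runs in-process, no HTML, no browser
        page.submitButton.click();               // compile-time-checked widget reference
        if (!page.submitButton.clicked) throw new AssertionError("button not clicked");
        System.out.println("smoke test passed");
    }
}
```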
It does a lot of stuff for us, but the key abilities are 1) parceling the tests out to a farm of servers and 2) assigning test breaks to users (basically a guess based on who changed which files in source control since the last successful run). With more than a few people on a team, unless each test break is made one specific person's problem to resolve, you get massive diffusion of responsibility: no one thinks it's their break.<p>We also have an internal performance-testing harness for running load tests, which we write using the same framework as the smoke tests described above (though in that case the clients are, naturally, remote).
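The break-assignment heuristic can be sketched roughly like this (a simplification with made-up names; the real system reads change history out of source control rather than taking a hard-coded list). Authors who touched files the failing test depends on get the break; if no one did, it falls back to everyone who committed since the last green run:

```java
import java.util.*;

public class BreakAssigner {

    // One (author, file) pair per change since the last successful run —
    // in the real system this would come from source-control history.
    record Change(String author, String file) {}

    // Assign the break to authors who touched files the failing test
    // depends on; fall back to everyone who committed at all, so the
    // break is always somebody's problem.
    static Set<String> suspects(List<Change> changes, Set<String> failingTestFiles) {
        Set<String> direct = new LinkedHashSet<>();
        for (Change c : changes)
            if (failingTestFiles.contains(c.file)) direct.add(c.author);
        if (!direct.isEmpty()) return direct;
        Set<String> everyone = new LinkedHashSet<>();
        for (Change c : changes) everyone.add(c.author);
        return everyone;
    }

    public static void main(String[] args) {
        List<Change> changes = List.of(
            new Change("alice", "checkout/Cart.java"),
            new Change("bob", "billing/Invoice.java"));
        // The billing test break gets assigned to bob, not the whole team.
        System.out.println(suspects(changes, Set.of("billing/Invoice.java")));
    }
}
```

It's only a guess — the assignee may just be the person best placed to find the real culprit — but it beats a break that belongs to no one.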