Is writing browser automation tests enough (e.g., assert that a page has these inputs and a button; before the button is clicked, there are 0 records in the DB table; after clicking it, there is 1 record), or do we want to write tests at the levels below as well (e.g., after writing browser automation tests, write feature tests, and then unit tests)?
TDD means writing tests before writing the actual code, which presupposes writing very granular unit tests.<p>In practice, though, what you want is sufficient coverage of your business logic. If you are well versed in TDD, your code architecture will be sufficiently decoupled even if you don't do TDD per se, so you should be able to test whatever is important for your application.<p>However, note that testing each layer independently generally gives you the best bang for the buck: trust that you can update code without worrying about breaking something. Then you only need minimal integration tests to ensure the integration points are not broken.<p>What you describe sounds like full system tests, which are the slowest kind; you should have only a few of them in comparison, basically to replace manual QA.<p>Note that everybody uses slightly different definitions for which tests are which, but I hope you get the point.
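To make "testing each layer independently" concrete, here is a minimal sketch in Python. All names (`OrderService`, `InMemoryRepo`, `place_order`) are invented for illustration; the point is that the business rule is exercised through a cheap in-memory stand-in for the database, so the test needs no browser and no real DB:

```python
class InMemoryRepo:
    """Stand-in for the persistence layer, so the logic test needs no database."""
    def __init__(self):
        self.records = []

    def save(self, record):
        self.records.append(record)


class OrderService:
    """Business logic under test, decoupled from storage via the repo interface."""
    def __init__(self, repo):
        self.repo = repo

    def place_order(self, item):
        if not item:
            raise ValueError("item required")
        self.repo.save({"item": item})


# Unit test: fast and isolated, covers the business rule directly.
repo = InMemoryRepo()
service = OrderService(repo)
service.place_order("book")
assert len(repo.records) == 1
```

A separate, much smaller set of integration tests would then check only that the real repository implementation honors the same `save` contract against an actual database.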
It is the opposite.<p>Browser automation tests are horrifically expensive. Unit tests are cheap if (1) you don’t let them get expensive (yesterday I wrote tests for a process that needed key pairs; I generated the keys once and hard-coded them) and (2) the design is appropriate.<p>One design might be untestable. Another might sacrifice good design for flexibility. There is usually a third design that is both testable and good in other respects.<p>The worst cost of tests is that they take a long time to run.
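The "generate once, hard-code it" trick from the parent might look like this sketch. Everything here is invented for illustration (the "key pair" and "signing" are toys standing in for real, slow crypto such as RSA key generation); the idea is simply that the expensive step runs once offline and its output is pasted into the test as a fixture:

```python
import time

def expensive_keygen():
    # Stand-in for real key generation, which can take seconds per pair.
    time.sleep(0.01)
    return ("pub-abc123", "priv-def456")

# Fixture: generated once offline, pasted in, reused by every test run.
# The test suite never pays the generation cost again.
FIXED_KEYPAIR = ("pub-abc123", "priv-def456")

def sign(message, keypair):
    # Toy "signature" for illustration only; not real cryptography.
    return f"{message}:{keypair[1]}"

# The test uses the hard-coded pair instead of calling expensive_keygen().
assert sign("hello", FIXED_KEYPAIR) == "hello:priv-def456"
```

With a real library you would do the same thing: run the generator once, paste the resulting PEM strings into the test file (or a fixture file), and drop the generation call from the hot path of the suite.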