I'm sure that object decomposition does not, per se, lead to code obfuscation. If the design is decent, it should be, if not obvious, then at least possible without too much ado to determine where in the file tree methods shaved off larger objects should be placed, so that somebody who knows how the file names are chosen, but doesn't know the code itself, can find them by looking at a list or a diagram.<p>I find a "limit" at indentation depth 1 a bit overly harsh, but I at least casually consider it whenever I reach depth 2, and at depth 3 I <i>really</i> try to bail out, foregoing "else", too. And sometimes I even do it at depth 1, and get a nice warm fuzzy feeling from that. <i>Edit: That was underplaying it. "Calisthenics", cleverly used, really</i> do <i>help to organise the code if you have something conceptually larger, e.g. a DSL, to capture the knowledge as it is abstracted from the data.</i><p>Then again, I prefer function composition over inheritance whenever I can, so my whole working method is geared towards managing many small files. Systems of names, essentially. I think a lot in terms of names and naming systems.<p>Transparency, AFAICT, needs to be built into the design via an ongoing effort at separation of concerns (systematic naming is crucial there). That has had me trending towards smaller objects over the years. That's just my experience; YMMV. But I don't see how moving from, say, 50-LOC objects on average to 100-LOC objects would make my codebase any less "obscured".
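<p>For what it's worth, the "bail out early, forego else" move I mean looks roughly like this (a sketch; the function and field names are invented for illustration):

```python
def ship_order(order):
    # Each precondition exits early, so the happy path stays at
    # indentation depth 1 and no "else" branches are needed.
    if order is None:
        return "no order"
    if not order.get("items"):
        return "empty order"
    if not order.get("paid"):
        return "awaiting payment"
    return f"shipping {len(order['items'])} item(s)"
```

The nested equivalent would push the interesting line three levels deep, with each "else" carrying an implicit negation the reader has to track.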