I've been at Google for almost 4 years, having worked mostly at startups before that. First and foremost, there's nothing special about Google (compared to any other tech company), and it's best not to evangelize any one company. Learning comes from experience, which means trying things (even if you end up making mistakes) and figuring out the shortcomings of a particular solution. I've seen a lot of bad (and a lot of good) code at Google.<p>What I learned, design-wise, came from a coworker. They planned everything out in advance: they'd draw the components on a whiteboard and define what each would do, and a few weeks would pass before they wrote any code. In contrast, I would start by writing skeleton classes and intuiting where a new class should be broken out. We ended up with similar code structure at the end of the day, since we were both designing the architecture (even if our methods were different). But in code reviews, I would focus on whether the new code made sense within the scope of the change: were there any clear bugs, was there code duplication, was there a simpler approach, etc. My coworker always went back to the whiteboard and drew up the components again, checking whether the new code fit within the originally defined scope or whether a larger change would make more sense. Because my skeleton classes were quickly replaced with implementations, I lost my original frame of reference and focused only on what was in front of me. I could still tell when something was obviously wrong (stop sending me CLs that use global state to cross-talk between components!), but complexity crept into the CLs I approved, and the code would slowly get more complicated until I realized it needed to be refactored.<p>There's a tradeoff (as with anything). You can't design your own solution for every code review; it takes too much time and removes autonomy from your peers. But remembering to step back and look at the big picture was something I needed.<p>There were other things I learned, like structuring code in ways that make it hard to write bugs. For example, you could write a method like...<p><pre><code> void foo(Callback callback) {
   if (error1) {
     callback.onError();
     return;
   }
   if (error2) {
     callback.onError();
     return;
   }
   if (error3) {
     callback.onError();
     return;
   }
   doWork();
   callback.onSuccess();
 }
</code></pre>
but it's easy to forget to call 'callback.onError()' in a future CL. A small refactor, like...<p><pre><code> void foo(Callback callback) {
   if (bar()) {
     callback.onSuccess();
   } else {
     callback.onError();
   }
 }

 boolean bar() {
   if (error1) {
     return false;
   }
   if (error2) {
     return false;
   }
   if (error3) {
     return false;
   }
   doWork();
   return true;
 }
</code></pre>
lets the compiler help you catch mistakes, since every path through bar() has to return a value. Unit tests obviously help too, but doing both reduces the number of bugs that slip through. Similar tricks include annotating methods as @Nullable so you don't forget to null-check, annotating which thread is expected to call a method (e.g. @FooManagerThread, @UiThread, etc.), and doing all the if checks as close to the top of a method as possible so that you only do work when you're in a good state.
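<p>To make that concrete, here's a rough sketch of those annotation tricks. It assumes androidx-style @Nullable/@UiThread annotations (any nullness/threading annotation library works similarly); @FooManagerThread, FooManager, and its methods are made up for illustration...<p><pre><code> import java.lang.annotation.Retention;
 import java.lang.annotation.RetentionPolicy;
 import androidx.annotation.Nullable;
 import androidx.annotation.UiThread;

 // Custom marker annotation: "this method runs on FooManager's thread".
 // It does nothing at runtime; its value is that static analysis (and
 // reviewers) can check call sites against it.
 @Retention(RetentionPolicy.SOURCE)
 @interface FooManagerThread {}

 class FooManager {
   // @Nullable tells callers (and static analysis) that a null check is
   // required before using the result.
   @Nullable
   String findSessionId(String user) {
     return user.isEmpty() ? null : "session-" + user;
   }

   // @UiThread documents which thread is allowed to call this method.
   @UiThread
   void updateBadge(int count) {
     // ... update UI state here ...
   }

   @FooManagerThread
   void handleEvent(String event) {
     // ... work that must happen on the manager thread ...
   }
 }
 </code></pre>
Static analysis tools (e.g. Android lint) can then flag callers that ignore the nullness or threading contracts at build time, instead of you finding out at runtime.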
<p>Oh, and here's one last tip that apparently needs reiterating, because most of my coworkers forget it: validate incoming data! Every API needs a wrapper around the entry point that...<p><pre><code> * Verifies the caller is allowed to call that method
 * Verifies the method can be called at this point in time (e.g. hackerNewsApi.postComment(threadId, msg) only works if threadId refers to a real thread)
 * Verifies that the arguments make sense (e.g. 'msg' is not null/empty and does not exceed the max comment size)
</code></pre>
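<p>Sketched out with made-up names (Session, threadIds, MAX_COMMENT_SIZE, and doPostComment are all placeholders; the real checks depend on your API), such a wrapper might look like...<p><pre><code> import java.util.Set;

 class CommentApi {
   static final int MAX_COMMENT_SIZE = 8192; // assumed limit

   interface Session { boolean canComment(); }

   private final Set&lt;Long&gt; threadIds; // ids of threads known to exist

   CommentApi(Set&lt;Long&gt; threadIds) {
     this.threadIds = threadIds;
   }

   void postComment(Session caller, long threadId, String msg) {
     // 1. Verify the caller is allowed to call this method.
     if (caller == null || !caller.canComment()) {
       throw new SecurityException("caller may not post comments");
     }
     // 2. Verify the method can be called at this point in time.
     if (!threadIds.contains(threadId)) {
       throw new IllegalStateException("unknown thread: " + threadId);
     }
     // 3. Verify the arguments make sense.
     if (msg == null || msg.isEmpty() || msg.length() > MAX_COMMENT_SIZE) {
       throw new IllegalArgumentException("bad comment body");
     }
     // All checks passed; only now do the real work.
     doPostComment(threadId, msg);
   }

   private void doPostComment(long threadId, String msg) {
     // ... actual implementation ...
   }
 }
 </code></pre>
Failing fast at the entry point keeps bad state from propagating inward: once the wrapper has run, internal methods like doPostComment can assume their inputs were already checked.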
And these checks are needed at every layer (application, server, etc.). Trust no one, even if the only caller is supposed to be yourself.<p>Glossary:
CL = changelist ~= pull request