Lots of interesting stories.<p>Disclaimer: I worked at Google for 4 years ('06 - '10), interviewed a lot of folks (it was always a part of the job), and did a number of phone interviews too.<p>The process then (as perhaps now) was broken, and some folks within Google understood that. The goal was pretty simple: hire smart people who get things done.<p>That, like the phrase "largest integer," is easy to say and rolls off the lips, but when you need to actually write out what it means it gets a bit squirrely.<p>The first challenge is: what does "get things done" mean? Well, for college students it means you got your diploma and at the same time contributed to some FOSS project. For people with 0 - 5 years of experience it means you shipped a product where you did most of the coding. For people with 5 - 15 years of experience it means you shipped a product where you did most of the coding. For people with 15 - 25 years of experience it means you shipped a product where you did most of the coding.<p>Did you see what I did there? Google wanted smart people, but the definition of smart was "you write a lot of code," and "get things done" was "that code shipped in the product/project." Fundamentally they didn't have any way to judge or evaluate the 'goodness' of what someone did if it wasn't writing code. Designers don't write a lot of code, and there generally isn't a good metric for what constitutes good design that can be empirically tested. The process has a hard time accommodating that. And if you're "good" at spotting problems in a process, or at getting folks organized around some better way of doing things? That's not measurable either.<p>There was a company, BASF, a chemical company, which had an advertising campaign around the fact that they were part of the process and materials that made quality products; their tag line was "We don't make the products you buy, we make them better."
[1] And I noted that Google was exceptionally bad at hiring "BASF" people, which is to say people who bring the quality of other people's work up, or products up, or processes up.<p>The people who did those roles at Google all started out as coders, and that is how they got hired. It was only after they were working there that they (and Google) discovered they had this leveraging effect.<p>In order to keep bias out of the process, Google isolated the steps where bias could creep in: it separated the folks who decided hire / no-hire from the folks who decided on compensation, and the folks who decided to hire from the folks who decided which project you worked on. For all my time there, you could not interview for a specific job; you interviewed to get 'in', and then your name showed up on a list and the allocation process would determine which project got you.<p>Often a candidate would ask during the interview, "What would I be working on?" The only truthful answer was "That is impossible to say."<p>Before you even get to that point, though, you get into "the system." Google keeps a record of everyone it has interviewed, or who has shown up as a lead and not been interviewed. There is a long, long list of people (I once joked that it was everyone in the market). If you are an employee who might know a candidate on that list (common employer, common university, etc.), the system could automatically send you an email asking for your opinion of them.<p>This isn't really any different from any other company: person X shows up in the candidate list, and people at the company who worked at person X's former company are asked if they knew this person when they were there. But it can have unintended consequences.<p>Let's say there is a person X, who gets hired from company Y, and person X really didn't fit in at Y and felt really abused by the company. Now new candidates from Y generate an email to X with the standard "You worked at Y when candidate Z did etc etc."
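That overlap-matching step, as I understood it from the outside, can be sketched roughly like this (the names, data shapes, and matching rule here are my own guesses for illustration, not Google's actual system):

```python
# Hypothetical sketch of "who should we email about this candidate?"
# matching: find employees whose history shares an org and a time
# window with the candidate's history.

from dataclasses import dataclass

@dataclass
class Stint:
    org: str          # employer or university
    start: int        # year joined
    end: int          # year left

def overlaps(a: Stint, b: Stint) -> bool:
    """True if two stints share an org and overlap in time."""
    return a.org == b.org and a.start <= b.end and b.start <= a.end

def potential_referees(candidate_history, employees):
    """Return (employee, shared org) pairs to email for an opinion."""
    hits = []
    for name, history in employees.items():
        for e_stint in history:
            for c_stint in candidate_history:
                if overlaps(e_stint, c_stint):
                    hits.append((name, e_stint.org))
    return hits

candidate = [Stint("CompanyY", 2003, 2006)]
employees = {
    "X": [Stint("CompanyY", 2002, 2005)],   # overlapped: gets an email
    "W": [Stint("CompanyQ", 2001, 2008)],   # no overlap: not asked
}
print(potential_referees(candidate, employees))  # [('X', 'CompanyY')]
```

The key property for what follows is that the match is purely mechanical: anyone who overlapped gets asked, regardless of what their relationship to the candidate actually was.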
Now person X is still pissed off about how Y treated them, and so they respond to all of those emails with "Yeah, candidate Z was a crappy engineer, everyone had to carry them, they never did anything useful." Maybe someone else from Y says "candidate Z was great, everyone turned to them for advice." The process of separating the interviewers from the decisions means that this feedback bubbles up equally weighted. It's hard to know that employee X has said the same thing about every candidate who has come from Y, and if the committee sees two comments, one positive and one negative, and there isn't anyone on the committee who knows any different, then how do you evaluate?<p>The simplest solution, if either has an equal probability of being the 'correct' assessment, is to pass on the candidate <i>because you can't know if you have bad data.</i> And that was a part of the process that was fundamentally broken.<p>Because Google gets a metric crap load of resumes and candidates all the time, passing on someone who is +1/-1 like that makes sense, because you can't know which of the two feedback comments more accurately reflects the real candidate's behavior. The result is that hiring someone with a grudge can poison the feedback pool for a bunch of possible hires. If you weren't Google and didn't have this huge backlog of candidates, you might dig deeper to find out which one was the more accurate representation, but if you are Google you just move on.
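The failure mode above boils down to a tiny decision rule. Here's a toy model of it (my own framing of the committee logic, not anything Google published): with no way to weigh sources, any conflict defaults to "pass," so one grudge-holder can sink every candidate from a given company.

```python
# Toy model: feedback is a list of +1 (positive) / -1 (negative)
# comments, all equally weighted because the committee can't tell
# a grudge from a fair assessment.

def committee_decision(feedback):
    """Return 'proceed' or 'pass' for a candidate."""
    if not feedback:
        return "proceed"           # no signal; rely on interviews alone
    if all(f > 0 for f in feedback):
        return "proceed"
    return "pass"                  # conflicting data you can't disambiguate

print(committee_decision([+1, -1]))  # pass: X's grudge cancels the praise
print(committee_decision([+1]))      # proceed
```

Note that under this rule a single persistent -1 voter turns every `[+1, -1]` candidate from their old company into a "pass," which is exactly the poisoned-pool effect.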
Externally, that sometimes looks like Google just stopped answering the phone.<p>It also means that you miss out on quality people who would be good for the company, and ultimately Google will have to find a way to address that issue (if they haven't already) because they are running out of people to interview.<p>As with most things Google, you combine a data-driven, automation-friendly process with fuzzy data and alternate-agenda actors, at the scale Google runs at, and you get lots of weird artifacts.<p>[1] <a href="http://www.youtube.com/watch?v=6ksUNyhQjLE" rel="nofollow">http://www.youtube.com/watch?v=6ksUNyhQjLE</a>