科技回声 (Tech Echo) — a tech news platform built on Next.js, providing global tech news and discussion.

Ask HN: Measuring the long-term benefit of interview code tests?

57 points, by traviskuhl, over 3 years ago
If your company does coding tests during the engineering interview process, (how) do you measure the long-term effectiveness of the tests? Do you keep internal metrics comparing candidates' scores to their long-term impact/success at the company? If yes, what have you learned from the results, and how have those learnings affected your hiring process?

19 comments

kasey_junk, over 3 years ago
I don't know if my current company does, but when I first implemented them for a company I worked for ~15 years ago, we definitely did.

At that company (a ~200-engineer, privately held software company) we found a few things:
- In-person tests were less predictive than take-home tests.
- Tests that did not provide automated test cases as examples were less predictive than those that did.
- There was virtually no predictive power to "secret test cases" that we ran without providing to the candidate.
- No other part of the interview pipeline was predictive at all. Not whiteboarding, not presenting, not personality interviews, not culture-fit testing, not credentials or where experience came from, nothing. That was across all interviewers and candidates.

A few caveats:
- This was before take-home testing became widespread and many companies screwed it up. At the time, candidates saw it as novel and interesting, not as one more painful hoop to jump through.
- We never interviewed enough candidates to reach true statistical significance.
- False negatives were our biggest concern; they are extremely hard to measure (and potentially open you up to lawsuits). The best we managed was making our pipeline less selective to account for them. This did not seem to reduce employee quality.

In a more meta sense, that experience led me to believe that strict hiring pipelines are largely not useful. Bad candidates still get through and good candidates don't. Also, many other things have a far bigger impact on productivity than whether a candidate was "good". It turns out humans do not produce at consistent levels all the time, and things outside what you can interview for matter more (company process, employee health, life events, etc. all have far more impact on employee productivity than an interview-time "score").
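The kind of predictiveness check this comment describes boils down to correlating an interview-time score with some later outcome measure. A minimal sketch in Python; the data and the choice of outcome metric here are hypothetical, since the comment does not say what that company actually measured:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: take-home scores (0-100) at hire time, paired with a
# later performance measure (e.g., an averaged review score after two years).
take_home = [62, 85, 71, 90, 55, 78]
later_review = [3.1, 4.2, 3.4, 4.0, 2.8, 3.9]

r = pearson(take_home, later_review)
print(f"correlation between take-home score and later review: {r:.2f}")
```

Comparing this coefficient across pipeline stages (take-home vs. whiteboard vs. culture fit) is one simple way to see which stage, if any, carries signal, with the caveat the commenter notes: at small hire counts the estimate is very noisy.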
lbriner, over 3 years ago
We don't use coding tests in this way. We use them as a screening step to ensure the candidate is in the right ballpark.

If we are recruiting a senior, we expect them to complete basic technical tests easily. If they are more junior, we might use the tests only as an indicator of ability.

I don't particularly expect a strong correlation between how well they did on the tests and their long-term value, since that value is made up of many things, only one of which is their performance on the tests.
cap10morgan, over 3 years ago
In my experience, the answer to "Does your company measure the long-term benefit of X?" is 99.99999% "no" for any X.
psadri, over 3 years ago
I'd like to point out that success in a role depends on more factors than the technical interview.

I have found that investing the time to onboard new team members properly makes a huge difference. Onboard an average/good hire well and they go on to produce solid output and often thrive. On the other hand, you could have a great new hire who "sinks" instead of swims because of no or poor onboarding.
vannevar, over 3 years ago
The company would have to be fairly large (>100 employees) and long-lived (>10 years) to generate enough data for any hope of statistical significance. Employee "success" depends on many factors, and an employee who seems to be a failure in the short term may end up becoming very successful (or vice versa) simply because of external circumstances: the nature of the projects, the clients, colleagues, etc.
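The point about scale can be made concrete. For a Pearson correlation r on n pairs, the test statistic is t = r·sqrt((n−2)/(1−r²)); approximating the t distribution by a normal one (critical value ≈ 1.96 at two-sided α = 0.05), we can solve for the smallest sample size at which a given true correlation would register as significant. A rough back-of-the-envelope sketch:

```python
import math

def min_n_for_significance(r, z_crit=1.96):
    """Smallest sample size n at which a Pearson correlation of magnitude r
    clears |t| > z_crit, using t = r * sqrt((n - 2) / (1 - r**2)) and a
    normal approximation to the t distribution. Assumes 0 < |r| < 1."""
    n = 3
    while abs(r) * math.sqrt((n - 2) / (1 - r * r)) <= z_crit:
        n += 1
    return n

# A weak-but-real correlation of 0.2 (plausible for a noisy interview
# signal) needs on the order of a hundred measured hires:
print(min_n_for_significance(0.2))  # → 95
print(min_n_for_significance(0.5))  # → 14
```

So unless interview scores are strongly predictive, a company needs roughly a hundred hires with measured long-term outcomes before the correlation clears significance, which is consistent with the size and longevity estimates in the comment above.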
boldslogan, over 3 years ago
And maybe a follow-up question (to measure the false negatives):

Do you check where the applicants you rejected based on their test ended up working? E.g., you are a mid-tier startup that rejects someone who ends up working at Amazon as a high-level engineer: do you mark that as a failure?
daviddever23box, over 3 years ago
For developers, coding tests that include deployment/infrastructure components (i.e., deploy your solution to a cloud container, or build and compile your solution for desktop platform testing) are uniformly consistent with long-term impact/success. Problem solving at the algorithmic layer may be inversely correlated with success if a candidate lacks a production skill set.

Unless your focus is research and development, there is a non-zero cost to training for production skills, so it's best to start with someone who understands the delivery process.

Linear metrics are probably less useful, inasmuch as it will become rather obvious which employees are self-starting and work well with others, versus those who require motivation or are staunch individualists.
kqr, over 3 years ago
The more fundamental question: is your company meaningfully able to measure the long-term impact/success of its employees? If so, how?

The submitted question seems to brush over this aspect, but so far, when I've tried to evaluate interviewing techniques, that has been the primary obstacle: people just can't agree on what success means once someone is employed, so anything that tries to correlate interviewing with it will be just as much junk.
poulsbohemian, over 3 years ago
I think my favorite story about code tests was when one interviewer presented the test, gave me 24 hours to complete it, and I was then supposed to be "graded" by a second team member. The second guy obviously didn't understand the requirements of the coding test (despite presumably receiving the same written instructions I did), so he rejected me outright. Which I guess gets to my thinking on coding tests: you often learn a lot about companies from the crappy "tests" they think have merit.

I have interviewed hundreds of technical people in my career, across dev, test, and ops skill sets. I saw limited correlation between tests and aptitude. If you talk to someone about a project they've done, you know pretty quickly:

1) Can they communicate technical ideas?
2) Can I develop a rapport with this person and work together?
3) Do they understand what they built? Can they talk about the tradeoffs they made? Did they learn anything from the experience?

A fizz buzz test isn't a terrible idea, but you also need an interviewer who understands how to administer it within the wider context of the interview. If the interviewers themselves don't understand it, they aren't qualified to administer it.
dreen, over 3 years ago
There is no score or measurement. The task is to write a stopwatch in any technology you want and explain it along the way. Then we put in some bugs and ask for troubleshooting. It's all about the approach to the problem.
andrew_, over 3 years ago
I've never worked for a company that did (18 years in the industry this year). Of the 8 companies I've worked for, only one had interviewing figured out, and they didn't track or measure metrics on coding tests, challenges, etc. They did allow the challenges to evolve, and the challenges were tailored to the positions they were for.
AnotherGoodName, over 3 years ago
The big FAANGs do, for what it's worth. They have entire ML pipelines looking at hiring. The following isn't about interview effectiveness, but it is one example of the analysis done:

https://catonmat.net/programming-competitions-work-performance
xeromal, over 3 years ago
As long as we keep finding good people and are not understaffed, it's working for us. No more metrics are needed than that.
ipnon, over 3 years ago
Not empirically, but my manager focuses primarily on the engineering experience for our team and for potential hires. This means gathering explicit feedback and modifying our process accordingly.

We have short, standardized, broad interviews. We look for what a candidate can add to the team rather than poking holes, and we're still trying to improve.
Aeolun, over 3 years ago
We don't do coding tests at all. We do one 30-60 minute interview that covers some general tech questions and motivation.

So far we've hired 7 decent and 3 great people. No truly bad people have made it through that pipeline yet.

I can't say anything about why, and I'd be prejudiced in any case.
nonameiguess, over 3 years ago
The only way this could even conceivably be done in a scientifically valid way is with randomized controlled trials, which would mean not giving the same interview to all candidates. That is only possible if you hire at a large enough scale to sample meaningfully from multiple "interview type" groups, and it would of course require it to be legal to give different interviews at random, which I'm not sure is true. I guess as long as the assignment is actually random, you're not discriminating against any specific group, but it isn't exactly fair, and you risk killing the goodwill of your employees when they realize you're running experiments on them.

Of course, it's really not possible to do this at the level of rigor expected of, say, clinical trials. Each new hire will know what type of interview you put them through, and there is no reliable way to prevent them from telling others.
a_c, over 3 years ago
I would say anything with only an indirect correlation has no easy way to be measured. Ultimately, a company is looking for product-market fit, customer growth, or revenue/profit/cash flow, depending on which stage the company is at.

On top of being hard to measure, the data points generated through hiring are just too few, and the data collection process is too long and subjective.

Just ask your team if they like the new hire and whether they can make progress together. Things like: do you like working with the new hire? Is the new hire bringing new insights to the team? Is the new hire easy to work with? Is the new hire learning new things?

And most importantly, can the team let go of a mismatch fast enough? Overall, I would say measuring hiring is just not worth it.
nitwit005, over 3 years ago
Of course not.

However, we do hire some contractors essentially without an interview, and it is fairly apparent that's a bad idea.
maxgfaraday, over 3 years ago
The main thing that matters is training managers properly: training management to be clearer about how they communicate goals and how transparent they are. The fault is not with candidates. Making sure a candidate can communicate clearly and effectively and has some passion for the position is all you can really do at the interview level. The rest is, quite frankly, having better management and a culture of being helpful. Metrics on your org should be about how clear the processes and planning toward goals are, and how well they get communicated and executed.

I worked for a MAAAN company, and this one didn't get it right. I figured they had simply decided it was better to crank through people than to actually grow them, since they were never short of candidates. This was pretty clear from their promotion culture and from assessments that rewarded selfishness.

Bottom line: train managers. Build the scaffolding to grow competent, empathetic managers. Communication, clarity, and empathy win over everything else. F** programming test hazing. Commit to the people in the organization. Done.