
Bayesian Inference for Hiring Engineers

50 points | by Harj | almost 7 years ago

11 comments

KirinDave, almost 7 years ago
Am I the only person getting progressively more creeped out by the series of bizarre, unsourced, pseudo-scientific ads TripleByte has been running?

On Reddit right now they're running this weird faux-linear-algebra thing where they imply that they can build a vector of your skills vs. a vector of job requirements and get a meaningful answer via the dot product.

Which is a bit like saying you can predict the weather by taking the dot product of a vector of ocean and air. What even are the units? What does any of this even mean?

Hiring is a challenging, multi-dimensional thing. It involves a high-risk and ideally informed decision by multiple parties. Doing it effectively is hard. Doing it effectively and respectfully is harder still. And yet TripleByte comes in and says, "We sound vaguely like machine learning. We got this."

Honestly, they make HackerRank, which was another extremely sketchy organization making a lot of very questionable decisions, look reasonable by comparison.
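For concreteness, here is a literal, toy rendering of the pitch being mocked above; the skill names and weights are invented for illustration and are not from any TripleByte ad. The sketch only shows what "score a candidate by a dot product" would mean mechanically, and why the resulting number is hard to interpret without agreed units or scales:

```python
# Hypothetical "skills" and "requirements" vectors; values and keys are made up.
skills       = {"python": 4, "react": 2, "sql": 5}   # self-rated 1-5? years? unclear
requirements = {"python": 3, "react": 5, "sql": 1}   # importance? seniority? unclear

# Dot product over the shared keys.
score = sum(skills[k] * requirements.get(k, 0) for k in skills)
print(score)  # 27 -- a number, but 27 of what?
```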
yosito, almost 7 years ago
Not a fan of Triplebyte. I'm a full stack engineer with 10 years of experience building web apps, and while I'm not the best in the world, I'm still pretty damn good. I took Triplebyte's interview a few months ago, and they rejected me with a link to a tutorial on how to build your first webpage (https://learn.shayhowe.com/html-css/). This company knows absolutely nothing about hiring good engineers.
compumike, almost 7 years ago
Companies seem to be adding more screening steps to try to reduce their false positive rate -- the rate at which they interview people who aren't hireable.

But most don't seem to understand that, mathematically, the tradeoff is a higher false negative rate.

Screening false negatives are people who would have done well in an interview but don't make it to that stage. They are mostly hidden from the company, yet they are expensive for the hiring process and painful for the people who are wrongly rejected. If we put this in probabilistic terms, I hope we can have a deeper conversation about what's happening and how this issue impacts engineers.
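To put the tradeoff above in concrete probabilistic terms, here is a minimal sketch; the applicant counts, base rate, and screen accuracies are all invented for illustration and are not from the article:

```python
def screen_outcomes(n_applicants, base_rate, sensitivity, specificity):
    """Return (false_positives, false_negatives) for a screening step.

    base_rate    -- fraction of applicants who would actually do well onsite
    sensitivity  -- P(pass screen | would do well)
    specificity  -- P(fail screen | would not do well)
    """
    good = n_applicants * base_rate
    bad = n_applicants - good
    false_negatives = good * (1 - sensitivity)   # strong candidates screened out
    false_positives = bad * (1 - specificity)    # weak candidates who get interviews
    return false_positives, false_negatives

# A loose screen vs. a strict screen over 1,000 applicants, 20% of whom are strong.
for label, sens, spec in [("loose", 0.95, 0.60), ("strict", 0.70, 0.90)]:
    fp, fn = screen_outcomes(1000, 0.20, sens, spec)
    print(f"{label}: {fp:.0f} wasted interviews, {fn:.0f} strong candidates rejected")
```

With these made-up numbers, tightening the screen cuts wasted interviews from 320 to 80 but quietly rejects 60 strong candidates instead of 10, which is exactly the hidden cost described above.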
crispyambulance, almost 7 years ago
I kind of resent these attempts at optimally cherry-picking "the right" candidates using data.

Hiring is _intrinsically_ a subjective act.

People are very much a moving target. They change over time. Work experiences, even bad ones, shape one's skills and ability to cope in organizations. Almost everybody has a bad-fit job at one time or another. The experience of a bad fit is actually important to the growth of the individual and, I think, their coworkers and employers.
acconrad, almost 7 years ago
On a side note, I took the 20-30 min front-end exam that is referenced in the article, and a few things bug me (in case anyone from TripleByte is reading):

1. I did "exceptionally well," but I don't know how many questions I got right or what percentile I fell in. Why? Firstly, I'm immediately suspicious of whether I actually did that well. For all I know, the "top percent" could just be anyone in the top 50%, and the top 50% only gets 10 questions right. But more importantly, I already stated at the beginning that I was taking it for fun, so why can't I learn what I got wrong so I can improve? I imagine there were areas I got wrong that came up repeatedly, which brings me to my next point...

2. Why so React-centric? I happen to use it, but plenty of people use Angular or Vue. You can't expect them to know about the React events lifecycle.

3. Okay, so I don't live in SF or NYC. But I know some of those bigger companies have offices in Boston, where I do live. Why can't you make that work if you already know what companies are in your pipeline and which offices they have? It seems super expensive and wasteful to lose out on a great engineer when you know your list of 200 companies totally has an office not in SF or NYC (e.g. Facebook).

4. Okay, so I want to work remote. Why can't I just take your final Google Hangouts exam so that you have on file, "okay, cool, this person is great, we can fast track them"? If you green-light companies that are remote-friendly, you don't have to worry about this issue. Plus, isn't TripleByte a YC company, working with other YC companies? I know for a fact that GitLab is also YC and is a remote-friendly company. Not to mention you've advertised for remote engineers yourselves! https://news.ycombinator.com/item?id=15066073

I dunno, given that the article is all about touting how effective the 30-question exam is at screening out candidates, you'd think you'd want to do something useful with that quiz instead of locking people out, even if they don't fit your current criteria.
kofejnik, almost 7 years ago
Just my $0.02 about triplebyte - I went through all the interviews, seemingly doing fine, but after the last informal talk (which also seemed ok to me), there was about 10 days of total silence. Finally, I emailed to enquire and immediately received an "Unfortunately, ..." letter.

So, in the end, I've had 4 hangouts sessions over 3 weeks, plus time spent preparing, and was rejected with no feedback at all. I'm still curious, was it the bloom filter?
asadlionpk, almost 7 years ago
I will be harsh. This looks more like "hey, watch us force some math onto this topic and look cool".

Sadly, it might impress some dumb CEO into using them, though.
burnte, almost 7 years ago
The problem isn't the computers, it's the people. You put HR-bots on the task of listing the job, and they don't know Atom from Adam, so they list all sorts of silly requirements like 15 years of SAP experience, 15 years of Ruby on Rails, 15 years of COBOL, and all for a $20/hr entry position, or a list of certifications that no human could ever accumulate. Then what happens is applicants start keyword-spamming their resumes just to get noticed, and now as a technical person I get a stack of resumes that are absolute trash.

Two years ago I was hiring for a sysadmin. My HR department put my requirements up on Indeed. I got 70 resumes that passed their screening. Of those 70, I found 5 that I wanted to interview, 3 that showed up, and none were hirable. I left a company several months ago, couldn't deal with the management anymore, and the past few months of job searching have been excruciating.

We need more technical people screening resumes and comparing them to the actual job requirements.
closed, almost 7 years ago
Edit: I think I've run afoul of an anti-Triplebyte sentiment. I should clarify that I think this post did a good job building a very simple example of the statistical theory behind assessment, but I have no idea whether their product / approach is reasonable or not. Building a good assessment is much more than just statistics, and it sounds from other comments like there are serious concerns about the validity of their tool.

Really enjoyed the build-up from simple cases to more complex models!

If you're interested in the statistics behind estimating skill, and how well questions tell apart novices from experts, check out item response theory :)

https://en.m.wikipedia.org/wiki/Item_response_theory
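For readers following the item response theory pointer above, here is a minimal sketch of the standard two-parameter logistic (2PL) item model; the parameter values are invented for illustration and nothing here is from TripleByte or the article:

```python
import math

def p_correct(theta, a, b):
    """P(correct answer | ability theta) for an item with
    discrimination a and difficulty b (the 2PL model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a=2.0) separates examinees near its difficulty
# (b=0.5) much more sharply than a weakly discriminating item (a=0.5).
for theta in (-1.0, 0.0, 1.0, 2.0):
    print(theta,
          round(p_correct(theta, a=2.0, b=0.5), 2),
          round(p_correct(theta, a=0.5, b=0.5), 2))
```

The discrimination parameter is what quantifies "how well a question tells apart novices from experts": a flat curve barely distinguishes ability levels, a steep one does.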
mlthoughts2018, almost 7 years ago
My experience hiring machine learning talent over several years has been that people over-hype the cost of a false positive. Both the article's false positive (expending the cost of interviewing on someone who ultimately turns out to be the wrong fit) and also a more fundamental false positive: actually hiring someone who would hypothetically fail a lot of these interview pipelines.

The discussions about making these pipelines more quantitative, with assessments and quizzes, always couch it with a tacit assumption that the worst outcome would be to actually hire someone who fails at one of these interviews. Rejecting a good person sucks, as they say, but not as much as hiring one of the multitude of sneaky, low-skilled fakers out there.

And of course, everybody's got their hot new take on how to spot the supposedly huge population of fakers.

What I have learned is two-fold:

(1) That person who aced all your interviews and finally looked like the perfect person to hire probably just spent 3-6 months utterly failing at a bunch of other interviews, just to get into "interview shape," refresh on all the nonsense hazing-style whiteboard trivia about data structures that they had never needed in years at their job, etc. So it's totally asinine to believe that someone passing through all your filters must be the sort of person who would rarely fail some filters. That person almost surely did fail filters, and the companies where they failed believe they dodged a costly false-positive bullet, while you believe you just made an offer to the greatest engineer. Hopefully you can see the myopia here.

(2) The cost of passing up a good-but-failed-at-interview-trivia engineer is often far greater than the cost of hiring them. For one thing, "suboptimal-at-interviews" engineers are pretty damn good engineers, and they can do things that differ from esoteric algorithm trivia, such as helping your business make money. Another thing is that many engineers can generalize what they learn, generalize from example code or templates, etc., very efficiently. So while they might reveal a weakness by failing part of an interview (and everybody has such weaknesses), why do you really care? They can probably become an expert on that weak topic in a matter of months if they work on it every day, or if you have existing employees who can mentor them.

But the biggest thing is part of what Paul Graham wrote in "Great Hackers": good engineers tend to cluster and want to work with other good engineers.

So if you're sitting there without already having a few good engineers on your team, then most likely, mistakenly rejecting a great candidate who happened to have a bad day, or a great candidate who happens to hate writing tree algorithms on whiteboards, leaves you running a huge risk of losing out on a good engineer who could help kickstart the phenomenon of attracting the next good engineer.

When your team is in this stage, you absolutely can manage with a few "dud" hires who need a lot of help or who have skill gaps in key areas. The cost of adding them to the team and managing their "suboptimality" is far less than the continued search costs brought on by rejecting good candidates with an overly risk-averse hiring threshold, leaving your team in a state where it still doesn't have a good engineer to help attract more.

In other words, the loss function penalizes false negatives more severely than the combined penalty from effort spent on true negatives and the suboptimality / management costs of false positives.

But all these skeezy interview-as-a-service businesses want you to believe that the opposite is true: that if you accidentally hire a "faker" because your hiring process was too easy, then Cthulhu is going to rise out of the sea and lay waste to your company.

Of course they want you to believe that. That's how they make money. Preying on your fears over what would happen if you just unclench and treat candidates like human beings with strengths and flaws, and don't hold them up to ludicrous standards that lead to self-selecting macho 22-year-olds getting hired because they just spent 10 months on leetcode.

When you start to realize this, it becomes obvious that onerous code tests, brainless data structure esoterica, hazing-style coding interviews, and especially businesses that offer to outsource that nonsense, like TripleByte, are all just snake oil junk.
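As a back-of-the-envelope illustration of the loss function described in the comment above: the sketch below collapses the screen and the hire into one step for brevity, and every cost figure and rate is invented. The point is only that once a false negative (rejecting a good engineer) is costed higher than a false positive (a hire who needs extra support), a stricter filter can raise total expected loss even while it cuts bad hires.

```python
def expected_loss(pass_rate_good, pass_rate_bad, base_rate,
                  cost_fn, cost_fp, cost_interview):
    """Expected cost per applicant for a given screening filter."""
    p_good, p_bad = base_rate, 1 - base_rate
    fn = p_good * (1 - pass_rate_good)             # good engineer rejected
    fp = p_bad * pass_rate_bad                     # weak candidate gets through
    interviews = p_good * pass_rate_good + p_bad * pass_rate_bad
    return fn * cost_fn + fp * cost_fp + interviews * cost_interview

# Strict vs. lenient filter, assuming a missed good hire costs 3x a "dud".
strict = expected_loss(0.5, 0.05, 0.3, cost_fn=150, cost_fp=50, cost_interview=5)
lenient = expected_loss(0.9, 0.30, 0.3, cost_fn=150, cost_fp=50, cost_interview=5)
print(f"strict: {strict:.1f}, lenient: {lenient:.1f}")
```

Under these made-up costs the lenient filter comes out ahead; with different cost assumptions the strict one would, which is exactly why the weighting of false negatives vs. false positives is the real argument, not the screening math itself.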
greatamerican, almost 7 years ago
I hope this company runs out of money. Writing an article like this shows how deeply they misunderstand the problem they are seeking to solve.