Has anybody tried using code challenges like HackerRank or Codility to screen developers for a position on your team? Can you share your experience? I wonder if these challenges are good enough to measure candidates' experience.
I'd say they aren't, because most software developers don't write new algorithms to solve tiny problems in ten minutes.<p>They work within problem domains they know about and use libraries, frameworks and other useful abstractions. They edit existing code, they architect apps, they think about stuff for a long time, working things over in their mind or in code. Being a fast coder is more about memorization and making every problem a nail for tools like memoization and big O analysis than about long-term, sustained development skills. Real skill isn't solving the problem in front of you ASAP; it's solving that problem and five other potential future problems using the same abstraction. It's not knowing the fastest solution, it's knowing several solutions and applying the best one.<p>Real code tests would look like what engineers do every day:
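To make the "memoization hammer" concrete: this is roughly the kind of contest-style answer those tests reward (a toy illustration I made up, not from any particular platform):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """The classic competition move: memoize a naive recursion
    to turn exponential time into linear time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```

It's a neat trick, and worth knowing, but recognizing it in ten minutes tells you little about whether someone can maintain a codebase for years.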
Here's some code we want to push to production, or plan to refactor for speed; write unit tests for it so we know it won't break.<p>Here's a design problem: how would you set up a db schema or an object-oriented design? Okay, now we need to add this feature; how would your design change?<p>Here's a problem involving calculating something based on a db call and returning it over the network... what solution do you use if network latency is an issue? What about disk I/O in the db? What if it's CPU-bound? What if it uses too much RAM? What if at some point this needs to become asynchronous code? What if it's distributed code? What if you need to catch every edge case? What if things are breaking in production and you need to solve 80% of the problem now and worry about edge cases later?<p>Okay... here's an API; write code that uses it to solve the problem, even though the API isn't meant to be used this way.<p>Here's some super complicated code... figure out what it does and document it.<p>These are all things a software developer may do... I guess I'm advocating testing their ability to work by using exercises as close to the real work as you can.
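The first exercise above (pin down existing behavior with tests before refactoring) might look like this. The function and its spec are hypothetical, invented purely to sketch the shape of the task:

```python
import unittest

# Hypothetical "legacy" function a candidate is asked to lock down
# with tests before anyone is allowed to refactor it.
def normalize_name(raw: str) -> str:
    """Trim, collapse internal whitespace, and capitalize each word."""
    return " ".join(part.capitalize() for part in raw.strip().split())

class TestNormalizeName(unittest.TestCase):
    def test_strips_and_collapses_whitespace(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_capitalizes_each_part(self):
        self.assertEqual(normalize_name("grace HOPPER"), "Grace Hopper")

if __name__ == "__main__":
    # exit=False so this can run inside a larger script or REPL session
    unittest.main(argv=["normalize_name_tests"], exit=False)
```

The interesting signal isn't the tests themselves; it's which edge cases the candidate thinks to cover before touching the code.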
I am not in a hiring position, but Peter Norvig has said that being good at programming competitions correlates negatively with being good on the job at Google.<p><a href="http://www.catonmat.net/blog/programming-competitions-work-performance/" rel="nofollow">http://www.catonmat.net/blog/programming-competitions-work-p...</a>
Let's say for the sake of argument that there is a positive correlation. Should you use it?<p>I argue probably not. You will be selecting for one "type" of person. Maybe people that do well on these tests are bad at, I dunno, creative or lateral thinking. Or maybe they are good at following directions but not at finding flaws and pushing back. Maybe they are great (or bad) at follow-through (completing projects). Who knows?<p>My categories are a bit silly, and surely those tests will not select exact carbon-copy duplicates of people, but you get the idea. Most businesses will do best with a range of different abilities vs hyper-selected people all with the same skills, strengths, and weaknesses. Need creative problem solving for a new algorithm? Get Zahra on it. Need somebody to plod through tons of data and tease out why this hardware is failing intermittently? Ted excels at that. Need somebody to design a new caching scheme? Hmm, nobody has that skill set, but Tyrone is great at reading research and synthesizing it into a working solution for a specific problem. Have a boring problem with a difficult client? Joel loves working with people and building consensus, and doesn't care too much about the tech. And so on. Variety is far more valuable (IMO) than uniformity, and writing lines of code to solve combinatorial problems in O(N) time in O(log N) space is a tiny, tiny part of the problems a business needs to solve.
<i>I wonder if these challenges are good enough to measure candidates experience</i><p>Doubtful. I think the code challenge is critical, but obviously not the only thing to consider when evaluating candidates. If you're making your decision based solely on coding challenges, you're gonna have a bad time.<p>IMO the right thing to do is to use your screening challenge to "set the bar" and eliminate people who (in the opinion of your organization) are unable to code their way out of wet paper bags. The bag thickness will be different for everyone, of course. Once the candidate has escaped the bag, so to speak, then you can (and should) evaluate more holistically.
I've used Codility and other tests before. Doing amazingly on a test isn't always going to mean someone will be amazing at the job, but if they do badly on the test then they're probably not worth continuing with. Tests like these act as a good automated, time-saving gateway to filter out obviously unsuitable candidates, since conducting good interviews takes up a lot of time. You can learn a lot from their test answers as well; chatting with a candidate about their code in the interview can be a good way to understand their coding experience.<p>By the way, when reviewing test answers, I always tried to be realistic and not penalise someone too much for e.g. missing a subtle optimisation trick or slightly misunderstanding the problem.
Debugging is highly underrated. Give a developer a bug and make them solve it. It's a challenge, it's not some weird algorithm that you may or may not know, it's practical, and troubleshooting tactics demonstrate a lot of analytical skill.
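A debugging exercise can be as simple as handing over a function with a planted defect and asking "this sometimes returns the wrong answer, find out why." A made-up example of what that prompt might contain:

```python
# Buggy version handed to the candidate: "the average is sometimes wrong."
def buggy_average(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)  # off-by-one: divides by n - 1, not n

# What a candidate should arrive at after tracing a failing input.
def fixed_average(xs):
    return sum(xs) / len(xs)

print(buggy_average([2, 4, 6]))  # 6.0 (wrong)
print(fixed_average([2, 4, 6]))  # 4.0 (correct)
```

Watching how someone narrows the search (reads the code, picks a small failing input, checks each step) tells you far more than whether they've memorized the fix.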