Last time I hired SWEs, remote work wasn’t a thing, AI didn’t exist, and interviews were in-person.

What’s your go-to approach for assessing remote engineers? Take-home assignments? Live coding?

Do you try to separate AI-generated work from the candidate’s genuine work, or do you just care about the final output?

I’m considering a take-home assignment where candidates reflect on and document how they used AI, followed by a live discussion to gauge their understanding of their own solution.
Why would it be any different from assessing us in person? The only good thing to come out of AI so far IMO is that leetcode puzzles are basically useless now.