I've been performing technical interviews for the past week, and half of the LinkedIn applicants were using some kind of AI tool to a more or less obvious degree. We ask them to share their screen, and we paste (or, for some questions, ask orally) toy questions like reviewing a piece of CSS or writing a Python method.<p>I've seen many people use a secondary screen to copy-paste answers, read literally from ChatGPT in a very obvious manner, type into some other window... even after we clearly communicated that this isn't allowed.<p>What's your current way of preventing this? I've thought of asking candidates to join the call from their phone as well and keep the camera pointed at their hands/keyboard, but that's too intrusive.
If you observe them doing something you explicitly told them not to do... why not terminate the interview right then? Actually, if they do this (copy from ChatGPT), it's really good insight into their personality, and it saves you a ton of potential hassle if you were to hire them.<p>Right now, there are SO MANY smart and qualified coders looking for work. Not all of us, I mean them, are cheaters. The recruiter or whoever is managing your candidate "top of funnel" needs to do a better job <i>before these candidates reach you</i>.
It's been about two years since I regularly interviewed people (and longer since I was interviewed), but I imagine some of the following might still be helpful.<p>Presumably, you're interviewing people you want to work with on your team, which is not just about technical skill, but also trust, the ability to communicate, etc.<p>Repeatedly ask candidates to explain what they're doing and thinking. Rote reading of some text in another window or copy-pasting should be a dead giveaway that you don't want to work with the person, particularly if you've asked them not to use AI tools.<p>Make your interview problem open-ended enough that you can calibrate for somewhat varying ability / nervousness / surprises. If your workplace allows the use of AI tools (I imagine the list of those that don't is shrinking rapidly), consider allowing them during the interview, but ask candidates to disclose their use, just as you would expect your colleagues to do while working with you. Someone using, say, Copilot should be able to explain why they are accepting or rejecting the tool's suggestions and show they can adapt the results on the fly.<p>If I were a candidate in your interview, being asked to do anything intrusive, such as pointing a phone camera at the keyboard, would likely be an instant deal-breaker for me. Remember that they are (or should be) interviewing you as well.<p>In my last few jobs we had interview segments which were purely hands-on coding, and other segments which were more design-oriented and conversational. I imagine (and now I'm thinking more broadly than your company in particular) that as the tools get better, you'll see more places allowing AI tools for hands-on work, but with an increased emphasis on communication and problem solving. These design problems can be hard to come up with and facilitate, but they can also tell you a lot about how the candidate will communicate and think on the fly.<p>Good luck!
One approach is just to set problems that ChatGPT doesn't answer correctly. I admit that I haven't tried the following with ChatGPT 4 directly, but here's an example of Copilot screwing up:<p><pre><code>  Prompt: Write a function to swap two bigints >= 0
  using only multiplication, division, addition
  and subtraction, without using any
  temporary storage.

  function swapBigInts(a, b) {
    if (a === BigInt(0) && b === BigInt(0)) {
      return [a, b];
    }
    a = a * b;
    b = a / b;
    a = a / b;
    return [a, b];
  }
</code></pre>
The fix for the zero cases is easy for a human to figure out. (E.g., add one to a and b at the beginning, then subtract one at the end.)
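To make that concrete, here's a minimal sketch of that fix (my own patch of the snippet above, using JavaScript BigInt literals); shifting both values up by one keeps every operand nonzero, so the multiplicative swap never divides by zero:<p><pre><code>  function swapBigInts(a, b) {
    // Shift both values up by one so neither operand is zero.
    a = a + 1n;
    b = b + 1n;
    // Multiplicative swap, no temporary storage:
    a = a * b;  // a now holds the product
    b = a / b;  // b becomes the original (shifted) a
    a = a / b;  // a becomes the original (shifted) b
    // Undo the shift.
    return [a - 1n, b - 1n];
  }

  swapBigInts(0n, 5n);  // [5n, 0n]
</code></pre>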
The only thing that truly stops it is having in-person interviews like we used to do pre-COVID.<p>IMO, online video interviews are the worst, for both sides, and don't allow that close connection to ever form.