I keep coming back to this phrase used in this post: "it was scary".<p>Yeah, hiring is scary. Hiring is insanely expensive on all fronts. Firing people is difficult, expensive, and legally exposing. Hiring the wrong person, giving them access to your systems where they could potentially exfiltrate your IP, is a hazardous but necessary venture.<p>The thing is, none of these things really changed with AI. People have been lying about their experience for literally centuries. IMO the advent of AI-laden candidates is going to nudge the hiring process back to how we did it 10 years ago, with a good old-fashioned face-to-face interview and whiteboard questions. This means a lot of things we've grown accustomed to in the past 5 years are going to have to melt away.<p>- People are probably going to have to fly out for interviews, again.<p>- Awkward and/or neurodivergent people are going to have to learn social skills again.<p>- And yeah, you guys, it's time to buy a suit.<p>Companies should consider reverting to forking over the upfront $1,300-1,500 for a set of plane tickets for their hiring team and rented conference rooms for a week. It's a whole lot cheaper than spending $50k because you hired the wrong person for half a year.
"Preparing with AI" sounds like an issue here, and it's not. The issue is lying about your experiences, which people have done since the beginning of time. I "prepare with AI" by having it help give me hints when doing leetcode problems, which is very helpful. Interviewing is not a presentation, it's a conversation, and having a simulated other side can be helpful.<p>This shouldn't be surreal at all. A candidate just wasn't able to make up relevant experiences on the spot.
It's really easy to catch these scammers. Ask for a non-trivial code or work sample, something they have written. Actually take the time to read through the code and understand at least a part of it. In an interview, ask them some questions about it. People who actually wrote the code or did the thing can talk at length about what they did, the history behind it, trade-offs, have colorful stories about it, etc. I don't even care exactly about the technical details of it, I'm looking for signals that they are a liar.<p>If they say they don't remember, that's a red flag. If they can't describe how something works, that's a bigger red flag. You're not looking for photographic memory, but it's very obvious once you do it a few times who is real and who is lying.<p>It's common sense: if you don't put in at least a tiny bit of effort in your hiring process, you can only expect to attract similar low-effort candidates.
Potentially important side points, since not everyone knows, and we don't want anyone to learn a mistake by example:<p>1. Don't use blur to redact documents. Whatever blur was used can probably be reversed.<p>2. Don't try to hide the identity of someone you're talking about by redacting a few details on their resume. With the prevalence of public and private resume databases, that's probably easy to match up with a name.
As someone who has conducted interviews with candidates almost certainly using AI in both the phone screen and coding portion, I can say the biggest giveaway is the inability to explain the why of things. Even some of the simple things like "why did you initialize that class member in this method rather than in the constructor?"<p>I think at this point we are in a world where the cat is out of the bag and it's not are you or are you not using AI but how are you using it. I personally don't care if a candidate wants to use AI, but be up front about it and make sure you still understand what it is doing. If you can't explain what the code it generated is doing and why, then you won't be able to catch the mistakes it will eventually make.
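To make that example question concrete, here is a minimal sketch of the trade-off a candidate should be able to articulate. The class and member names are purely illustrative, not taken from any actual interview:

```python
# Eager vs. lazy initialization: the kind of "why" that an AI-assisted
# candidate often can't explain about their own code.

class ReportGenerator:
    def __init__(self):
        # Eager: the cache always exists after construction, so it is
        # simple to reason about, but it pays the allocation cost even
        # if the object is never used for caching.
        self.cache = {}
        # Lazy: deferred until first use; construction stays cheap.
        self._expensive = None

    def expensive_resource(self):
        # Built on first access, at the cost of a None-check per call
        # (and, in threaded code, a potential race to guard against).
        if self._expensive is None:
            self._expensive = list(range(1000))
        return self._expensive

gen = ReportGenerator()
assert gen._expensive is None              # nothing built yet
assert len(gen.expensive_resource()) == 1000  # built on demand
```

A candidate who wrote this themselves can usually say which of the two they chose and why; one pasting from a model often cannot.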
We get a few thousand fresh grads applying to us each year. It’s practically impossible to interview every one of them. At the same time, any sort of coding assignment we give is easily defeated by AI—so that’s not useful either and there are very few signals there.<p>What we do instead is send out a test - something like a mental ability test - with hundreds of somewhat randomized questions. Many of these are highly visual in nature, making them hard to copy-paste into an AI for quick answers. The idea is that smarter candidates will solve these questions in just a few seconds - faster than it would take to ask an AI. They do the test for 30 minutes.<p>It’s not expected that anyone finishes the test. The goal is to generate a distribution of performance, and we simply start interviewing from the top end and make offers every week until we hit our hiring quota. Of course, this means we likely miss out on some great candidates unfortunately.<p>We bring the selected candidates into our office for a full day of interviews, where we explicitly monitor for any AI usage. The process generally appears to work.<p>On a different note, things are just getting weird.
Some people think it's perfectly normal to stretch the truth on a resume, and to lie in an interview. Other people think an interview is just a matter of finding the "magic words" to get the job.<p>What I don't understand is, <i>what did the candidate do with AI?</i> Did they use the AI as a coach? Did they use it to suggest edits to the resume?<p>---<p>I once interviewed a candidate who was given my questions in advance. (I should point out that it was quite time consuming for me to design an interview, so I couldn't just make up new questions for every candidate.)<p>When the candidate started taking the "schoolboy" tone of a well-rehearsed speech, I realized that they had practiced their answers, like practicing for an exam. I immediately threw in an unscripted question, got the "this wasn't supposed to be on the test" response, and ended the interview.
The guy had to invent “cool” scenarios because companies think they are Google and working in backend doing normal things won’t get you hired. One could easily have prepared the whole interview with AI without failing to explain details (like what data was being paginated) just by lying a bit more. Not lying about your actual knowledge but about what your previous jobs were about. E.g., I have used k8s in pet projects but not at work, but this job ad for a backend position asks for knowledge of k8s, so I’ll put k8s as a skill under my last job and invent a credible story that I can talk about based on my experience from my pet projects.<p>I think the message here is: don’t ask for the moon, you are not Google.
I've interviewed some candidates (more senior than TFA) and I agree with OP that it is a uniquely uncomfortable experience.<p>Candidates who rely on AI seem to just be totally turning their brains off. At least a candidate who was embellishing in the old days would try to BS if they were caught. They could try and fill in the blanks. These candidates give plausible-sounding answers and then truly just give up and say "ummm" when you reach the end of their preparation.<p>I've been interviewing for 10+ years across multiple startups and this was never a problem before. Even when candidates didn't have a lot of relevant experience we could have a conversation and they could demonstrate their knowledge and problem-solving skills. I've had some long, painful sessions with a candidate who was completely lost but they never just gave up completely.<p>Developers I've worked with and interviewed who rely on AI daily are just completely helpless without it. It's amazing how some senior+ engineers have just lost their ability to reason or talk about code.
I linked this to my team and got back "I had almost identical experience with some candidates though no one admitted faking" and "One candidate just disconnected and was never heard back from after being asked to remove virtual background".<p>Interviewing is hard. Over the years the one thing I have learned is that for a technical role you want to interview people for how they THINK and REASON. This is hard and requires a time investment in the interview.<p>Back in the day when interviewing people for roles in networking, data center design, etc. I used to start by saying I am going to ask you a question and unless you have seen this very specific issue before you will NOT know the answer and I do not want you to guess - what I care about is can you reason about it and ask questions that lead down a path that allows you to get closer to an answer - this is the only technical question I will be asking and you have the full interview time to work through it. I have had people with 4+ CCIE family certs (this is back when they were the gold standard) and 10 years of experience have no idea how to even reason about the issue. The candidates that could reason and work the problem logically became very successful.<p>For coding at my company now we take the same approach. We give candidates a problem with a set of conditions and a goal and ask them to work through their approach, how they would go about testing it, and then have them code it in a shared environment of their choosing. The complexity of the problem depends on the level the candidate is interviewing for. For higher-level engineers, besides the coding, we include a system architecture interview, presenting a requirement, taking the time to answer any questions, and then asking the candidate how they would implement it. At the end we do not care if it compiles; what we care about is did the candidate approach the problem reasonably. Did they make sure to ask questions and clarifications when needed.
Did their solution look reasonable? Could they reason on how to test it? Did their solution show that they thought about the question - i.e., did they take the time to consider and understand before jumping in.<p>Anyone can learn to code (for the most part). Being able to think, on the other hand, seems to be something that is in short supply.
I don't think this has anything to do with using AI for prep. 20 years ago I was interviewing candidates who had somewhat lied on their resume, knew some of the things that they'd written about, but had everything fall apart under a little more questioning of what exactly they'd done and why.
I used to do a lot of hiring interviews long before AI and this exact situation has happened many times. People get added to some project doing X but haven't really done much or engaged with it. They then see you need someone doing X, so they add it to their resume.
However, I do agree that not being able to fully talk about a thing you have been working on, and worse, misrepresenting the extent of your involvement, are red flags.
Has nothing to do with AI though. Also sounds a bit like they wanted to say: “AI encouraged me to exaggerate a bit,” which again just means they wanted to shift the blame, which is another red flag.
I don't know how this is something related to AI - you could polish and embellish your resume before LLMs too, I'm fairly sure. I guess this gets the clicks.<p>Not being able to remember small details about certain projects is also perfectly fine for people who have worked for more than a couple of years. Unless you can discover a pattern of lying like the author supposedly did, I would just be perfectly fine moving on to another topic.
Oddly one impact on me from reading this is that Kapwing seems like probably a nice place to apply for a job -- simple enough application process, human review, sane and respectful take-home and no live pressure coding. I'm not affiliated in any way nor am I a FT software developer, but this seemed like a pretty sane process (which sadly the article reveals may not be sufficient to properly vet candidates).
Incidentally, I really-really like that they asked questions based on the person's resume.<p>That was typical before some students got handed a lot of dotcom boom money.<p>(And then somehow most interviews throughout the industry became based on what a CS student with no experience thought professional software development was about. Then it became about everyone playing to the bad metrics and rituals that had been institutionalized.)<p>You can ask questions based on a resume without them disclosing IP, nor the appearance of it.<p>That resume-based questions thwarted a cheater in this case was a bonus.
AI is a problem, so is lying, but this is a non-issue already solved by the ancient tradition of in-person interviews.<p>I assume the folks at kapwing are monitoring the responses, so if you're really open to ideas then I offer the following for your consideration:<p>The best interview I've had to date has been a live debugging challenge. Given an hour, a printed sheet of requirements, and a mini git repo of mostly working code, try to identify and solve as many bugs as possible, with minimum requirements and bonus goals for the ambitious.<p>This challenge checks all the boxes of a reliable and fair assessment. It can't be faked by bullshittery or memorized leetcode problems. It's in person, so cheating and AI are out of the equation, but more importantly it allows for conversation, asking questions, sharing ideas, and demonstrating, rather than explaining, their problem-solving process. Finally, it's a test that actually resembles what we do on a daily basis, rather than the typical abstract puzzles and trivia that look more like a bizarre IQ test.<p>Stumbling upon this format was such a revelation to me and I'm stunned it hasn't been more widely adopted. You'll meet many more "Sams" as your company grows - many will fool you, some already have. But a well-designed test doesn't lie. It's up to you and your company to have the discipline to turn down cheap and easy interviewing tactics to do things the right way.
I'm not sure what this has to do with AI, except for being a buzzword to add to a title.<p>People have been lying about their experience since time immemorial. You don't need an AI to do it, you can just ask a friend with experience to invent a few plausible projects you could have worked on, and solutions you might have found. Or just look at a bunch of resumes online and read a few blog posts of people describing their work.<p>I'm not surprised this happened. I'm surprised by why the author was surprised. Maybe "Sam" was exceptionally bad at "faking it" in person, but I've done tons of interviews where the candidate had exaggerated their experience and couldn't answer basic questions that they should have been able to.<p>Honestly, this is why some companies do whiteboard coding interviews <i>before</i> getting to the interviews about experience, because it does a decent initial job at filtering out people who have no idea what they're doing.
I recently interviewed an engineer who was somehow using ChatGPT realtime on another laptop beside him. The irony was that the questions were pretty simple overall and our rubric also wasn't very strict, so he likely would have passed if he just used his memory and common sense. Though the answers weren't wrong overall, I still felt cheated because of the deception and had to reject him later.
An interesting story!<p>I've also had an AI cheater during a phone screen, but they were pretty clumsy... A question of the form "You mentioned you used TechX on your resume, tell me more about what you did with it" was answered with a long-winded but generic description of TechX and zero information about their project or personal contribution.<p>Another thing that I can take away from this is that the "take home project" is no longer a good idea in AI times - the simple ones that candidates can do in reasonable time are too easy for AI, and if we use a realistic system, it's too hard for honest candidates.
On the topic of interview prep - is it weird that I’ve never been able to bring myself to do it? I can’t be the only one, right? As best as I can tell it’s never really hurt me (okay, there was a Google interview I failed where grinding a few leetcodes might have helped…).
>integrity and reputation goes a long way.<p>Was it really necessary to take the moral high ground and lecture the candidate? As if companies are honest and well-meaning in interviews. You caught him and that's the end of it.
I had a similar but different run-in with bad AI use in interviewing earlier this month. I was interviewing a candidate during a technical screen, and I had earlier noted that it was OK to use AI, as that is how modern development is going; I would just observe how someone develops with it. In my technical product screens I try to tell the developer it's time for them to show off the skills they feel most comfortable with.<p>What happened, though, was the candidate decided to paste the entire challenge prompt into Cursor, and I watched Cursor fail at completing the assignment. I tried to nudge them to use their own skills or research abilities, but alas it did not come to fruition, and I had to end the interview.<p>The crazy part was they had 8 years of experience, so they have definitely worked before without AI. It was very strange they did that, especially since they remarked that the challenge was going to be easy.
The title seems to say that it's a bad thing to use AI to prepare for an interview, when in fact it can be quite useful to use AI (and before AI there were dozens of "Preparing for the technical interview" books). The real issue is that the candidate lied about their experience, not that they used AI to prep. They could just as easily have lied about their experience without using AI to prep.
I had done a few remote coding interviews in recent months where I suspect the candidate was cheating using AI. It's a bizarre experience: each individual answer is produced confidently and quickly, makes sense in isolation, occasionally is even optimal, but the different answers don't connect into a coherent whole. Contrived example: the candidate confidently states that one should use algorithm X to solve a particular type of problem because of such and such reason - and then five minutes later when it comes time to write some code, they rapidly type in, with no erasing or backtracking, a solution which uses algorithm Y, and seemingly no awareness that they switched from X to Y...
> For prospective job candidates, my advice is still that "the truth will set you free"<p>Is that really good advice?<p>If you have the wisdom of knowing when to embellish and when to blur, then you're more likely to get a job and more likely to fit in.<p>I'm on the spectrum, and generally I'm over-truthful, and I notice my habit regularly affects me negatively.
Had an interesting live coding screen where the candidate was coding a solution, dropped from the call and screenshare for 20 minutes, showed back up with a full solution different from what they had before dropping and carried on as if nothing happened.
"It didn't make sense that the Twilio API would not be able to handle sending 30 SMS messages at once - this seemed like a scaling issue that would be easily resolved through upgrading the plan"<p>Twilio indeed can't handle batching of SMS requests -- even to this day several years after I asked them to :)<p>To be specific, what I want is what sendgrid offers, copy + replacements, so I can send the copy I want to send, a list of recipients and a list of replacements for each recipient in a single request.
This guy sounds like a good manager, that took his responsibilities in vetting candidates, seriously. Kind of a "unicorn," these days, it seems. Many managers are tossed the CV, ten minutes before the interview, and are yanked off of whatever critical project they were stressing over, to do an interview.<p>I would probably have been fooled by the applicant's screening interview, but it would have rapidly come apart, in the ensuing steps.<p>My team was a very small team of high-functioning C++ programmers, where each member was Responsible for some pretty major functionality.<p>This kind of thing might be something they could get away with, in larger organizations, where they would get lost in the tall grass, but smaller outfits -especially ones where everyone is on the critical path, like startups- would expose the miscreant fairly quickly.
>Had we moved this candidate forward, I have no doubt that they would have been able to use AI to pass the take home project with flying colors.<p>Yes, developers use AI in 2025 and this will only increase as the technology gets better. Shaming the use of AI is like taking away a plumber's toolbox because you'd prefer they work with their hands alone. Developers at all levels have a use for AI, and given two developers with the same skill level, why wouldn't you prefer the one who could use AI as a tool?<p>If you are already hiring an engineer on their output over their comprehension, rate the output that they give you.
The company cheats by being a tightwad and by conducting an online interview (which have always been prone to cheating or embellishing, and companies perfectly know it) and the candidate cheats by using this opportunity.<p>I can't stop repeating it, just invite the candidate to your office. That's it, that's how simple the problem is solved.
A more motivated candidate might have had an LLM ideate potential follow up questions for their resume and then think about the answers themselves. I’ve done this live with ChatGPT voice mode, it’s quite nice for practicing.
It says "Prepared with AI" in the title, but the article is about someone who blatantly lied about their past experience in the interview.<p>The AI was used as a tool to generate false stories, but that's not what I assumed when I read the title. It's common for people to "prepare" with LLMs by having them review resumes and suggest changes, but asking an LLM to wholesale fabricate things for you is something else entirely.<p>I do think this experience will become more common, though. There's an attitude out there that cheating on interviews is fair or warranted as retaliation for companies being bad at interviewing. In my experience, the people who embrace cheating (with or without LLMs) either end up flaming out of interview processes or get disappointed when they land a job and realize the company that couldn't catch their lies was also not great at running a business.
I am in the process of recruiting a software engineer. You're spot on when saying "ask about human experience".<p>To add to your experience, I became increasingly suspicious of the "perfect fit" resumes. It's insane how so many people just put the right keywords. I think it might work to pass in larger companies where HR use automated systems to triage applicants.
> I told them that I feel that its important to be honest with their experiences<p>Oh my... I don't think I've ever seen a resume that didn't embellish or straight up lie about the applicant. AI does make lies more convincing and allows one to go further with them, though.<p>Also, I'm impressed and upset that it takes so much effort to get a job doing something that sounds like entry-level Node.js / React stuff :( And the effort on the part of the applicant to manufacture this fake identity and experience to apply for this kind of job... and they are a <i>masters</i> student! Like... shouldn't this alone qualify you for the kind of low-stakes undemanding job?
The position's still open. It's ironic that it requires "Stay up-to-date on new AI technologies, including LLMs and generative models. Prototype and test new technologies to evaluate quality and improve performance."
A colleague of mine got his job using an AI assisted cover letter. I was part of the interview where he still convinced everyone that he knew his shit. I am happy with his hire now, a year later.
I see quite a few comments about how this is nothing new and it's easy to catch scammers, etc, etc.<p>Scamming may not be new, but a person using AI in this way is able to penetrate quite deeply into a (long, tedious, time-consuming) interview process if folks aren't keeping an eye out for it (and this article, like many personal accounts, indicates that people aren't yet). Having an AI voice in your ear, rapidly providing you answers in real time, is something new; at least in terms of how easily accessible it is.<p>It's amazing to me that folks have the audacity to come to interviews like this. I think some candidates genuinely feel that it is a reasonable thing to do, along the lines of stuffing their resumes with keywords to get through the various recruiter filters. It's like hey, everyone in baseball is doping, so I have to do it to keep up!<p>The behaviors are obvious once you've seen them before, but as an engineer and not a "talent acquisition" person, I feel deeply uncomfortable implying that some candidate I'm interviewing is lying or cheating, so it took me a bit to speak up about it.<p>These types of articles need to continue to come out and the conversation elevated, if just to save some poor devs hours of interviews with candidates who were able to bluff their way through the less technical initial conversations.
Regarding "Insist on camera ON phone screens.", DON'T do that.<p>Remember you try to hire a ${coder, admin, } not the next tv-news-presenter, beeing on screen is not a mandatory needed skill in most jobs.<p>By asking for something, that makes people uncomfortable, you will exclude a lot of likely brilliant candidates.<p>People who refuse to do video interviews may be for example:
- people who value privacy, not only their own, but most likely yours too
- people who feel very uncomfortable being watched by strangers and who think or even know that they will perform significantly worse than in an audio-only interview situation
- people who simply don't own a camera
- people who use text-only computers off the job
- people who have experienced that your 'standard' videochat app may not work, maybe because they use Linux, BSD, OS/2 or nonstandard operating systems
- people who don't have broadband internet, yes there are still people like that
- people who pay for every bit sent, and yes, having a not-so-cheap phone/internet contract is still common in some areas
- people who feel uncomfortable letting strangers into their bedroom, even virtually
- people who have disabilities or cosmetic issues that they fear may distract you
- people who have disabilities where moving and out-of-sync pictures distract them
- people who tend to refuse unreasonable requests and who therefore regard you as unqualified to be their next employer
- ...<p>All of them have good reasons for not wanting video interviews.<p>You, as an employer, may miss your best fit.
AI definitely makes take homes and non live coding exercises less viable (and even live ones to an extent).<p>Not my favorite AI driven change as I think live coding is so high pressure it can give wrong signals.
> Had we moved this candidate forward, I have no doubt that they would have been able to use AI to pass the take home project with flying colors.<p>Off topic, why have such a take home exercise then?
I've run into something similar twice now in the last month. A candidate pauses or says 'let me think about that' on a relatively simple question as if to give an LLM time to respond. After the pause they give an overly long detailed answer - again like an LLM response.<p>One candidate was absolutely stumped and could not answer why and when they became interested in technology. They couldn't say anything about themselves personally. It was baffling.
Ha! One of my clients who was interviewing about a dozen candidates had the same experience with most of them; they have a few left to interview.<p>All the candidates did really well on the online intake questions and the general meet-and-greet over video. However, once they arrived for the in-person part of the interview, and it got relatively technical, most did nowhere near as well as they had online. Only one or two admitted to using AI.
Something I don't see mentioned here but is implicitly assumed is that the candidate wants the job. Given the lottery of passing leet coding interviews, interviews are a place to practice interviewing. Some candidates may not want the job but simply want to try different things during the interview and see what happens with the goal of practice for an interview for a role they really care about.
Weird that they wouldn't just use whisper to pipe the interview questions into AI to reply better. If you're gonna cheat at least do it well.
Anyone that’s been on the market lately knows that <i>some</i> companies encourage AI use in various ways<p>so all I can say is fix your assessments, because this whole “they cheated” idea isn't universal, and more likely matches what people do on your job already<p>but for anyone that didn't read this article yet, this one is just about embellished experience custom tailored to get the interview, and there was no technical assessment
> The next stage of our interview process, had this candidate moved forward, is to implement a take-home project that we have specifically designed for prospective candidates to complete. Had we moved this candidate forward, I have no doubt that they would have been able to use AI to pass the take home project with flying colors.<p>So why bother with it?
This is why I will not interview for any job that mentions any of: React, Angular, Vue, Spring, or Rails.<p>The people in these positions are scared to death to write original code and then have the balls to whine about people who use AI to provide unoriginal answers.
This article highlights the AI part just as clickbait. The interviewee could have used job descriptions or interview guides/videos to exaggerate their resume, with the same result.
Saying you used AI nowadays to find ANY information shouldn’t be more surprising than saying that you Googled a question.
> We've also been the target of hiring scams in the past, so one policy we have is to only conduct "phone screens" on live video calls with the camera turned on.<p>Why are we calling these "phone screens"?
Use this to your advantage. Tell the interviewer you’d much rather meet in person, because you’re 100% confident you have the required skills/experience and you’d like to avoid a bad culture fit situation.
Integrity and reputation goes a long way?<p>Except it doesn’t. If he hadn’t stretched the truth in his bombastic resume, he would never have received an interview.<p>I will defend him because companies do the same thing of stretching the truth.
Was it really preparing?<p>Because the preparation ended up not being sufficient.<p>Assuming the people doing the hiring can be outsmarted in all cases like this is part of the problem.<p>Maybe 'preparation' can evolve into candidates asking AI for a crash course and a way to start actually using it, instead of just talking the talk.<p>It never ceases to amaze me that people are surprised it's hard to BS your way through tech jobs at tech companies. Maybe it works with tech positions at non-tech companies.
>The next stage of our interview process, had this candidate moved forward, is to implement a take-home project that we have specifically designed for prospective candidates to complete. Had we moved this candidate forward, <i>I have no doubt that they would have been able to use AI to pass the take home project with flying colors.</i><p>I have no doubt as well, but I couldn't help but noticing, "Don't bother with take home tests," wasn't on the list of remedies.
I don't understand how relevant it is that this person used AI for preparing etc.<p>I think you're drawing the wrong conclusions from this experience, and if you believe they're right, it means you didn't interview before AI.<p>It was exactly like this back then too. The only difference was the lack of availability of tools that can give you the answer right away, fake the voice, etc.<p>But even then, if it stinks, trust your gut.
On one hand, yeah, misrepresenting your experience, even if "AI-assisted," is a red flag, especially when the role clearly requires real, practical knowledge. But on the other hand, this is exactly the kind of outcome we should expect in the age of LLMs: people will use every tool available to bridge gaps, especially when under pressure in a hypercompetitive job market.
With AI, the onus is entirely on you to prompt the AI to perform an ethical practice interview and avoid gaining an unethical advantage by having AI make up answers for you.<p>It just makes me wonder about the importance that an understanding and commitment to ethics will play as people start to use AI more and more in their daily life.
This isn't an issue with "preparing" with AI. This guy is just a liar. The ironic part is, the author is just as much a liar for framing this as AI preparation as the candidate was about his experience at the daycare app.
This has nothing to do with AI. They lied in an interview like you could have done in 1980. You can prepare with AI and lie and you can prepare with AI and not lie. I have done the latter.
> I ended by saying that the software community is smaller than it seems, and integrity and reputation goes a long way.<p>Well who are they? How would the next member of the community know this is a fake candidate. I like the idea in general of finding a way to eliminate these time-wasters but how would that work? The candidate can adjust a bit and improve the AI "foo" to come up with online answers for them.
The poor grammar in the resume should have been a red flag. English not being a first language isn’t an excuse. If they can use AI to cheat, they can run their resume through it.
Nothing about AI here, just a candidate making shit up, not even unique to software engineering.<p>Actually it would be interesting if the interviewer had an AI to counter these tactics
It doesn't matter, because I can always pry past the candidate's work in front of me to see if there is anything behind the facade. Usually there isn't, even if their take-home assignment is done perfectly with LLMs, because there is no understanding behind the work being showcased.