I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely that I write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.<p>sshine's reply above comes from a very conflictual mindset: "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"<p>I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long-term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.
> <i>please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.</i><p>There are two backwards things here:<p>1) You can't ask people not to use AI when careful, responsible use is undetectable.<p>It just isn't a realistic request. You'll get great replies without AI and great replies with AI, and you won't be able to tell which is which. You'll just be able to filter out sludge and dyslexia.<p>2) This is still the "AI is cheating" approach, and I had hoped Anthropic would be thought leaders on responsible AI use:<p>In life there is no cheating. You're just optimizing for the wrong thing. AI did your homework? Guess what: the homework is a proxy for your talent, and it didn't build your talent.<p>If AI is making your final product and you're none the wiser, it didn't really help you, it just made you addicted to it.<p>Teach a man to fish...
If I want to assess a candidate's performance when they can't use AI, then I think I'd sit in a room with them and talk to them.<p>If I ask people not to use AI on a task where using AI is advantageous and undetectable, then I'm going to discriminate against honest people.
This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.<p>LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.<p>The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.<p>When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.<p>This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?
This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in recent months. It's not the applicants that are the problem; it's the outdated way of doing interviews.
I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).<p>I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview never was to just answer the coding puzzle anyway.<p>To me it's always been about how someone reasons, how someone communicates, whether they understand the foundations (data structure theory, how things scale, etc.). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment, you're not going to pass the interview, no matter if it's done with AI, from memory, or with Stack Overflow.<p>So what are we afraid of? That people are going to copy-paste from AI outputs and we won't notice the difference from someone who really knows their stuff inside out? I don't think that's realistic.
Kudos to Anthropic. The industry has way too many workers rationalizing cheating with AI right now.<p>Also, I think that the people who are saying it doesn't matter if they use AI to write their job application might not realize that:<p>1. Sometimes, application questions actually do have a point.<p>2. Some people can read <i>a lot</i> into what you say, and how you say it.
> While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.<p>Full quote here; seems like most of the comments here are leaving out the first part.
The irony here is obvious, but what's interesting is that Anthropic is basically asking you <i>not</i> to give them a realistic preview of how you will work.<p>This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full-featured IDE.<p>If you know, and even encourage, your employees to use LLMs at work, you should want to see how well candidates present themselves in that same situation.
Halfway through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' questions. Once the questions became more AI-resistant, the candidate floundered. Their English language skills and their general reasoning declined catastrophically. These questions had originally been introduced to see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'
It makes sense. Having the right people with the right merits and motivations will become even more important in the age of AI. Why, you might ask? Because execution is nothing once AI matures. Grasping the big picture, communicating effectively, and possessing domain knowledge will be key. More roles in cognitive work will become senior positions. Of course you must know how to make the most out of AI, but what's more interesting is what skills you bring to the table without it.
Funny on the tin, but it makes complete sense to me. A sunglasses company would also ask me to take off my sunglasses during the job interview, presumably.
Anthropic is kind of positioning themselves as the "we want the cream of the crop" company (Dario himself said as much in his Davos interviews), and what I could gather was that they a) would prefer to pick people they already knew and b) didn't really care about recruiting outside the US.<p>Maybe I read that wrong, but I suspect they are self-selecting themselves out of some pretty large talent pools, AI or not. But that application note is completely consistent with what they espouse as their core values.
Not new - they had that at least 5 years ago.<p>The Anthropic interview is nebulous. You get a coding interview: fast-paced, little time, 100% pass mark.<p>Then they chat with you for half an hour to gauge your ethics. Maybe I was too honest :)<p>I'm really bad at the "essay" subjects vs. the "hard" subjects, so at that point I was dumped.
Everyone arguing for LLMs as a corrupting crutch needs to explain why <i>this</i> time is different: why the grammar-checkers-are-crutches, don't-use-wikipedia, spell-check-is-a-crutch, etc. etc. people were all wrong, but <i>this</i> time the tool really is somehow unacceptable.
The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.<p>Disclaimer: I work at Anthropic but these views are my own.
How much you wanna bet they're using AI to evaluate applicants and they don't even have a human reading 99% of the applications they're asking people to write?<p>As someone who has recently applied to over 300 jobs, just to get form letter rejections, it's really hard to want to invest my time to hand-write an application that I know isn't even going to be read by a human.
Maybe they are ahead of the curve at finding that hiring people based on ability to exploit AI-augmented reach produces catastrophically bad results.<p>If so, that's bad for their mission and marketing department, but it just puts them in the realm of a tobacco company, which can still be quite profitable so long as they don't offer health care insurance and free cigarettes to their employees :)<p>I see no conflict of interest in their reasoning. They're just trying to screen out people who trust their product, presumably because they've had more experience than most with such people. Who would be more likely to attract AI-augmented job applicants and trust their apparent augmented skill than an AI company? They would have far more experience with this than most, because they'd be ground zero for NOT rejecting the idea.
I understand why it's amusing, but there is really nothing to see here.
It could be rephrased as:<p>« The process we use to assess candidates relies on measuring the candidate's ability to solve trivia problems that can easily be solved by AI (or internet search, or impersonation, etc.). Please refrain from using such tools until the industry comes up with a better way to assess candidates. »<p>Actually, since the whole point of those many screening levels during hiring is to avoid the cost of long, in-depth discussions between many experts and each individual candidate, AI will probably be the solution that makes the selection process a bit less reliant on trivia quizzes (a solution that will, no doubt, come with its own set of new issues).
Relevant (and could probably have been a comment there): <a href="https://news.ycombinator.com/item?id=42909166">https://news.ycombinator.com/item?id=42909166</a>
"Ask HN: What is interviewing like now with everyone using AI?"
I'm sure Anthropic receives too many applications that are obviously AI-generated, and I'm sure that what they mean by "non-AI-assisted communication" is that they don't want "slop" applications that sound like an LLM wrote them. They want some greater proof of human ability.
I expect humans at Anthropic can tell what LLM model was used to rewrite (or polish) applications they get, but if they can't, a basic BERT classifier can (I've trained one for this task, it's not so hard).
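For the curious, here's a minimal sketch of what such a fine-tuned classifier might look like; the model choice, toy data, and hyperparameters are illustrative assumptions, not the commenter's actual setup:

```python
# Sketch: fine-tune BERT to label text as human-written vs. AI-polished.
# The two example texts and their labels are toy placeholders; a real run
# needs thousands of labeled human/AI text pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = human-written, 1 = AI-polished
)

texts = [
    "been following anthropic's interp work since the induction heads paper",
    "I am writing to express my enthusiastic interest in this exciting role.",
]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # per-text probabilities for [human, AI-polished]
```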
A much better approach is to ask the candidate about the limitations of AI assistants and the rakes you can step on while walking that path - and the rakes they have already stepped on with AI.
Why aren't they dogfooding? Surely if AIs improve output and performance, they should readily accept input from them. It seems like they don't believe in their own products.
Prepping for an interview a couple weeks ago, I grabbed the latest version of IntelliJ. I wanted to set up a blank project with some tests, in case I got stuck and wanted to bail out of whatever app they hit me with and just have unit tests available.<p>So lacking any other ideas for a sample project I just started implementing Fizzbuzz. And IntelliJ started auto suggesting the implementation. That seems more problematic than helpful, so it was a good thing I didn’t end up needing it.
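For reference, the escape hatch described here is tiny - something along these lines (sketched in Python; the comment doesn't say which language the interview actually used):

```python
# Sketch: a FizzBuzz implementation plus a unit test - the sort of blank
# project one might keep ready to bail into during an interview.
import unittest

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:  # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_fizzbuzz(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(9), "Fizz")
        self.assertEqual(fizzbuzz(10), "Buzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```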
Question 1:<p>Write a program that describes the number of SS's in "Slow Mississippi bass". Then multiply the result by hex number A & 2.<p>Question 2:<p>Do you think your peers will haze you week 1 of the evaluation period? [Yes|No]<p>There are a million reasons to exclude people, and most HR people will filter anything odd or extraordinary.<p><a href="https://www.youtube.com/watch?v=TRZAJY23xio&t=1765s" rel="nofollow">https://www.youtube.com/watch?v=TRZAJY23xio&t=1765s</a><p>Hardly a new issue, =3
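Question 1 is deliberately ambiguous, but under one plausible reading (count non-overlapping, case-insensitive "ss" pairs; take "hex number A & 2" as a bitwise AND of 0xA with 2), a sketch of an answer:

```python
# Sketch solution under the stated assumptions; other readings of the
# question give other answers, which may well be the point.
phrase = "Slow Mississippi bass"
ss_count = phrase.lower().count("ss")  # "Mississippi" has 2, "bass" has 1 -> 3
factor = 0xA & 2                       # 0b1010 & 0b0010 = 0b0010 -> 2
print(ss_count * factor)               # 3 * 2 = 6
```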
Whenever someone asks you to not do something that is victimless, you always should think about the power they are taking away from you, often unfairly. It is often the reason why they have power over you at all. By then doing that very thing, you regain your power, and so you absolutely should do it. I am not asking you to become a criminal, but to never be subservient to a corporation.
How do you guys do coding assessments nowadays with AI?<p>I don’t mind if applicants use it in our tech round, but if they do, I question them on the generated code and on potential performance or design issues (if I spot any). I'm not sure it’s the best approach, though (I mostly hire SDETs, so I do an ‘easy’ dev round with a few easy/very easy LeetCode questions that don’t require prep).
If Alice does better than Bob when neither is using AI, but Bob performs better when both use AI, isn’t it in the company’s best interest to hire Bob, since AI is there to be used in his position's duties?<p>If graphic designer A can design on paper better than B, but B can design on the computer better than A, paper or computer, why would you hire A?
This strikes me as similar to job applicants who apply for a position and are told it's hybrid or in-office - and then, on the day of the interview, it suddenly changes from an in-person meeting to one held over videoconference, with the other participants in front of backdrops that look suspiciously like they're working from home.
This feels very similar to ophthalmologists who make their money pushing LASIK while refusing to get it done on themselves or their relatives. "This procedure is life-changing! But..."<p>Anyway, bring back in-person interviews! That's the only way to work around this Pandora's Box they themselves opened.
This has a poetic tone to it.<p>However, I'm not sure what to think of it. So AI should help people on the job, but not in the interview process? Only when it matters? What if you're super good at ML/AI but very bad at writing applications? Would you still have a chance?<p>Or do you get filtered out?
So I guess people should not use other available tools? Spell checker? Grammar checker? The Internet? Grammarly?<p>The issue is that they are receiving excellent responses from everyone and can no longer discriminate against people who are not good at writing.
So suddenly we're in a state where:<p>- AI companies ask candidates to not "eat their own dog food"
- AI companies accuse each other of "copying" their IP while finding it legitimate to use humans' IP for training.
On the face of it, it's a reasonable request, but the question itself is pointless. An applicant's outside opinion of a company is pretty irrelevant and is subject to a lot of change after starting work.
> We want to understand your personal interest in Anthropic without mediation through an AI system<p>Is the application being reviewed with the help of an AI assistant though? If yes, AI mediation is still taking place.
You want to work at an AI company that does not allow the use of AI by its future employees?<p>That is likely enough said right there. Keep looking for a company that has its head screwed on straight.
HR is probably using AI to waste our time with their ridiculously worded job descriptions, and now you can have a computer respond... You have simply completed the circle of stupidity. If they are upset that you have sidestepped putting yourself inside their circle, maybe there is a better place to work after all...
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.<p>Exact opposite of the application process at my previous company. We said usage of ChatGPT was expected during the application and interview phases, since we heavily relied on it for work.