From my personal experience working with small engineering teams and running a comp sci education program[1].

TL;DR: There are some benefits, but it's mostly not worth it, or actively harmful, for students/junior engineers.

1) Students using LLMs to code or get answers generally learn much more slowly than those who do it the old-fashioned way (we call it "natty coding"). A very important part of the learning experience is the effort to grok the problem/concept in your own mind, and finding resources to support/challenge your own thinking. Certainly an answer from a chatbot can be one of those resources, but empirically students tend to just accept the "quickest" answer and move on (a bad habit from schooling). Eventually it hurts them down the road, since their "fuzzy" understanding compounds over time. It's similar to the old copy-from-StackOverflow phenomenon, but on steroids. If students are using these new tools as the new search, then they still need to learn to read from primary sources (i.e. the code, or at least the docs).

2) I think one of the problems right now is that we're very used to measuring learning via productivity, i.e. a student's ability to produce a thing is taken as a measurement of their learning. The new generation of LLM assistants breaks this model of assessment. And I think a lot of students feel the need to get on the bandwagon because these tools produce very immediate benefits (like doing better on homework) while incurring long-term costs. What we're trying to do is teach them about learning and education first, so they at least understand the tradeoffs they are making by using these new AI tools.

3) Where we've found better uses for these new tools is in situations where the student/engineer understands that it's an adversarial relationship, i.e. there's a 20% chance of bullshit. This positioning puts the accountability on the human operators (you can't say the AI "told me so") and also helps them train their critical analysis skills. But it's not how most tools are positioned/designed from a product perspective.

Overall, we've mostly prohibited junior staff/students from using AI coding tools, and they need a sort of "permit" to use them in specific situations. They all have to disclose if they're using AI assistants. There are fewer restrictions on senior/more experienced engineers, but most of them are using LLMs less due to the uncertainties and complexities introduced. The "fuzzy understanding" problem seems to affect senior folks to a lesser degree, but it's still there and compounds over time.

Personally, these experiences have made me more mindful of the effects of automation, so much so that I've turned off things like auto-correct, spellcheck, etc. And it seems like the passing of the torch from senior to junior folks is really strained. I'm not sure how it'll play out. A senior engineer who can properly architect things objectively has less use for junior folks, from a productivity perspective, because they can prompt LLMs to do the manual code generation. Meanwhile, junior folks all have a high-powered footgun which can slow down their learning. So one is pulling up the ladder behind them, while the other is shooting their own feet.

[1] https://www.divepod.to