I'm noticing interesting challenges when using AI assistants (Claude, GPT-4, etc.) on production codebases. The AI often misses important project context, architectural decisions, and existing patterns - leading to suggestions that technically work but don't align with the project's approach.<p>Curious how others are handling this, especially in larger teams. How do you maintain consistency? What strategies have proven effective? What hasn't worked?
I use them for little tasks.<p>You can try looking things up on Stack Overflow, where it's routine for the accepted answer to have 30 comments screaming "THIS SHOULDN'T BE THE ACCEPTED ANSWER" and for the accepted answer to be flat-out wrong. You can spend a lot of time separating the wheat from the chaff and still end up with something wrong.<p>With the AI you're still taking your chances, but unlike S.O. it responds to feedback, and if you circle back two or three times your odds get better.