How long until such "rules" also become standardized (like we saw with MCP)? It feels redundant to have .cursorrules and .aiderrules files where the rules content is the same. I predict companies will not only publish coding guidelines for their programmers, but also publish these tiny coding rules for the LLMs used in the company, so all code follows the same standards/idioms.
I have found Cursor to be frustrating and exhausting to work with, even with my rules file. When it works, it’s like magic. But most of the time, it feels like working with a Jr. dev who has a bit of a concussion. Code review is wearying work, and using Cursor means you’re doing a *lot* of code review. I have never once gotten into a flow state with it.

That was a long preamble to this question: any senior devs out there (20+ years) who enjoy using Cursor? What’s the trick?
Cursor in particular but also others are extremely flaky when it comes to applying rules.

So the next generation of "rules" for Cursor, Claude Code, etc. should have some predictability baked in, i.e. not be entirely driven by AI.

Having rules driven by a vanilla, non-AI program ensures that they're actually, consistently applied, with some logs showing if/how they were applied.

Of course, one can augment vanilla rules with some AI capabilities, but the main orchestration should resemble determinism.

I suspect that the main reason why this isn't a reality yet is because costs could easily skyrocket. Personally I'd be willing to pay the extra buck if that means that my comprehensive rule system is actually doing something.
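To make that concrete, here's a rough sketch of what I mean by non-AI-driven rules. The .rules/ layout, the "globs:" header and the stdin/stdout wrapper are all made up for the sketch; the point is just that rule selection is plain code and every applied rule gets logged:

    #!/usr/bin/env python3
    # Deterministic rule injection: no AI decides whether a rule applies,
    # and each applied rule is logged to stderr.
    import fnmatch
    import pathlib
    import sys

    def load_rules(rules_dir=".rules"):
        rules = []
        for path in sorted(pathlib.Path(rules_dir).glob("*.md")):
            lines = path.read_text().splitlines()
            # First line declares which files the rule applies to,
            # e.g. "globs: src/*.py"
            if lines and lines[0].startswith("globs:"):
                globs = lines[0][len("globs:"):].split()
                body = "\n".join(lines[1:])
            else:
                globs, body = ["*"], "\n".join(lines)
            rules.append((path.name, globs, body))
        return rules

    def rules_for(target_file, rules):
        for name, globs, body in rules:
            if any(fnmatch.fnmatch(target_file, g) for g in globs):
                # The predictability part: a log line per rule applied.
                print(f"[rules] applied {name} to {target_file}", file=sys.stderr)
                yield body

    if __name__ == "__main__":
        target = sys.argv[1]      # file the agent is about to touch
        task = sys.stdin.read()   # the actual prompt/task
        print("\n\n".join(list(rules_for(target, load_rules())) + [task]))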
Seems like we're rediscovering/reinventing the same thing over and over; this is very close to Memory Bank from Cline: https://docs.cline.bot/improving-your-prompting-skills/cline-memory-bank

I have seen a number of articles and YouTube videos presenting more or less the same thing.
This feels like a bandaid on a gaping wound to me. Maybe you're making certain aspects of using Cursor/Copilot Agent Mode less annoying, but they're still there and still annoying.

In the parts of the open source LLM community that are interested in roleplay chat, the veterans seem to have the viewpoint that spending a lot of time tinkering to try to overcome the inherent flaws in this technology is relatively pointless; at a certain point, it's random, the technology just isn't that great, and you're expecting too much. Just wait for the next great model. But don't waste your time putting bandaids all over the huge flaws in the technology; you're still not going to get the results you want consistently.

I can't help but think of that here. I don't want to spend my time managing a junior engineer with amnesia, writing Rules files for it to follow, come on now. We're supposed to pay $20/mo with usage limits for that? The promise of "vibe coding" according to all the breathless media coverage and hype is that it'll supercharge me 100x. No one said anything about "Cursor rules files"!

I'll stick with Copilot's "fancy auto-complete"; that does speed me up quite a bit. My forays into Agent mode and Cursor left me feeling pretty annoyed, and, like I said, I don't want a junior developer I'm managing through a chat sidebar, I'll just do the programming myself. Get back to me when Cursor is at senior or principal engineer level.
I've found this a tractable approach, but sometimes not enough. My escalation pattern with Cursor looks like this:

1. Document everything that you're doing and update a core architecture or technical doc that the LLM can read.

2. Update your .cursorrules with specific instructions for Cursor, which can grow more specific as you nail down parts of your stack and the patterns you're following. This can be updated (by Cursor itself) if you find the same problems recurring.

3. I have a pre-commit script which runs some internal scripts. If I find the IDE is STILL making the same mistake after I've documented it and added Cursor rules, the nuclear option is to add a script here which verifies the integrity of whatever construct is being violated (e.g. tests go into this folder structure, env variables are consistent between these files, import of this model that the LLMs like is forbidden). A rough sketch of one such check is at the end of this comment.

I would add: any time you expect to be working on a particular feature / enhancement / refactor, have the LLM create a temporary document with a description and an implementation plan and work from that.

In addition: I have a specific workflow for resolving testing errors or pre-commit errors which follows the above pattern: Document each failure and work through them one at a time, running the test script and updating the document between runs.

I've established these patterns slowly with usage, but it has improved my experience a lot.
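Here's a stripped-down version of the kind of integrity script I mean. The paths, the forbidden module name and the .env.example convention are placeholders for whatever your repo actually enforces:

    #!/usr/bin/env python3
    # Pre-commit integrity checks: fail the commit if the conventions the
    # LLM keeps violating are broken again.
    import pathlib
    import re
    import sys

    errors = []

    # 1. Test files must live under tests/, nowhere else.
    for path in pathlib.Path(".").rglob("test_*.py"):
        if "tests" not in path.parts:
            errors.append(f"{path}: test file outside tests/")

    # 2. Forbidden import the LLM keeps reaching for (placeholder name).
    for path in pathlib.Path("src").rglob("*.py"):
        if re.search(r"^\s*from\s+legacy_models\s+import", path.read_text(), re.M):
            errors.append(f"{path}: imports legacy_models, which is forbidden")

    # 3. Env vars referenced in code must be declared in .env.example.
    declared = set(re.findall(r"^(\w+)=", pathlib.Path(".env.example").read_text(), re.M))
    for path in pathlib.Path("src").rglob("*.py"):
        for var in re.findall(r"os\.environ\[['\"](\w+)['\"]\]", path.read_text()):
            if var not in declared:
                errors.append(f"{path}: {var} missing from .env.example")

    if errors:
        print("\n".join(errors), file=sys.stderr)
        sys.exit(1)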
Hopefully, we'll converge on a standard, product-agnostic file naming convention, similar to .editorconfig. Are there any existing/emerging generic conventions, like .llm-instructions, that products like Cursor and GitHub Copilot support? This could be useful for teams and orgs with diverse LLM usage.
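Purely as an illustration of what I mean, something like this (the file name and format are hypothetical; no tool reads this today):

    # .llm-instructions  (hypothetical name, analogous to .editorconfig)
    style = follow the repo linter config; no new dependencies without approval
    tests = put new tests under tests/, mirroring the source tree
    docs  = update CHANGELOG.md for any user-facing change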
Question for users of such tools: can't you ask the LLM at the end of the session to choose the most important parts of the context, compress it using your favourite tool, then decompress at the beginning of the next session?
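Something like this is what I'm imagining, a minimal sketch using the OpenAI Python SDK purely for illustration (the model name and file path are placeholders):

    # Compress the session into a short summary at the end, reload it next time.
    from openai import OpenAI

    client = OpenAI()
    MEMORY_FILE = "session_memory.md"

    def compress(session_transcript: str) -> None:
        # End of session: ask the model to keep only what matters next time.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Summarize the decisions, file layout and "
                                              "open TODOs from this session in under 300 words."},
                {"role": "user", "content": session_transcript},
            ],
        )
        with open(MEMORY_FILE, "w") as f:
            f.write(resp.choices[0].message.content)

    def decompress() -> str:
        # Start of next session: prepend the saved summary as the first context block.
        with open(MEMORY_FILE) as f:
            return "Context from the previous session:\n" + f.read()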
If someone needs a C#/.NET meta-rule, I shared one here: https://pastebin.com/EmNsTRwY

After quick testing, it seems to be reasonably good.
I’ve been using more and more AI tools in my development and getting a lot of mileage out of them. Cursor is the latest one I’ve adopted, and it’s impressive even without rules.

I’ll give this a try soon. Thanks for sharing!
LLMs have shit memory and abstraction capabilities, and adding md files and more context is like trying to make an Alzheimer's patient learn to play the piano. Yes, it kinda works to some degree to always repeat everything, but it's not true learning, just a bandaid.