I wish the discussion of the differences between the agentic workflows in copilot and cursor were clearer. As presented, copilot's workflow is described as "request changes via chat, then approve/reject", and cline's workflow is described as "requesting changes via chat, which can be approved/rejected" — effectively the same description.

Similarly, "this tool succeeds where this other tool fails" can always be made more specific. A novice who doesn't know the basics and an expert who's trying something sophisticated can both report "this tool works where this other tool failed", so a thoughtful description of the difference is going to help. (e.g. to me, "used AI to find codebase's LoC" indicates 'novice'.)
I think one of the main selling points of Cursor, as an investor-backed company, is that it's cheap. For $0.04 per prompt, I can get Claude 3.7 Sonnet to make 25 tool calls. In comparison, one of the images in the article shows either a single prompt or a conversation that cost $7 (a third of Cursor's monthly subscription).