For years I've kept a list of apps/ideas/products I may build someday. I never made the time; with Cursor AI I've already built one and am working on another. It's enabling me to use frameworks I barely know, like React Native, Swift, etc.<p>The first prompt (with o1) will get you 60% there, but then you have a different workflow. The prompts can get you to a local minimum, where Claude/GPT-4/etc. just can't do any better. At that point you need to climb back out and try a different approach.<p>I recommend git branches to keep track of this. Keep a good working copy in main, and anytime you want to add a feature, make a branch. If you get it almost there, make another branch in case it goes sideways. The biggest issue with developing like this is that you are not a coder anymore; you are the puppet master of a very smart and sometimes totally confused brain.
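A minimal sketch of that branch discipline (the branch names here are illustrative, not from the original comment):

```shell
# Keep main as the known-good copy; branch for every AI-driven change.
git checkout main
git checkout -b feature/new-thing        # one branch per feature attempt
# ...iterate with the model, committing as you go...
git checkout -b feature/new-thing-v2     # snapshot an "almost there" state
# if the model paints itself into a corner, abandon the branch:
git checkout main
```

Throwaway branches are cheap in git, so snapshotting before each risky prompt costs nothing.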
What a feat! There are at least 3 pages of Google search results for nearly the same thing. The "prompt" I used on google.com is:<p>site:github.com map comparison<p>I guess the difference is that my way uses dramatically less time and fewer resources, but requires directly acknowledging the original coders instead of relying on the plagiarism-ish capabilities of regurgitating something through an LLM.
I have about 6 months of coding experience. All I really knew was how to build a basic MERN app.<p>I've been using Sonnet 3.5 to code, and I've managed to build multiple full-fledged apps, including paid ones.<p>Maybe they're not perfect, but they work and I've had no complaints yet. They might not scale to become the next Facebook, but not everything has to scale.
Claude is fantastic. I think the model itself is good enough to write good software when competently directed; it's let down only by the UI/UX around it.<p>My only complaints are:<p>a) that it's really easy to hit the usage limit, especially when refactoring across a half dozen files. One thing that'd theoretically be easyish to fix would be automatically updating files in the project context (perhaps with an "accept"/"reject" prompt) so that the model knows what the latest version of your code is without having to reupload it constantly.<p>b) that it oscillates between being lazy in really annoying ways (giving largeish code blocks with commented omissions partway through) and supplying the full file unnecessarily, using up your usage credits.<p>My hope is that JetBrains gives up on their own (pretty limited) LLM and partners with Anthropic to produce a super-tight IDE-native integration.
I wanted to develop a simple tool to compare maps. I thought about using this opportunity to try out Claude AI for coding a project from scratch. It worked surprisingly well!<p>At least 95% of the code was generated by AI (I reached the usage limit, so I had to add the final bits on my own).
I think we're going to see a similar backlash to AI apps as we did with AI art.<p>Not necessarily because users can identify AI apps, but more because, due to the lower barrier to entry, the space is going to get hyper-competitive and it'll be <i>VERY</i> difficult to distinguish your app from the hundreds of nearly identical others.<p>Another thing that worries me (because software devs in particular seem to take a very loose moral approach to plagiarism and basic human decency) is that it'll be significantly easier for a less scrupulous dev to find an app they like and use an LLM to instantly spin up a copy of it.<p>I'm trying not to be all gloom and doom about GenAI, because it can be really nifty to see it generate a bunch of boilerplate (YAML configs, dev-opsy stuff, etc.), but sometimes it's hard....
Claude built me a simple React app AND rendered it in its own UI - including using imports and stuff.<p>I am looking forward to this type of real-time app creation being added into our OSes, browsers, phones and glasses.
I used Claude (and a bit of ChatGPT) to write a multi-pass recursive descent parser for a search query DSL:<p><a href="https://github.com/williamcotton/search-input-query">https://github.com/williamcotton/search-input-query</a><p>Why multi-pass? So multiple semantic errors can be reported at once to the user!<p>The most important factor here is that I've written lexers and parsers beforehand. I was very detailed in my instructions and put it together piece-by-piece. It took probably 100 or so different chats.<p>Try it out with the GUI you see in the gif in the README:<p><pre><code> git clone git@github.com:williamcotton/search-input-query.git
cd search-input-query/search-input-query-demo
npm install
npm run dev</code></pre>
Been using LLMs since the GPT-3 beta in June 2021, and it's interesting to see how my use cases have continuously been upgraded as models advanced.<p>I started off having it create funny random stories, then slowly moved to creating more and more advanced programs.<p>It's shocking how good 3.5 Sonnet is at coding, considering the size of the model.
Is Claude 'better' than o1-preview? I've had phenomenal results with o1-preview (switching to o1-mini for simpler asks to avoid running out of queries), and tried Claude once and wasn't super impressed. Wondering if I should give it another shot.
Has anybody evaluated the pros and cons of giving developers a programming-specific AI tool like Copilot versus a general-purpose AI tool like ChatGPT or Claude? We are a small shop, so I would prefer not to pay for both for every developer.
Ideally, Claude should have told you about easier approaches; I don't see any reason to mess around with code.<p>There are plenty of website-builder tools that will glue in third-party maps. Even the raw Google Maps API website will generate an HTML page with customized maps.
This is great progress.<p>Next obvious steps: make it understand large existing programs, learn from the style of the existing code while avoiding the bad style where it's present, and then contribute features or fixes to that codebase.
Claude has worked amazingly well for me as somebody really not into UI/web development.<p>There are so many small tasks that I could, but until now almost never would automate (whether it's not worth the time [1] or I just couldn't bring myself to do it as I don't really enjoy doing it). A one-off bitmask parser at work here, a proof of concept webapp at home there – it's literally opened up a new world of quality-of-life improvements, in a purely quantitative sense.<p>It extends beyond UI and web development too: Very often I find myself thinking that there must be a smarter way to use CLI tools like jq, zsh etc., but considering how rarely I use them and that I do already know an ineffective way of getting what I need, up until now I couldn't justify spending the hours of going through documentation on the moderately high chance of finding a few useful nuggets letting me shave off a minute here and there every month.<p>The same applies to SQL: After plateauing for several years (I get by just fine for my relatively narrow debugging and occasional data migration needs), LLMs have been much better at exposing me to new and useful patterns than dry and extensive documentation. (There are technical documents I really do enjoy reading, but SQL dialect specifications, often without any practical motivation as to <i>when</i> to use a given construct, are really not it.)<p>LLMs have generally been great at that, but being able to immediately run what they suggest in-browser is where Claude currently has the edge for me. (ChatGPT Plus can apparently evaluate Python, but that's server-side only and accordingly doesn't really allow interactive use cases.)<p>[1] <a href="https://xkcd.com/1205/" rel="nofollow">https://xkcd.com/1205/</a>
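As a hypothetical illustration of the kind of CLI nugget being described (the JSON shape and filter here are made up, not from the original comment), an LLM can hand you a jq one-liner in seconds that would otherwise mean an hour in the manual:

```shell
# Hypothetical example: pull the names of failed jobs out of a JSON status
# report. select(.ok | not) keeps only entries whose "ok" field is false,
# and -r prints the raw string without quotes.
echo '{"jobs":[{"name":"build","ok":true},{"name":"test","ok":false}]}' |
  jq -r '.jobs[] | select(.ok | not) | .name'
# prints: test
```

The same pattern applies to zsh globs or SQL window functions: you describe the outcome, the model surfaces the construct, and you verify it against data you already understand.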
Can anyone weigh in on how Claude compares to Copilot? Copilot feels like fancy autocomplete, but people seem to have good experiences with Claude, even in more complex settings.
This sort of thing will be interesting to me once it can be done with fully local and open source tech on attainable hardware (and no, a $5,000 MacBook Pro is not attainable). Building a dependence on yet another untrustworthy AI startup that will inevitably enshittify isn’t compelling despite what the tech can do.<p>We’re getting there with some of the smaller open source models, but we’re not quite there yet. I’m looking forward to where we’ll be in a year!