There is a significant amount of naivety here. I don't say that in any kind of negative way: it's purely an observation (as a dev who has built a lot of different software over the years, plenty of it web-based).<p>Let me pick just one example: these guys seem quite upset that someone has tried attacking their vibe-coded app, e.g. by submitting sign-ups with invalid emails.<p>They're like: "What's the point? Why would someone waste their time targeting us / our app? It's a waste of time."<p>Their fix was to enable a Captcha, while complaining that it's annoying for users, but also saying: never mind, everyone is used to Captchas already.<p>They don't realise that pretty much every single public-facing sign-up form will, within a very short amount of time, receive automated sign-ups from fake addresses.<p>They're not being specifically targeted at all. It's not that some folk specifically hate vibe-coded apps, despite their belief to the contrary (they say this explicitly).<p>It's just what happens to <i>every</i> site with a high enough profile, and that bar is in fact very low indeed: pretty much any website that has been around for a week or two.<p>These kinds of sign-ups are fully automated. Welcome to the internet. This has been par for the course for years and years already.<p>And, of course, that's why large areas of the internet already have Captchas. An observation they made themselves, but seemingly without joining the dots.<p>And this is exactly the difference between someone who is new to the field, or a junior, or whatever, and someone who already has a load of experience behind them.<p>These guys are basically learning some of these facts the hard way, while believing something else is going on (that they are being targeted).<p>The whole of web development is like that, though. Not just sign-up forms. There are lots of gotchas, many of them security related, or DoS related, or whatever (the list goes on).<p>And with an approach that mostly boils down to 'kind of understand why things are done a certain way by more experienced devs, but only after the fact', it's very easy to shoot oneself in the foot.<p>Without such knowledge, it's all too easy to have one's database breached and personally identifiable information stolen (hello GDPR violations, and worse, which can destroy a company in many different ways, not only through fines or reputation). It's all too easy to end up with a system that basically gets pwned.<p>It's a bit like believing one can build a house just by learning a few skills. Sure, I mean: that's possible. Kind of.<p>But at some point, particularly once one gets past a single-room house, or a single-floor house, one will likely quickly realise why there are safety regulations, why we have structural engineers and surveyors, why we certify various things in the building world, why various folk in the field undergo training, some of it quite specialised, and so on.<p>Anyhow, I'll leave my primary observation at that: automated sign-ups with bad email addresses aren't a targeted attack at all. They're totally normal, expected even.
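(For anyone wondering what the boring, well-trodden alternative to just slapping a Captcha on it looks like: the usual first line of defence against fully automated sign-up spam is some combination of server-side validation, a hidden honeypot field, per-IP rate limiting, and confirmation emails. Here is a minimal sketch in Python; the names, field names, and thresholds are illustrative assumptions, not anyone's actual implementation.)

```python
import re
import time
from collections import defaultdict

# Loose syntactic check only; the real test is whether a confirmation email arrives.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Tiny in-memory rate limiter: at most MAX_ATTEMPTS sign-ups per IP per window.
# (Illustrative numbers; a real deployment would use shared storage, e.g. Redis.)
WINDOW_SECONDS = 3600
MAX_ATTEMPTS = 5
_attempts = defaultdict(list)  # ip -> timestamps of recent attempts

def rate_limited(ip: str) -> bool:
    now = time.time()
    recent = [t for t in _attempts[ip] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[ip] = recent
        return True
    recent.append(now)
    _attempts[ip] = recent
    return False

def handle_signup(form: dict, ip: str) -> tuple[int, str]:
    """Return an (http_status, message) pair for a sign-up POST."""
    # Honeypot: a field hidden via CSS that humans never fill in, but naive bots do.
    if form.get("website"):
        return 400, "rejected"
    if rate_limited(ip):
        return 429, "too many attempts, try again later"
    email = (form.get("email") or "").strip()
    if not EMAIL_RE.match(email):
        return 400, "invalid email address"
    # Don't trust the address yet: store it as unverified and send a confirmation
    # link; only addresses that click through become real accounts.
    return 202, "confirmation email sent"
```

None of that is bulletproof (a determined attacker gets past all of it), but it quietly absorbs the drive-by, fully automated noise described above, which is exactly the background radiation every public form receives, without making real users solve puzzles.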
And it's just the tip of the iceberg (rather: one of many icebergs).<p>No, existing developers are not deliberately gatekeeping, as these guys claim and believe.<p>We just don't have time to provide a 101/202 (and more) of web development, web security, best practices, appropriate algorithm selection, and so on (hello, compsci degree, and more) to folk who don't yet have a suitable foundation, who perhaps don't even understand why so much of this might be needed, and who, in some cases, don't even care (admittedly, perhaps through a complete lack of knowledge). It takes time to learn these things, and it takes time to teach them. It doesn't matter if anyone thinks there are shortcuts: that attitude will likely just deliver some very tough lessons, after the fact.<p>There's lots of stuff that AI doesn't know, and there are lots of apps that AI can't build. There are lots of things in security that can't just be "winged". And this isn't gonna change any time soon, despite what less technical folk might believe (including the folk in this video, it seems).<p>It's just not realistic to believe that AI can improve enough in the near future to build fully secure apps. Sure, they might be <i>more</i> secure than previous versions. But how are these folk even going to verify that their app is secure? Security and pen-testing is a whole field in itself. And nobody in their right mind is going to (also) trust all of that to AI.<p>I wish them luck in their endeavours. But it's a steep learning curve, particularly if one basically believes everything is just like the early days of the internet, and with a mindset that they are being specifically targeted with bad-email-address sign-ups (as just one example).<p>Don't get me wrong: I'm not anti-generative-AI in the slightest. But as an experienced developer, I can plainly see that it is a giant bag of loaded foot-guns, particularly for those who are new and inexperienced in this field. And AI doesn't solve that.