So there's a C program. There's a bunch of sub-par programmers who don't use the old, well-documented, stable, memory-safe functions and techniques, and they write code with memory-safety bugs.

They are eventually forced to transition to a new language, which makes the memory-safety bugs moot, without ever addressing the fact that they're still sub-par, why they were to begin with, why they didn't use the memory-safe functions, or why we let them ship code in the first place.

They go on to write more sub-par code, with more avoidable security errors; the errors just aren't memory-safety related anymore. And the hackers shift their focus to a different attack vector.

Meanwhile, nobody talks about the elephant in the room: that we were, and still are, completely fine with people writing shitty code. That we allow people to keep using the wrong methods, which lead to completely avoidable security holes. Security holes like injection attacks, which now make up 40% of all CVEs, while memory safety accounts for only 25%.

Could we have focused on a default solution for the bigger class of security holes? Yes. Did we? No. Why? Because none of this is about security. Programmers just like new toys to play with. Security is a red herring being used to justify letting people keep writing shitty code while they play with those new toys. (A sketch of what such a default solution looks like is below.)

Security will continue to be bad, because we are not addressing the *way* we write software. Rather than this one big class of bugs, we will just have a million smaller ones to deal with. And it'll actually get harder to deal with them all, because we won't have the "memory safety" bogeyman to point at anymore.
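To make the "default solution" point concrete, here's a minimal sketch (my example, not from the comment above), assuming SQLite's C API and a hypothetical find_user function. Parameterized queries are to injection what a memory-safe language is to buffer overflows: the safe path is the default path, and untrusted input can't change the structure of the query no matter how sub-par the programmer is.

    /* Sketch: injection-proof by default via a prepared statement.
     * The query text and the user-supplied data travel separately,
     * so the data is never parsed as SQL. */
    #include <sqlite3.h>
    #include <stdio.h>

    int find_user(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt = NULL;
        /* Compile the statement with a placeholder instead of gluing
         * strings together; `name` cannot alter the query's structure. */
        int rc = sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE name = ?1;", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;

        /* Bind the untrusted input as data. Contrast with the injectable
         * habit of sprintf(buf, "... WHERE name = '%s';", name). */
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);

        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
            printf("id = %d\n", sqlite3_column_int(stmt, 0));

        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }

The API has existed for decades; the point of the rant stands because nothing forces it to be the default over string concatenation.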