<p><pre><code> Why don’t programming languages do this?
</code></pre>
I worked on a contribution to the Clang Static Analyzer a few months ago. Two things I learned: (1) no one likes code analysers that raise false alarms, and (2) it's really difficult not to raise false alarms. For example, consider checking this program for divide-by-zero bugs:<p><pre><code> for (int i = -99999; i < 99999; i += 2) {
     printf("%d\n", 100 / i);
 }
 for (int i = -100000; i < 100000; i += 2) {
     printf("%d\n", 100 / i);
 }
</code></pre>
A programmer can clearly see that the first loop has no divide-by-zero, as i jumps directly from -1 to +1 - whereas in the second loop the counter hits 0 and triggers a divide-by-zero.<p>But for a static analyser, your options are:<p>1. Model i as a single concrete value and simulate all ~200,000 passes through the loops. Slow - you basically have to run the program at compile time.<p>2. The same, but truncate the simulation after, say, 20 iterations, on the assumption that most bugs would show up in the first few passes. This misses the bug in the second loop.<p>3. Model i as "an integer in the range -99,999 to +99,999" and raise a false alarm on the first loop.<p>4. Support arbitrarily complex symbolic values (e.g. "an odd integer in the range -99,999 to +99,999"). Difficult - the value of a variable might depend on complicated business logic.<p>I guess the benefit of starting a new programming language is that you can choose option 3 from the start and say "working code is illegal, deal with it".
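<p>To make options 2 and 3 concrete, here is a toy sketch (all names are made up, and real analysers work on an IR, not on hand-fed loop bounds): a truncated concrete simulation that gives up after 20 steps, and an interval abstraction that flags a division whenever the divisor's range straddles zero.<p><pre><code> #include &lt;stdio.h&gt;

 /* Option 2: concretely simulate the loop counter, but give up after
    max_steps iterations. Returns 1 if a zero divisor was observed. */
 static int simulate_truncated(long start, long bound, long step, int max_steps) {
     int steps = 0;
     for (long i = start; i &lt; bound &amp;&amp; steps &lt; max_steps; i += step, steps++) {
         if (i == 0) return 1; /* concrete divide-by-zero */
     }
     return 0;
 }

 /* Option 3: an interval abstract domain. The divisor "may be zero"
    whenever its range contains 0 - even if, as in the first loop,
    the concrete values skip over it. */
 typedef struct { long lo, hi; } Interval;

 static int interval_may_be_zero(Interval v) {
     return v.lo &lt;= 0 &amp;&amp; v.hi &gt;= 0;
 }

 int main(void) {
     /* Loop 1 takes only odd values, so it never hits 0; loop 2 does. */
     printf("option 2, loop 1: %s\n",
            simulate_truncated(-99999, 99999, 2, 20) ? "bug" : "no bug found");
     printf("option 2, loop 2: %s\n",   /* misses the real bug */
            simulate_truncated(-100000, 100000, 2, 20) ? "bug" : "no bug found");

     Interval loop1 = { -99999, 99999 }, loop2 = { -100000, 100000 };
     printf("option 3, loop 1: %s\n",   /* false alarm */
            interval_may_be_zero(loop1) ? "possible bug" : "safe");
     printf("option 3, loop 2: %s\n",
            interval_may_be_zero(loop2) ? "possible bug" : "safe");
     return 0;
 }
</code></pre>
Running it shows both failure modes: option 2 reports "no bug found" for both loops (the truncation never reaches i = 0), while option 3 reports "possible bug" for both - catching the real bug in loop 2 at the cost of a false alarm on loop 1.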