Modern CPU design adds an interesting twist to the bounds checking question that (imo) renders it moot. At least on Intel CPUs, the array bounds check gets compiled down to a compare-and-jump instruction pair, which then ends up getting run as a single microinstruction on the CPU. Unless you're in an <i>extremely</i> tight loop, the cost of that instruction just isn't significant. The only time it costs much at all is in the event of a branch misprediction, in which case the very next thing that's going to happen is an exception, so it's still insignificant given the context. And if you are in an extremely tight loop, it's probably structured in a way that makes it easy for the compiler to optimize away the bounds check.
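To make that concrete, here's a minimal sketch (my own example, hypothetical names) of what a checked access boils down to once lowered: one compare against the length and one branch that is essentially never taken, and the only expensive path is the one that ends in an exception anyway.<p><pre><code> #include <cstddef>
 #include <stdexcept>

 // Hypothetical lowered form of a checked arr[i]: the compiler emits a
 // compare (i vs. len) and a conditional jump to the error path. Adjacent
 // cmp + jcc pairs are commonly macro-fused into a single micro-op on
 // recent Intel cores, which is the "single microinstruction" above.
 int checked_load(const int* arr, std::size_t len, std::size_t i) {
     if (i >= len)                  // cmp + (predicted not-taken) jump
         throw std::out_of_range("index out of bounds");
     return arr[i];                 // the actual load
 }
</code></pre>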
It's funny the author uses an example of an array indexed by 199x values, as the 90s were in some ways "the lost decade" of C/C++ development, with an utter lack of concern for this sort of checking.<p>The realities of internet exposure put an end to this recklessness for casual^H^H^H^H^H "enterprise" software development.<p>I love C. A. R. Hoare's comments, referenced in this link (<a href="http://en.wikipedia.org/wiki/Bounds_checking" rel="nofollow">http://en.wikipedia.org/wiki/Bounds_checking</a>), about "some languages" (cough cough - C)<p>Interesting how the Go language now includes array bounds checking. While 2 of the main designers are ex-Bell Labs, 1 of them is from ETH Zurich. (I'm assuming he would have had some Pascal/Modula exposure there.)
Range checking is great; the CPU cycles consumed are well worth the bugs it catches.<p>Thing is, range checks are an array/list thing, and there's more than just "is this index valid?" that can go wrong. One of the things I appreciate about C++ is that iterator invalidation is very precisely specified. Sadly, that's where it stops: it's just specified, but the onus is still on you to catch the errors. It'd be great to have the same treatment there: immediate errors when you use an invalidated iterator. (In vectors, misuse might just skip an element (some deletes) or see the same element twice (some inserts); Python warns about this in some cases with dicts:<p><pre><code> RuntimeError: dictionary changed size during iteration
</code></pre>
which is nice, but I think you can still slip by that.) I don't believe Java or Python specify what happens to iterators when the collection changes, which, to me, is a bit sad.<p>Thing is, to implement this, the collections would probably need to know about all outstanding iterators, so as to figure out where they are and whether they should be invalidated at all. Most operations then become O(number of active iterators + whatever the normal cost of this op is); I'd argue this might still be close to O(1 + ...) since the number of active iterators on a collection is probably usually 0 or 1. But there's a memory cost, a CPU cost, and if you have something like:<p><pre><code> if (weird_condition) {
     use_that_iterator_i_just_accidentally_invalidated();
 }
</code></pre>
Then your bug only gets caught if `weird_condition` is true. Is it worth it?
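For the C++ case, a minimal sketch of the failure mode (my example, not from the thread): push_back can reallocate, which invalidates every outstanding iterator, and a normal build just silently misbehaves. The standard-library debug modes (e.g. libstdc++'s _GLIBCXX_DEBUG, or MSVC's checked iterators) do give the "immediate error on an invalidated iterator" behaviour asked for above, and they do it with roughly the bookkeeping described: debug containers track the iterators attached to them.<p><pre><code> #include <iostream>
 #include <vector>

 int main() {
     std::vector<int> v{1, 2, 3};
     auto it = v.begin();   // points into v's current buffer

     v.push_back(4);        // may reallocate, which invalidates `it`

     // Undefined behaviour in a normal build: it may "work", read stale
     // memory, or crash. Compiled with -D_GLIBCXX_DEBUG, this line aborts
     // with a diagnostic about the invalidated iterator instead.
     std::cout << *it << '\n';
 }
</code></pre>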
Simple answer:<p>Leave in all range checks. If the compiler can determine that the range check will never be hit, or can partially optimize it (hoisting it out of a loop, etc.), great. Otherwise? It gets checked at runtime, which is typically not that expensive an operation anyways.
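As a sketch of what the hoisting case looks like when done by hand (my example, hypothetical names): prove the worst-case index once before the loop and every per-iteration check becomes redundant, which is exactly the transformation the compiler is being trusted to make.<p><pre><code> #include <cstddef>
 #include <stdexcept>
 #include <vector>

 // Hypothetical checked sum: v.at(i) would test the bound on every pass,
 // but checking n <= v.size() once up front makes every index below
 // provably in range, so the loop body can use unchecked indexing.
 long sum_first_n(const std::vector<int>& v, std::size_t n) {
     if (n > v.size())                   // single, hoisted range check
         throw std::out_of_range("n exceeds vector size");
     long total = 0;
     for (std::size_t i = 0; i < n; ++i)
         total += v[i];                  // unchecked, but provably safe
     return total;
 }
</code></pre>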
I wonder if the "bound" instruction on x86 has the same issue that the VAX(?) opcode did - that it's faster to write out the two checks than to call the instruction...