That was written when Tandem was the leader in database replication. They had shared-nothing redundancy working very well. The problem was that their custom CPUs cost too much.

Today, we see a lot of shared-nothing systems, where database slaves are kept in sync over the network. That's basically how Google stores data at scale. When you get big enough, you almost have to go that way.
Considering that this was written in 1986 (when I was first getting into databases on mainframes and minis), he states his belief that automatic tuning tools will become available.

Funny thing is, I am still waiting for those tools. We still have DBAs tuning DBMSs today. Since the early '90s I have been asking why the DBMS wasn't doing that tuning automatically for every database being created, especially on the major database vendors' systems.

Every DBA tuning guide that I read over the years described techniques that should have been automated and run automatically by the various DBMSs.

The physical characteristics of a database should not be the concern of database designers or implementers. They should concern themselves with the logical design and leave it to the DBMS to decide how that logical design is physically laid out.

Whether there is shared memory, shared disks, or no sharing at all should be a characteristic of the DBMS, not our concern. Our concern should be whether the DBMS runs our logical design efficiently.
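To make the logical/physical split concrete, here is a minimal sketch using Python's sqlite3 module (the orders table, its columns, and the index name are hypothetical, purely for illustration). The CREATE TABLE statement is the logical design; the index, and the query plan it changes, are exactly the kind of physical decision the argument above says the DBMS should be making on its own.

    # A sketch, not a real tuning advisor: assumes Python's built-in sqlite3
    # module and a made-up "orders" table, just to show where the logical
    # design ends and the physical tuning begins.
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Logical design: the part the designer should care about.
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
    )
    conn.executemany(
        "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
        [(i % 100, i * 1.5) for i in range(1000)],
    )

    query = "SELECT total FROM orders WHERE customer_id = 42"

    # Physical reality: without an index the engine scans the whole table.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print(row)  # e.g. (..., 'SCAN orders')

    # Today a DBA reads a tuning guide and adds this by hand; the point above
    # is that the DBMS should notice the access pattern and do it itself.
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print(row)  # e.g. (..., 'SEARCH orders USING INDEX idx_orders_customer ...')

Engines have taken small steps in this direction (SQLite can build transient automatic indexes for some queries, and the big commercial systems ship index advisors), but the decision is still largely left to a human.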