The thing I think this could run up against is that in HTML+CSS it is fairly common to take an element and apply a whole bunch of properties in coordination with each other. That is, I'm going to set similar margins and paddings and fonts and many other properties on each element, and there are a lot of broad similarities. This is where CSS variables come in; even if I'm applying a color to a lot of elements, I'm probably pulling from a much smaller palette, and if I change one of them I want them all to change.<p>Cloud template definitions also have a lot of settings, but from what I can see, they are all different, all the time, for lots of good reasons. If I'm deploying a lot of different kinds of EC2 instances, I've got a whole bunch of settings that are going to be different for each type. Abstracting is a much different problem as a result. And it isn't just this moment in time; it's the evolution of the system over time, too. In code, over-abstracting happens sometimes. In cloud architecture it is an all-the-time thing. It is amazingly easy to over-abstract into "hey, this is our all-in-one EC2 template," and then whoops, one day I want to change the instance size for only one of my types of nodes, and now I either need to un-abstract that or add yet another parameter to my all-in-one EC2 template.<p>The inner-platform effect is very easy to stumble into in infrastructure code as a result, where your "all-in-one" template for resource X, in the end, just ends up offering every single setting the original resource did anyhow.<p>By contrast, I've pondered the "focus on the links rather than the nodes" idea a few times, and there may be something there. However, the big problem I see is that I <i>like</i> rolling up to a resource and having one place where either all the configuration is, or where there is a clear path for me to get to that point.
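To make the over-abstraction failure mode concrete, here's a toy sketch (all names hypothetical, Python standing in for whatever IaC language you use) of how the all-in-one template drifts into an inner platform, one exception at a time:

```python
# Hypothetical "all-in-one" EC2 template. Each special case grows
# another parameter, until the wrapper re-exposes every setting the
# raw resource had in the first place.

def make_ec2_instance(
    name,
    instance_type="m5.large",  # v2: one node type needed a bigger size
    volume_gb=100,             # v3: another needed more disk
    subnet=None,               # v4: ...and a different subnet
    # ...eventually, every setting the underlying resource exposes
):
    """Illustration only: returns a plain dict standing in for a resource."""
    return {
        "name": name,
        "instance_type": instance_type,
        "volume_gb": volume_gb,
        "subnet": subnet,
    }

# Changing the size for only one node type now means threading yet
# another argument through the shared template and its call sites.
web = make_ec2_instance("web-1")
db = make_ec2_instance("db-1", instance_type="r5.2xlarge", volume_gb=500)
```

The wrapper ends up as a pass-through with defaults, which is exactly the inner-platform effect: you've rebuilt the resource's own interface, just one layer up.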
Sticking with an instance just to keep things relatable: if I try to define an instance in terms of its relationship to the network, to the disk system, to the queues it uses and the lambda it talks to and the autoscaling group it is a part of, now its configuration is distributed everywhere.<p>One possible solution I've often pondered is modifying the underlying configuration management system to keep track of <i>where</i> things come from. E.g., if you have a string that represents the name of the system you're creating, but it is travelling through 5 distinct modules on its way to the final destination, it would be great if there were a way of looking at the final resource, asking "where exactly did that name come from?", and having it tell you the file name and line number, or the set of such things that went into it. Then at least you could query the state of a resource and, rather than just getting a pile of values, see where they are coming from and dig into all the things that went into all the decisions; that might free you to do link-based configuration rather than node-based configuration. But you'd probably need an interactive explorer; if, for instance, the various links can each configure the size of the underlying disk and you take the max() of the various sizes (or the sum, or whatever), you'd need to be able to look at everything that went into the max and all the sources of <i>those</i> values; it's more complicated than just tracking atomic values through the system.<p>I've often wished for this even in the small configs I manage compared to some of you, and it is <i>possible</i> that this would be enough of an advantage to stand out in the crowd right now.<p>(I think the "track where values came from and how they were used in computation" idea could be retrofitted onto existing systems.
"Focus on links rather than nodes" will require something new; perhaps something that could leverage an existing system but would require a new language at a minimum.)
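The "track where values came from" idea could be sketched like this (all names hypothetical, Python as a stand-in for the config system's internals): every value carries its source location, and derived values like max() keep <i>every</i> contributing source, so an explorer can answer "where did that number come from?" after the fact:

```python
# Sketch of value provenance for a config system. A Traced value
# remembers the (file, line) pairs that produced it; derived values
# accumulate the sources of everything that fed into them.

from dataclasses import dataclass


@dataclass(frozen=True)
class Traced:
    value: object
    sources: tuple = ()  # (file, line) pairs that contributed


def lit(value, file, line):
    """Wrap a literal with the place it was written."""
    return Traced(value, ((file, line),))


def traced_max(*vals):
    """max() over traced values that keeps all contributing sources,
    not just the winner's, so the whole decision is inspectable."""
    winner = max(vals, key=lambda t: t.value)
    all_sources = tuple(s for t in vals for s in t.sources)
    return Traced(winner.value, all_sources)


# Three links each ask for a disk size in different modules:
a = lit(50, "web_link.cfg", 12)
b = lit(200, "db_link.cfg", 7)
c = lit(100, "backup_link.cfg", 31)

disk = traced_max(a, b, c)
# disk.value is 200, and disk.sources lists all three origins, so
# "why is the disk 200 GB?" points back at db_link.cfg line 7 while
# still showing the other requests that lost the max().
```

The design choice worth noting is that traced_max keeps the losers' sources too; if it only kept the winner's, you couldn't see the near-misses that would start mattering the moment someone bumps a smaller request past the current max.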