Hi HN, looking for some advice please.

I currently work at a smallish scale-up. The company has ~100 people and the tech side is ~40. There are four main teams within engineering with different responsibilities, e.g. modelling, platforms, infrastructure, etc.

I'm frustrated that our shared infrastructure is a bit of a wild west. Things like S3 buckets, DB tables, ES indices, and Kafka topics often have nonsense names that mean nothing to anyone other than the couple of engineers familiar with them.

This makes it really difficult to:

- clean up deprecated resources

- quickly identify which processes are being used by which services

- identify who owns the resources

I'm trying to implement a shared methodology across all four teams to make this a bit better, but need advice.

My aim is to come up with a structure for resources that solves some of the above issues. My current thinking is to implement a naming template like:

<team_name>_<repository>_<branch_name>_<unique_identifier>

One of my concerns is that things like <team_name> can be transient, although it's unlikely they will change in my org for at least a year.

I also want to ensure this methodology isn't just a verbal agreement across engineers, and is instead automated as much as possible - e.g. everything other than the `unique_identifier` is created automatically. Should there be a centralised service that processes make requests to in order to create/delete resources? Should this be config driven and handled in `initContainers` in K8s deployments?

I've never worked in a bigger engineering department than this, so I don't know how this problem has been solved elsewhere.

Would appreciate some feedback - perhaps my logic is fundamentally flawed.

Thanks
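One way the "everything other than the unique_identifier is created automatically" idea could be sketched: a small helper that builds names from metadata your CI system already knows, so the convention is enforced by tooling rather than by verbal agreement. The environment variable names and example values here are hypothetical, not from any particular CI system:

```python
import os
import re


def resource_name(unique_identifier: str) -> str:
    """Build a resource name from CI-provided metadata plus a caller-chosen id.

    Everything except `unique_identifier` is derived automatically from the
    environment, following the <team>_<repo>_<branch>_<id> template.
    """
    parts = [
        os.environ["TEAM_NAME"],    # e.g. "platforms" (hypothetical var)
        os.environ["REPO_NAME"],    # e.g. "ingest-pipeline"
        os.environ["BRANCH_NAME"],  # e.g. "main"
        unique_identifier,          # the one part an engineer chooses
    ]
    # Normalise each part: lowercase, and replace anything outside
    # [a-z0-9-] with a hyphen, so names are safe for S3/Kafka/etc.
    clean = [re.sub(r"[^a-z0-9-]", "-", p.lower()) for p in parts]
    return "_".join(clean)
```

A helper like this could live in a shared library or be run by an `initContainer`; either way, a name that doesn't match the template simply can't be produced.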
Do:

* Focus on primary function category first. I divide my naming of assets first by a general category: service, module, terminal, browser. Service denotes structures intended for transfer across a network. Module describes a major library artifact, such as an object containing the methods and properties for a single focused feature set, e.g. file system access or text messaging. Terminal describes lesser resources that live only in the terminal. Browser describes lesser resources that live only in the browser.

* Document naming conventions via an architect or committee outside the dev teams.

Don't:

* Don't associate naming with a team or a person's identity. Doing this erodes reuse between teams and creates broken silos. People come and go, but problems with the code will be there forever.

* Don't name things by technology stack. Developers can become religious about particular tools, often because they have built tech debt around tools they don't want to escape from. The goal is to deliver a product that solves a problem, which in turn generates revenue. Thinking in terms of tools is not thinking in terms of solution delivery.

Something I am painfully learning the hard way on a personal project is designing services independently of transmission scheme. For example, service data and handling at all endpoints should work identically regardless of whether you use HTTP, sockets, or something else. Doing so - and naming/organization of assets is a major part of it - can lower transmission-related processing overhead by as much as 33%. That is an amazing performance increase, because it is purely incidental to a code reorganization done to increase scale and lower maintenance.
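The category-first scheme described above might be encoded like this; the category list comes from the comment, but the feature names and the underscore layout are illustrative assumptions:

```python
from enum import Enum


class Category(Enum):
    """Primary function categories, named before anything else."""
    SERVICE = "service"    # structures intended for transfer across a network
    MODULE = "module"      # a major library artifact for one focused feature set
    TERMINAL = "terminal"  # lesser resources that live only in the terminal
    BROWSER = "browser"    # lesser resources that live only in the browser


def asset_name(category: Category, feature: str) -> str:
    """Lead with the category so assets sort and group by function,
    not by team or technology stack."""
    return f"{category.value}_{feature}"
```

For example, `asset_name(Category.MODULE, "text-messaging")` yields `module_text-messaging`: the first token always tells you what kind of thing you're looking at.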
You could get inspiration from https://spiffe.io/ for naming things.

I would consider how centralized you want things to be, and whether you would introduce roadblocks for each team because they can no longer work independently.

Maybe it shouldn't be the team name that is part of your naming, but the business case it is solving. Try to keep an evolutionary architecture.

Also ask the question why you have so many shared resources divided among multiple teams. Is there actually another team boundary implicitly defined in your organization by these shared resources?
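To make the SPIFFE inspiration concrete: SPIFFE identifies workloads with URIs of the form `spiffe://<trust-domain>/<path>`, where the path typically describes the workload or business function rather than the owning team. A minimal sketch of building such IDs (the trust domain and path segments below are hypothetical examples, not a SPIFFE library API):

```python
def spiffe_style_id(trust_domain: str, *path_segments: str) -> str:
    """Build a SPIFFE-style URI identity: spiffe://<trust-domain>/<path>.

    Scoping the path by business function ("billing/invoice-generator")
    rather than by team name keeps the identity stable when teams reorganise.
    """
    path = "/".join(segment.strip("/") for segment in path_segments)
    return f"spiffe://{trust_domain}/{path}"
```

For example, `spiffe_style_id("example.org", "billing", "invoice-generator")` gives `spiffe://example.org/billing/invoice-generator`.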