After working for a while on products built around microservices, I've repeatedly seen teams raise the same issue: there is no easy way to check the dependencies between these components or to get basic information about them.

Do you know of any solution that suits this purpose? Something that gives a higher-level overview of your microservices ecosystem?

I know there are tools that "scan" your services and draw a map of them and their inter-dependencies, but I just don't find them very friendly to use. I also know it's possible to write wiki pages and README files describing the microservices, but that isn't easily searchable or accessible.

I was wondering if something exists that can be used more upfront, when the services are being created. A service index, for instance.

I'd like to hear about your experiences, if you please.
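For what it's worth, here is a rough sketch (in TypeScript) of the kind of "service index" entry I have in mind. All the field names, the repo URL, and the example values are hypothetical, just to illustrate the idea:

    // Hypothetical shape of a single entry in a hand-maintained service index.
    interface ServiceIndexEntry {
      name: string;         // canonical service name, e.g. "user-service"
      owner: string;        // team or person responsible
      repo: string;         // source repository URL
      description: string;  // one-line summary of what the service does
      dependsOn: string[];  // names of other services this one calls
      endpoints: string[];  // public entry points, for quick reference
    }

    const userService: ServiceIndexEntry = {
      name: "user-service",
      owner: "identity-team",
      repo: "https://git.example.com/identity/user-service",
      description: "Owns user accounts and authentication.",
      dependsOn: ["email-service", "audit-service"],
      endpoints: ["/users", "/users/:id"],
    };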
The question could also be phrased as: how do you design, develop, and maintain the dependency hell of your microservices?

Service discovery is one solution. Otherwise the dependency graph stays buried in your build tools (SBT, Ant, Maven, NPM, pip, etc.) unless you standardize them.

If it is environment specific, it could be part of your configuration management tooling, like Ansible or Chef. I like it there because it is pretty much codified as DSL scripts, with config management sitting in front of a Linux package manager like YUM or APT.

In the Docker world, it could be part of application profiles, Pods, and Pod groups. Sidecar containers and system-service containers should be captured as dependencies in Pod groups as part of the infra/environment setup.

https://kubernetes.io/docs/concepts/workloads/pods/pod/
https://mesosphere.github.io/marathon/docs/application-groups.html

I suspect there is a very good market for solving this dependency-complexity problem. Microservices lead to a lot of different tech stacks, and addressing that complexity is a niche in demand. It could be a very good idea for a startup.
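To make the build-tools point concrete for the NPM case, a rough sketch like the one below could walk each service's package.json and pull out internal dependencies by naming convention. The "@myorg/" scope and the services/ directory layout are assumptions, not a standard:

    import { readdirSync, readFileSync } from "fs";
    import { join } from "path";

    // Assumes each service lives in services/<name>/package.json and that
    // internal client libraries are published under a "@myorg/" scope.
    function internalDependencyGraph(root: string): Record<string, string[]> {
      const graph: Record<string, string[]> = {};
      for (const dir of readdirSync(join(root, "services"))) {
        const pkg = JSON.parse(
          readFileSync(join(root, "services", dir, "package.json"), "utf8")
        );
        graph[pkg.name] = Object.keys(pkg.dependencies ?? {}).filter((d) =>
          d.startsWith("@myorg/")
        );
      }
      return graph;
    }

    console.log(internalDependencyGraph(process.cwd()));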
From a technical perspective, a tracing system based on OpenTracing is essential. Jaeger is a very user-friendly implementation of the standard, and I prefer it over Zipkin.

If you want to maintain an "index", you can use a service-discovery service like https://www.consul.io/ and build tools on top of it.

Here is an example of how to manage this with Hemera in the Node.js world: https://github.com/hemerajs/aither. We use a production-grade messaging system, https://nats.io, as the service-discovery and transport layer, and it gives us a very transparent and maintainable landscape of services.
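As a sketch of the "build tools on it" idea: Consul's agent HTTP API lets you register a service with arbitrary metadata, so declared dependencies could be recorded there and queried later. The Meta key name, service names, and local agent address below are my assumptions:

    // Registers a service with the local Consul agent and stores its declared
    // dependencies as metadata (Consul Meta values must be strings).
    async function registerService(): Promise<void> {
      const registration = {
        Name: "profile-service",
        ID: "profile-service-1",
        Port: 8080,
        Tags: ["http"],
        Meta: { depends_on: "user-service,email-service" },
      };
      const res = await fetch("http://127.0.0.1:8500/v1/agent/service/register", {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(registration),
      });
      if (!res.ok) throw new Error(`Consul registration failed: ${res.status}`);
    }

    registerService().catch(console.error);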
Are you referring to internal documentation about your own architecture?

If you are struggling with runtime dependencies and debugging, make sure you have a distributed OpenTracing stack like Jaeger running.

If it's about internal docs, wikis, and knowledge, make that part of what you build. Writing code and shipping it to servers isn't where our job as engineers ends. Document it. Train your coworkers. Make sure you are building maintainable software, and that means building software isn't just about writing code.
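On the tracing point, a minimal Node setup might look roughly like this, assuming the jaeger-client and opentracing npm packages; the service name, operation name, and sampler settings are placeholders, not production values:

    import { initTracer } from "jaeger-client";

    // Sample everything and log spans locally; tune this for production.
    const tracer = initTracer(
      {
        serviceName: "profile-service",
        sampler: { type: "const", param: 1 },
        reporter: { logSpans: true },
      },
      {}
    );

    // Wrap a unit of work in a span so the call shows up in Jaeger's UI.
    const span = tracer.startSpan("load-user-profile");
    span.setTag("user.id", "42");
    // ... do the actual work here ...
    span.finish();
    tracer.close();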
This is one advantage of having client libraries: if you publish them to a local package feed, the dependencies become easily enumerable.

You could also infer the graph dynamically by adding origin info to your service-locator queries ("*I'm the Profile service*, looking for the User service").
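A rough sketch of that second idea, using a hypothetical internal lookup endpoint that records who asked for whom, so the dependency graph falls out of the queries themselves:

    // Hypothetical service-locator client: every lookup carries the caller's
    // identity, so the locator can log "profile-service -> user-service" edges.
    async function locate(caller: string, target: string): Promise<string> {
      const res = await fetch(
        `http://locator.internal/resolve?service=${encodeURIComponent(target)}`,
        { headers: { "X-Calling-Service": caller } }
      );
      if (!res.ok) throw new Error(`lookup of ${target} failed: ${res.status}`);
      const { address } = await res.json();
      return address; // e.g. "10.0.3.17:8080"
    }

    // Usage: the Profile service asking where the User service lives.
    locate("profile-service", "user-service").then(console.log);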