This may be a nice time and place to ask ... I have Docker problems.

We have a monorepo in GitLab and that works nicely. Many Dockerfiles live in various folders of this repo. They all build private Docker images that are pushed to AWS ECR. The images depend on each other, and some have multiple parents, so this forms a sort of family tree.

It is a major pain to know which Docker images are stale and need to be rebuilt, and to ensure that their parents are also up to date. We do all this manually and it sucks.

Some images take hours to build and we only want to rebuild them when necessary.
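To make the rebuild problem concrete, here is roughly the kind of script I imagine: scan the repo for Dockerfile + destination.txt pairs (the convention I describe below), parse FROM lines into a parent graph, topologically sort it so parents always come before children, and hash each build context to spot staleness. This is only a sketch; the context-hashing idea and the assumption that destination.txt holds exactly the reference used in child FROM lines are mine, not something we have working.

```python
# Sketch: build the image family tree from the repo layout and print a safe
# build order plus a digest per build context. Python 3.9+ (graphlib).
import hashlib
import pathlib
from graphlib import TopologicalSorter

REPO_ROOT = pathlib.Path(".")

def discover_images(root: pathlib.Path) -> dict:
    """Map final image name (contents of destination.txt) -> folder with its Dockerfile."""
    images = {}
    for dockerfile in root.rglob("Dockerfile"):
        dest = dockerfile.parent / "destination.txt"
        if dest.exists():
            images[dest.read_text().strip()] = dockerfile.parent
    return images

def parse_parents(folder: pathlib.Path, known: set) -> set:
    """Collect FROM references that point at images built from this repo."""
    parents = set()
    for line in (folder / "Dockerfile").read_text().splitlines():
        tokens = line.split()
        if tokens and tokens[0].upper() == "FROM":
            # Skip flags like --platform=...; the next token is the base image.
            base = next(t for t in tokens[1:] if not t.startswith("--"))
            if base in known:
                parents.add(base)
    return parents

def context_digest(folder: pathlib.Path) -> str:
    """Hash the build context; compare with the last built digest to detect stale images.
    (Assumption: a stale context is a good-enough proxy for 'needs rebuild'.)"""
    h = hashlib.sha256()
    for path in sorted(folder.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(folder).as_posix().encode())
            h.update(path.read_bytes())
    return h.hexdigest()

if __name__ == "__main__":
    images = discover_images(REPO_ROOT)
    graph = {name: parse_parents(folder, set(images)) for name, folder in images.items()}
    # Parents come out before children, so building in this order is always safe.
    for name in TopologicalSorter(graph).static_order():
        print(name, context_digest(images[name])[:12])
```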
It would be great if the build server cached build steps so that the resulting images could share more layers with previously downloaded images, saving hugely on disk space, time, and bandwidth for both people and servers.

We make sure to have one Dockerfile per folder, with a file called "destination.txt" next to it indicating the final image name; this lets scripts build the entire tree of images and their parents by scanning our repo.

We want nice automation. I don't even know what the best practices are here.

What should I do?
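For the layer-sharing part specifically, the closest thing I've found is BuildKit's registry cache exporter (docker buildx build with --cache-from / --cache-to type=registry), which stores the layer cache in ECR next to the image so other runners and laptops can reuse it. A minimal sketch of what I think that looks like, assuming a buildx builder with the docker-container driver; the image name and the ":buildcache" tag are placeholders, and the image-manifest/oci-mediatypes options are my guess at what ECR needs (check your versions):

```python
# Sketch: wrap `docker buildx build` so each build pulls and republishes its
# layer cache from/to a registry, instead of rebuilding everything from scratch.
import subprocess

def build_with_registry_cache(image: str, context_dir: str) -> None:
    cache_ref = f"{image.rsplit(':', 1)[0]}:buildcache"  # hypothetical cache tag convention
    subprocess.run(
        [
            "docker", "buildx", "build",
            "--tag", image,
            "--push",
            # Reuse cached layers published by previous builds of this image.
            "--cache-from", f"type=registry,ref={cache_ref}",
            # Publish this build's layer cache back to the registry. mode=max caches
            # intermediate layers too; image-manifest/oci-mediatypes may be required
            # for ECR to accept the cache manifest (assumption, depends on versions).
            "--cache-to",
            f"type=registry,ref={cache_ref},mode=max,image-manifest=true,oci-mediatypes=true",
            context_dir,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical ECR image; in our setup this would come from destination.txt.
    build_with_registry_cache(
        "123456789012.dkr.ecr.us-east-1.amazonaws.com/example/app:latest",
        "services/app",
    )
```

Combined with the graph script above, I could imagine CI only invoking this for images whose context digest (or any parent) changed, but I don't know if that's how people actually do it.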