The OCI Distribution Spec is not great; it does not read like a carefully designed specification.<p>> According to the specification, a layer push must happen sequentially: even if you upload the layer in chunks, each chunk needs to finish uploading before you can move on to the next one.<p>As far as I've tested with DockerHub and GHCR, chunked upload is broken anyway, and clients upload each blob/layer as a whole. The spec also promotes `Content-Range` value formats that do not match the RFC 7233 format.<p>(That said, there's parallelism at the level of blobs, just not within a single blob.)<p>Another gripe of mine is that they missed the opportunity to standardize pagination of tag listing, because they accidentally deleted some text from the standard [1]. Now different registries roll their own.<p>[1] <a href="https://github.com/opencontainers/distribution-spec/issues/461#issuecomment-1701554264">https://github.com/opencontainers/distribution-spec/issues/4...</a>
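To illustrate the Content-Range mismatch (a sketch from memory, so double-check against the spec and the RFC): the spec's chunked-upload examples use a bare byte range in PATCH requests, while RFC 7233 defines Content-Range as a response header with a unit and a total length:

    # OCI distribution-spec style (PATCH request header):
    Content-Range: 0-10000

    # RFC 7233 style (206 response header):
    Content-Range: bytes 0-10000/50000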
Actually, Cloudflare open-sourced a container registry server using R2.[1]<p>Anyone tried it?<p>[1]: <a href="https://github.com/cloudflare/serverless-registry">https://github.com/cloudflare/serverless-registry</a>
Hi HN, author here. If anyone knows why layer pushes need to be sequential in the OCI specification, please tell! Is it merely a historical accident, or is there some hidden rationale behind it?<p>Edit: to clarify, I'm talking about sequentially pushing a _single_ layer's contents. You can, of course, push multiple layers in parallel.
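For reference, the chunked upload flow I mean looks roughly like this (sketched from the spec; `<name>`, the upload URL, and the byte ranges are placeholders):

    POST  /v2/<name>/blobs/uploads/              -> 202 Accepted, Location: <upload-url>
    PATCH <upload-url>  Content-Range: 0-999     (first chunk)
    PATCH <upload-url>  Content-Range: 1000-1999 (next chunk; only after the previous PATCH returns)
    PUT   <upload-url>?digest=sha256:<digest>    (close the upload session)

Each PATCH has to start where the previous one ended, so the chunks of a single blob can't be uploaded concurrently.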
That's a pretty cool use case!<p>Personally, I just use Nexus because it works well enough (and supports everything from OCI images to apt packages, plus custom Maven, NuGet, and npm repos, etc.); however, the configuration and resource usage are both a bit annoying, especially when it comes to cleanup policies: <a href="https://www.sonatype.com/products/sonatype-nexus-repository" rel="nofollow">https://www.sonatype.com/products/sonatype-nexus-repository</a><p>That said:<p>> More specifically, I logged the requests issued by docker pull and saw that they are “just” a bunch of HEAD and GET requests.<p>This is immensely nice, and I wish more tech out there made common-sense decisions like this: just using what has worked for a long time instead of overcomplicating things.<p>I am a bit surprised that there aren't more simple container registries out there (especially with auth and cleanup support), since Nexus and Harbor are both a bit complex in practice.
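For anyone curious, the pull sequence is roughly the following (a sketch; the exact Accept/media-type headers vary by client and image):

    HEAD /v2/<name>/manifests/<tag>         # check the tag exists, read its digest
    GET  /v2/<name>/manifests/<tag>         # fetch the manifest JSON
    GET  /v2/<name>/blobs/sha256:<digest>   # fetch the config blob and each layer blob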
Note that CNCF's Distribution (formerly Docker's Registry) includes support for backing a registry with CloudFront signed URLs that pull from S3. [1]<p>[1] <a href="https://distribution.github.io/distribution/storage-drivers/middleware/" rel="nofollow">https://distribution.github.io/distribution/storage-drivers/...</a>
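The configuration looks roughly like this (a sketch based on the linked docs; the bucket name, CloudFront domain, and key paths are placeholders):

    storage:
      s3:
        region: us-east-1
        bucket: my-registry-bucket
    middleware:
      storage:
        - name: cloudfront
          options:
            baseurl: https://d111111abcdef8.cloudfront.net/
            privatekey: /etc/docker/cloudfront/pk-example.pem
            keypairid: EXAMPLEKEYPAIRID
            duration: 3000s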
What’s wrong with <a href="https://github.com/distribution/distribution">https://github.com/distribution/distribution</a>?
I don't do a ton with Docker outside dev tooling, but I have never understood why private container registries even exist. It just smells like rent-seeking. What real advantage do they provide over, say, just generating some sort of image file you manage yourself, as you please?
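To be concrete, I mean something like this (a rough sketch; it obviously skips the layer deduplication and caching a registry gives you):

    docker save myapp:1.2.3 | gzip > myapp-1.2.3.tar.gz   # export the image to a file
    # copy the file however you like (scp, S3, a USB stick, ...)
    docker load < myapp-1.2.3.tar.gz                       # import it on the target host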
It seems that ECR is actually designed to support uploading image layers in multiple parts.<p>Related ECR APIs:<p>- InitiateLayerUpload API: called at the beginning of the upload of each image layer<p>- UploadLayerPart API: called for each layer chunk (up to 20 MB)<p>- PutImage API: called after the layers are uploaded, to push the image manifest containing references to all image layers<p>The only weird thing seems to be that you have to upload layer chunks base64-encoded, which increases the data size by ~33%.
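A rough boto3 sketch of that flow (untested; the repository name, layer file, and tag are placeholders, and the manifest push is left as a comment):

    import hashlib
    import boto3

    ecr = boto3.client("ecr")
    repo = "my-repo"  # placeholder

    # 1. Start an upload session for one layer.
    upload = ecr.initiate_layer_upload(repositoryName=repo)
    upload_id, part_size = upload["uploadId"], upload["partSize"]

    # 2. Upload the layer in parts (ECR caps parts at 20 MB).
    layer = open("layer.tar.gz", "rb").read()
    digest = "sha256:" + hashlib.sha256(layer).hexdigest()
    for start in range(0, len(layer), part_size):
        chunk = layer[start:start + part_size]
        ecr.upload_layer_part(
            repositoryName=repo,
            uploadId=upload_id,
            partFirstByte=start,
            partLastByte=start + len(chunk) - 1,
            layerPartBlob=chunk,  # the raw API wants base64; boto3 encodes it for you
        )

    # 3. Close the layer upload, then push the manifest that references the layer.
    ecr.complete_layer_upload(repositoryName=repo, uploadId=upload_id, layerDigests=[digest])
    # ecr.put_image(repositoryName=repo, imageManifest=manifest_json, imageTag="latest")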
Interesting idea to use the file path layout as a way to control the endpoints.<p>I do wonder, though, how you would deal with the Docker-Content-Digest header. While not required, it is suggested that responses include it, as many clients expect it and will reject layers without the header.<p>Another thing to consider is that you will miss out on some features from the OCI 1.1 spec, like the referrers API, as that would be a bit tricky to implement.
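The digest value itself is easy to come by, since the layout is content-addressed; it's just the sha256 of the exact bytes served (a sketch below, with a placeholder path). The harder part is getting S3/CloudFront to attach it as a response header:

    import hashlib

    # Docker-Content-Digest is the sha256 of the exact response body,
    # e.g. for a manifest file stored under a tag path:
    body = open("v2/myapp/manifests/latest", "rb").read()
    print("Docker-Content-Digest: sha256:" + hashlib.sha256(body).hexdigest())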
> Why can’t ECR support this kind of parallel uploads? The “problem” is that it implements the OCI Distribution Spec…<p>I don't see any reason why ECR couldn't support parallel uploads as an optimization: provide an alternative to `docker push`, for those who care about speed, that doesn't conform to the spec.
What I would really love is for the OCI Distribution Spec to support just static files, so we could use dumb HTTP servers directly, or even file:// (for pulls). All the metadata could be (or already is) in the manifests; serving everything with Content-Type: application/octet-stream could work just fine.
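Concretely, I'm imagining a layout roughly like this (hypothetical; not something the current spec defines):

    v2/
      myapp/
        manifests/
          latest                    # manifest JSON, served as a plain file
          sha256:<manifest-digest>  # same manifest, addressable by digest
        blobs/
          sha256:<config-digest>
          sha256:<layer-digest>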
It's cool to see this; I was interested in trying something similar a couple of years ago, but priorities changed.<p>My interest was mainly from a hardening standpoint. The basic idea was that, via IAM permissions, the release system would be the only system with any write access to the underlying S3 bucket. All the public/internet-facing components could then be limited to read-only access as part of the hardening.<p>This would of course be in addition to signing the images, but I don't think many of the customers at the time knew anything about, or configured, any of the signature verification mechanisms.
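A minimal sketch of the bucket policy split I had in mind (the account ID, role names, and bucket name are all placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ReleaseSystemCanWrite",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::123456789012:role/release-system"},
          "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::my-registry", "arn:aws:s3:::my-registry/*"]
        },
        {
          "Sid": "PullersReadOnly",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::123456789012:role/registry-frontend"},
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::my-registry", "arn:aws:s3:::my-registry/*"]
        }
      ]
    }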
This is such a wonderful idea, congrats.<p>There is a real use case for this in some high-security sectors. I can't put the complete info here for security reasons; let me know if you are interested.
Make sure you use HTTPS, or someone could theoretically inject malicious code into your containers. If you want to use your own domain, though, you'll have to put CloudFront in front of S3.
R2 is only "free" until it isn't. Cloudflare hasn't gotten a lot of good press recently. Not something I'd want to build my business around.
I've started to grow annoyed with container registry cloud products. It's always surprisingly cumbersome to auto-delete old tags, deal with ACLs, or limit the networking.<p>It would be nice if a Kubernetes distro took a page out of the "serverless" playbook and just embedded a registry. Or maybe I should just use GHCR.