Weird: Docker Registry returns 500 only from Vultr

4 points by julienmarie about 1 year ago
Since last Friday, we've encountered persistent 500 Internal Server Error responses when accessing the Docker registry (https://registry-1.docker.io/v2/) from certain Vultr servers. Interestingly, this issue is not universal across all Vultr instances, nor is it present when accessing from local machines, other hosting services, or even some other Vultr servers. The problem seems to be isolated to specific Vultr datacenters, resulting in an inability to log in or pull images from the Docker registry.

This issue is significantly impacting the Vultr Kubernetes Engine (VKE), hindering any deployment efforts that rely on the Docker registry and preventing the addition of new nodes, as they depend on Calico, which is hosted on the Docker registry.

Vultr's support suggests the problem originates from Docker's side. However, the absence of widespread complaints or discussions on platforms like Twitter about Docker registry outages affecting Kubernetes deployments worldwide makes this unlikely. Notably, discussions on Docker's Discord and Reddit indicate that the issue is confined to Vultr users.

This situation raises questions about the underlying cause and potential solutions. Has anyone else experienced similar issues or found workarounds? Any insight or shared experiences would be greatly appreciated as we navigate this problem.
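For comparing notes across hosts, a minimal check is to hit the registry's v2 endpoint directly from an affected and an unaffected machine (a sketch; an unauthenticated request normally returns HTTP 401 with a Www-Authenticate challenge, so a 500 here points at either the path from the host or the registry itself):

    # Print the status code and the registry IP the request actually reached.
    curl -sS -o /dev/null -w 'HTTP %{http_code} via %{remote_ip}\n' https://registry-1.docker.io/v2/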

4 comments

robcxyz about 1 year ago
This might not be related, but I had an outage of a VKE cluster from Friday to Monday morning, and their customer support blamed it on Docker Hub. This didn't seem right at all, though, since the issue only came up when I upgraded a cluster and didn't impact every node. So, like their customer support normally does, they figured out some way to deflect the problem (i.e. point at Docker Hub despite the status page only showing some degradation) and ignored it. What didn't inspire confidence is that their customer support clearly doesn't understand k8s well, giving me a response to the effect of "clearly it is dockerhub's fault" while highlighting a pod's status, without going into the events or logs of the pod to see whether the containers were actually being pulled.

Again, not sure if this is related, but I'm using this as an opportunity to share how bad my experience has been with Vultr's customer support over the last couple of years. Every time I have interacted with them over an issue, it is some diagnosis that makes things not their fault somehow. When people have clusters out because of control-plane errors for multiple days, I would think they would be somewhat concerned or give some kind of response to the effect of an apology, especially when you're spending thousands every month. I doubt I'll get any reimbursement.

The worst situation in the past was when I complained about connectivity issues that I was sure were related to some firewall on their side that was throwing alarms for my app, and I kept trying to get them to look at it. After going absolutely crazy for a month trying to figure out what the hell was going on, I finally got my rep to look at it and, bam, they saw the issues and blamed it on a faulty cable. Faulty cables don't drop packets like what I saw, though, so now I honestly just don't know what to believe from them.
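For reference, a minimal sketch of the checks that separate a registry-side pull failure from something node- or network-specific (the namespace and pod names below are hypothetical):

    # Look for ImagePullBackOff / ErrImagePull and the exact error the kubelet got from the registry.
    kubectl get pods -n my-namespace
    kubectl describe pod my-pod -n my-namespace        # the Events section shows the pull error
    kubectl get events -n my-namespace --sort-by=.lastTimestamp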
LinuxBender about 1 year ago
Just anecdotally, and perhaps unrelated to your issue, I have a Primary DNS server in Vultr, and at times IPv4 times out, then IPv6. It hasn't been persistent enough for me to start troubleshooting it or setting up 3rd-party monitoring, but I may do that today if others are seeing odd behavior now. Perhaps together we could create a list of service endpoints to monitor each other, using curl or dig maybe, and find a pattern to it.

Something to play around with:

    # TCP AXFR.
    kdig @2001:19f0:b001:e83:5400:4ff:fe72:e740 +nocookie +padding=64 +retry=0 +all -t axfr example.net
    dig @216.128.176.142 +nocookie +padding=64 +retry=0 +all -t axfr example.net

    # UDP TXT or whatever
    kdig @2001:19f0:b001:e83:5400:4ff:fe72:e740 +nocookie +padding=64 +retry=0 +all -t txt example.net
    dig @216.128.176.142 +nocookie +padding=64 +retry=0 +all -t txt example.net
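A rough sketch of the kind of cross-monitoring suggested above, assuming an agreed-upon endpoint list (the URLs and interval here are placeholders):

    #!/bin/sh
    # Poll a fixed set of endpoints from each vantage point and log status codes,
    # so results from different Vultr regions and providers can be compared later.
    ENDPOINTS="https://registry-1.docker.io/v2/ https://auth.docker.io/token"
    while true; do
        for url in $ENDPOINTS; do
            code=$(curl -sS -o /dev/null -m 10 -w '%{http_code}' "$url")
            printf '%s %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$url" "$code"
        done
        sleep 60
    done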
inok6743 about 1 year ago
I am having the same issue now with the Cloud Compute servers in the Tokyo region.

In terms of a workaround, it seems like a server created without an IPv6 address works fine, and assigning an IPv6 network to the server causes the issue again for me.

So I guess that something is going wrong with Vultr's network configuration at this point.
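A quick way to test that theory from an affected instance is to pin the same request to each address family (a sketch; curl's -4 and -6 flags force IPv4-only or IPv6-only resolution):

    # Compare the two address families against the registry endpoint.
    curl -4 -sS -o /dev/null -w 'IPv4: HTTP %{http_code}\n' https://registry-1.docker.io/v2/
    curl -6 -sS -o /dev/null -w 'IPv6: HTTP %{http_code}\n' https://registry-1.docker.io/v2/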
rjst01 about 1 year ago
We noticed this yesterday trying to release a minor bug fix. As of this morning it appears to still be broken.

It's hard to see how it could be anything other than an issue on Docker's side - we are seeing a 500, after all. I need to unblock development ASAP, so for now the workaround for us has been to migrate our container registry to Azure.
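For anyone weighing the same workaround, the move is mostly retag-and-push; the registry and image names below are placeholders, and az acr import is useful here because the copy runs on Azure's side rather than over the affected network path:

    # Mirror an image into an existing Azure Container Registry.
    az acr login --name myregistry
    docker tag myorg/myapp:1.2.3 myregistry.azurecr.io/myapp:1.2.3
    docker push myregistry.azurecr.io/myapp:1.2.3

    # Or copy straight from Docker Hub server-side, bypassing the local pull path.
    az acr import --name myregistry --source docker.io/myorg/myapp:1.2.3

    # Then point the Kubernetes deployment's image field at myregistry.azurecr.io/myapp:1.2.3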