Are you guys really pulling more than 40 images per hour? Isn’t the free tier enough?
On Lemmy, it’s a sin to make money off your work, especially if it’s an open-source core project selling paid infrastructure/support on top. You can only ask for donations and/or quit. No in-between.
Even at work we don’t pull that many, and we have dozens of developers.
Also, it’s 40 per hour per user.
A single malfunctioning service that restarts in a loop can exhaust the limit near-instantly. And then you can’t bring up any of your services, because you’re blocked.
I’ve been there plenty of times. If you have to rely on docker.io, you’d better pay up. Running your own Nexus Repository Manager or Harbor as a pull-through proxy can drastically improve the situation, though.
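For what it’s worth, once such a pull-through proxy exists, pointing each Docker daemon at it is a small config change; a minimal sketch, assuming the proxy is reachable at mirror.lan:5000 (an assumed hostname):

```
# mirror.lan:5000 is an assumed hostname for a local pull-through proxy
# (a Harbor proxy-cache project, a Nexus Docker proxy repo, or the plain registry)
sudo tee /etc/docker/daemon.json <<'EOF'   # note: overwrites an existing file
{
  "registry-mirrors": ["https://mirror.lan:5000"]
}
EOF
sudo systemctl restart docker   # the daemon re-reads the mirror list on restart
# Docker Hub pulls now go through the mirror first and only fall back to
# docker.io if the mirror can't serve the image
```

That keeps repeated pulls of the same image local, so only the first fetch counts against the Hub limit.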
Docker is a pile of shit. Steer clear entirely of any of their offerings if possible.
I use Docker at home and at work, and Nexus at work too. I really don’t understand… even a malfunctioning service shouldn’t pull the image over and over; there should be a cache… It could be some fringe case, but I’ve never experienced it.
Ultimately, it doesn’t matter what caused you to be blocked from Docker Hub due to rate-limiting. When you’re in that scenario, it’s most cost-efficient to buy your way out.
If you can’t even imagine what would lead up to such a situation, congratulations, because it really sucks.
Yes, there should be a cache. But sometimes people force-pull images on service start to make sure they get the newest “latest”. And every tag floats, not just “latest”: lots of people don’t pin digests in their OCI references, which practically implies refreshing cached tags regularly. Especially when you start critical services, you might re-pull their tag in case it has drifted.
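Digest pinning is the usual way out of that; a rough sketch, using nginx purely as a stand-in image:

```
# see which digest the locally cached "latest" actually resolves to
docker image inspect --format '{{index .RepoDigests 0}}' nginx:latest
# -> docker.io/library/nginx@sha256:...

# a digest reference is immutable, so a cached copy can never silently drift
# and there is no reason to force-pull it on every service start
docker pull docker.io/library/nginx@sha256:<digest-from-above>   # placeholder digest
```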
Consider: you have multiple hosts in your home lab, each running a good couple of services. You roll out a container runtime upgrade across the network; it resets all caches and restarts all services. Some pulls fail, and some of those are for DNS and other critical services. Suddenly your entire network is down and you can’t even get on the Internet, because your Pi-hole doesn’t start. And you can’t recover, because you’re rate-limited.
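One mitigation, sketched very roughly: make sure critical containers can restart from the local cache alone, and only refresh their images deliberately (the Pi-hole tag below is illustrative):

```
# --pull missing (the default) touches the registry only when the image is
# absent locally, so a restart loop doesn't burn through the Hub rate limit
docker run -d --name pihole --restart unless-stopped --pull missing \
  pihole/pihole:2024.07.0   # a pinned tag, not :latest

# refresh on your own schedule (e.g. a maintenance window), not on every start:
# docker pull pihole/pihole:<newer-tag>   then recreate the container
```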
I’ve been there a couple of times until I worked on better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.
This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I’m not even talking about commercial use.
We just build from scratch and pull nothing.
One of the previous places I worked at had about a dozen outbound IP addresses (company VPN).
We also had 10k developers who all used docker.
We exhausted the rate limit constantly. They paid for an unlimited account, and we would just queue an automation that pulled the image and mirrored it into the local artifact repo.
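The mirroring step itself is tiny; a minimal sketch of what such an automation boils down to (registry.internal.example is an assumed hostname, python:3.12-slim just an example image):

```
SRC=docker.io/library/python:3.12-slim
DST=registry.internal.example/mirror/python:3.12-slim

docker pull "$SRC"           # one rate-limited pull from Docker Hub...
docker tag  "$SRC" "$DST"
docker push "$DST"           # ...then everyone pulls from the internal repo instead
```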
An enterprise with 10k developers should just invest in its own image hub. It’s not really that hard to do. Docker even open-sourced its registry under Apache 2.0.
They did.
Regardless, they still need a way to pull new ones.
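For reference, the registry Docker open-sourced (the Distribution project) can itself run as a pull-through cache, which covers exactly that “pull new ones” path; a minimal sketch:

```
# run the open-source registry as a caching proxy in front of Docker Hub
docker run -d --name hub-cache -p 5000:5000 --restart unless-stopped \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
# then point each daemon's "registry-mirrors" at http://<this-host>:5000
```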