S3 throttling due to package.json overwrites #404
Hi,

I am receiving the following error on Verdaccio 4.8.1 with the aws-s3-storage plugin, most probably caused by S3 throttling.

I'd like to have a configurable exponential backoff to be able to work around errors like this.

--
Max
Comments
Which command/endpoint raises this error? And how many packages are in the repository?
It's a result of a GET request produced by

Currently there are 5,942 packages in the repository (21,016 S3 objects). Only 7 of them were published locally; the remaining packages are proxied.
If you're using AWS S3 (not another vendor), it throttles per prefix in an S3 bucket, and the per-prefix limits are high. Throttling is usually unlikely to happen when hitting the individual endpoints, except for the

You may want to check whether it happens only once, or every time the same package is installed, or whether you share the S3 bucket with other busy services. My point is that you cannot just let your server-side endpoint wait a very long time to retry a third-party service; your client side (npm) will time out first.
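For reference, if the plugin's S3 client is the aws-sdk v2 `AWS.S3`, retry behavior can be tuned when the client is constructed. This is only a sketch of how a configurable, capped backoff could look; `maxRetries` and `retryDelayOptions` are real aws-sdk v2 options, but wiring them through the plugin's config is exactly what this issue requests and does not exist today:

```ts
import S3 from 'aws-sdk/clients/s3';

// Sketch: an S3 client with capped exponential backoff (aws-sdk v2).
// Exposing these values via the plugin config is hypothetical.
const s3 = new S3({
  region: 'us-east-1', // placeholder region
  maxRetries: 10, // retry up to 10 times on retryable errors such as 503 SlowDown
  retryDelayOptions: {
    // customBackoff receives the retry count and returns the delay in ms;
    // capping the delay keeps the total wait below the npm client's timeout.
    customBackoff: (retryCount: number) => Math.min(100 * 2 ** retryCount, 2000),
  },
});
```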
Maybe half of the errors are produced by requests for the same package, and the remainder come from requests to other packages. The bucket isn't accessed by other services. I can see that errors happened less frequently during the last week (around 50 errors) than during the previous week (around 2,600 errors). I am using

I suppose the throttling response is returned by S3 right after a request reaches it. In that case the retry request might be sent within 100 ms and would not cause an npm client timeout.
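The timing intuition here can be checked with quick arithmetic: if the underlying retry logic doubles a 100 ms base delay on each attempt (both numbers are assumptions for illustration, not measured values), ten retries accumulate to roughly 102 seconds in the worst case, far beyond a single quick 100 ms retry. A sketch:

```ts
// Worst-case cumulative delay for exponential backoff without jitter:
// the sum of base * 2^i for i = 0..retries-1.
const base = 100; // ms, assumed base delay
const retries = 10; // assumed retry budget
const total = Array.from({ length: retries }, (_, i) => base * 2 ** i)
  .reduce((a, b) => a + b, 0);
console.log(`${total} ms`); // 102300 ms, i.e. ~102 s in the worst case
```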
By querying the S3 access logs I can see that only PUT requests are throttled (during the last hours at least, while logging was enabled). I believe this is because every package metadata request triggers a PUT operation. For example, the package @types/node was requested from Verdaccio 15 times per minute, of which one request resulted in status 500 with the SlowDown message. In the S3 access logs during that time there were 13 PUT operations with status 200 and 16 PUT operations with status 503 carrying the SlowDown error code. Probably exponential backoff or another retry algorithm already retries on the 503 SlowDown responses from S3, and after 10 retries Verdaccio responds with error status 500. I am wondering why the default 2 minutes of the
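For anyone reproducing this analysis, a minimal scan of S3 server access logs for throttled PUTs could look like the sketch below. The log file path is a placeholder, while `REST.PUT.OBJECT`, the 503 status, and the `SlowDown` error code are standard fields of the S3 server access log format:

```ts
import { readFileSync } from 'node:fs';

// Count throttled PUTs in an S3 server access log. Each log line contains,
// among other fields, the operation (e.g. REST.PUT.OBJECT), the HTTP
// status, and the error code (SlowDown when the request was throttled).
const lines = readFileSync('access-log.txt', 'utf8').split('\n'); // placeholder path
const throttledPuts = lines.filter(
  (line) => line.includes('REST.PUT.OBJECT') && line.includes('SlowDown'),
);
console.log(`${throttledPuts.length} throttled PUT requests`);
```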
Maybe package.json is overwritten on each request because of a malfunctioning ETag comparison? I am using KMS bucket encryption, and thus the ETag is not an MD5 digest of the object data.
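As an illustration of this hypothesis only (a hypothetical reconstruction, not the plugin's actual code): under SSE-KMS the ETag S3 returns is not the MD5 of the object body, so an MD5-based change check would always see a mismatch and re-upload package.json on every request:

```ts
import { createHash } from 'node:crypto';
import S3 from 'aws-sdk/clients/s3';

const s3 = new S3();

// Hypothetical change check: compare a local MD5 against the stored ETag.
// With SSE-KMS encryption the ETag is NOT the MD5 of the body, so this
// comparison never matches and the caller would PUT on every request.
async function needsUpload(bucket: string, key: string, body: string): Promise<boolean> {
  const localMd5 = createHash('md5').update(body).digest('hex');
  const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
  const etag = (head.ETag ?? '').replace(/"/g, ''); // ETag is quoted in responses
  return etag !== localMd5; // always true under SSE-KMS -> constant overwrites
}
```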