Node.js servers being down fails the CI workflows #20
Comments
WIP here: alzo-archi@9ab9452. The idea would be to make the backoff optional (the default `max_attempts \\ 1` param makes it opt-in) so that people can control this behavior from the outside (personally, I would launch the install in an Oban job as part of a workflow DAG). Would that be a direction that suits you? I'll have to separate the actual downloading from this, to unit test the backoff separately in my next commits.
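For illustration, a minimal sketch of the "control it from the outside" idea with an Oban worker (the worker name and the `install/1` call are hypothetical, not the actual Nodelix API); Oban itself re-runs failed jobs with its own backoff, up to `max_attempts`:

```elixir
defmodule MyApp.Workers.InstallNode do
  # Hypothetical Oban worker: the job runner drives retries from the outside,
  # re-running the job with Oban's own backoff whenever it returns an error.
  use Oban.Worker, queue: :default, max_attempts: 5

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"version" => version}}) do
    # Hypothetical install call standing in for the actual download step.
    case MyApp.NodeInstaller.install(version) do
      :ok -> :ok
      {:error, reason} -> {:error, reason}
    end
  end
end

# Enqueued as part of a workflow DAG:
# %{version: "20.10.0"} |> MyApp.Workers.InstallNode.new() |> Oban.insert()
```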
Looks great, I like the direction this is taking! Regarding the default for […] What do you think?
I think you're right. In that case I'd write a tighter loop that sleeps for at most 50 ms at a time and checks whether a time boundary has been exceeded, to decide whether to sleep again or trigger the retry, because I seem to remember that sleeping for more than a few tens of milliseconds is suboptimal for the BEAM's preemptive scheduling.
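A minimal sketch of that sliced sleep, assuming the goal is to wait out a retry delay without ever blocking for more than 50 ms at a stretch (module and function names are made up):

```elixir
defmodule SlicedSleep do
  @slice_ms 50

  # Sleep in slices of at most 50 ms, re-checking the deadline each time.
  def sleep_until(deadline_ms) do
    remaining = deadline_ms - System.monotonic_time(:millisecond)

    if remaining > 0 do
      Process.sleep(min(remaining, @slice_ms))
      sleep_until(deadline_ms)
    else
      :ok
    end
  end
end

# Wait roughly 2 seconds before triggering the next retry attempt:
# SlicedSleep.sleep_until(System.monotonic_time(:millisecond) + 2_000)
```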
Looks like I'm wrong on that one: after searching, sleeping is generally discouraged because it's often the wrong tool compared to message passing, but it's fine here.
Now working on that in #23.
🎉 This issue has been resolved in version 1.0.0-alpha.15 🎉 The release is available on:
Your semantic-release bot 📦🚀
When using Nodelix in CI (e.g. through `semantic_release`), sometimes the Node.js servers are down and the whole CI fails. We should consider implementing an exponential backoff retry to mitigate that issue and eliminate the need to manually relaunch the workflow (sometimes multiple times).
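A minimal sketch of what such a retry could look like, assuming the defaults discussed above (hypothetical module and function names, not the actual implementation; with `max_attempts \\ 1` the behavior is unchanged unless the caller opts in):

```elixir
defmodule DownloadWithBackoff do
  @base_delay_ms 1_000

  # Retries `fun` up to `max_attempts` times, doubling the delay
  # after each failed attempt (1s, 2s, 4s, ...).
  def retry(fun, max_attempts \\ 1, attempt \\ 1) do
    case fun.() do
      {:ok, result} ->
        {:ok, result}

      {:error, _reason} when attempt < max_attempts ->
        Process.sleep(@base_delay_ms * round(:math.pow(2, attempt - 1)))
        retry(fun, max_attempts, attempt + 1)

      {:error, reason} ->
        {:error, reason}
    end
  end
end

# Usage sketch, wrapping a hypothetical download step with up to 5 attempts:
# DownloadWithBackoff.retry(fn -> download_nodejs_archive(version) end, 5)
```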