Updates for self-hosted runner #100
Thanks @cboettig. The jobs I've requested for
@jzwart thanks for the ping! Yup, you're right: I needed to manually add any public repos to the whitelist for the runner group in https://github.com/organizations/eco4cast/settings/actions/runner-groups/1. Done now.
You should see that you now have a runner listed in this repo's runner list at https://github.com/eco4cast/usgsrc4cast-ci/settings/actions/runners. We probably need to make you an org admin for you to be able to see the previous link, which lists all public repos that can access the runner. @rqthomas should we bump Jake to admin? Also, should we have a governance process for that?
@jzwart for https://github.com/eco4cast/usgsrc4cast-ci/actions/runs/12559914161/job/35016390626, can you drop the use of futures here? gdal is already threaded anyway, so I don't think it's helping that much.
Thanks! It works now, and I removed futures in the stage 3 update. I think @rqthomas and I were supposed to work on a governance process for the eco4cast org, but we haven't gotten around to doing that yet. This could be motivation to do so.
@jzwart I've needed to juggle things for the self-hosted runner to better accommodate loads. For any tasks that need a self-hosted runner, you will need to request the efi-cirrus runner group, as in this example. It's really important that you include options: memory="15g" (or whatever value up to 45g) in any workflows on the self-hosted runner. (Unfortunately/astonishingly, GitHub ARC doesn't provide a mechanism to enforce this on runners that use custom Docker images other than opting in to limits in the job YAML.)
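The workflow example referenced above wasn't captured in this thread, so here is only a minimal sketch of what such a job might look like. The efi-cirrus group name and the memory="15g" option come from the comment above; the workflow name, image, and steps are illustrative placeholders, and the exact way the memory limit is expressed (e.g. a docker-style --memory flag) may depend on how the runner set is configured.

```yaml
# Hypothetical sketch of a job targeting the self-hosted runner group.
# Only the efi-cirrus group name and the memory option are taken from the
# comment above; everything else is an illustrative placeholder.
name: example-self-hosted-task

on:
  workflow_dispatch:

jobs:
  example-task:
    runs-on:
      group: efi-cirrus          # request the efi-cirrus runner group
    container:
      image: rocker/geospatial   # placeholder image, not the project's actual one
      options: memory="15g"      # memory cap as described above (up to 45g)
    steps:
      - uses: actions/checkout@v4
      - name: Run task
        run: echo "task goes here"
```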