We recently seem to have hit some kind of limit with the bucket: the automated ingestion only picks up the first 500 tarballs it finds in the S3 bucket, so new skylake tarballs are not found (keys are listed alphabetically, and skylake is at the bottom of the list). We can probably work around it by paginating the listing, and we were considering reimplementing this workflow anyway. For now I've done a quick workaround: I set up a new bucket (software.eessi.io-archive), synced all files from the existing bucket to the archive bucket, and removed them from the existing one.
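For reference, a minimal sketch of what paginated listing could look like, assuming the ingestion script uses boto3 and the source bucket is software.eessi.io (both are assumptions, not taken from the current implementation):

```python
import boto3

# Hypothetical bucket name; the real ingestion script may differ.
BUCKET = "software.eessi.io"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# The paginator transparently follows continuation tokens, so all keys are
# returned instead of only the first page of results.
keys = []
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        keys.append(obj["Key"])

print(f"found {len(keys)} objects")
```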
We can do that more often by running the following:
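(The exact commands are not included here; a rough boto3 equivalent of the copy-and-remove step is sketched below, assuming the source bucket is software.eessi.io. The actual workflow may well use the aws CLI instead.)

```python
import boto3

# Hypothetical bucket names for illustration.
SRC_BUCKET = "software.eessi.io"
DST_BUCKET = "software.eessi.io-archive"

s3 = boto3.resource("s3")
src = s3.Bucket(SRC_BUCKET)

# Copy every object to the archive bucket, then delete it from the source.
for obj in src.objects.all():
    s3.meta.client.copy({"Bucket": SRC_BUCKET, "Key": obj.key}, DST_BUCKET, obj.key)
    obj.delete()
```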
This should only be run if there are no open PRs in the staging repo and if no tarballs were just uploaded to the bucket (i.e. no open PRs with the bot: deploy label), otherwise those tarballs will be lost.