
Group workflows by trigger type and initial commit for the integration tests workflow #611

Merged 2 commits on Jan 20, 2025

Conversation

rdimitrov
Member

@rdimitrov rdimitrov commented Jan 16, 2025

This PR groups some of the existing workflows by trigger type and adds an initial workflow for the integration tests.

Changes:

  • Adds 3 new workflows that run on PR, push to main, and published release
  • Updates the existing workflows to be triggered by a workflow call and removes their existing triggers
  • Updates the README CI badge to refer to the new workflow
  • Adds a placeholder for the integration tests
  • Updates the small LLM used for the integration tests (for some reason this was needed; otherwise it failed to load)
  • Updates the integration tests to return a non-zero exit code when one or more tests fail (needed so the CI job fails in such cases)

Reasoning:

  • Treating workflows as functions (known as reusable workflows) and organising them that way makes it more visible which workflows run for each trigger type, and also makes it easy to extend the list in the future (see the sketch after this list).
  • In this case it makes adding the integration tests easier, since the container image built by the previous workflow is re-used in the integration steps by passing it along as an artifact.
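
As a rough illustration of the reusable-workflow pattern described above (a sketch only; the file names and job contents here are hypothetical, not necessarily the ones used in this PR):

# .github/workflows/run-on-push-to-main.yml (hypothetical) - one small entry point per trigger type
name: Run on push to main
on:
  push:
    branches: [main]

jobs:
  ci:
    # Call the existing workflow like a function instead of giving it its own trigger
    uses: ./.github/workflows/ci.yml

# .github/workflows/ci.yml (hypothetical, separate file) - now triggered only via workflow_call
name: CI
on:
  workflow_call:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test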

@rdimitrov rdimitrov force-pushed the workflows branch 15 times, most recently from 51ce373 to 1cc7334 Compare January 16, 2025 17:07
@jhrozek
Contributor

jhrozek commented Jan 16, 2025

this is looking great! Let's figure out the TLS errors tomorrow

      - name: Run integration tests
        env:
          CODEGATE_PROVIDERS: "copilot"
          CA_CERT_FILE: "/home/runner/work/codegate/codegate/codegate_volume/certs/ca.crt"
Contributor

You might be able to fix the CA stuff with something like this:

cp ca.crt /usr/local/share/ca-certificates/codegate.crt
update-ca-certificates

I don't know if sudo would be needed; do these run as root on GitHub?
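
(For reference, GitHub-hosted Ubuntu runners run as a non-root user with passwordless sudo, so the suggestion above would roughly become a workflow step like this; a sketch only, with the cert path assumed from the CA_CERT_FILE value quoted earlier and a made-up step name:)

      - name: Install the CodeGate CA into the system trust store
        run: |
          # Path assumed from the CA_CERT_FILE value above (relative to the workspace)
          sudo cp ./codegate_volume/certs/ca.crt /usr/local/share/ca-certificates/codegate.crt
          sudo update-ca-certificates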

Contributor

We should be able to either run the tests without installing the cert (just passing CA_CERT_FILE), or drop the CA_CERT_FILE argument if we have installed the cert into the system store.

@rdimitrov rdimitrov force-pushed the workflows branch 12 times, most recently from 9779706 to 4782697 Compare January 17, 2025 13:23
@rdimitrov rdimitrov force-pushed the workflows branch 5 times, most recently from 3bce730 to 22cc684 Compare January 17, 2025 15:55
@@ -12,15 +16,33 @@ jobs:
   docker-image:
     name: Check docker image build
     runs-on: ubuntu-latest
     env:
       IMAGE_NAME: stacklok/codegate
       IMAGE_TAG: dev
Contributor

We don't push this image, do we? It's just a tag used for the artifact that we upload and then download?

Member Author

We don't, no. This is just so it has a reference that we can then use to save it as a tar file.
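
In other words, the tag just gives the image a local reference so it can be exported and handed to the integration-test job as an artifact, roughly along these lines (a sketch; artifact and step names are illustrative, not necessarily the exact ones in this PR):

      # In the build job: export the freshly built image and upload it
      - name: Save image to a tar file
        run: docker save -o image.tar "$IMAGE_NAME:$IMAGE_TAG"
      - uses: actions/upload-artifact@v4
        with:
          name: docker-image
          path: image.tar

      # In the integration-tests job: download the artifact and load the image back
      - uses: actions/download-artifact@v4
        with:
          name: docker-image
      - name: Load the image
        run: docker load -i image.tar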

name: sqlite_data
name_is_regexp: true
skip_unpack: false
if_no_artifact_found: ignore
Contributor

why do we use this parameter? Wouldn't this cause the integration tests to be ignored if the artifact is not built?

Member Author

I got this part from the image-publish workflow to solve one of the issues I faced: codegate did not have a populated db, so the invokehttp test was failing because of that. With this step I'm adding the latest db.

That said, I don't think this is an expected use case (we call a workflow that does produce an artifact).

Contributor

Interesting - so the CI images are expected to contain not the full db but only the contents of https://github.com/stacklok/codegate/tree/main/data. I worked around that in the initial integration tests patch by adding invokehttp into the JSON files, which is not ideal; if we can copy the db, then much better. I'd just be careful that we are not taking too much time or resources.

Member Author

We were looking at this issue with Pankaj and Michelangelo, and it turns out the dataset was not getting imported while building the image. We discussed that using the latest dataset vs the minimal one should not have any performance implications (and if it does, it would be nice to catch that, right).

Another concern was that the tests might rely on a given package being malicious, while in the meantime its maintainers improved its score and that is no longer the case. Even though possible, this will rarely happen, hence the decision.

@jhrozek
Contributor

jhrozek commented Jan 17, 2025

@rdimitrov not related to this patch, but iirc you mentioned on Slack that if one of these test cases fails the whole test run wouldn't fail, do we have an issue for that?

@rdimitrov
Member Author

@rdimitrov not related to this patch, but iirc you mentioned on Slack that if one of these test cases fails the whole test run wouldn't fail, do we have an issue for that?

I don't think we have, but I should have fixed it as part of this PR (check the changes in integration_tests.py). If one or more tests fail now, we'll return a non-zero code, as opposed to before.

@jhrozek
Contributor

jhrozek commented Jan 18, 2025

@rdimitrov not related to this patch, but iirc you mentioned on Slack that if one of these test cases fails the whole test run wouldn't fail, do we have an issue for that?

I don't think we have, but I should have fixed it as part of this PR (check the changes in integration_tests.py). If one or more tests fail now, we'll return a non-zero code, as opposed to before.

Oh, I missed those, I was just reading the workflows. Sloppy review..

jhrozek
jhrozek previously approved these changes Jan 18, 2025
Contributor

@jhrozek jhrozek left a comment

Thank you for answering my questions during the review!

lukehinds
lukehinds previously approved these changes Jan 20, 2025
@lukehinds
Contributor

@dependabot rebase
