build: update dockerfiles #153
Conversation
@LouisBrunner how do you feel about this change? See the suggestion by Enrico in the parent issue. It would be kinda cool to have a 'development' image with protobuf and all dependencies installed, to keep the testing in the CI efficient and so that users don't have to build protobuf themselves if they want to work with devcontainers. What do you think about a "docker architecture" where we have two Dockerfiles, and the production one that builds the sources builds on top of the development dependencies? Something to avoid, or could we do that? (This is very low priority, I'm just hoping for some ideas/brainstorming, not for you to take action)
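Purely as a brainstorming aid, a minimal sketch of that layered idea as a single multi-stage Dockerfile (the two stages could equally live in separate files); all package names, paths, and build commands are illustrative placeholders, not the actual CL setup:

```dockerfile
# Sketch only: package names, paths, and build commands are illustrative
# placeholders, not the actual control-libraries build.

# Development image: holds all build-time dependencies, so CI and
# devcontainers can pull it instead of compiling protobuf and friends.
FROM ubuntu:22.04 AS development
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git \
    && rm -rf /var/lib/apt/lists/*
# ... build protobuf, pinocchio, etc. from source here ...

# Production image: builds the CL sources on top of the development
# dependencies instead of repeating them.
FROM development AS production
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build && cmake --install /build
```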
What would be the main difference? That one has the SSH configuration set up and the other doesn't? Is that something that can be handled through
I would maybe go even broader and ask: which package is responsible for system libraries like Protobuf? As CL is the one that requires it, shouldn't it always be bundled with it, be it in prod or dev?
The main difference would be that we'd have an image that holds all the development dependencies (protobuf, pinocchio, etc.). Right now, it takes about 10-30 minutes to build a dev image for the devcontainer in VS Code, and even more to build the image locally. So it's not just about the SSH and user config but mainly about having the dependencies built remotely.
Yes, it should be bundled, but it's currently bundled as
Gotcha. I guess we could revive the
All of this should be cached, including the pinocchio build, so someone doing regular builds locally should see short build times. That doesn't help the CI, agreed, but maybe the devcontainer setup is faulty?
We copy the dependencies of one image into the other. We've been doing this here
I guess you would copy from a tagged image, so either it's available locally or it will pull the one from GitHub.
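For illustration, a `COPY --from` referencing a tagged image does exactly that: Docker uses the local copy if present and pulls from the registry otherwise. The registry path below is a made-up placeholder:

```dockerfile
FROM ubuntu:22.04
# Copy the prebuilt dependency install tree from a tagged development
# image; ghcr.io/example-org/cl-development is a hypothetical name.
COPY --from=ghcr.io/example-org/cl-development:latest /usr/local /usr/local
```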
I'm not sure; I think we can get away with the same amount as we have now.
The build that happens on main would be two steps (where the first blocks the second): first the development image is built, and then we build the final CL image.
See above.
Don't know; quite big, probably.
No no, you are right, and the devcontainer setup is good, but I happen to delete my images regularly, and then I get upset when I want to spend 5 minutes in CL but have to wait 15 minutes for a devcontainer.
So you would never build the dependencies locally?
Potentially, depends on the above.
Because multi-GB downloads are pretty painful too (and I'm not sure how much they would cost us either?). On the other hand, we could look at Docker cache-from/cache-to (especially as it integrates with GitHub directly: https://docs.docker.com/build/cache/backends/gha/). @eeberhard had also found better and cheaper runners which included better caching, IIRC. Ultimately, I am happy with either, but this kind of setup will definitely make it a bit more painful to use and thus to integrate with other systems (e.g. package-builder).
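For reference, the GHA cache backend from that link comes down to two flags on the buildx call; note it only works inside a GitHub Actions runner, which provides the cache API credentials (the image name below is a placeholder):

```sh
# Reuse the build cache stored in the GitHub Actions cache service.
# mode=max also exports layers of intermediate stages, not just the
# final image; <registry>/<image> is a placeholder.
docker buildx build --push -t <registry>/<image> \
  --cache-from type=gha \
  --cache-to type=gha,mode=max .
```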
Not really, but that would be a relief from my point of view, no?
CL is public, we're not going to pay for the CI, and the final image will be the same size as it is now since we just copy the necessary files to a scratch image.
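That pattern, sketched (the `build` stage name and install path are illustrative):

```dockerfile
# The final image starts empty and only receives the built artifacts,
# so its size is independent of how heavy the build stages were.
FROM scratch
COPY --from=build /install /usr/local
```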
Well, is the goal to introduce package-builder here as well?
Depends on the size of the image, but most of the time I imagine yeah.
Ah, so you want it squashed, gotcha.
I thought we might, because other people might want to create libraries which can be used by multiple packages instead of building them for each one separately.
How do you like the PR in its current state now? It solves the original issue that the base image is not a ROS image. To me, this is an acceptable change, and then we can discuss the other points that were raised in this discussion in a follow-up issue.
PR description updated
One question, then LGTM.
Thanks for the discussion and work on this issue / PR. It seems in a good state to me!
Description
This PR solves the issue by building the CL image based on `ubuntu:22.04` and not a ROS 2 workspace image. This requires configuring a user called `developer` like before. Apart from that, just minor changes that came along with this.

Review guidelines
Estimated Time of Review: x minutes
Checklist before merging: