
feat: use packageManager as an attempt to identify node package manager #1263

Merged
merged 5 commits into railwayapp:main on Jan 21, 2025

Conversation

arthurfiorette
Contributor

@arthurfiorette commented on Jan 13, 2025

When running nixpacks inside a pnpm monorepo, a lockfile will not be present at the cwd. To avoid this problem, this change first reads the packageManager field from package.json before falling back to checking lockfiles until a match is found.
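For illustration (the pnpm version here is hypothetical), the field looks like this in package.json and can be read without any lockfile present:

# hypothetical package.json excerpt: "packageManager": "pnpm@9.15.0"
node -p "require('./package.json').packageManager"
# -> pnpm@9.15.0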

Ideally it would be great if we could choose a package manager as a configurable option in nixpacks.toml, but since I did not see any other providers using dedicated per-provider configuration, I was not sure if that's something desirable to have...

@coffee-cup added the release/minor (Author minor release) label on Jan 16, 2025
@arthurfiorette
Contributor Author

@coffee-cup I've fixed linting and improved the documentation to state that the packageManager field is now taken into account.

@arthurfiorette
Contributor Author

There's another problem when running with pnpm workspaces:

In the latest pnpm version, only one lockfile is present at the root of the workspace, meaning neither a packageManager field nor a pnpm-lock.yaml will be present next to the package being built. To avoid a giant refactor to account for a "root" folder that differs from the project folder, we could use pnpm deploy instead of install and build all local dependencies recursively.

# deploy the target package and its dependencies into an isolated ./target folder
pnpm deploy --filter=<npm package name> ./target
cd target
# the trailing ... in the filter also selects the package's local workspace dependencies
pnpm build --filter=<npm package name>...

Doing something like the above, deploy would install only the required dependencies into a subfolder, and build (with ... at the end of the filter) would build all local dependencies of the project before building the project itself, making monorepos work just like normal packages. In the case of a single package, deploy and build would behave just as if it were in a workspace.

This is also how pnpm itself recommends building Docker images.

What do you think about this?

@arthurfiorette
Contributor Author

We could even improve install times by first copying only pnpm-lock.yaml and running a fetch to load all dependencies before copying the whole project, so dependencies get cached in a layer that is resistant to changes in the codebase and even to non-dependency changes in package.json.
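A rough sketch of that layering, assuming a Dockerfile-style build where each step below ends up in its own cache layer (paths and ordering are illustrative, not the provider's actual output):

# step 1: only pnpm-lock.yaml has been copied into the image at this point,
# so this layer stays cached until the lockfile itself changes
pnpm fetch
# step 2: copy the rest of the project, then install from the prefetched store
pnpm install --offline --frozen-lockfile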

@coffee-cup
Contributor

Thanks for the docs and lint fixes. I’m afk until Monday and will merge and release it then.

Great ideas for improving the pnpm provider too. Fetching deps in another layer would certainly help deploy times. I'm a bit unsure why it is an issue if there is only a single lockfile at the root though. Does that not work with the current system?

@arthurfiorette
Contributor Author

If the lockfile is in the workspace's root and I build in a subfolder, pnpm inside Docker will not be aware of the monorepo, nor of local dependencies installed with the workspace:* protocol, and there is no local lockfile. Hence, it breaks when installing with --frozen-lockfile because there's no lockfile. But even after removing --frozen-lockfile, local dependencies (the most basic feature of workspaces) would still break.
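Concretely (the package name is hypothetical), a dependency like the one below only resolves when pnpm can see the whole workspace:

# in the app's package.json: "dependencies": { "@acme/shared-utils": "workspace:*" }
# run from the app subfolder alone, without the root lockfile or workspace layout:
pnpm install --frozen-lockfile   # errors because no pnpm-lock.yaml exists here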

@coffee-cup
Contributor

Ah I see. Yes, in that case it is recommended to build from the root and build a specific package with --filter.
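Something along these lines, run from the workspace root (reusing the <npm package name> placeholder from the commands above):

# from the root, the single lockfile and all workspace links are visible
pnpm install --frozen-lockfile
# build the target package plus its local workspace dependencies
pnpm --filter=<npm package name>... run build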

@coffee-cup merged commit c434a25 into railwayapp:main on Jan 21, 2025
1 check passed
@arthurfiorette
Contributor Author

Hey @coffee-cup, but in that case wouldn't the build invalidate its cache on any unrelated change across the entire workspace?
