
Add a Multilevel schwarz preconditioner #1431

Open · pratikvn wants to merge 8 commits into develop from multilevel-schwarz

Conversation

pratikvn (Member)

This PR uses the distributed coarse level generation from PGM as the coarse level for the additive Schwarz preconditioner. The only requirement is that the Galerkin product generator ($RAP$) produce a multigrid::MultigridLevel, i.e. a triplet of restrict, prolong, and coarse operators. The user additionally needs to set a solver for the coarse level.
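
Below is a sketch (not code from this PR) of how the resulting two-level preconditioner might be configured. It assumes that the new deferred factory parameters surface as with_galerkin_ops() and with_coarse_solver() on the Schwarz builder and that PGM is used as the coarse level generator; the helper name make_two_level_schwarz and the concrete local/coarse solver choices are purely illustrative.

// Sketch only: possible configuration of the two-level Schwarz preconditioner.
#include <memory>

#include <ginkgo/ginkgo.hpp>

std::shared_ptr<const gko::LinOpFactory> make_two_level_schwarz(
    std::shared_ptr<const gko::Executor> exec)
{
    using ValueType = double;
    using LocalIndexType = gko::int32;
    using GlobalIndexType = gko::int64;
    using schwarz = gko::experimental::distributed::preconditioner::Schwarz<
        ValueType, LocalIndexType, GlobalIndexType>;
    using pgm = gko::multigrid::Pgm<ValueType, LocalIndexType>;
    using cg = gko::solver::Cg<ValueType>;
    using jacobi = gko::preconditioner::Jacobi<ValueType, LocalIndexType>;

    return schwarz::build()
        // local subdomain solver (the existing one-level part)
        .with_local_solver(jacobi::build().with_max_block_size(1u))
        // generates the MultigridLevel triplet: restrict R, coarse A_c = R*A*P,
        // prolong P (distributed coarse level generation via PGM)
        .with_galerkin_ops(pgm::build().with_deterministic(true))
        // solver applied to the Galerkin coarse system A_c
        .with_coarse_solver(cg::build().with_criteria(
            gko::stop::Iteration::build().with_max_iters(10u).on(exec)))
        .on(exec);
}

Any factory that produces a multigrid::MultigridLevel should satisfy the galerkin_ops requirement described above; PGM is just the generator this PR exercises.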

TODO

  • Add options to switch between multiplicative and additive variants.
  • Add options for an arbitrary number of levels.

Possible issues

  1. The coarse level solve significantly reduces the number of iterations, but can be extremely expensive. It may make sense to apply the coarse solve only every few preconditioner applications instead of on every apply (being careful to keep the preconditioner application consistent).
  2. Weighting between the local solver solution and the coarse solution is unclear.
  3. Though the coarse solve and the local solve are independent, we have no way of performing them in parallel. This will probably require a rewrite of the apply interface to enable asynchronicity, in addition to support for multiple streams/queues.

@pratikvn pratikvn self-assigned this Oct 15, 2023
@ginkgo-bot ginkgo-bot added reg:testing This is related to testing. mod:core This is related to the core module. reg:example This is related to the examples. type:preconditioner This is related to the preconditioners labels Oct 15, 2023
@tcojean tcojean added this to the Ginkgo 1.9.0 milestone May 3, 2024
@yhmtsai yhmtsai force-pushed the distributed_pgm branch 4 times, most recently from a10beb2 to 8667fa1 on May 9, 2024 08:00
Base automatically changed from distributed_pgm to develop May 9, 2024 12:23
@pratikvn pratikvn marked this pull request as draft July 11, 2024 14:31
@pratikvn pratikvn force-pushed the multilevel-schwarz branch 2 times, most recently from 24c6f7d to 8df491e on August 12, 2024 14:45
@pratikvn pratikvn force-pushed the multilevel-schwarz branch from 8df491e to 9101732 on October 7, 2024 12:18
@MarcelKoch MarcelKoch modified the milestones: Ginkgo 1.9.0, Ginkgo 1.10.0 Dec 9, 2024
ginkgo-bot (Member)

Error: The following files need to be formatted:

core/distributed/preconditioner/schwarz.cpp
core/test/mpi/distributed/preconditioner/schwarz.cpp
examples/distributed-solver/distributed-solver.cpp
include/ginkgo/core/distributed/preconditioner/schwarz.hpp
test/mpi/preconditioner/schwarz.cpp

You can find a formatting patch under Artifacts, or run format! if you have write access to Ginkgo.

@pratikvn pratikvn marked this pull request as ready for review January 31, 2025 16:03
@pratikvn pratikvn changed the title WIP: Add a Multilevel schwarz preconditioner Add a Multilevel schwarz preconditioner Jan 31, 2025
pratikvn (Member, Author)

As a first step, I have added a two-level preconditioner with equal weighting between the local solution and the coarse solution. I think it makes sense to try this out in some applications first and then think about an arbitrary number of levels, additive/multiplicative variants, etc.
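
Read together with the apply code quoted in the review below, this equal weighting amounts (roughly) to $x = \tfrac{1}{2} M_{\mathrm{local}}^{-1} b + \tfrac{1}{2} P A_c^{-1} R b$, where $M_{\mathrm{local}}^{-1}$ denotes the one-level (local) Schwarz solve and $R$, $A_c = RAP$, $P$ are the restriction, Galerkin coarse operator, and prolongation.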

@pratikvn pratikvn added the 1:ST:ready-for-review This PR is ready for review label Jan 31, 2025
@pratikvn pratikvn requested review from a team January 31, 2025 16:05
pratikvn (Member, Author)

format!

Co-authored-by: Pratik Nayak <[email protected]>
@MarcelKoch MarcelKoch self-requested a review February 3, 2025 09:16

@MarcelKoch MarcelKoch (Member) left a comment

Generally looks good; I left some minor nits.
Did you have a case where the coarse grid solve was beneficial? Right now the provided example doesn't seem to benefit from it.
Also, since the coarse and local solves are not computed in parallel, as you mentioned, and that will probably remain the case for a while, maybe implement it as a multiplicative version directly.
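
For reference, a multiplicative two-level variant (not implemented in this PR) would chain the corrections instead of summing them, e.g. $x \leftarrow P A_c^{-1} R\, b$ followed by $x \leftarrow x + M_{\mathrm{local}}^{-1} (b - A x)$, so the local solve acts on the residual left over by the coarse correction.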

Comment on lines +96 to +100
 * Operator factory to generate the triplet (prolong_op, coarse_op,
 * restrict_op).
 */
std::shared_ptr<const LinOpFactory> GKO_DEFERRED_FACTORY_PARAMETER(
    galerkin_ops);
Reviewer (Member):

I'm not too convinced of the name and the doc. At least to me, galerkin_ops is too unspecific; maybe something like coarse_level_factory would be better suited. Also, the doc should mention that this will be used to create the coarse level system, maybe:

Operator factory to generate the coarse system `A_c = R * A * P`.


restrict->apply(dense_b, csol_cache_.get());
this->coarse_solver_->apply(csol_cache_.get(), csol_cache_.get());
prolong->apply(this->half_, csol_cache_.get(), this->half_, dense_x);
Reviewer (Member):

I don't think it's typical to weight these solutions. For example, the book Domain Decomposition by Smith, Bjorstad, and Gropp just sums up all the contributions (p. 47), regardless of whether they come from the subdomains or from the coarse grid.
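
A minimal sketch of the unweighted (summed) variant suggested here, reusing the members from the PR's apply; the one_ member is an assumption (a cached dense scalar equal to one, analogous to the existing half_):

// hypothetical: add the coarse correction onto the subdomain solution,
// i.e. x += P * x_c instead of x = 0.5 * x + 0.5 * P * x_c
restrict->apply(dense_b, csol_cache_.get());
this->coarse_solver_->apply(csol_cache_.get(), csol_cache_.get());
prolong->apply(this->one_, csol_cache_.get(), this->one_, dense_x);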

} else {
    this->set_solver(parameters_.generated_local_solver);
}


if (parameters_.galerkin_ops && parameters_.coarse_solver) {
Reviewer (Member):

Maybe there should be an exception if only one of those is set. I think that would be a configuration error, and the user should be notified.
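
A possible way to surface that error during generation (a sketch; it assumes the parameter names from this PR and uses Ginkgo's GKO_INVALID_STATE helper):

// hypothetical check: reject configurations where exactly one of the two
// coarse-level parameters is set
if (bool(parameters_.galerkin_ops) != bool(parameters_.coarse_solver)) {
    GKO_INVALID_STATE(
        "The two-level Schwarz preconditioner requires both galerkin_ops "
        "and coarse_solver to be set, or neither of them.");
}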

Comment on lines +188 to +189
if (as<gko::multigrid::MultigridLevel>(this->galerkin_ops_)
        ->get_coarse_op()) {
Reviewer (Member):

Can this actually be null? If so, maybe that should be considered an error.

Reviewer (Member):

I think adjustments to the copy/move operations are necessary. However, they already seem to be incomplete as of now, so that might also be done in a separate PR.

as<gko::multigrid::MultigridLevel>(this->galerkin_ops_)
->get_coarse_op());
auto exec = coarse->get_executor();
auto comm = coarse->get_communicator();
Reviewer (Member):

unused

Suggested change (remove the unused line):
auto comm = coarse->get_communicator();

@@ -79,6 +80,7 @@ int main(int argc, char* argv[])
static_cast<gko::size_type>(argc >= 3 ? std::atoi(argv[2]) : 100);
const auto num_iters =
static_cast<gko::size_type>(argc >= 4 ? std::atoi(argv[3]) : 1000);
std::string schw_type = argc >= 5 ? argv[4] : "multi-level";
Reviewer (Member):

Just as a note, I was running this example with

mpirun -n 4 ./distributed-solver reference 10000 1000

and the one-level version was faster and used fewer iterations. So maybe it should be the default.
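
A minimal sketch of that suggestion in the example; the string "one-level" is an assumption, since this hunk does not show which values the example actually accepts:

// hypothetical default: fall back to the one-level variant unless requested
std::string schw_type = argc >= 5 ? argv[4] : "one-level";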

Labels
1:ST:ready-for-review (This PR is ready for review) · mod:core (This is related to the core module.) · reg:example (This is related to the examples.) · reg:testing (This is related to testing.) · type:distributed-functionality · type:preconditioner (This is related to the preconditioners)

4 participants