Add positivity-limiter to the AMRCallback #2396


Draft · wants to merge 6 commits into main

Conversation

patrickersing (Member)
This PR introduces the possibility to pass a positivity-preserving limiter to the AMRCallback. The limiter is applied at the end of the coarsen!, refine!, or adapt! step to ensure positivity after adaptation in the AMR callback. This generalizes the strategy from trixi-framework/TrixiShallowWater.jl#97 to arbitrary equations.

This should fix #2064
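For illustration, usage in an elixir might look like the sketch below. `PositivityPreservingLimiterZhangShu` is Trixi's existing stage limiter; the `limiter!` keyword of `AMRCallback` stands in for the argument added by this PR, and its exact name and signature are a guess from the discussion, not confirmed API:

```julia
# Hypothetical elixir fragment; the `limiter!` keyword of AMRCallback is the
# feature this PR adds, and its exact name/signature may differ.
limiter! = PositivityPreservingLimiterZhangShu(thresholds = (5.0e-6, 5.0e-6),
                                               variables = (Trixi.density, pressure))

amr_callback = AMRCallback(semi, amr_controller,
                           interval = 5,
                           adapt_initial_condition = true,
                           limiter! = limiter!)
```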


github-actions bot commented May 9, 2025

Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

Purpose and scope

  • The PR has a single goal that is clear from the PR title and/or description.
  • All code changes represent a single set of modifications that logically belong together.
  • No more than 500 lines of code are changed or there is no obvious way to split the PR into multiple PRs.

Code quality

  • The code can be understood easily.
  • Newly introduced names for variables etc. are self-descriptive and consistent with existing naming conventions.
  • There are no redundancies that can be removed by simple modularization/refactoring.
  • There are no leftover debug statements or commented code sections.
  • The code adheres to our conventions and style guide, and to the Julia guidelines.

Documentation

  • New functions and types are documented with a docstring or top-level comment.
  • Relevant publications are referenced in docstrings (see example for formatting).
  • Inline comments are used to document longer or unusual code sections.
  • Comments describe intent ("why?") and not just functionality ("what?").
  • If the PR introduces a significant change or new feature, it is documented in NEWS.md with its PR number.

Testing

  • The PR passes all tests.
  • New or modified lines of code are covered by tests.
  • New or modified tests run in less than 10 seconds.

Performance

  • There are no type instabilities or memory allocations in performance-critical parts.
  • If the PR intent is to improve performance, before/after time measurements are posted in the PR.

Verification

  • The correctness of the code was verified using appropriate tests.
  • If new equations/methods are added, a convergence test has been run and the results are posted in the PR.

Created with ❤️ by the Trixi.jl community.


codecov bot commented May 9, 2025

Codecov Report

❌ Patch coverage is 87.50000% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.68%. Comparing base (e58392c) to head (d9b6c90).

Files with missing lines Patch % Lines
src/callbacks_step/amr_dg3d.jl 66.67% 2 Missing ⚠️
src/callbacks_step/amr.jl 90.00% 1 Missing ⚠️
src/callbacks_step/amr_dg2d.jl 87.50% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2396      +/-   ##
==========================================
- Coverage   96.68%   96.68%   -0.01%     
==========================================
  Files         511      511              
  Lines       42278    42298      +20     
==========================================
+ Hits        40875    40892      +17     
- Misses       1403     1406       +3     
Flag Coverage Δ
unittests 96.68% <87.50%> (-0.01%) ⬇️


@andrewwinters5000 (Member) left a comment

Thanks for adding this generic capability! It is much better than my hotfix from TrixiSW. My main question regards T8codeMesh.

@@ -174,16 +174,21 @@ function refine!(u_ode::AbstractVector, adaptor, mesh::Union{TreeMesh{2}, P4estM
@assert ninterfaces(cache.interfaces)==ndims(mesh) * nelements(dg, cache) ("For $(ndims(mesh))D and periodic domains and conforming elements, the number of interfaces must be $(ndims(mesh)) times the number of elements")
end

# Apply the positivity limiter to the solution

Should the coarsen / refine for T8codeMesh be given the same functionality to apply the limiter @jmark ?

Co-authored-by: Andrew Winters <andrew.ross.winters@liu.se>
@DanielDoehring (Member)

Does it make sense to call the limiter after refinement as well, and not only after coarsening? Maybe also make this optional with a boolean variable? Normally, coarsening should be the troublemaker.

@andrewwinters5000 (Member)

andrewwinters5000 commented May 12, 2025

Does it make sense to call the limiter after refinement as well, and not only after coarsening? Maybe also make this optional with a boolean variable? Normally, coarsening should be the troublemaker.

There is no guarantee on the sign of the values after refinement (interpolation) or coarsening (an L2 projection in this case) when the solution is near vacuum.
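A tiny self-contained illustration of this point (toy numbers, not Trixi code): the quadratic Lagrange interpolant through the nonnegative nodal values (0, 0), (0.5, 0), (1, 1) dips below zero between the nodes, so evaluating it on refined nodes can produce negative values even though every original nodal value is nonnegative.

```julia
# Quadratic Lagrange interpolant through the nonnegative nodal values
# (0, 0), (0.5, 0), (1, 1); only the last basis function contributes:
# p(x) = 1 * x * (x - 0.5) / (1 * (1 - 0.5)) = 2x^2 - x
p(x) = 2x^2 - x

p(0.25)   # -0.125: negative, although every nodal value is >= 0
```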

@DanielDoehring added the enhancement (New feature or request) label May 12, 2025
@patrickersing (Member, Author)

Does it make sense to call the limiter after refinement as well, and not only after coarsening? Maybe also make this optional with a boolean variable? Normally, coarsening should be the troublemaker.

For elixir_euler_double_mach_amr.jl I checked that it is actually the refinement that causes positivity issues when it runs without additional limiting.

@patrickersing (Member, Author)

Right now this only ensures positivity after the coarsening/refinement step. I think in order to fix #2064 we would also need to add positivity limiting after the mortar projection in prolong2mortars!. But for this we first need to find a good way to also access the positivity limiter from the solver.
I would say we keep this as a draft for now, until we figure out a complete strategy for how to implement this.

For the specialized SWE implementation in trixi-framework/TrixiShallowWater.jl#97 this additional limiting should not be necessary as we obtain positive values from the reconstruction strategy.

@DanielDoehring (Member)

Right now this only ensures positivity after the coarsening/refinement step. I think in order to fix #2064 we would also need to add positivity limiting after the mortar projection in prolong2mortars!. But for this we first need to find a good way to also access the positivity limiter from the solver. I would say we keep this as a draft for now, until we figure out a complete strategy for how to implement this.

True - but this PR itself is already an improvement compared to the existing code. One option would be that we add the limiter! to the mortar and then supply such a mortar in the solver. But I agree that this would be something for a different PR.

I will add this to the agenda of Tuesday's Trixi meeting.

@patrickersing (Member, Author)

True - but this PR itself is already an improvement compared to the existing code. One option would be that we add the limiter! to the mortar and then supply such a mortar in the solver. But I agree that this would be something for a different PR.

Passing the limiter to the mortar would also be my favored option. If we do this, the limiter would be available to both the AMRCallback and the solver. The main question is then how we want to pass the limiter in the elixirs. We could either pass the limiter both to the AMRCallback and DGSEM, or only pass it to DGSEM (and then extract it inside the AMRCallback from the solver).

Passing it only once reduces the risk that someone forgets to set the limiter in both places. On the other hand, passing it explicitly to the AMRCallback makes it clear that we apply the limiter during refinement/coarsening.
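The "pass it only once" option can be sketched with dummy types (illustrative names, not Trixi API): the limiter lives in the mortar, and the AMR callback retrieves it through the semidiscretization instead of receiving it a second time.

```julia
# Minimal self-contained sketch (dummy types, not Trixi API) of extracting
# the limiter via semi -> solver -> mortar, so it only needs to be set once.
struct DummyMortar{L}
    limiter::L
end
struct DummySolver{M}
    mortar::M
end
struct DummySemi{S}
    solver::S
end

# The AMR callback would call something like this instead of taking
# a separate `limiter` argument:
extract_limiter(semi) = semi.solver.mortar.limiter
```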

@patrickersing (Member, Author)

An option could be to store the limiter inside the mortar similar to how it was drafted for the EC mortars.
Then we could have a special mortar type:

struct LobattoLegendrePositivityPreservingMortarL2{RealT <: Real, NNODES,
                               ForwardMatrix <: AbstractMatrix{RealT},
                               ReverseMatrix <: AbstractMatrix{RealT}, Limiter} <:
       AbstractMortarL2{RealT}
    forward_upper::ForwardMatrix
    forward_lower::ForwardMatrix
    reverse_upper::ReverseMatrix
    reverse_lower::ReverseMatrix
    limiter::Limiter
end

@DanielDoehring (Member)

DanielDoehring commented Jun 5, 2025

@patrickersing so what I would suggest is to add a limiter to a mortar first and then see if we can extract the limiter from the mortar (via semi via solver via mortar) in some nice way that still allows for passing a potentially different limiter to the AMR callback/adaptor. I am happy to help here!

@patrickersing (Member, Author)

@patrickersing so what I would suggest is to add a limiter to a mortar first and then see if we can extract the limiter from the mortar (via semi via solver via mortar) in some nice way that still allows for passing a potentially different limiter to the AMR callback/adaptor. I am happy to help here!

That sounds like a good plan! If we manage to set the mortar limiter by default, but keep the flexibility to pass different limiters to the AMR callback that would be great.

Going forward I would then keep this PR dedicated to the AMR callback, and create a separate PR to implement the new mortar type.

@bennibolm (Contributor)

bennibolm commented Jul 28, 2025

I'm currently working on IDP limiting at mortars, and I'm interested in using the Zhang-Shu limiter to ensure positivity when transferring the solution during refinement and coarsening. So, I only need this positivity limiter for AMR.
I tried some things in the current state of this PR and also implemented a version where all 4 (or 8 in 3D) refined child elements are shifted towards the mean value by the same theta. This (hopefully) preserves conservation in cases where the current version fails.
I will probably set up a PR for this in the next few days and discuss it when @patrickersing is back from vacation.
Things I noticed:

  • In many simulations, the refinement and coarsening produced a negative value_mean (and therefore negative thetas) directly during initialization of the AMR callback. I had to adapt the initial_refinement_level such that this doesn't happen.
  • I haven't found an example where the implementation with simultaneous shifting (= the same theta for all child elements) preserves conservation and the one without doesn't. I'll test some more setups.

@DanielDoehring (Member)

In many simulations, the refinement and coarsening produced negative value_mean (and therefore negative thetas) directly while initialization of the amr callback.

Interesting, this sounds like AMR is happening "too late" in some sense.

I haven't found an example where the implementation without simultaneous shifting preserves conservation

What do you mean by "simultaneous shifting"?

@bennibolm (Contributor)

bennibolm commented Jul 28, 2025

In many simulations, the refinement and coarsening produced negative value_mean (and therefore negative thetas) directly while initialization of the amr callback.

Interesting, this sounds like AMR is happening "too late" in some sense.

With "initialization of the amr callback" I really mean the refinement at the very start of the simulation to get to max_level. So, there is no "too late" here.
One setup here is for instance initial_refinement_level=4 and for the amr controller base_level=4, max_level=7. So, at least three rounds of refinement are necessary here.
Additionally, we still have the sharp discontinuity from the initial condition.
So I think this is really to be expected.

I haven't found an example where the implementation without simultaneous shifting preserves conservation

What do you mean by "simultaneous shifting"?

I added a remark above to clarify this. Simultaneous shifting means using the same (minimum) theta for all just-refined child elements.
The workflow then is:

  • go through all 4 (or 8) child elements and compute the theta as always
  • get the minimum theta of all child elements
  • use this minimum theta to shift all solutions of the child elements toward the mean value as always with: theta * u_node + (1-theta) * u_mean
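The three steps above can be sketched in a self-contained way (toy scalar data and illustrative names, not the actual implementation). `scaling_factor` is the usual Zhang-Shu-type factor that scales a solution toward its cell mean until it stays above a threshold; `shift_children!` applies the minimum factor of all children uniformly:

```julia
# Hypothetical sketch of "simultaneous shifting": every child element is
# scaled toward its own mean with one common theta, the minimum over all
# children. Names are illustrative, not Trixi API.

# Zhang-Shu-type scaling factor for one element: the largest theta in [0, 1]
# such that theta * u + (1 - theta) * u_mean stays >= threshold everywhere.
# (Assumes the element mean itself lies above the threshold.)
function scaling_factor(u::AbstractVector, threshold)
    u_mean = sum(u) / length(u)
    u_min = minimum(u)
    u_min >= threshold && return 1.0   # nothing to do
    return (u_mean - threshold) / (u_mean - u_min)
end

function shift_children!(children::Vector{<:AbstractVector}, threshold)
    # 1. compute theta for every child element
    # 2. take the minimum theta over all children
    theta = minimum(scaling_factor(u, threshold) for u in children)
    # 3. shift every child toward its mean with the same theta
    for u in children
        u_mean = sum(u) / length(u)
        @. u = theta * u + (1 - theta) * u_mean
    end
    return theta
end
```

Because each shift is an affine combination with the element mean, every child's mean (and hence the total integral) is unchanged; only the spread around the mean is reduced.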

@DanielDoehring (Member)

In many simulations, the refinement and coarsening produced negative value_mean (and therefore negative thetas) directly while initialization of the amr callback.

With "initialization of the amr callback" I really mean the refinement at the very start of the simulation to get to max_level. So, there is no "too late" here. One setup here is for instance initial_refinement_level=4 and for the amr controller base_level=4, max_level=7. So, at least three rounds of refinement are necessary here. Additionally, we still have the sharp discontinuity from the initial condition. So I think this is really to be expected.

Ah I somehow missed that you were really talking about the init, my bad. Thanks for the explanations, though 👍

@patrickersing (Member, Author)

Thanks a lot for taking the initiative on this @bennibolm!
Hopefully this approach solves the present positivity issues and preserves conservation.

In many simulations, the refinement and coarsening produced negative value_mean (and therefore negative thetas) directly while initialization of the amr callback. I had to adapt the initial_refinement_level such that this doesn't happen.

Does this appear with or without the simultaneous shifting technique? I also think that, without any fixes, this is to be expected due to the reinterpolation.

I haven't found an example where the implementation with simultaneous shifting (=same theta for child elements) preserves conservation and the one without doesn't. I'll test some more setups.

So both approaches lose conservation? Do you already have an idea in which step we lose the conservation? How big are the conservation errors that you observe?

Labels
enhancement New feature or request
Development

Successfully merging this pull request may close these issues.

Positivity violation in AMR
4 participants