Don't use prev_theta for non-adaptive solves #2269
Conversation
This will need a test.
Agreed. It's a little tricky to test for, since it requires a sufficiently nonlinear problem that actually runs into it; e.g., none of the algconvergence tests caught the issue.
Can we kick out an anonymized form of the model that's being tested with? ODEProblemExpr and then obfuscate variable names? Or
Theoretically, we probably could, but I would like to have a test for a situation where I actually believe that the solution is correct.
Ahh, that's good.
Should we merge this?
Rebase; tests should be passing.
Force-pushed from 11c29fc to 7eefc04
Rebased.
Some tests need to be adjusted.
This does suggest a bug, since this should be strictly more accurate for non-adaptive methods...
I think I see the problem with the previous version. Let's see if this passes tests.
Did we add a test to ensure this works moving forward? Maybe we should at least add a benchmark that failed previously but works now?
This is very difficult to target a specific test towards. At best, if we can anonymize an integration test, that would help.
We do not. While it's easy (in hindsight) to see the issue here, coming up with an example ODE that observes the behavior as previously mentioned is somewhat nontrivial. Specifically, we need an ODE where the Newton iteration converges at a high rate (ideally 10th order or higher) on one step, but then on the very next step has the error jump substantially (though by less than the order of the Newton solver). Furthermore, the accumulated error from this single step needs to throw off the solution significantly.
Found by @bradcarman. The early exit here relies on implicit feedback from the solver to prevent it from terminating the nonlinear solve as successful too aggressively. As such, we disable prev_theta for non-adaptive algorithms.
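To make the failure mode concrete, here is a minimal sketch (in Python, not the actual OrdinaryDiffEq.jl internals; the function name `newton_converged` and all numbers are illustrative assumptions) of a Newton convergence check where the contraction rate theta = ||dz_k|| / ||dz_{k-1}|| can be seeded from the previous time step. With a remembered `prev_theta`, the very first iteration can pass the error estimate `theta / (1 - theta) * ||dz|| < tol` and exit early, even when the iteration is barely contracting on the current step:

```python
def newton_converged(dz_norms, tol, prev_theta=None, use_prev_theta=True):
    """Return the iteration index at which the solve is declared
    converged, or None if no iteration passes the check.

    dz_norms: norms of the Newton increments ||dz_k||, k = 0, 1, ...
    prev_theta: contraction rate remembered from the previous time step.
    use_prev_theta: disabling this for non-adaptive solves is the idea
    behind this PR, since without adaptive step-error control there is
    no feedback to catch a premature exit.
    """
    for k, ndz in enumerate(dz_norms):
        if k == 0:
            # No ratio is available on the first iteration; optionally
            # fall back on the rate observed during the previous step.
            theta = prev_theta if (use_prev_theta and prev_theta is not None) else None
        else:
            theta = ndz / dz_norms[k - 1]
        if theta is not None and theta < 1:
            # Standard geometric-series error estimate for a linearly
            # converging iteration: theta / (1 - theta) * ||dz_k||.
            if theta / (1 - theta) * ndz < tol:
                return k
    return None


# The previous step contracted very fast (prev_theta = 1e-3), but the
# current step barely contracts at all. Seeding theta from prev_theta
# declares convergence at iteration 0; the strict check never does.
dz = [1e-6, 9.5e-7, 9.4e-7]
early = newton_converged(dz, tol=1e-5, prev_theta=1e-3)
safe = newton_converged(dz, tol=1e-5, prev_theta=1e-3, use_prev_theta=False)
print(early, safe)  # prints: 0 None
```

For an adaptive method the resulting step error would be caught and the step retried with a smaller dt; a non-adaptive method silently accepts the bad step, which is why the fix only disables `prev_theta` in the non-adaptive case.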