
Revert primal dual test to meaningful result #364


Merged
10 commits merged into development on Apr 10, 2025

Conversation

mmcleod89
Contributor

  • Brings primal dual test into line with others (checks the MSE rather than a strict regression test)
  • Reverts previous changes made to measurement operator
  • Now produces a reasonable looking image

The commit message attached to the code that had to be changed back was: "updated unit tests to be shorted and use un normed operators", which suggests that the un-normalised operators may be an issue for the implementation of the PD algorithm? (Commit 40be6b8)

@mmcleod89
Contributor Author

Original primal dual solution (solution.jpg) and current primal dual solution (pd_result.jpg) for context. I don't get exactly the same result but it looks very close.

[Images: pd_result.jpg, solution.jpg]

@mmcleod89 mmcleod89 requested a review from tkoskela October 25, 2024 15:31
@tkoskela
Contributor

It looks like the test still fails. Are we supposed to be multiplying average_intensity by 1e-3 in

CHECK(mse <= average_intensity * 1e-3);

Is this a unit conversion?

@mmcleod89
Contributor Author

Not a unit conversion, just a loose attempt to check that the average error per pixel is significantly smaller than the average value per pixel, as a first pass at checking that the solutions are similar.

@mmcleod89
Contributor Author

I will play around a bit more with these tests and the UQ tomorrow

@mmcleod89
Contributor Author

These tests should pass when SOPT is updated to account for the change in the initial guess for PD

@mmcleod89
Contributor Author

The issue was just that the noise was too high for the tolerance: the majority of the image is black, which strongly biases the average pixel value downwards. Visually inspecting the results, they seem consistent, and the algorithms are validated in SOPT, but I think a more robust testing framework would need to be built around synthetic data with a known solution and an expected noise level.

@tkoskela
Contributor

LGTM

@mmcleod89 mmcleod89 merged commit 92488fb into development Apr 10, 2025
16 checks passed