
BatchedInverseProblem with asynchronous, device-aware forward_map + concatenation #232

@glwagner

Description


We need a way to run batches of forward simulations asynchronously, and then to concatenate the forward maps deterministically for EnsembleKalmanInversion. One way to do this is to develop a BatchedInverseProblem that consists of

  1. A tuple or array of InverseProblems, each with its own observations and simulation;
  2. A utility that concatenates the forward_map / inverting_forward_map from each individual InverseProblem to pass to EnsembleKalmanInversion;
  3. The ability to evaluate each inverting_forward_map asynchronously (https://docs.julialang.org/en/v1/manual/asynchronous-programming/); see the sketch after this list.
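Below is a minimal sketch of how 2 and 3 might fit together. The struct layout, the method signature, and the assumption that inverting_forward_map(ip, parameters) is the per-problem call are all illustrative, not a settled design:

```julia
struct BatchedInverseProblem{B}
    batch :: B  # Tuple or Vector of InverseProblems
end

function inverting_forward_map(batched::BatchedInverseProblem, parameters)
    # Launch one task per InverseProblem so the forward simulations run
    # concurrently rather than one after another (Threads.@spawn could be
    # used instead for thread-level parallelism).
    tasks = map(batched.batch) do ip
        @async inverting_forward_map(ip, parameters)
    end

    # `fetch` walks the tasks in the order of `batch`, so the concatenated
    # forward map is deterministic regardless of which simulation finishes first.
    outputs = map(fetch, tasks)

    return vcat(outputs...)  # stack the per-problem forward maps for EnsembleKalmanInversion
end
```

Here vcat assumes each per-problem forward map stacks along the output dimension in the layout EnsembleKalmanInversion expects; the real concatenation utility would make that explicit.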

We probably also want to make inverting_forward_map "device aware", so that we can run simulations on different GPUs on the same node (for example). This won't be hard, since it's just a matter of "switching" to the appropriate device before running any GPU code. We can copy data to the CPU in FieldTimeSeriesCollector while the simulations are running, so none of the rest of the code needs to care about this. See the CUDA.jl docs or this post: https://juliagpu.org/post/2020-07-18-cuda_1.3/index.html.
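For the device-aware piece, here is a rough sketch assuming CUDA.jl and one GPU index per InverseProblem; device_aware_forward_maps and the devices argument are hypothetical names for illustration:

```julia
using CUDA

function device_aware_forward_maps(problems, parameters, devices)
    tasks = map(problems, devices) do ip, dev
        @async begin
            CUDA.device!(dev)  # select this task's GPU before running any GPU code
            G = inverting_forward_map(ip, parameters)
            Array(G)           # return a CPU copy so downstream code is device-agnostic
        end
    end

    return map(fetch, tasks)  # results come back in the same order as `problems`
end
```

Because each task hands back a CPU copy of its result, the concatenation step never has to know which device produced which forward map.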

All of this is relatively simple to implement: it shouldn't take many lines of code once we know what to write.
