Rework sharding tests #4293

Open: wants to merge 95 commits into base: main

Changes from 47 commits (95 commits total)

Commits
3c64dfe
Extend set_to_function for non-sharded field
glwagner Mar 27, 2025
474db60
Bugfix in stencil not using default FloatType
glwagner Mar 27, 2025
3239d42
Import R_Earth
Mar 27, 2025
c110972
Fix broken import
glwagner Mar 27, 2025
1b051be
Merge branch 'tpu-fixes' of https://github.com/CliMA/Oceananigans.jl …
glwagner Mar 27, 2025
66efc57
Fix sharded grids
Mar 27, 2025
f4ea5c0
Rm shows
Mar 27, 2025
3b435db
Merge branch 'main' into tpu-fixes
simone-silvestri Mar 27, 2025
34163cd
build grid on CPU and switch it to sharded ractant
simone-silvestri Mar 27, 2025
9f89b14
try removing the xla forcing
simone-silvestri Mar 27, 2025
8633c24
correct architecture for lat lon
simone-silvestri Mar 27, 2025
239e3b4
make sure sharding is initialized
simone-silvestri Mar 27, 2025
4c2197d
fix lat lon grid
simone-silvestri Mar 27, 2025
093de34
import r_Earth
simone-silvestri Mar 27, 2025
2fbd8eb
bugfix
simone-silvestri Mar 27, 2025
c5d7576
try running with IFRT
simone-silvestri Mar 27, 2025
4e8d811
quite a large bug
simone-silvestri Mar 27, 2025
f436384
a little cleanup
simone-silvestri Mar 27, 2025
cf14457
we get to compiling of the first timestep
simone-silvestri Mar 27, 2025
8bab678
reduce the MPI show madness
simone-silvestri Mar 27, 2025
ab58214
Change to constant_with_arch
glwagner Mar 27, 2025
649b448
create grid
simone-silvestri Mar 27, 2025
db7e49a
remove comment
simone-silvestri Mar 27, 2025
2122d08
remove the tripolar shard
simone-silvestri Mar 27, 2025
afd8e7d
add some info
simone-silvestri Mar 27, 2025
552736e
use (Base.julia_md())
simone-silvestri Mar 27, 2025
750392b
add replicate in z
simone-silvestri Mar 27, 2025
65e08b1
add sharding to the clock
simone-silvestri Mar 27, 2025
456e299
sharding the z direction
simone-silvestri Mar 27, 2025
919823b
add a comment
simone-silvestri Mar 27, 2025
58f43a5
different tests
simone-silvestri Mar 27, 2025
1f8acad
sharding tests
simone-silvestri Mar 27, 2025
9f6854b
we don't need preferences for the moment
simone-silvestri Mar 27, 2025
7d245bd
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 27, 2025
33d4476
see where it runs from
simone-silvestri Mar 27, 2025
4d890ce
another check
simone-silvestri Mar 27, 2025
43547e7
LocalPreferences in the correct folder
simone-silvestri Mar 27, 2025
d4037db
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 27, 2025
016fa6c
remove the reactant test
simone-silvestri Mar 27, 2025
2cd1416
one host 4 devices
simone-silvestri Mar 27, 2025
76a4cc7
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Mar 27, 2025
cd574b8
improve tests
simone-silvestri Mar 27, 2025
d699a42
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 28, 2025
8fea6d5
fix a couple of bugs
simone-silvestri Mar 28, 2025
86c0fbd
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Mar 28, 2025
a89f44e
some improvements
simone-silvestri Mar 28, 2025
5e57a3f
and add the bottom height
simone-silvestri Mar 28, 2025
d46b324
remove immersed boundary for now
simone-silvestri Mar 28, 2025
4e4acc8
at least fix this issue
simone-silvestri Mar 28, 2025
5385609
fix latitude longitude coordinates
simone-silvestri Mar 28, 2025
9a9bd5f
run the tests
simone-silvestri Mar 28, 2025
6fbf1bf
Merge branch 'main' into ss/fix-coordinates
simone-silvestri Mar 28, 2025
41de112
these should pass now if everything is correct
simone-silvestri Mar 28, 2025
a5ad48f
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 28, 2025
082bb1d
add sharding lat lon
simone-silvestri Mar 28, 2025
7e0a004
add the tripolar test
simone-silvestri Mar 28, 2025
e5c6886
back to 5 minutes timestep
simone-silvestri Mar 28, 2025
5903257
add sharding tests
simone-silvestri Mar 28, 2025
1f9a7c2
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 28, 2025
37b6c42
Merge remote-tracking branch 'origin/ss/fix-coordinates' into ss/fix-…
simone-silvestri Mar 28, 2025
557da39
correct stuff
simone-silvestri Mar 28, 2025
02187a9
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Mar 28, 2025
0eda368
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 28, 2025
aeea174
try this
simone-silvestri Mar 28, 2025
ec728d3
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Mar 28, 2025
bcf2a04
MPITripolarGrid
simone-silvestri Mar 28, 2025
9c9e0c0
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Mar 31, 2025
0fa87f5
try without immersed boundary
simone-silvestri Mar 31, 2025
de178c9
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Mar 31, 2025
0ea86ad
reinclude everything
simone-silvestri Mar 31, 2025
d16f536
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 3, 2025
e5cd0a9
try a new arch
simone-silvestri Apr 4, 2025
583f2df
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 4, 2025
6c94f19
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 5, 2025
088a944
try like this
simone-silvestri Apr 5, 2025
f7f24a9
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Apr 5, 2025
7c8ef5d
also for this
simone-silvestri Apr 5, 2025
93cc68c
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 6, 2025
c256996
remove the test for the moment
simone-silvestri Apr 6, 2025
e10b020
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Apr 6, 2025
d249017
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 8, 2025
3e4ac64
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 17, 2025
ed0b1fa
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 26, 2025
811dda2
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 27, 2025
2ede33c
bugfix
simone-silvestri Apr 27, 2025
55d8a44
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Apr 29, 2025
2767749
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri May 13, 2025
fafdff1
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri May 22, 2025
5de9004
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri May 28, 2025
dcd53d2
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Jun 18, 2025
343c164
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Jul 23, 2025
b84bec2
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Jul 29, 2025
24a0ab1
Merge branch 'main' into ss/fix-sharding-tests
simone-silvestri Aug 14, 2025
bef9073
remove distributed
simone-silvestri Aug 14, 2025
c7e933a
Merge branch 'ss/fix-sharding-tests' of github.com:CliMA/Oceananigans…
simone-silvestri Aug 14, 2025
40 changes: 8 additions & 32 deletions .github/workflows/ci.yml
@@ -40,22 +40,23 @@ jobs:
arch:
- x64
steps:
- run: |
touch LocalPreferences.toml

echo "[Reactant]" >> LocalPreferences.toml
echo "xla_runtime = \"IFRT\"" >> LocalPreferences.toml

cat LocalPreferences.toml
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- uses: julia-actions/cache@v2
- run: |
touch LocalPreferences.toml

echo "[Reactant]" >> LocalPreferences.toml
echo "xla_runtime = \"IFRT\"" >> LocalPreferences.toml

cat LocalPreferences.toml
- uses: julia-actions/julia-buildpkg@v1
- uses: julia-actions/julia-runtest@v1
env:
XLA_FLAGS: "--xla_force_host_platform_device_count=4"
JULIA_DEBUG: "Reactant, Reactant_jll"
REACTANT_TEST: true
TEST_GROUP: "sharding"
@@ -110,31 +111,6 @@ jobs:
env:
TEST_GROUP: "turbulence_closures"

reactant:
name: Reactant - Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}
runs-on: ${{ matrix.os }}
timeout-minutes: 120
strategy:
fail-fast: false
matrix:
version:
- '1.10'
os:
- ubuntu-latest
arch:
- x64
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- uses: julia-actions/cache@v2
- uses: julia-actions/julia-buildpkg@v1
- uses: julia-actions/julia-runtest@v1
env:
TEST_GROUP: "reactant"

metal:
name: Metal - Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}
runs-on: ${{ matrix.os }}
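For context on the workflow change above: `XLA_FLAGS: "--xla_force_host_platform_device_count=4"` makes XLA expose four host (CPU) devices so the sharding tests can exercise multi-device partitions on a single CI machine. A minimal sketch of the equivalent local setup follows; it assumes the flag is read when Reactant initializes its XLA runtime, so it is set before loading the package.

```julia
# Sketch only: emulate four devices on one CPU host, mirroring the CI environment
# configured in the workflow above.
ENV["XLA_FLAGS"] = "--xla_force_host_platform_device_count=4"

using Reactant

# Same multi-host initialization used by the test scripts in this PR.
Reactant.Distributed.initialize(; single_gpu_per_process = false)
```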
3 changes: 3 additions & 0 deletions ext/OceananigansReactantExt/Fields.jl
@@ -9,6 +9,7 @@ using Oceananigans.Fields: Field, interior
using KernelAbstractions: @index, @kernel

import Oceananigans.Fields: set_to_field!, set_to_function!, set!
import Oceananigans.DistributedComputations: reconstruct_global_field

import ..OceananigansReactantExt: deconcretize
import ..Grids: ReactantGrid
@@ -17,6 +18,8 @@ import ..Grids: ShardedGrid
const ReactantField{LX, LY, LZ, O} = Field{LX, LY, LZ, O, <:ReactantGrid}
const ShardedDistributedField{LX, LY, LZ, O} = Field{LX, LY, LZ, O, <:ShardedGrid}

reconstruct_global_field(field::ShardedDistributedField) = field

deconcretize(field::Field{LX, LY, LZ}) where {LX, LY, LZ} =
Field{LX, LY, LZ}(field.grid,
deconcretize(field.data),
2 changes: 1 addition & 1 deletion ext/OceananigansReactantExt/Grids/serial_grids.jl
@@ -33,7 +33,7 @@ function constant_with_arch(cpu_grid::AbstractUnderlyingGrid, arch)
end

function constant_with_arch(cpu_ibg::CPUImmersedBoundaryGrid, arch)
underlying_grid = constant_with_reactant_state(cpu_ibg.underlying_grid, arch)
underlying_grid = constant_with_arch(cpu_ibg.underlying_grid, arch)
TX, TY, TZ = Oceananigans.Grids.topology(cpu_ibg)
return ImmersedBoundaryGrid{TX, TY, TZ}(underlying_grid,
cpu_ibg.immersed_boundary,
4 changes: 2 additions & 2 deletions src/DistributedComputations/distributed_grids.jl
@@ -346,7 +346,7 @@ insert_connected_topology(::Type{Bounded}, R, r) = ifelse(R == 1, Bounded,
insert_connected_topology(::Type{Periodic}, R, r) = ifelse(R == 1, Periodic, FullyConnected)

"""
reconstruct_global_topology(T, R, r, comm)
reconstruct_global_topology(T, R, r, r1, r2, comm)

reconstructs the global topology associated with the local topologies `T`, the amount of ranks
in `T` direction (`R`) and the local rank index `r`. If all ranks hold a `FullyConnected` topology,
@@ -362,7 +362,7 @@ function reconstruct_global_topology(T, R, r, r1, r2, comm)
topologies[r] = 1
end

topologies = all_reduce(topologies, +, comm)
all_reduce!(topologies, +, comm)

if sum(topologies) == R
return Periodic
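The docstring above describes a voting scheme: each rank reports whether its local topology is `FullyConnected`, and if every rank does, the global topology is `Periodic`. A hedged sketch of that idea, using MPI directly instead of the `all_reduce!` helper added in the next file; the function name, argument layout, and the fallback branch are assumptions for illustration, not the package implementation.

```julia
# Sketch only: each rank votes 1 in its own slot if its local topology is connected;
# an in-place sum across the communicator (mirroring `all_reduce!(topologies, +, comm)`)
# then reveals whether all R ranks agree.
using MPI

function sketch_reconstruct_global_topology(local_is_connected::Bool, R::Int, r::Int, comm)
    votes = zeros(Int, R)
    local_is_connected && (votes[r] = 1)      # this rank's vote
    MPI.Allreduce!(votes, +, comm)            # in-place reduction across ranks
    return sum(votes) == R ? :Periodic : :Bounded   # fallback branch is an assumption
end
```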
3 changes: 3 additions & 0 deletions src/DistributedComputations/partition_assemble.jl
@@ -5,6 +5,9 @@ import Oceananigans.Architectures: on_architecture
all_reduce(op, val, arch::Distributed) = MPI.Allreduce(val, op, arch.communicator)
all_reduce(op, val, arch) = val

all_reduce!(op, val, arch::Distributed) = MPI.Allreduce!(val, op, arch.communicator)
all_reduce!(op, val, arch) = nothing

# MPI Barrier
barrier!(arch) = nothing
barrier!(arch::Distributed) = MPI.Barrier(arch.communicator)
18 changes: 11 additions & 7 deletions test/distributed_tests_utils.jl
@@ -104,17 +104,21 @@ function run_distributed_latitude_longitude_grid(arch, filename)

distributed_grid = ImmersedBoundaryGrid(distributed_grid, GridFittedBottom(bottom_height))
model = run_distributed_simulation(distributed_grid)

η = reconstruct_global_field(model.free_surface.η)
u = reconstruct_global_field(model.velocities.u)
v = reconstruct_global_field(model.velocities.v)
c = reconstruct_global_field(model.tracers.c)

# Check also that the bottom height is reconstructed correctly!
b = reconstruct_global_field(model.grid.immersed_boundary.bottom_height)

if arch.local_rank == 0
jldsave(filename; u = Array(interior(u, :, :, 10)),
v = Array(interior(v, :, :, 10)),
c = Array(interior(c, :, :, 10)),
η = Array(interior(η, :, :, 1)))
η = Array(interior(η, :, :, 1)),
b = Array(parent(b))[:, :, 1])
Member: is there a reason not to save `parent` for all?

Collaborator (author): hmm, I have removed that b field.

Collaborator (author): we could save parent.

end

return nothing
@@ -124,19 +128,19 @@ end
function run_distributed_simulation(grid)

model = HydrostaticFreeSurfaceModel(; grid = grid,
free_surface = SplitExplicitFreeSurface(grid; substeps = 20),
free_surface = ExplicitFreeSurface(), # SplitExplicitFreeSurface(grid; substeps = 20),
tracers = :c,
buoyancy = nothing,
tracer_advection = WENO(),
momentum_advection = WENOVectorInvariant(order=3),
coriolis = HydrostaticSphericalCoriolis())
tracer_advection = nothing, #WENO(),
momentum_advection = nothing, #WENOVectorInvariant(order=3),
coriolis = nothing) # HydrostaticSphericalCoriolis())

# Setup the model with a gaussian sea surface height
# near the physical north poles and one near the equator
ηᵢ(λ, φ, z) = exp(- (φ - 90)^2 / 10^2) + exp(- φ^2 / 10^2)
set!(model, c=ηᵢ, η=ηᵢ)

Δt = 5minutes
Δt = 10 # 5minutes
arch = architecture(grid)
if arch isa ReactantState || arch isa Distributed{<:ReactantState}
@info "Compiling first_time_step..."
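The review thread earlier in this file's diff asks about saving `parent` instead of `interior`. The sketch below illustrates the difference; the grid constructor arguments and explicit halo size are illustrative assumptions, not taken from the PR.

```julia
# Sketch only: `interior` excludes halo points while `parent` exposes the full
# underlying array, halos included, so arrays saved with `parent` carry extra
# rows/columns that must line up when comparing serial and distributed runs.
using Oceananigans

grid  = LatitudeLongitudeGrid(size = (40, 40, 10),
                              longitude = (0, 360), latitude = (-80, 80), z = (-1000, 0),
                              halo = (4, 4, 4))
field = CenterField(grid)

size(interior(field))  # (40, 40, 10): interior points only
size(parent(field))    # (48, 48, 18): interior plus 4 halo points on every side
```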
30 changes: 30 additions & 0 deletions test/run_sharding_tests.jl
@@ -0,0 +1,30 @@
# We need to initiate MPI for sharding because we are using a multi-host implementation:
# i.e. we are launching the tests with `mpiexec` and on Github actions the default MPI
# implementation is MPICH which requires calling MPI.Init(). In the case of OpenMPI,
# MPI.Init() is not necessary.

using MPI
MPI.Init()
include("distributed_tests_utils.jl")

if Base.ARGS[1] == "tripolar"
run_function = run_distributed_tripolar_grid
suffix = "trg"
else
run_function = run_distributed_latitude_longitude_grid
suffix = "llg"
end

Reactant.Distributed.initialize(; single_gpu_per_process=false)

arch = Distributed(ReactantState(), partition = Partition(4, 1))
filename = "distributed_xslab_$(suffix).jld2"
run_function(arch, filename)

arch = Distributed(ReactantState(), partition = Partition(1, 4))
filename = "distributed_yslab_$(suffix).jld2"
run_function(arch, filename)

arch = Distributed(ReactantState(), partition = Partition(2, 2))
filename = "distributed_pencil_$(suffix).jld2"
run_function(arch, filename)
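Illustrative note on the three `Partition` layouts exercised above: for the 40 × 40 horizontal grid used in these tests, they correspond to x-slab, y-slab, and 2 × 2 pencil decompositions. A quick sketch of the resulting local sizes:

```julia
# Sketch only: local horizontal grid sizes implied by each Partition over 4 devices.
global_size = (40, 40)

for (Rx, Ry) in ((4, 1), (1, 4), (2, 2))
    local_size = (global_size[1] ÷ Rx, global_size[2] ÷ Ry)
    @info "Partition($Rx, $Ry) gives local grids of size $local_size on each of $(Rx * Ry) devices"
end
```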
8 changes: 4 additions & 4 deletions test/test_reactant.jl
@@ -108,7 +108,7 @@ ridge(λ, φ) = 0.1 * exp((λ - 2)^2 / 2)
latitude = [0, 1, 2, 3, 4],
z = (0, 1))

constant_llg = OceananigansReactantExt.constant_with_reactant_state(cpu_llg)
constant_llg = OceananigansReactantExt.constant_with_arch(cpu_llg)

for name in propertynames(constant_llg)
p = getproperty(constant_llg, name)
@@ -124,7 +124,7 @@

@info " Testing constantified ImmersedBoundaryGrid construction [$FT]..."
cpu_ibg = ImmersedBoundaryGrid(cpu_llg, GridFittedBottom(ridge))
constant_ibg = OceananigansReactantExt.constant_with_reactant_state(cpu_ibg)
constant_ibg = OceananigansReactantExt.constant_with_arch(cpu_ibg)
@test architecture(constant_ibg) isa ReactantState
@test architecture(constant_ibg.immersed_boundary.bottom_height) isa CPU

@@ -153,7 +153,7 @@ ridge(λ, φ) = 0.1 * exp((λ - 2)^2 / 2)
z = (0, 1))

@info " Replacing architecture with ReactantState [$FT]..."
constant_rllg = OceananigansReactantExt.constant_with_reactant_state(cpu_rllg)
constant_rllg = OceananigansReactantExt.constant_with_arch(cpu_rllg)

for name in propertynames(constant_rllg)
p = getproperty(constant_rllg, name)
@@ -164,7 +164,7 @@

@info " Testing constantified immersed OrthogonalSphericalShellGrid construction [$FT]..."
cpu_ribg = ImmersedBoundaryGrid(cpu_rllg, GridFittedBottom(ridge))
constant_ribg = OceananigansReactantExt.constant_with_reactant_state(cpu_ribg)
constant_ribg = OceananigansReactantExt.constant_with_arch(cpu_ribg)
@test architecture(constant_ribg) isa ReactantState
@test architecture(constant_ribg.immersed_boundary.bottom_height) isa CPU
end
137 changes: 46 additions & 91 deletions test/test_sharded_lat_lon.jl
@@ -1,41 +1,9 @@
using JLD2
using Oceananigans
using Oceananigans.DistributedComputations: reconstruct_global_field, reconstruct_global_grid
using Oceananigans.Units
using Reactant
using Random
using Test

include("dependencies_for_runtests.jl")
include("distributed_tests_utils.jl")

run_xslab_distributed_grid = """
using MPI
MPI.Init()
include("distributed_tests_utils.jl")
Reactant.Distributed.initialize(; single_gpu_per_process=false)
arch = Distributed(ReactantState(), partition = Partition(4, 1))
run_distributed_latitude_longitude_grid(arch, "distributed_xslab_llg.jld2")
"""

run_yslab_distributed_grid = """
using MPI
MPI.Init()
include("distributed_tests_utils.jl")
Reactant.Distributed.initialize(; single_gpu_per_process=false)
arch = Distributed(ReactantState(), partition = Partition(1, 4))
run_distributed_latitude_longitude_grid(arch, "distributed_yslab_llg.jld2")
"""
Nhosts = 1

run_pencil_distributed_grid = """
using MPI
MPI.Init()
include("distributed_tests_utils.jl")
Reactant.Distributed.initialize(; single_gpu_per_process=false)
arch = Distributed(ReactantState(), partition = Partition(2, 2))
run_distributed_latitude_longitude_grid(arch, "distributed_pencil_llg.jld2")
"""

@testset "Test distributed LatitudeLongitudeGrid simulations..." begin
@testset "Test sharded LatitudeLongitudeGrid simulations..." begin
# Run the serial computation
Random.seed!(1234)
bottom_height = - rand(40, 40, 1) .* 500 .- 500
@@ -48,67 +16,54 @@ run_pencil_distributed_grid = """
us, vs, ws = model.velocities
cs = model.tracers.c
ηs = model.free_surface.η
bs = model.grid.immersed_boundary.bottom_height

us = interior(us, :, :, 10)
vs = interior(vs, :, :, 10)
cs = interior(cs, :, :, 10)
ηs = interior(ηs, :, :, 1)
bs = parent(bs)[:, :, 1]

# Run the distributed grid simulation with a pencil configuration
write("distributed_xslab_llg_tests.jl", run_xslab_distributed_grid)
run(`$(mpiexec()) -n 4 $(Base.julia_cmd()) --project -O0 distributed_xslab_llg_tests.jl`)
rm("distributed_xslab_llg_tests.jl")

# Retrieve Parallel quantities
up = jldopen("distributed_xslab_llg.jld2")["u"]
vp = jldopen("distributed_xslab_llg.jld2")["v"]
ηp = jldopen("distributed_xslab_llg.jld2")["η"]
cp = jldopen("distributed_xslab_llg.jld2")["c"]

# rm("distributed_xslab_llg.jld2")

@test all(us .≈ up)
@test all(vs .≈ vp)
@test all(cs .≈ cp)
@test all(ηs .≈ ηp)

# Run the distributed grid simulation with a slab configuration
write("distributed_yslab_llg_tests.jl", run_yslab_distributed_grid)
run(`$(mpiexec()) -n 4 $(Base.julia_cmd()) --project -O0 distributed_yslab_llg_tests.jl`)
rm("distributed_yslab_llg_tests.jl")

# Retrieve Parallel quantities
up = jldopen("distributed_yslab_llg.jld2")["u"]
vp = jldopen("distributed_yslab_llg.jld2")["v"]
cp = jldopen("distributed_yslab_llg.jld2")["c"]
ηp = jldopen("distributed_yslab_llg.jld2")["η"]

# rm("distributed_yslab_llg.jld2")

# Test slab partitioning
@test all(us .≈ up)
@test all(vs .≈ vp)
@test all(cs .≈ cp)
@test all(ηs .≈ ηp)

# We try now with more ranks in the x-direction. This is not a trivial
# test as we are now splitting, not only where the singularities are, but
# also in the middle of the north fold. This is a more challenging test
write("distributed_pencil_llg_tests.jl", run_pencil_distributed_grid)
run(`$(mpiexec()) -n 4 julia --project -O0 distributed_pencil_llg_tests.jl`)
rm("distributed_pencil_llg_tests.jl")
# Run the distributed grid simulations in all the configurations
run(`$(mpiexec()) -n $(Nhosts) $(Base.julia_cmd()) --project -O0 run_sharding_tests.jl "latlon"`)

# Retrieve Parallel quantities
up = jldopen("distributed_pencil_llg.jld2")["u"]
vp = jldopen("distributed_pencil_llg.jld2")["v"]
ηp = jldopen("distributed_pencil_llg.jld2")["η"]
cp = jldopen("distributed_pencil_llg.jld2")["c"]

# rm("distributed_pencil_llg.jld2")

@test all(us .≈ up)
@test all(vs .≈ vp)
@test all(cs .≈ cp)
@test all(ηs .≈ ηp)
end

bp1 = jldopen("distributed_xslab_llg.jld2")["b"]
up1 = jldopen("distributed_xslab_llg.jld2")["u"]
vp1 = jldopen("distributed_xslab_llg.jld2")["v"]
cp1 = jldopen("distributed_xslab_llg.jld2")["c"]
ηp1 = jldopen("distributed_xslab_llg.jld2")["η"]

bp2 = jldopen("distributed_yslab_llg.jld2")["b"]
up2 = jldopen("distributed_yslab_llg.jld2")["u"]
vp2 = jldopen("distributed_yslab_llg.jld2")["v"]
cp2 = jldopen("distributed_yslab_llg.jld2")["c"]
ηp2 = jldopen("distributed_yslab_llg.jld2")["η"]

bp3 = jldopen("distributed_pencil_llg.jld2")["b"]
up3 = jldopen("distributed_pencil_llg.jld2")["u"]
vp3 = jldopen("distributed_pencil_llg.jld2")["v"]
cp3 = jldopen("distributed_pencil_llg.jld2")["c"]
ηp3 = jldopen("distributed_pencil_llg.jld2")["η"]

@info "Testing xslab partitioning..."
@test all(bs .≈ bp1)
@test all(us .≈ up1)
@test all(vs .≈ vp1)
@test all(cs .≈ cp1)
@test all(ηs .≈ ηp1)

@info "Testing yslab partitioning..."
@test all(bs .≈ bp2)
@test all(us .≈ up2)
@test all(vs .≈ vp2)
@test all(cs .≈ cp2)
@test all(ηs .≈ ηp2)

@info "Testing pencil partitioning..."
@test all(bs .≈ bp3)
@test all(us .≈ up3)
@test all(vs .≈ vp3)
@test all(cs .≈ cp3)
@test all(ηs .≈ ηp3)
end