API Reference

MPIHaloArrays.CartesianTopologyType

CartesianTopology

The CartesianTopology type holds neighbor information, current rank, etc.

Fields

  • comm: MPI communicator object
  • nprocs: Number of total processors (global)
  • rank: Current rank
  • coords: Coordinates in the global space, e.g. (0,1,1)
  • global_dims: Dimensions of the global domain, e.g. (4,4) is a 4x4 global domain
  • isperiodic: Vector{Bool}; Periodicity of each dimension, e.g. (false, true, true) means y and z are periodic
  • neighbors: OffsetArray{Int}; Neighbor ranks (including corners), indexed as [[ilo, center, ihi], i, j, k]
source
MPIHaloArrays.CartesianTopologyMethod
CartesianTopology(comm::MPI.Comm, periodicity::Bool; canreorder = false)

Create a CartesianTopology given only the boundary periodicity. The optimal sub-domain ordering is determined automatically for the user.

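A minimal sketch of this form, where a single Bool sets the periodicity of every dimension and the rank layout is chosen automatically (an assumption based on the signature above):

using MPI, MPIHaloArrays

MPI.Init()

# Periodic boundaries in every dimension; the process grid is chosen for the user.
P = CartesianTopology(MPI.COMM_WORLD, true)
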
source
MPIHaloArrays.CartesianTopologyMethod
CartesianTopology(comm::MPI.Comm, ::Tuple{Bool}; canreorder = false)

Create a CartesianTopology given only the tuple of boundary periodicity. The optimal sub-domain ordering is determined automatically for the user.

source
MPIHaloArrays.CartesianTopologyMethod
CartesianTopology(comm::MPI.Comm, dims, periodicity; canreorder = false)

Create a CartesianTopology type that holds neighbor information, current rank, etc.

Arguments

  • dims: Vector or Tuple setting the dimensions of the domain in each direction, e.g. (4,3) means a total of 12 procs, with 4 in x and 3 in y
  • periodicity: Vector or Tuple of bools to set if the domain is periodic along a specific dimension

Example


# Create a 4x4 topology with periodic boundaries in both directions
P = CartesianTopology(MPI.COMM_WORLD, (4,4), (true, true))
source
MPIHaloArrays.MPIHaloArrayType

MPIHaloArray

Fields

  • data: AbstractArray{T,N} - contains the local data on the current rank
  • partitioning: partitioning datatype
  • comm: MPI communicator
  • window: MPI window
  • neighbor_ranks: Vector{Int} - IDs of the neighboring arrays/MPI procs
  • coords: Vector{Int} - Coordinates in the global MPI space
  • rank: Current MPI rank
source
MPIHaloArrays.MPIHaloArrayMethod

MPIHaloArray constructor

Arguments

  • A: AbstractArray{T,N}
  • topo: Parallel topology type, e.g. CartesianTopology
  • nhalo: Number of halo cells

Keyword Arguments

  • do_corners: [true] Exchange corner halo regions
  • com_model: [:p2p] Communication model; :p2p is point-to-point (Isend/Irecv), :rma is one-sided (Get/Put), and :shared is MPI's shared-memory model
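
A minimal construction sketch; the array size and process grid here are illustrative assumptions:

using MPI, MPIHaloArrays

MPI.Init()

# 4x2 process grid -> run with 8 ranks (e.g. `mpiexec -n 8 julia script.jl`)
topo = CartesianTopology(MPI.COMM_WORLD, (4, 2), (true, true))
nhalo = 2

# Local data for this rank; the constructor pads it with halo regions.
A = MPIHaloArray(zeros(Float64, 50, 50), topo, nhalo; do_corners = true)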
source
MPIHaloArrays.fillhalo!Method
fillhalo!(A::MPIHaloArray, fillvalue)

Fill the halo regions with a particular fillvalue.

Arguments

  • A::MPIHaloArray
  • fillvalue: value to fill the halo regions of A with
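
Continuing the construction sketch from the MPIHaloArray entry above:

# Mark every halo cell of A with -1 so uninitialized halo data is easy to spot;
# the interior (domain) cells are left untouched.
fillhalo!(A, -1.0)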
source
MPIHaloArrays.gatherglobalMethod

Gather all MPIHaloArrays onto the root MPI rank and stitch them together. This ignores halo region data and creates an Array that represents the global state.

Arguments

  • A: MPIHaloArray
  • root: MPI rank to gather A to
  • halo_dims: Tuple of the dimensions that halo exchanges occur on (not fully working yet)
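
Continuing the same sketch, and assuming root is accepted as a keyword argument:

# Collect each rank's domain (non-halo) data and stitch the global array on rank 0.
A_global = gatherglobal(A; root = 0)

if MPI.Comm_rank(MPI.COMM_WORLD) == 0
    @show size(A_global)
end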
source
MPIHaloArrays.get_subdomain_dimension_sizesMethod
get_subdomain_dimension_sizes(A, tile_dims, A_halo_dims)

Get the size along each dimension in (i,j,k) of the subdomain, based on a given array A. The tile_dims is the shape of the global domain, e.g., (4,2) means 4 tiles or subdomains in i and 2 in j. A_halo_dims is the tuple of which dimensions the halo exchanges take place on, e.g. (2,3).

Example

A = rand(4, 200, 100); dims = (2, 3); tile_dims = (4, 2)
get_subdomain_dimension_sizes(A, tile_dims, dims) # [[i], [j]] --> [[50, 50, 50, 50], [50, 50]]
source
MPIHaloArrays.global_domain_indicesMethod
global_domain_indices(A::MPIHaloArray)

Get the array indices of the domain region of A (i.e. excluding halo regions) in the global frame of reference. The order of the returned indices is (ilo, ihi, jlo, jhi, ...).

Returns

  • NTuple{Int, 2 * NDimensions} : A tuple of both lo and hi indices for each dimension
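
For a 2D MPIHaloArray the tuple can be unpacked directly (continuing the sketch above):

# Bounds of this rank's domain cells in the global index space, excluding halos.
ilo, ihi, jlo, jhi = global_domain_indices(A)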
source
MPIHaloArrays.globalmaxMethod

Perform a global maximum operation

Arguments

  • A: MPIHaloArray to perform the operation on
  • broadcast: true/false - broadcast to all MPI ranks [default is false]
  • root: If broadcast is false, which MPI rank to reduce to
source
MPIHaloArrays.globalminMethod

Perform a global minimum operation

Arguments

  • A: MPIHaloArray to perform the operation on
  • broadcast: true/false - broadcast to all MPI ranks [default is false]
  • root: If broadcast is false, which MPI rank to reduce to
source
MPIHaloArrays.globalsumMethod

Perform a global sum operation

Arguments

  • A: MPIHaloArray to perform the operation on
  • broadcast: true/false - broadcast to all MPI ranks [default is false]
  • root: If broadcast is false, which MPI rank to reduce to
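
A reduction sketch, assuming broadcast and root are keyword arguments as listed above:

# Only rank 0 receives the reduced value when broadcast = false.
total_on_root = globalsum(A; broadcast = false, root = 0)

# Every rank receives the reduced value when broadcast = true.
total_everywhere = globalsum(A; broadcast = true)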
source
MPIHaloArrays.hi_indicesMethod

Helper function to get the high-side halo and domain starting/ending indices

Arguments

  • field: Array
  • dim: Dimension to check indices on
  • nhalo: Number of halo entries

Return

  • NTuple{Int, 4}: The set of hi indices (hidomainstart, hidomainend, hihalostart, hihaloend).
source
MPIHaloArrays.hi_indicesMethod

Get the hi indices along the specified dimension dim of A. These are returned in the order (hidomainstart, hidomainend, hihalostart, hihaloend).

source
MPIHaloArrays.lo_indicesMethod

Helper function to get the low-side halo and domain starting/ending indices

Arguments

  • field: Array
  • dim: Dimension to check indices on
  • nhalo: Number of halo entries

Return

  • NTuple{Int, 4}: The set of lo indices (lohalostart, lohaloend, lodomainstart, lodomainend).
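
A quick sketch on a plain array, assuming the positional order (field, dim, nhalo) shown above; hi_indices is used the same way:

field = zeros(20)
nhalo = 2

# Index bounds of the low-side halo region and the adjacent domain cells
# along dimension 1, returned in the order documented above.
lohalostart, lohaloend, lodomainstart, lodomainend = lo_indices(field, 1, nhalo)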
source
MPIHaloArrays.lo_indicesMethod

Get the lo indices along the specified dimension dim of A. These are returned in the order (lohalostart, lohaloend, lodomainstart, lodomainend).

source
MPIHaloArrays.local_domain_indicesMethod
local_domain_indices(A::MPIHaloArray)

Get the array indices of the domain region of A (i.e. excluding halo regions) in the local frame of reference (relative to itself, rather than in the global domain). This is typically 1 to size(A). The order of the returned indices is (ilo, ihi, jlo, jhi, ...).

Returns

  • NTuple{Int, 2 * NDimensions} : A tuple of both lo and hi indices for each dimension
source
MPIHaloArrays.neighborMethod
neighbor(p::CartesianTopology, i_offset::Int, j_offset::Int, k_offset::Int)

Find the neighbor rank based on the offsets in (i,j,k). This follows the traditional array index convention rather than MPI's, so an i_offset=1 will shift up in the array indexing.

Arguments

  • p: CartesianTopology type
  • i_offset: Offset in the i direction
  • j_offset: Offset in the j direction
  • k_offset: Offset in the k direction

Example:

# Makes a 4x4 domain with periodic boundaries in both dimensions
P = CartesianTopology(MPI.COMM_WORLD, (4,4), (true, true))

# Find the ihi neighbor
ihi = neighbor(P,+1,0,0)

# Find the upper ihi corner neighbor (ihi and jhi side)
ihijhi_corner = neighbor(P,+1,+1,0)
source
MPIHaloArrays.pad_with_haloMethod
pad_with_halo(A, nhalo, halo_dims)

Increase the size of the array A along the halo exchange dimensions to make room for the new halo regions

Arguments

  • A::AbstractArray: Array to increase in size
  • nhalo::Int: Number of halo cells along each dimension, e.g. 2
  • halo_dims::Tuple: Set of dimensions to do halo exchange along
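
A sketch of padding only the halo-exchange dimensions (assuming nhalo cells are added on each side of those dimensions):

A = rand(4, 200, 100)

# Pad dimensions 2 and 3 with 2 halo cells per side; dimension 1 is left alone.
A_padded = pad_with_halo(A, 2, (2, 3))
size(A_padded)   # expected (4, 204, 104)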
source
MPIHaloArrays.scatterglobalMethod

Partition the array A on the root rank into chunks based on the given parallel topology. The array data in A does not have halo regions; the MPIHaloArray constructor adds them. This returns an MPIHaloArray.

Arguments

  • A: Global array to be split up into chunks and sent to all ranks. This does not include halo cells
  • root: MPI rank that A lives on
  • nhalo: Number of halo cells to create
  • halo_dims: Tuple of the dimensions that halo exchanges occur on (not fully working yet)
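
A scatter sketch, assuming the topology is passed as the fourth positional argument:

using MPI, MPIHaloArrays

MPI.Init()
rank = MPI.Comm_rank(MPI.COMM_WORLD)

# 4x2 process grid -> run with 8 ranks
topo = CartesianTopology(MPI.COMM_WORLD, (4, 2), (true, true))

# Only the copy of the global array on the root rank is actually distributed.
A_global = rand(512, 256)
A_local = scatterglobal(A_global, 0, 2, topo)   # root = 0, nhalo = 2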
source
MPIHaloArrays.split_countMethod
split_count(N::Integer, n::Integer)

Return a vector of n integers which are approximately equally sized and sum to N.

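For example, splitting 10 cells across 3 ranks (the exact distribution shown is an assumption; the chunks always sum to N):

counts = split_count(10, 3)   # three roughly equal chunks, e.g. [4, 3, 3]
sum(counts) == 10             # true
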
source
MPIHaloArrays.update_halo_data!Method
update_halo_data!(A_no_halo, A_with_halo, halo_dims, nhalo)

During the construction of an MPIHaloArray, the data must be padded by the number of halo cells in each respective dimension. This function copies the data from A_no_halo (the original array) to A_with_halo (the padded underlying array within the MPIHaloArray).

Arguments

  • A_no_halo::Array: Array without the halo regions
  • A_with_halo::Array: Array padded with the halo regions
  • halo_dims::Tuple: Dimensions that halo exchanges take place on
  • nhalo::Int: Number of halo cells in each respective dimension
source
MPIHaloArrays.updatehalo!Method
updatehalo!(A::MPIHaloArray{T,N,AA,1}) where {T,N,AA}

Update the halo regions on A where the halo exchange is done on a single dimension

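A typical usage sketch, continuing from the construction example above: exchange halo data after each local update so neighbor data stays current:

for iter in 1:10
    # ... update the interior (domain) cells of A locally ...

    # Exchange halo regions with neighboring ranks before the next iteration.
    updatehalo!(A)
end
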
source
MPIHaloArrays.updatehalo!Method
updatehalo!(A::MPIHaloArray{T,N,AA,2}) where {T,N,AA}

Update the halo regions on A where the halo exchange is done over 2 dimensions

source
MPIHaloArrays.updatehalo!Method
updatehalo!(A::MPIHaloArray{T,N,AA,3}) where {T,N,AA}

Update the halo regions on A where the halo exchange is done over 3 dimensions

source