# API Reference

`MPIHaloArrays.AbstractParallelTopology` — Type

An abstract parallel topology type that is extended by either a `CartesianTopology` or a `GraphTopology` (future).

`MPIHaloArrays.CartesianTopology` — Type

CartesianTopology

The CartesianTopology type holds neighbor information, current rank, etc.

**Fields**

- `comm`: MPI communicator object
- `nprocs`: Number of total processors (global)
- `rank`: Current rank
- `coords`: Coordinates in the global space, e.g. `(0,1,1)`
- `global_dims`: Dimensions of the global domain, e.g. `(4,4)` is a 4x4 global domain
- `isperiodic`: `Vector{Bool}`; periodicity of each dimension, e.g. `(false, true, true)` means y and z are periodic
- `neighbors`: `OffsetArray{Int}`; neighbor ranks (including corners), indexed as `[[ilo, center, ihi], i, j, k]`

`MPIHaloArrays.CartesianTopology` — Method

`CartesianTopology(comm::MPI.Comm, periodicity::Bool; canreorder = false)`

Create a CartesianTopology with only the boundary periodicity given. This finds the optimal sub-domain ordering for the user.

`MPIHaloArrays.CartesianTopology` — Method

`CartesianTopology(comm::MPI.Comm, ::Tuple{Bool}; canreorder = false)`

Create a CartesianTopology with only the tuple of boundary periodicity given. This finds the optimal sub-domain ordering for the user.
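
A short usage sketch of the periodicity-only form, assuming the signature printed above (a single `Bool` applied to every dimension) and `MPI.COMM_WORLD` as the communicator:

```
using MPI, MPIHaloArrays
MPI.Init()

# Let the constructor choose the process-grid shape; every dimension is periodic
P = CartesianTopology(MPI.COMM_WORLD, true)
```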

`MPIHaloArrays.CartesianTopology` — Method

`CartesianTopology(comm::MPI.Comm, dims, periodicity; canreorder = false)`

Create a CartesianTopology type that holds neighbor information, current rank, etc.

**Arguments**

- `dims`: Vector or Tuple setting the dimensions of the domain in each direction, e.g. `(4,3)` means a total of 12 processors, with 4 in x and 3 in y
- `periodicity`: Vector or Tuple of `Bool`s setting whether the domain is periodic along each dimension

**Example**

```
# Create a topology of 4x4 with periodic boundaries in both directions
P = CartesianTopology(MPI.COMM_WORLD, (4,4), (true, true))
```

`MPIHaloArrays.MPIHaloArray` — Type

MPIHaloArray

**Fields**

- `data`: `AbstractArray{T,N}` - contains the local data on the current rank
- `partitioning`: partitioning datatype
- `comm`: MPI communicator
- `window`: MPI window
- `neighbor_ranks`: `Vector{Int}` - IDs of the neighboring arrays/MPI procs
- `coords`: `Vector{Int}` - coordinates in the global MPI space
- `rank`: Current MPI rank

`MPIHaloArrays.MPIHaloArray` — Method

MPIHaloArray constructor

**Arguments**

- `A`: `AbstractArray{T,N}`
- `topo`: Parallel topology type, e.g. CartesianTopology
- `nhalo`: Number of halo cells

**Keyword Arguments**

- `do_corners`: [true] Exchange corner halo regions
- `com_model`: [:p2p] Communication model; `:p2p` is point-to-point (Isend/Irecv), `:rma` is one-sided (Get/Put), `:shared` is MPI's shared-memory model
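
A minimal construction sketch based on the arguments listed above; the topology shape, local array size, and halo width are illustrative:

```
using MPI, MPIHaloArrays
MPI.Init()

topo = CartesianTopology(MPI.COMM_WORLD, (4, 2), (false, false))
data = zeros(Float64, 64, 32)      # local data on the current rank
A = MPIHaloArray(data, topo, 2)    # pad with 2 halo cells on each side
# Keyword options, as documented above:
# A = MPIHaloArray(data, topo, 2; do_corners = true, com_model = :p2p)
```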

`MPIHaloArrays.coord_to_rank` — Method

Helper function to find the rank based on coordinates.

`MPIHaloArrays.denominators` — Method

Return all common denominators of `n`.

`MPIHaloArrays.domainview` — Method

Return a view or `SubArray` of the domain data within the `MPIHaloArray`.
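
For example, a sketch of updating only the owned (non-halo) cells through the returned view, assuming `domainview` takes just the `MPIHaloArray` and continuing from an array `A` as constructed above:

```
d = domainview(A)   # SubArray over the domain region, halo cells excluded
d .= 1.0            # writes through to A's interior without touching the halo
```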

`MPIHaloArrays.filldomain!` — Method

Fill the domain data with a single `filval`.

`MPIHaloArrays.fillhalo!` — Method

`fillhalo!(A::MPIHaloArray, fillvalue)`

Fill the halo regions with a particular `fillvalue`.

**Arguments**

- `A::MPIHaloArray`
- `fillvalue`: value to fill the halo regions of `A` with
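
A short sketch of the two fill helpers together (fill values illustrative, and `filldomain!` is assumed to take the same `(A, value)` form):

```
fillhalo!(A, -1.0)    # sentinel value in the halo regions
filldomain!(A, 0.0)   # constant value in the owned (domain) cells
```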

`MPIHaloArrays.gatherglobal` — Method

Gather all `MPIHaloArray`s onto the `root` MPI rank and stitch them together. This ignores halo region data and creates an `Array` that represents the global state.

**Arguments**

- `A`: MPIHaloArray
- `root`: MPI rank to gather `A` to
- `halo_dims`: Tuple of the dimensions that halo exchanges occur on (not fully working yet)
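
A sketch of gathering the distributed pieces back into one array; `root` is assumed here to be a keyword argument:

```
B = gatherglobal(A; root = 0)   # global Array stitched from every rank's domain; only meaningful on rank 0
```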

`MPIHaloArrays.get_dims` — Method

Get the dimensions of each chunk.

`MPIHaloArrays.get_istarts_ends` — Method

Get the starting and ending indices based on the size of each subdomain. These are the global ilo/ihi values.

`MPIHaloArrays.get_subdomain_dimension_sizes` — Method

`get_subdomain_dimension_sizes(A, tile_dims, A_halo_dims)`

Get the size along each dimension in `(i,j,k)` of the subdomain, based on a given array `A`. The `tile_dims` is the shape of the global domain, e.g. `(4,2)` means 4 tiles or subdomains in `i` and 2 in `j`. `A_halo_dims` is the tuple of dimensions the halo exchanges take place on, e.g. `(2,3)`.

**Example**

```
A = rand(4, 200, 100); dims = (2, 3); tile_dims = (4, 2)
get_subdomain_dimension_sizes(A, tile_dims, dims) # [[i][j]] --> [[50,50,50,50],[100,100]]
```

`MPIHaloArrays.get_subdomain_indices` — Method

Get the global indices for each subdomain. This is the tuple of lo and hi indices for each dimension.

`MPIHaloArrays.get_subdomain_sizes` — Method

Get the size of each subdomain given the tile dimensions and number of halo cells.

`MPIHaloArrays.getindices` — Method

A reusable helper function that gathers the indices of the `MPIHaloArray`.

`MPIHaloArrays.global_domain_indices` — Method

`global_domain_indices(A::MPIHaloArray)`

Get the array indices of the domain region of `A` (i.e. excluding halo regions) in the global frame of reference. The order of the returned indices is (ilo, ihi, jlo, jhi, ...).

**Returns**

- `NTuple{Int, 2 * NDimensions}`: A tuple of both lo and hi indices for each dimension

`MPIHaloArrays.globalmax` — Method

Perform a global maximum operation.

**Arguments**

- `A`: MPIHaloArray to perform the operation on
- `broadcast`: true/false - broadcast to all MPI ranks [default is false]
- `root`: If `broadcast` is false, which MPI rank to reduce to

`MPIHaloArrays.globalmin` — Method

Perform a global minimum operation.

**Arguments**

- `A`: MPIHaloArray to perform the operation on
- `broadcast`: true/false - broadcast to all MPI ranks [default is false]
- `root`: If `broadcast` is false, which MPI rank to reduce to

`MPIHaloArrays.globalsize` — Method

Find the global dims based on the list of local `MPIHaloArray` sizes.

`MPIHaloArrays.globalsum` — Method

Perform a global sum operation.

**Arguments**

- `A`: MPIHaloArray to perform the operation on
- `broadcast`: true/false - broadcast to all MPI ranks [default is false]
- `root`: If `broadcast` is false, which MPI rank to reduce to
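
A short sketch of the reduction helpers, assuming `broadcast` and `root` are keyword arguments as listed above:

```
total = globalsum(A)                      # reduced onto the default root rank
lo    = globalmin(A; broadcast = true)    # every rank receives the minimum
hi    = globalmax(A; root = 0)            # only rank 0 holds the maximum
```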

`MPIHaloArrays.hi_indices` — Method

Helper function to get the high-side halo and domain starting/ending indices.

**Arguments**

- `field`: Array
- `dim`: Dimension to check indices on
- `nhalo`: Number of halo entries

**Return**

- `NTuple{Int, 4}`: The set of hi indices (hi_domain_start, hi_domain_end, hi_halo_start, hi_halo_end)
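
A sketch of how the returned 4-tuple maps onto an array, assuming the `(field, dim, nhalo)` argument order listed above:

```
field = zeros(10)   # 6 domain cells padded by nhalo = 2 on each side
hi_dom_start, hi_dom_end, hi_halo_start, hi_halo_end =
    MPIHaloArrays.hi_indices(field, 1, 2)
# The domain portion ends at hi_dom_end; the high-side halo spans
# hi_halo_start:hi_halo_end (the last nhalo entries along dimension 1).
```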

`MPIHaloArrays.hi_indices` — Method

Get the `hi` indices along the specified dimension `dim` of `A`. These are in the order (hi_domain_start, hi_domain_end, hi_halo_start, hi_halo_end).

`MPIHaloArrays.ihi_neighbor` — Method

Neighbor rank in the i+1 direction.

`MPIHaloArrays.ilo_neighbor` — Method

Neighbor rank in the i-1 direction.

`MPIHaloArrays.jhi_neighbor` — Method

Neighbor rank in the j+1 direction.

`MPIHaloArrays.jlo_neighbor` — Method

Neighbor rank in the j-1 direction.

`MPIHaloArrays.khi_neighbor` — Method

Neighbor rank in the k+1 direction.

`MPIHaloArrays.klo_neighbor` — Method

Neighbor rank in the k-1 direction.
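
A sketch of the directional neighbor queries, assuming each takes the `CartesianTopology` (as `neighbor` below does):

```
# assumes MPI.Init() has already been called
P = CartesianTopology(MPI.COMM_WORLD, (4, 4), (true, true))
right = ihi_neighbor(P)   # rank in the i+1 direction
left  = ilo_neighbor(P)   # rank in the i-1 direction
up    = jhi_neighbor(P)   # rank in the j+1 direction
```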

`MPIHaloArrays.lo_indices` — Method

Helper function to get the low-side halo and domain starting/ending indices.

**Arguments**

- `field`: Array
- `dim`: Dimension to check indices on
- `nhalo`: Number of halo entries

**Return**

- `NTuple{Int, 4}`: The set of lo indices (lo_halo_start, lo_halo_end, lo_domain_start, lo_domain_end)

`MPIHaloArrays.lo_indices` — Method

Get the `lo` indices along the specified dimension `dim` of `A`. These are in the order (lo_halo_start, lo_halo_end, lo_domain_start, lo_domain_end).

`MPIHaloArrays.local_domain_indices` — Method

`local_domain_indices(A::MPIHaloArray)`

Get the array indices of the domain region of `A` (i.e. excluding halo regions) in the local frame of reference (relative to itself, rather than in the global domain). This is typically 1 to size(A). The order of the returned indices is (ilo, ihi, jlo, jhi, ...).

**Returns**

- `NTuple{Int, 2 * NDimensions}`: A tuple of both lo and hi indices for each dimension

`MPIHaloArrays.match_tile_halo_dim_sizes` — Method

`match_tile_halo_dim_sizes(tile_dims, halo_dims)`

Ensure that the tile dimension tuple is the same length as the halo dimension tuple. If not, pad with ones.

`MPIHaloArrays.neighbor` — Method

`neighbor(p::CartesianTopology, i_offset::Int, j_offset::Int, k_offset::Int)`

Find the neighbor rank based on the offsets in `(i,j,k)`. This follows the traditional array index convention rather than MPI's version, so an `i_offset = 1` will shift up in the array indexing.

**Arguments**

- `p`: CartesianTopology type
- `i_offset`: Offset in the `i` direction
- `j_offset`: Offset in the `j` direction
- `k_offset`: Offset in the `k` direction

**Example:**

```
# Makes a 4x4 domain with periodic boundaries in both dimensions
P = CartesianTopology(MPI.COMM_WORLD, (4,4), (true, true))
# Find the ihi neighbor
ihi = neighbor(P,+1,0,0)
# Find the upper ihi corner neighbor (ihi and jhi side)
ihijhi_corner = neighbor(P,+1,+1,0)
```

`MPIHaloArrays.num_2d_tiles` — Method

Returns the optimal number of tiles in (i,j) given the total number of tiles `n`.

`MPIHaloArrays.num_3d_tiles` — Method

Returns the optimal number of tiles in (i,j,k) given the total number of tiles `n`.

`MPIHaloArrays.offset_coord_to_rank` — Method

Helper function to find the rank based on 3D offsets.

`MPIHaloArrays.offset_coord_to_rank` — Method

Helper function to find the rank based on 2D offsets.

`MPIHaloArrays.pad_with_halo` — Method

`pad_with_halo(A, nhalo, halo_dims)`

Increase the size of the array `A` along the halo exchange dimensions to make room for the new halo regions.

**Arguments**

- `A::AbstractArray`: Array to increase in size
- `nhalo::Int`: Number of halo cells along each dimension, e.g. 2
- `halo_dims::Tuple`: Set of dimensions to do the halo exchange along

`MPIHaloArrays.scatterglobal` — Method

Partition the array `A` on the rank `root` into chunks based on the given parallel topology. The array data in `A` does not have halo regions; the MPIHaloArray constructor adds them. This returns an `MPIHaloArray`.

**Arguments**

- `A`: Global array to be split up into chunks and sent to all ranks. This does **not** include halo cells
- `root`: MPI rank that `A` lives on
- `nhalo`: Number of halo cells to create
- `halo_dims`: Tuple of the dimensions that halo exchanges occur on (not fully working yet)
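
A usage sketch of scattering a global array; the trailing topology argument and the argument order are assumed from the description above:

```
root = 0
topo = CartesianTopology(MPI.COMM_WORLD, (4, 2), (false, false))
data = rand(512, 256)                    # global state; only the copy on `root` is used
A = scatterglobal(data, root, 2, topo)   # each rank receives its chunk, padded by 2 halo cells
```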

`MPIHaloArrays.split_count` — Method

`split_count(N::Integer, n::Integer)`

Return a vector of `n` integers which are approximately equally sized and sum to `N`.
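
For instance, a quick sketch of the counts this produces (the ordering of the remainder cells is an implementation detail):

```
counts = MPIHaloArrays.split_count(10, 4)   # four counts that sum to 10, e.g. [3, 3, 2, 2]
@assert sum(counts) == 10 && length(counts) == 4
```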

`MPIHaloArrays.update_halo_data!` — Method

`update_halo_data!(A_no_halo, A_with_halo, halo_dims, nhalo)`

During the construction of an `MPIHaloArray`, the data must be padded by the number of halo regions in each respective dimension. This function copies the data from `A_no_halo` (the original array) to `A_with_halo` (the underlying array within the `MPIHaloArray`).

**Arguments**

- `A_no_halo::Array`: Array without the halo regions
- `A_with_halo::Array`: Array padded with the halo regions
- `halo_dims::Tuple`: Dimensions that halo exchanges take place on
- `nhalo::Int`: Number of halo cells in each respective dimension

`MPIHaloArrays.updatehalo!` — Method

`updatehalo!(A::MPIHaloArray{T,N,AA,1}) where {T,N,AA}`

Update the halo regions on `A` where the halo exchange is done on a single dimension.

`MPIHaloArrays.updatehalo!` — Method

`updatehalo!(A::MPIHaloArray{T,N,AA,2}) where {T,N,AA}`

Update the halo regions on `A` where the halo exchange is done over 2 dimensions.

`MPIHaloArrays.updatehalo!` — Method

`updatehalo!(A::MPIHaloArray{T,N,2}) where {T,N}`

Update the halo regions on `A` where the halo exchange is done over 3 dimensions.
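
Putting the pieces together, a minimal end-to-end sketch of the typical scatter / exchange / compute / gather cycle (the stencil step is left hypothetical, and the `scatterglobal`/`gatherglobal` argument forms are assumed as above):

```
using MPI, MPIHaloArrays
MPI.Init()

nhalo = 2
root = 0
topo = CartesianTopology(MPI.COMM_WORLD, (4, 2), (true, true))

data = rand(512, 256)                  # global state; only root's copy is used
A = scatterglobal(data, root, nhalo, topo)

for iter in 1:10
    updatehalo!(A)                     # exchange halo regions with the neighbor ranks
    # ... apply a stencil update to domainview(A) here ...
end

B = gatherglobal(A; root = root)       # reassemble the global array on root
MPI.Finalize()
```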