MoReFEM
MoReFEM::Wrappers::Mpi Class Reference

A wrapper over MPI functions call, with some common parameters stored in an object. More...

#include <Mpi.hpp>


Public Types

using const_unique_ptr = std::unique_ptr<const Mpi>
 Alias to unique_ptr.
 

Public Member Functions

template<typename IntT >
IntT GetRank () const
 Get the rank of the processor.
 
MPI_Comm GetCommunicator () const
 Get the communicator.
 
template<typename IntT >
IntT Nprocessor () const
 Get the total number of processors.
 
bool IsRootProcessor () const
 Returns whether the current processor is the root processor.
 
template<typename T >
void Gather (const std::vector< T > &sent_data, std::vector< T > &gathered_data) const
 An interface over MPI_Gather().
 
template<typename T >
void Gatherv (const std::vector< T > &sent_data, std::vector< T > &gathered_data) const
 An interface over MPI_Gatherv().
 
template<typename T >
void AllGather (const std::vector< T > &sent_data, std::vector< T > &gathered_data) const
 An interface over MPI_Allgather().
 
template<typename T >
void AllGatherv (const std::vector< T > &sent_data, std::vector< T > &gathered_data) const
 An interface over MPI_Allgatherv().
 
template<typename T >
void Broadcast (std::vector< T > &data, std::optional< int > send_processor=std::nullopt) const
 An interface over MPI_Bcast().
 
template<typename T >
void Broadcast (T &data, std::optional< int > send_processor=std::nullopt) const
 An interface over MPI_Bcast() for single values.
 
template<typename T >
std::vector< T > AllReduce (const std::vector< T > &sent_data, MpiNS::Op mpi_operation) const
 An interface over MPI_Allreduce().
 
template<typename T >
T AllReduce (T sent_data, MpiNS::Op mpi_operation) const
 Overload for a single value.
 
template<class ContainerT >
void AllReduce (const ContainerT &sent_data, ContainerT &gathered_data, MpiNS::Op mpi_operation) const
 The actual implementation of the Allreduce() method.
 
template<typename T >
std::vector< T > ReduceOnRootProcessor (const std::vector< T > &sent_data, MpiNS::Op mpi_operation) const
 Reduce operation whose target is the root processor.
 
template<typename T >
T ReduceOnRootProcessor (T sent_data, MpiNS::Op mpi_operation) const
 Overload for a single value.
 
template<typename T >
std::vector< T > CollectFromEachProcessor (T sent_data) const
 Collect the values of a given data from each processor.
 
const std::string & GetRankPrefix () const
 Returns a string that gives the rank between [].
 
void Barrier () const
 Call to MPI_Barrier, which blocks the calling processor until all processors have reached this call.
 
template<typename T >
void Send (std::size_t destination, T data) const
 Wrapper over MPI_Send for a single value.
 
template<class ContainerT >
void SendContainer (std::size_t destination, const ContainerT &data) const
 Wrapper over MPI_Send for a container.
 
template<typename T >
std::vector< T > Receive (std::size_t rank, std::size_t max_length) const
 Wrapper over MPI_Recv for an array.
 
template<typename T >
T Receive (std::size_t rank) const
 Wrapper over MPI_Recv for a single value.
 
Special members.
 Mpi (int root_processor, MpiNS::Comm comm)
 Constructor.
 
 ~Mpi ()
 Destructor.
 
 Mpi (const Mpi &rhs)=delete
 The copy constructor.
 
 Mpi (Mpi &&rhs)=delete
 The move constructor.
 
Mpi & operator= (const Mpi &rhs)=delete
 The (copy) operator=.
 
Mpi & operator= (Mpi &&rhs)=delete
 The (move) operator=.
 

Static Public Member Functions

static constexpr int AnyTag ()
 The value MPI_ANY_TAG doesn't seem to be accepted, so I defined one myself.
 
static void InitEnvironment (int argc, char **argv)
 Must be called before any Mpi object is created.
 

Private Member Functions

template<class ContainerT >
void GatherImpl (const ContainerT &sent_data, ContainerT &gathered_data) const
 The actual implementation of the Gather() method.
 
template<class ContainerT >
void GathervImpl (const ContainerT &sent_data, ContainerT &gathered_data) const
 The actual implementation of the Gatherv() method.
 
template<class ContainerT >
void AllGatherImpl (const ContainerT &sent_data, ContainerT &gathered_data) const
 The actual implementation of the AllGather() method.
 
template<class ContainerT >
void AllGathervImpl (const ContainerT &sent_data, ContainerT &gathered_data) const
 The actual implementation of the AllGatherv() method.
 
template<class ContainerT >
void ReduceImpl (const ContainerT &sent_data, ContainerT &gathered_data, int target_processor, MpiNS::Op mpi_operation) const
 The actual implementation of the Reduce() method.
 
void DecrementNalive ()
 Decrement the counter. If last object, call MPI::Finalize().
 
int GetRootProcessor () const
 Access to the rank of the root processor.
 
void AbortIfErrorCode (int rank, int error_code, const std::source_location location=std::source_location::current()) const
 If an error code is not MPI_SUCCESS, print the message on screen and abort the whole program.
 

Static Private Member Functions

static int & Nalive ()
 Number of MPI objects currently alive.
 
static bool & IsEnvironment ()
 Whether the environment has been set or not.
 
static void IncrementNalive ()
 Increment the counter. If first object, initialize MPI context.
 

Private Attributes

const int root_processor_
 Root processor.
 
int Nprocessor_ = NumericNS::UninitializedIndex<int>()
 Total number of processors.
 
MPI_Comm comm_
 Comm channel used with MPI.
 
int rank_ = NumericNS::UninitializedIndex<int>()
 Rank of the processor.
 

Detailed Description

A wrapper over MPI functions call, with some common parameters stored in an object.

The purpose is to provide a slightly more user-friendly interface, and to make it possible to settle once and for all some choices, such as which processor is the root processor or whether this root processor takes its share of the computation.

Only the functionalities required by the code have been put here; it is therefore just a subset of what you can do with MPI. Don't hesitate to request additional features if you need them.

Error handling: the current choice is to use MPI_ERRORS_ARE_FATAL rather than MPI_ERRORS_RETURN; that's the reason the error codes are not checked.
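
To fix ideas, here is a minimal usage sketch. It is not taken from the library itself, and the MpiNS::Comm::World enumerator name is an assumption:

    #include <iostream>

    #include <Mpi.hpp>

    int main(int argc, char** argv)
    {
        using MoReFEM::Wrappers::Mpi;

        // Must be called before any Mpi object is created.
        Mpi::InitEnvironment(argc, argv);

        {
            // Rank 0 acts as root processor; MpiNS::Comm::World is assumed to
            // designate the world communicator.
            const Mpi mpi(0, MoReFEM::Wrappers::MpiNS::Comm::World);

            if (mpi.IsRootProcessor())
                std::cout << mpi.GetRankPrefix() << " Running on "
                          << mpi.Nprocessor<int>() << " processors." << std::endl;
        } // Last Mpi object destroyed here: the MPI environment is finalized.

        return 0;
    }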

Constructor & Destructor Documentation

◆ Mpi() [1/3]

MoReFEM::Wrappers::Mpi::Mpi ( int root_processor,
MpiNS::Comm comm )
explicit

Constructor.

Parameters
[in] root_processor Which processor is used as the root processor.
[in] comm Communication channel used by MPI.

◆ ~Mpi()

MoReFEM::Wrappers::Mpi::~Mpi ( )

Destructor.

BEWARE: when all Mpi objects are destroyed the environment is destroyed; you can't create new Mpi objects unless InitEnvironment() is called again.

◆ Mpi() [2/3]

MoReFEM::Wrappers::Mpi::Mpi ( const Mpi & rhs)
delete

The copy constructor.

Parameters
[in] rhs The object from which the construction occurs.

◆ Mpi() [3/3]

MoReFEM::Wrappers::Mpi::Mpi ( Mpi && rhs)
delete

The move constructor.

Parameters
[in] rhs The object from which the construction occurs.

Member Function Documentation

◆ AnyTag()

static constexpr int MoReFEM::Wrappers::Mpi::AnyTag ( )
static constexpr

The value MPI_ANY_TAG doesn't seem to be accepted, so I defined one myself.

Currently, tags aren't used at all in the operations.

Returns
Value 0

◆ InitEnvironment()

static void MoReFEM::Wrappers::Mpi::InitEnvironment ( int argc,
char ** argv )
static

Must be called before any Mpi object is created.

Parameters
[in] argc The first argument of the main() function.
[in] argv The second argument of the main() function.

◆ operator=() [1/2]

Mpi & MoReFEM::Wrappers::Mpi::operator= ( const Mpi & rhs)
delete

The (copy) operator=.

Parameters
[in] rhs The object from which the assignment occurs.
Returns
Reference to the object (to enable chained assignment).

◆ operator=() [2/2]

Mpi & MoReFEM::Wrappers::Mpi::operator= ( Mpi && rhs)
delete

The (move) operator=.

Parameters
[in] rhs The object from which the assignment occurs.
Returns
Reference to the object (to enable chained assignment).

◆ GetRank()

template<typename IntT >
IntT MoReFEM::Wrappers::Mpi::GetRank ( ) const

Get the rank of the processor.

Template Parameters
IntT Type in which the result is cast.
Returns
Rank of the calling processor.

◆ Nprocessor()

template<typename IntT >
IntT MoReFEM::Wrappers::Mpi::Nprocessor ( ) const

Get the total number of processors.

Template Parameters
IntT Type in which the result is cast.
Returns
Total number of processors.

◆ Gather()

template<typename T >
void MoReFEM::Wrappers::Mpi::Gather ( const std::vector< T > & sent_data,
std::vector< T > & gathered_data ) const

An interface over MPI_Gather().

I had a hard time understanding how to use the original MPI function properly, as I didn't correctly understand one of the arguments (the number of elements sent, which is in fact the number of elements sent PER PROCESSOR).

The present function aims to provide a much simpler interface; it is likely less powerful than the original one but much safer.

Template Parameters
T Type of the variable sent. Usually a POD C++ type, such as 'int'.
Parameters
[in] sent_data Data sent by the current processor. All sent_data must have the same size; otherwise use Gatherv().
[out] gathered_data Relevant only for the root processor. This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.
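
A hedged sketch of a typical call, assuming an existing Mpi object named mpi:

    #include <vector>

    #include <Mpi.hpp>

    void GatherExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        // Every rank contributes exactly 3 values; sizes must match across ranks.
        const int rank = mpi.GetRank<int>();
        const std::vector<int> sent { rank, rank + 10, rank + 100 };

        std::vector<int> gathered; // filled only on the root processor
        mpi.Gather(sent, gathered);
        // On the root, gathered now holds 3 * Nprocessor entries, ordered by
        // processor rank.
    }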

◆ Gatherv()

template<typename T >
void MoReFEM::Wrappers::Mpi::Gatherv ( const std::vector< T > & sent_data,
std::vector< T > & gathered_data ) const

An interface over MPI_Gatherv().

Gatherv() is a broader version of Gather(): it allows collecting arrays of different sizes, whereas Gather() expects the same size on all ranks.

I had a hard time understanding how to use the original MPI function properly, as I didn't correctly understand one of the arguments (the number of elements sent, which is in fact the number of elements sent PER PROCESSOR).

The present function aims to provide a much simpler interface; it is likely less powerful than the original one but much safer.

Template Parameters
T Type of the variable sent. Usually a POD C++ type, such as 'int'.
Parameters
[in] sent_data Data sent by the current processor. The vectors can have different sizes on each processor.
[out] gathered_data Relevant only for the root processor. This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.
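
A hedged sketch showing the variable-size case, assuming an existing Mpi object:

    #include <cstddef>
    #include <vector>

    #include <Mpi.hpp>

    void GathervExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        // Rank r sends r + 1 values: sizes differ across ranks, hence Gatherv().
        const auto Nsent = static_cast<std::size_t>(mpi.GetRank<int>()) + 1;
        const std::vector<double> sent(Nsent, 1.);

        std::vector<double> gathered; // filled only on the root processor
        mpi.Gatherv(sent, gathered);
    }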

◆ AllGather()

template<typename T >
void MoReFEM::Wrappers::Mpi::AllGather ( const std::vector< T > & sent_data,
std::vector< T > & gathered_data ) const

An interface over MPI_Allgather().

I had a hard time understanding how to use the original MPI function properly, as I didn't correctly understand one of the arguments (the number of elements sent, which is in fact the number of elements sent PER PROCESSOR).

The present function aims to provide a much simpler interface; it is likely less powerful than the original one but much safer.

Template Parameters
T Type of the variable sent. Usually a POD C++ type, such as 'int'.
Parameters
[in] sent_data Data sent by the current processor. All sent_data must have the same size.
[out] gathered_data This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.

◆ AllGatherv()

template<typename T >
void MoReFEM::Wrappers::Mpi::AllGatherv ( const std::vector< T > & sent_data,
std::vector< T > & gathered_data ) const

An interface over MPI_Allgatherv().

I had a hard time understanding how to use the original MPI function properly, as I didn't correctly understand one of the arguments (the number of elements sent, which is in fact the number of elements sent PER PROCESSOR).

The present function aims to provide a much simpler interface; it is likely less powerful than the original one but much safer.

Template Parameters
T Type of the variable sent. Usually a POD C++ type, such as 'int'.
Parameters
[in] sent_data Data sent by the current processor. The vectors can have different sizes on each processor.
[out] gathered_data This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors. Contrary to Gatherv(), the data is present on each processor.

◆ Broadcast() [1/2]

template<typename T >
void MoReFEM::Wrappers::Mpi::Broadcast ( std::vector< T > & data,
std::optional< int > send_processor = std::nullopt ) const

An interface over MPI_Bcast().

Parameters
[in,out] data If on the send_processor, this parameter is the data sent to all other processors. If not, it is the variable into which the result is written.
Warning
data must be properly sized before the call on each processor!
Parameters
[in] send_processor The rank which broadcasts the data. If nullopt, the root processor is used.
Template Parameters
T Type of the data being sent. This is a standard C++ type (double, std::size_t, ...) for which a specialization of Internal::Wrappers::MpiNS::Datatype must exist.

◆ Broadcast() [2/2]

template<typename T >
void MoReFEM::Wrappers::Mpi::Broadcast ( T & data,
std::optional< int > send_processor = std::nullopt ) const

An interface over MPI_Bcast() for single values.

Parameters
[in,out] data If on the send_processor, this parameter is the data sent to all other processors. If not, it is the variable into which the result is written.
[in] send_processor The rank which broadcasts the data. If nullopt, the root processor is used.

For DRY purposes, this method is syntactic sugar that uses the vector overload under the hood. It is obviously not the cleverest way to proceed from an efficiency standpoint - we could rewrite a single Broadcast with some magic inside to handle both the vector and the single-value case efficiently. I don't do it because I don't need it, but if need be it's straightforward to do (the usual thorn in the side would be the vector of bool...)

Warning
data must be properly sized before the call on each processor!
Template Parameters
T Type of the data being sent. This is a standard C++ type (double, std::size_t, ...) for which a specialization of Internal::Wrappers::MpiNS::Datatype must exist.
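
A minimal sketch of the single-value overload, assuming an existing Mpi object:

    #include <Mpi.hpp>

    void BroadcastExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        double time_step = 0.;

        if (mpi.IsRootProcessor())
            time_step = 0.01; // e.g. a value read from an input file on the root

        // send_processor is left to std::nullopt, so the root processor sends.
        mpi.Broadcast(time_step);

        // From here on, every rank holds time_step == 0.01.
    }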

◆ AllReduce() [1/3]

template<typename T >
std::vector< T > MoReFEM::Wrappers::Mpi::AllReduce ( const std::vector< T > & sent_data,
MpiNS::Op mpi_operation ) const

An interface over MPI_Allreduce().

Each processor holds a vector of the exact same size; the reduction consists in aggregating all of them on each processor.

For instance, in MoReFEM each processor fills a part of the sparse matrix, and puts 0 for the degrees of freedom it doesn't manage. The call to MPI_Allreduce aggregates all those vectors and hence forms the global sparse matrix content.

Template Parameters
T Type of the variable sent. Usually a POD C++ type, such as 'int'.
Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.
Returns
Data gathered by the current processor.
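
A hedged sketch of the pattern described above; the MpiNS::Op::Sum enumerator name is an assumption:

    #include <cstddef>
    #include <vector>

    #include <Mpi.hpp>

    std::vector<double> AllReduceExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        // Same size on every rank; each rank fills only the entries it manages
        // and leaves 0 elsewhere (here one entry per rank, assuming fewer than
        // 100 processors).
        std::vector<double> partial(100, 0.);
        partial[static_cast<std::size_t>(mpi.GetRank<int>())] = 1.;

        // Summing aggregates the contributions on every rank.
        // MpiNS::Op::Sum is an assumed enumerator name.
        return mpi.AllReduce(partial, MoReFEM::Wrappers::MpiNS::Op::Sum);
    }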

◆ AllReduce() [2/3]

template<typename T >
T MoReFEM::Wrappers::Mpi::AllReduce ( T sent_data,
MpiNS::Op mpi_operation ) const

Overload for a single value.

Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.

◆ AllReduce() [3/3]

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::AllReduce ( const ContainerT & sent_data,
ContainerT & gathered_data,
MpiNS::Op mpi_operation ) const

The actual implementation of the Allreduce() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.
[out] gathered_data Data gathered by the current processor.

◆ ReduceOnRootProcessor() [1/2]

template<typename T >
std::vector< T > MoReFEM::Wrappers::Mpi::ReduceOnRootProcessor ( const std::vector< T > & sent_data,
MpiNS::Op mpi_operation ) const

Reduce operation whose target is the root processor.

Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.

◆ ReduceOnRootProcessor() [2/2]

template<typename T >
T MoReFEM::Wrappers::Mpi::ReduceOnRootProcessor ( T sent_data,
MpiNS::Op mpi_operation ) const

Overload for a single value.

Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.
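
A minimal sketch for the scalar overload; MpiNS::Op::Sum is an assumed enumerator name:

    #include <Mpi.hpp>

    void ReduceOnRootExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        const double local_contribution = 1.5; // some processor-wise partial result

        // MpiNS::Op::Sum is an assumed enumerator name.
        const double total =
            mpi.ReduceOnRootProcessor(local_contribution, MoReFEM::Wrappers::MpiNS::Op::Sum);

        // total carries the reduction result on the root processor only.
        static_cast<void>(total);
    }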

◆ CollectFromEachProcessor()

template<typename T >
std::vector< T > MoReFEM::Wrappers::Mpi::CollectFromEachProcessor ( T sent_data) const

Collect the values of a given data from each processor.

Parameters
[in] sent_data Processor-wise data which is collected.
Returns
A vector with one entry per processor: the value of the collected data.

[internal] This method calls AllReduce() under the hood, so there is no need to provide all the overloads (the correct AllReduce() will be chosen automatically).
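
A minimal sketch, assuming an existing Mpi object:

    #include <vector>

    #include <Mpi.hpp>

    void CollectExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        const int Nlocal_elements = 42; // some processor-wise quantity

        const std::vector<int> per_rank = mpi.CollectFromEachProcessor(Nlocal_elements);
        // per_rank[r] now holds rank r's value, on every processor.
    }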

◆ Send()

template<typename T >
void MoReFEM::Wrappers::Mpi::Send ( std::size_t destination,
T data ) const

Wrapper over MPI_Send for a single value.

Template Parameters
T Type of data sent (there must be a dedicated specialization of Internal::Wrappers::MpiNS::Datatype<> template class for it).
Parameters
[in] destination Rank of the processor to which the message is sent.
[in] data Single value to be sent.

◆ SendContainer()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::SendContainer ( std::size_t destination,
const ContainerT & data ) const

Wrapper over MPI_Send for a container.

Template Parameters
ContainerT Type of the container involved; it must define data(), size() and value_type. The latter must be a POD type recognized by Open MPI (there must be a dedicated specialization of Internal::Wrappers::MpiNS::Datatype<> template class for it).
Parameters
[in] destination Rank of the processor to which the message is sent.
[in] data Container to be sent.

The container must be 'caught' on the destination rank through a call to Receive.

◆ Receive() [1/2]

template<typename T >
std::vector< T > MoReFEM::Wrappers::Mpi::Receive ( std::size_t rank,
std::size_t max_length ) const

Wrapper over MPI_Recv for an array.

Template Parameters
T Type of the data sent; this must be a POD type recognized by Open MPI (there must be a dedicated specialization of Internal::Wrappers::MpiNS::Datatype<> template class for it).
Parameters
[in] rank Rank of the processor from which the message was sent.
[in] max_length Maximal number of items expected to be received.
Returns
A vector with all the data sent. It is resized to match the content (so its size is less than or equal to max_length).
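
A hedged sketch of a matched point-to-point exchange between ranks 0 and 1, assuming an existing Mpi object:

    #include <vector>

    #include <Mpi.hpp>

    void ExchangeExample(const MoReFEM::Wrappers::Mpi& mpi)
    {
        if (mpi.GetRank<int>() == 0)
        {
            const std::vector<int> payload { 1, 2, 3 };
            mpi.SendContainer(1, payload); // destination: rank 1
        }
        else if (mpi.GetRank<int>() == 1)
        {
            // max_length (here 10) caps the reception; the returned vector is
            // resized down to the number of items actually received (here 3).
            const auto received = mpi.Receive<int>(0, 10);
            static_cast<void>(received);
        }
    }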

◆ Receive() [2/2]

template<typename T >
T MoReFEM::Wrappers::Mpi::Receive ( std::size_t rank) const

Wrapper over MPI_Recv for a single value.

Template Parameters
T Type of the data sent; this must be a POD type recognized by Open MPI (there must be a dedicated specialization of Internal::Wrappers::MpiNS::Datatype<> template class for it).
Parameters
[in] rank Rank of the processor from which the message was sent.
Returns
The single value received.

◆ GatherImpl()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::GatherImpl ( const ContainerT & sent_data,
ContainerT & gathered_data ) const
private

The actual implementation of the Gather() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor.
[out] gathered_data Relevant only for the root processor. This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.

[internal] Assumes all processors take part in the calculation; if not, MPI_Gatherv should be called instead in the implementation!

◆ GathervImpl()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::GathervImpl ( const ContainerT & sent_data,
ContainerT & gathered_data ) const
private

The actual implementation of the Gatherv() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor. The vectors can have different sizes on each processor.
[out] gathered_data Relevant only for the root processor. This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.

◆ AllGatherImpl()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::AllGatherImpl ( const ContainerT & sent_data,
ContainerT & gathered_data ) const
private

The actual implementation of the AllGather() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor.
[out] gathered_data This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors.

◆ AllGathervImpl()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::AllGathervImpl ( const ContainerT & sent_data,
ContainerT & gathered_data ) const
private

The actual implementation of the AllGatherv() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor. The vectors can have different sizes on each processor.
[out] gathered_data This vector includes all the data gathered from the other processors. The ordering follows the ordering of processors. Contrary to Gatherv(), the data is present on each processor.

◆ ReduceImpl()

template<class ContainerT >
void MoReFEM::Wrappers::Mpi::ReduceImpl ( const ContainerT & sent_data,
ContainerT & gathered_data,
int target_processor,
MpiNS::Op mpi_operation ) const
private

The actual implementation of the Reduce() method.

This implementation will be used for the generic case as well as the very specific case of booleans, which are handled by a hand-made container.

Template Parameters
ContainerT Either std::vector<T> or BoolArray expected (underlying type obtained through ContainerT::value_type).
Parameters
[in] sent_data Data sent by the current processor.
[in] mpi_operation The MPI operation used during the reduction.
[in] target_processor Processor on which the reduction occurs.
[out] gathered_data Data gathered by the current processor.

◆ AbortIfErrorCode()

void MoReFEM::Wrappers::Mpi::AbortIfErrorCode ( int rank,
int error_code,
const std::source_location location = std::source_location::current() ) const
private

If an error code is not MPI_SUCCESS, print the message on screen and abort the whole program.

Parameters
[in] rank Rank of the current processor.
[in] error_code Error code returned by a function of the MPI API.
[in] location STL object with relevant information about the calling site (usually to help when an exception is thrown).

The documentation for this class was generated from the following file: Mpi.hpp