APPSPACK Configuration and Examples for MPI Mode

APPSPACK has an MPI mode executable and library. This version is parallel but not fault-tolerant.

Configuring and Compiling APPSPACK in MPI Mode

Most MPI packages come with a wrapper to the compiler that specifies the correct flags, include files, libraries, and so on to compile with MPI. This is often the easiest way to compile with MPI.
  • Use the default MPI C++ compiler (mpiCC).
    configure --with-mpi-cxx 
    
  • Specify the exact name and location of the MPI C++ compiler. This is useful if you have multiple installations of MPI or if mpiCC is not in your default path.
    configure --with-mpi-cxx=/home/jdoe/mpich-1.2.3/bin/mpiCC
    

An alternative to using mpiCC is to specify the include directory, library directory, and libraries explicitly. There are several flags that you can use to do this; they are generally incompatible with --with-mpi-cxx.

  • This specifies the general location of the MPI installation. We then assume that the headers are located in /home/jdoe/lam/include, that the libraries are located in /home/jdoe/lam/lib, and that the library is specified by -lmpi.
    configure --with-mpi=/home/jdoe/lam
    

  • This is the same as above except that the library to be linked with is explicitly specified.
    configure --with-mpi=/home/jdoe/mpich --with-mpi-libs="-lmpich"
    

  • Specify the exact locations of the MPI include files and the MPI libraries. This assumes that the library is specified by -lmpi.
    configure --with-mpi-include="/home/jdoe/lam/include" \
              --with-mpi-libdir="/home/jdoe/lam/lib"
    

  • This is the same as above except that the library to be linked with is explicitly specified.
    configure --with-mpi-include="/home/jdoe/mpich/include" \
              --with-mpi-libdir="/home/jdoe/mpich/lib" \
              --with-mpi-libs="-lmpich"
    

  • Explicitly specify the MPI library without specifying the library directory.
    configure --with-mpi-include="/home/jdoe/mpich/include" \
              --with-mpi-libs="/home/jdoe/mpich/lib/libmpich.a"
    
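Whichever set of options you use, the build itself is the standard configure-and-make sequence; for example, with the wrapper compiler:

configure --with-mpi-cxx
make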

This will create the following:

  • src/mpiappspack - the default executable
  • src/libmpiappspack.a - the default library that can be linked with larger codes
  • examples/example1 - a simple example executable
  • examples/mpiappspack_example1 - an alternative way to solve example1

Using the Default MPI Version of APPSPACK

Assuming you are using MPICH and mpirun is in your path, a simple example is as follows.
cd appspack/examples
mpirun -np 6 ../src/mpiappspack example1.apps

See Format of the APPSPACK Input File for the format of example1.apps. See Communicating with a Simulation Via File I/O for the details of running executables via system calls.
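
As a rough illustration of this mechanism, a minimal simulation executable might look like the sketch below. The command-line and file conventions here (a parameter file with one value per line, a result file receiving a single objective value) and the sum-of-squares objective are assumptions for illustration only; the actual formats are described in Communicating with a Simulation Via File I/O.

// Hypothetical standalone simulation invoked by an APPSPACK worker.
// Assumed convention: argv[1] names the parameter (input) file, with one
// value per line; argv[2] names the result (output) file, which receives
// a single objective value.
#include <fstream>
#include <vector>

int main(int argc, char* argv[])
{
  if (argc < 3)
    return 1;                     // expect: <params file> <result file>

  std::ifstream in(argv[1]);      // parameters written by the worker
  std::vector<double> x;
  double xi;
  while (in >> xi)
    x.push_back(xi);

  double f = 0;                   // stand-in objective: sum of squares
  for (std::vector<double>::size_type i = 0; i < x.size(); i++)
    f += x[i] * x[i];

  std::ofstream out(argv[2]);     // result read back by the worker
  out << f << std::endl;
  return 0;
}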

[Figure: APPSPACK in MPI Mode]

In MPI mode, APPSPACK typically runs $2n+2$ processes. In the example above, for $n=2$, we have 6 copies of APPSPACK running. The first copy, with rank=0, is the master agent. The last copy, with rank=5, is the cache. The remainder are workers. In the figure, the six copies of APPSPACK are distributed across three computers. Communication between the various copies of APPSPACK is done via MPI, as illustrated by the red lines. The simulation is run as a separate process, as described for serial mode in Communicating with a Simulation Via File I/O. Thus we have $2n+2$ copies running in parallel, although more than one copy may be running on each machine.

On Choosing the Number of MPI Processes

The number of MPI processes should be the number of search directions plus 2. Typically, the number of search directions is $2n$. Extra search directions may be specified by the -s (or, equivalently, --search) option to APPSPACK. If the number of search directions is $s$, then the number of MPI processes should be $s+2$.
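
For example, a problem with $n=5$ variables and the default $2n=10$ search directions needs $10+2=12$ MPI processes. Here, myproblem.apps is a placeholder for your own input file:

mpirun -np 12 ../src/mpiappspack myproblem.apps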

Customizing the MPI Version of APPSPACK

The default MPI version of APPSPACK requires that the simulation run as a separate executable, communicating with the worker tasks via file input and output. In some cases, the system calls and file I/O can add substantial time to the overall runtime. In that case, it may be useful to customize APPSPACK to your particular problem so that it avoids this overhead.

[Figure: APPSPACK customized for Example 1]

In this case, we have linked the code to execute Example 1 directly into APPSPACK. Then, each worker task can execute the simulation itself, without an external system call.

To run the customized version, do the following:

cd appspack/examples
mpirun -np 6 ../src/mpiappspack_example1 example1.apps

Note that in this case, the format of the input file example1.apps is the same as above except that the parameters "executable", "params_prefix", and "result_prefix" are ignored and need not be specified.

The custom version of APPSPACK was constructed as follows. First, we created derived versions of the function evaluation interfaces; the corresponding headers and sources appear in the HEADERS and SOURCES lists of the Makefile below.
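
As a rough sketch of the shape of such a derived class, the fragment below follows the $(PROJECT)_FevalWkr naming pattern used in the Makefile. The base class and the evaluate() signature are placeholders for illustration; the actual function evaluation interfaces come from the APPSPACK headers.

// Hypothetical project-specific worker evaluation class.  The base class
// and the evaluate() signature are placeholders; consult the APPSPACK
// headers for the real interfaces.
#include <vector>

class MyProject_FevalWkr /* : public <APPSPACK worker base class> */
{
public:
  // Called on a worker for each trial point.  Because the simulation is
  // linked in directly, no system call or file I/O is needed.
  double evaluate(const std::vector<double>& x)
  {
    double f = 0;                 // stand-in objective: sum of squares
    for (std::vector<double>::size_type i = 0; i < x.size(); i++)
      f += x[i] * x[i];
    return f;
  }
};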

The Makefile for a custom version of APPSPACK might look something like the following.

# Location of the APPSPACK installation and the name of your project.
APPSPACK = <home directory for APPSPACK>
PROJECT = <project name>

# Compile against the APPSPACK headers and link against the MPI version
# of the APPSPACK library.  CXX should be an MPI C++ compiler (e.g., mpiCC).
LDFLAGS = -L$(APPSPACK)/src
LIBS = -lmpiappspack
CXXFLAGS = -I$(APPSPACK)/src -DHAVE_CONFIG_H

# Derived function evaluation interfaces for this project.
HEADERS = \
	$(PROJECT)_FevalMgr.H \
	$(PROJECT)_FevalWkr.H \
	<other project headers>

# APPSPACK's main routine plus the project-specific evaluation classes
# and their factories.
SOURCES = \
	$(APPSPACK)/src/APPSPACK_Main.C \
	$(PROJECT)_FevalMgrFactory.C \
	$(PROJECT)_FevalMgr.C \
	$(PROJECT)_FevalWkrFactory.C \
	$(PROJECT)_FevalWkr.C \
	<other project sources>

mpiappspack_$(PROJECT): $(HEADERS) $(SOURCES)
	$(CXX) -o mpiappspack_$(PROJECT) $(CXXFLAGS) $(LDFLAGS) $(SOURCES) $(LIBS)
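
Building and running the customized executable then follows the same pattern as the examples above; the project and input file names here are placeholders:

make mpiappspack_myproject
mpirun -np 6 ./mpiappspack_myproject myproject.apps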

