Optimizing with a Parallel Direct Search Method

OptPDS is an implementation of a derivative-free algorithm for unconstrained optimization. The search direction is driven solely by function information. In addition, OptPDS is easy to implement on parallel machines.
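The core idea of a direct search can be sketched independently of OPT++: poll a pattern of trial points around the current iterate, accept an improving point if one is found, and contract the pattern otherwise. The sketch below is a minimal serial compass search on a hypothetical two-dimensional quadratic; the function and helper names are illustrative only and are not part of the OptPDS API.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical objective with minimizer (3, -1); stands in for the
// user-supplied function that OptPDS would optimize.
static double fcn(const double x[2]) {
  return (x[0] - 3.0) * (x[0] - 3.0) + (x[1] + 1.0) * (x[1] + 1.0);
}

// Minimal serial compass search: poll the 2*n coordinate directions,
// move to any improving point, and halve the step when none improves.
static double compass_search(double x[2], double step, double tol) {
  double fbest = fcn(x);
  while (step > tol) {
    bool improved = false;
    for (int i = 0; i < 2; i++) {
      for (int s = -1; s <= 1; s += 2) {
        double trial[2] = {x[0], x[1]};
        trial[i] += s * step;
        double ftrial = fcn(trial);
        if (ftrial < fbest) {        // keep any improving poll point
          x[0] = trial[0];
          x[1] = trial[1];
          fbest = ftrial;
          improved = true;
        }
      }
    }
    if (!improved) step *= 0.5;      // contract the pattern on failure
  }
  return fbest;
}
```

In PDS the poll points come from a precomputed search scheme and are evaluated concurrently across processors, but the accept/contract logic is the same in spirit.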

In this example, we highlight the steps needed to take advantage of parallel capabilities and to set up PDS. Further information and examples for setting up and solving a problem can be found in the Setting up and Solving an Optimization Problem section.

First, include the header files and subroutine declarations.

   #ifdef HAVE_CONFIG_H
   #include "OPT++_config.h"
   #endif

   #include <string>
   #include <iostream>
   #include <fstream>
   #ifdef HAVE_STD
   #include <cstdio>
   #else
   #include <stdio.h>
   #endif

   #ifdef WITH_MPI
   #include "mpi.h"
   #endif
   #include "OptPDS.h"
   #include "NLF.h"
   #include "CompoundConstraint.h"
   #include "BoundConstraint.h"
   #include "OptppArray.h"
   #include "cblas.h"
   #include "ioformat.h"

   #include "tstfcn.h"

   using NEWMAT::ColumnVector;
   using NEWMAT::Matrix;
   using namespace OPTPP;

   void SetupTestProblem(string test_id, USERFCN0 *test_problem, 
                      INITFCN *init_problem);
   void update_model(int, int, ColumnVector) {}

After an argument check, initialize MPI. This does not need to be done within an "ifdef", but if you want the option of also building a serial version of your problem, then it should be. (Note: An argument check is used here because this example is set up to work with multiple problems. Such a check is not required by OPT++.)

   int main (int argc, char* argv[])
   {
     if (argc != 3) {
        cout << "Usage: tstpds problem_name ndim\n";
        exit(1);
     }

     #ifdef WITH_MPI
        int me;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);
     #endif

Define the variables.

     int i, j;
     int ndim;
     double perturb;

     static char *schemefilename = {"myscheme"};

     USERFCN0 test_problem;
     INITFCN  init_problem;

     string test_id;

     test_id = argv[1];
     ndim    = atoi(argv[2]);

     ColumnVector x(ndim);
     ColumnVector vscale(ndim);
     Matrix init_simplex(ndim,ndim+1);

     // Setup the test problem
     // test_problem is a pointer to the function (fcn) to optimize
     // init_problem is a pointer to the function that initializes fcn
     // test_id is a character string identifying the test problem

     SetupTestProblem(test_id, &test_problem, &init_problem);

Now set up the output file. If you are running in parallel, you may want to designate an output file for each processor; otherwise, the output from all of the processors will be indiscriminately intertwined in a single file. If the function evaluation does any file I/O, you should set up a working directory for each processor and then have each process chdir (or something comparable) into its corresponding directory. Each working directory should have a copy of the input file(s) needed by the function evaluation. If the function evaluation requires file I/O and working directories are not used, it will not work properly.
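As an illustration of the working-directory setup described above, the sketch below creates and enters a per-processor directory. The helper `enter_rank_dir` and the `work.<rank>` naming scheme are hypothetical, not part of OPT++, and in a real run the rank would come from MPI_Comm_rank.

```cpp
#include <cstdio>
#include <cstring>
#include <sys/stat.h>   // mkdir (POSIX)
#include <unistd.h>     // chdir, getcwd (POSIX)

// Hypothetical helper: create and enter a per-processor working
// directory so that any file I/O done by the function evaluation
// stays isolated from the other processes.
static bool enter_rank_dir(int rank) {
  char dirname[64];
  snprintf(dirname, sizeof(dirname), "work.%d", rank);
  mkdir(dirname, 0755);   // ignore the error if it already exists
  // Any input files the function evaluation reads should be copied
  // into dirname before changing into it.
  return chdir(dirname) == 0;
}
```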

     char status_file[80];
     strcpy(status_file, test_id.c_str());
     #ifdef WITH_MPI
        sprintf(status_file, "%s.out.%d", test_id.c_str(), me);
     #endif

Set up the problem.

     //  Create an OptppArray of Constraints 
     OptppArray<Constraint> arrayOfConstraints;

     //  Create an EMPTY compound constraint 
     CompoundConstraint constraints(arrayOfConstraints);  
     //  Create a constrained Nonlinear problem object 
     NLF0 nlp(ndim,test_problem, init_problem, &constraints);         

Set up a PDS algorithm object. Some of the algorithmic parameters are common to all OPT++ algorithms.

     OptPDS objfcn(&nlp);
     objfcn.setOutputFile(status_file, 0);
     ostream* optout = objfcn.getOutputFile();
     *optout << "Test problem: " << test_id << endl;
     *optout << "Dimension   : " << ndim    << endl;

Other algorithmic parameters are specific to PDS. Here we set the size of the search pattern to be considered at each iteration and the scale of the initial simplex. We explicitly define the initial simplex here, but there are also built-in options. Finally, we tell the algorithm that we need to create a scheme file that contains the search pattern, and we give it the name of the file (one of the variables defined above).
     objfcn.setSSS(256);

     vscale = 1.0;
     objfcn.setScale(vscale);

     x = nlp.getXc();
     for (i=1; i <= ndim; i++) {
       for (j=1; j <= ndim+1; j++) {
         init_simplex(i,j) = x(i);
       }
     }
     for (i=1; i <= ndim; i++) {
       perturb = x(i)*.01;
       init_simplex(i,i+1) = x(i) + perturb;
     }
     objfcn.setSimplexType(4);
     objfcn.setSimplex(init_simplex);

     objfcn.setCreateFlag();
     objfcn.setSchemeFileName(schemefilename);

Optimize and clean up.

     objfcn.optimize();

     objfcn.printStatus("Solution from PDS");

     objfcn.cleanup();


Finally, it is necessary to shut down MPI.

     #ifdef WITH_MPI
        MPI_Finalize();
     #endif

Next Section: Trust-Region with Parallel Direct Search | Back to Parallel Optimization

Last revised September 14, 2006.
