/** \page tstpds Optimizing with a Parallel Direct Search Method

OptPDS is an implementation of a derivative-free algorithm for
unconstrained optimization.  The search direction is driven solely by
function information.  In addition, OptPDS is easy to implement on
parallel machines.

In this example, we highlight the steps needed to take advantage of
parallel capabilities and to set up PDS.  Further information and
examples for setting up and solving a problem can be found in the
<a href="SetUp.html"> Setting up and Solving an Optimization
Problem</a> section.

First, include the header files and subroutine declarations.

<table><tr><td>\code
   #ifdef HAVE_CONFIG_H
   #include "OPT++_config.h"
   #endif

   #include <string>
   #include <iostream>
   #include <fstream>
   #ifdef HAVE_STD
   #include <cstdio>
   #else
   #include <stdio.h>
   #endif

   #ifdef WITH_MPI
   #include "mpi.h"
   #endif

   #include "OptPDS.h"
   #include "NLF.h"
   #include "CompoundConstraint.h"
   #include "BoundConstraint.h"
   #include "OptppArray.h"
   #include "cblas.h"
   #include "ioformat.h"
   #include "tstfcn.h"

   using NEWMAT::ColumnVector;
   using NEWMAT::Matrix;

   using namespace std;
   using namespace OPTPP;

   void SetupTestProblem(string test_id, USERFCN0 *test_problem,
                         INITFCN *init_problem);

   // no-op model update routine
   void update_model(int, int, ColumnVector) {}
\endcode</table>

After an argument check, initialize MPI.  This does not need to be
done within an "ifdef", but if you want the option of also building a
serial version of your problem, then it should be.  (Note: An argument
check is used here because this example is set up to work with
multiple problems.  Such a check is not required by OPT++.)

<table><tr><td>\code
   int main (int argc, char* argv[])
   {
     if (argc != 3) {
        cout << "Usage: tstpds problem_name ndim\n";
        exit(1);
     }

     #ifdef WITH_MPI
        int me;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);
     #endif
\endcode</table>

Define the variables.

<table><tr><td>\code
     int i, j;
     int ndim;
     double perturb;

     static char *schemefilename = {"myscheme"};

     USERFCN0 test_problem;
     INITFCN  init_problem;

     string test_id;

     test_id = argv[1];
     ndim    = atoi(argv[2]);

     ColumnVector x(ndim);
     ColumnVector vscale(ndim);
     Matrix init_simplex(ndim,ndim+1);

     // Setup the test problem
     // test_problem is a pointer to the function (fcn) to optimize
     // init_problem is a pointer to the function that initializes fcn
     // test_id is a character string identifying the test problem

     SetupTestProblem(test_id, &test_problem, &init_problem);
\endcode</table>

Now set up the output file.  If you are running in parallel, you may
want to designate an output file for each processor.  Otherwise, the
output from all of the processors will be indiscriminately intertwined
in a single file.  If the function evaluation does any file I/O, you
should set up a working directory for each processor and then have
each process chdir (or something comparable) into its corresponding
directory.  Each working directory should have a copy of the input
file(s) needed by the function evaluation.  If the function evaluation
requires file I/O and working directories are not used, the function
evaluation will not work properly.
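The example itself does not create working directories, but a minimal
sketch of the idea, assuming a POSIX system with mkdir() and chdir()
available and using the MPI rank "me" obtained above, might look like
the following.  The directory name is purely illustrative.

<table><tr><td>\code
     // Illustrative only -- not part of the original example.
     // Give each MPI rank its own working directory so that file I/O
     // performed by the function evaluation does not collide.
     #ifdef WITH_MPI
        char workdir[80];
        sprintf(workdir, "pdsdir.%d", me);   // hypothetical directory name
        mkdir(workdir, 0755);                // needs <sys/stat.h>
        chdir(workdir);                      // needs <unistd.h>
        // place a copy of the input file(s) needed by the function
        // evaluation in workdir before calling the optimizer
     #endif
\endcode</table>

In the example itself, only a per-processor output file name is
constructed: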
<table><tr><td>\code
     char status_file[80];
     strcpy(status_file,test_id.c_str());
     #ifdef WITH_MPI
        sprintf(status_file,"%s.out.%d", status_file, me);
     #else
        strcat(status_file,".out");
     #endif
\endcode</table>

Set up the problem.

<table><tr><td>\code
     //  Create an OptppArray of Constraints
     OptppArray<Constraint> arrayOfConstraints;

     //  Create an EMPTY compound constraint
     CompoundConstraint constraints(arrayOfConstraints);

     //  Create a constrained Nonlinear problem object
     NLF0 nlp(ndim,test_problem, init_problem, &constraints);
\endcode</table>

Set up a PDS algorithm object.  Some of the algorithmic parameters are
common to all OPT++ algorithms.

<table><tr><td>\code
     OptPDS objfcn(&nlp);

     objfcn.setOutputFile(status_file, 0);
     ostream* optout = objfcn.getOutputFile();
     *optout << "Test problem: " << test_id << endl;
     *optout << "Dimension   : " << ndim    << endl;

     objfcn.setFcnTol(1.49012e-8);
     objfcn.setMaxIter(500);
     objfcn.setMaxFeval(10000);
\endcode</table>

Other algorithmic parameters are specific to PDS.  Here we set the
size of the search pattern to be considered at each iteration and the
scale of the initial simplex.  We explicitly define the initial
simplex here, but there are also built-in options.  Finally, we tell
the algorithm that we need to create a scheme file that contains the
search pattern, and we give it the name of the file (one of the
variables defined above).

<table><tr><td>\code
     objfcn.setSSS(256);

     vscale = 1.0;
     objfcn.setScale(vscale);

     x = nlp.getXc();
     for (i=1; i <= ndim; i++) {
       for (j=1; j <= ndim+1; j++) {
         init_simplex(i,j) = x(i);
       }
     }
     for (i=1; i <= ndim; i++) {
       perturb = x(i)*.01;
       init_simplex(i,i+1) = x(i) + perturb;
     }
     objfcn.setSimplexType(4);
     objfcn.setSimplex(init_simplex);

     objfcn.setCreateFlag();
     objfcn.setSchemeFileName(schemefilename);
\endcode</table>

Optimize and clean up.

<table><tr><td>\code
     objfcn.optimize();

     objfcn.printStatus("Solution from PDS");
     objfcn.cleanup();
\endcode</table>

Finally, it is necessary to shut down MPI.

<table><tr><td>\code
     #ifdef WITH_MPI
        MPI_Finalize();
     #endif
   }
\endcode</table>

<p> <a href="tsttrpds.html"> Next Section: Trust-Region with Parallel
Direct Search </a> | <a href="ParallelOptimization.html"> Back to
Parallel Optimization </a> </p>

Last revised <em> September 14, 2006 </em>.

*/
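The test problems selected by SetupTestProblem() are supplied by
tstfcn.h and are not shown on this page.  For orientation only, a
zero-order test problem for NLF0 is a pair of routines matching the
INITFCN and USERFCN0 signatures declared above.  The routine names and
the Rosenbrock-style objective in the sketch below are illustrative,
not the actual contents of tstfcn.C.

   // Illustrative sketch only -- not taken from tstfcn.C.
   #include "NLF.h"                  // OPT++ problem definitions (NLPFunction, etc.)
   using NEWMAT::ColumnVector;
   using namespace OPTPP;

   // INITFCN: set the starting point
   void init_illustrative(int ndim, ColumnVector& x)
   {
     for (int i = 1; i <= ndim; i++)
       x(i) = (i % 2 == 1) ? -1.2 : 1.0;
   }

   // USERFCN0: return a function value only (no derivatives)
   void illustrative_fcn(int ndim, const ColumnVector& x, double& fx, int& result)
   {
     fx = 0.0;
     for (int i = 1; i < ndim; i++) {
       double t1 = x(i+1) - x(i)*x(i);
       double t2 = 1.0 - x(i);
       fx += 100.0*t1*t1 + t2*t2;     // Rosenbrock-style terms
     }
     result = NLPFunction;            // flag that a function value was computed
   }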
