Sandia National Laboratories

Changes between Version 10 and Version 11 of WikiStart


Timestamp: 01/30/12 16:13:31 (7 years ago)
Author: dmdunla
Comment: --

Legend: Unmodified | Added | Removed | Modified
  • WikiStart

    v10 v11

    Unmodified:
      Poblano optimizers find local minimizers of scalar-valued objective functions taking vector inputs. The gradient (i.e., first derivative) of the objective function is required for all Poblano optimizers. The optimizers converge to a stationary point where the gradient is approximately zero. A line search satisfying the strong Wolfe conditions is used to guarantee global convergence of the Poblano optimizers. The optimization methods in Poblano include several nonlinear conjugate gradient methods (Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel), a limited-memory quasi-Newton method using BFGS updates to approximate second-order derivative information, and a truncated Newton method using finite differences to approximate second-order derivative information.

    Added:
      == News ==

       * 01-30-2012: Poblano Version v1.1 released (Release Notes)

    Unmodified:
      == Starting Points ==
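
To make the solver list in the paragraph above concrete, here is a minimal MATLAB sketch of calling the three Poblano solvers it names (ncg, lbfgs, tn). The objective function, starting point, and output field names are illustrative assumptions based on the toolbox documentation, not content of this page, and should be checked against the installed version.

{{{
% Hedged sketch: assumes the Poblano Toolbox is on the MATLAB path.
% Example objective (made up for illustration):
%   f(x) = sum(sin(3*x)),  gradient g(x) = 3*cos(3*x)
% Any function handle returning both f and g can be used the same way.
fun = @(x) deal(sum(sin(3*x)), 3*cos(3*x));

x0 = pi/4 * ones(5,1);        % starting point (column vector)

out_ncg   = ncg(fun, x0);     % nonlinear conjugate gradient
out_lbfgs = lbfgs(fun, x0);   % limited-memory BFGS
out_tn    = tn(fun, x0);      % truncated Newton

% Each solver returns a results structure; the field names below
% (X = final iterate, F = final objective value) follow the Poblano
% documentation and are assumptions to verify locally.
disp(out_lbfgs.X)
disp(out_lbfgs.F)
}}}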