Timestamp: 03/04/10 09:20:05
Author: dmdunla
Comment: Minor edits
Poblano is a Matlab toolbox of large-scale algorithms for nonlinear optimization. The algorithms in Poblano require only first-order derivative information (e.g., gradients for scalar-valued objective functions) and therefore can scale to very large problems. Poblano is a set of general-purpose methods for solving unconstrained nonlinear optimization problems. It has been applied to standard test problems covering a range of application areas. The driving application for Poblano development has been tensor decompositions in data analysis applications (bibliometric analysis, social network analysis, chemometrics, etc.).

Poblano optimizers find local minimizers of scalar-valued objective functions taking vector inputs. The current version of Poblano supports only unconstrained optimization. The gradient (i.e., first derivative) of the objective function is required for all Poblano optimizers. The optimizers converge to a stationary point where the gradient is approximately zero. A line search satisfying the strong Wolfe conditions is used to guarantee global convergence of the Poblano optimizers. The optimization methods in Poblano include several nonlinear conjugate gradient methods (Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel), a limited-memory quasi-Newton method using BFGS updates to approximate second-order derivative information, and a truncated Newton method using finite differences to approximate second-order derivative information.
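To illustrate the interface described above, here is a minimal sketch of calling a Poblano optimizer on a user-supplied objective. The objective function `myfun` and its formula are hypothetical examples written for this sketch; the key point, per the description above, is that the objective must return both the scalar function value and its gradient:

```matlab
% myfun.m -- hypothetical objective f(x) = sin(x'*x); Poblano optimizers
% require the function value and the gradient to be returned together.
function [f, g] = myfun(x)
f = sin(x' * x);          % scalar-valued objective of a vector input
g = 2 * cos(x' * x) * x;  % gradient (first-order derivative)
```

```matlab
% Minimize from a starting point using the nonlinear conjugate gradient
% solver; the other solvers are assumed to follow the same calling pattern.
x0  = randn(3, 1);        % vector starting point
out = ncg(@myfun, x0);    % returns a structure with the solution found
```

The optimizer evaluates `myfun` repeatedly, using the gradient both in the line search (to test the strong Wolfe conditions) and to decide convergence, stopping when the gradient is approximately zero at a stationary point.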

== Starting Points ==