Timestamp: 03/18/10 15:46:17
Author: dmdunla
Comment:
Changes from v9 to v10:
Poblano is a Matlab toolbox of large-scale algorithms for unconstrained nonlinear optimization problems. The algorithms in Poblano require only first-order derivative information (e.g., gradients for scalar-valued objective functions), and therefore can scale to very large problems. The driving application for Poblano development has been tensor decompositions in data analysis applications (bibliometric analysis, social network analysis, chemometrics, etc.).

Poblano optimizers find local minimizers of scalar-valued objective functions taking vector inputs. The gradient (i.e., first derivative) of the objective function is required for all Poblano optimizers. The optimizers converge to a stationary point where the gradient is approximately zero. A line search satisfying the strong Wolfe conditions is used to guarantee global convergence of the Poblano optimizers. The optimization methods in Poblano include several nonlinear conjugate gradient methods (Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel), a limited-memory quasi-Newton method using BFGS updates to approximate second-order derivative information, and a truncated Newton method using finite differences to approximate second-order derivative information.

(v10 removes the v9 sentence "The current version of Poblano supports only unconstrained optimization.")
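As a minimal sketch of how these optimizers are invoked (based on Poblano's documented calling convention, in which an objective function returns both the function value and the gradient; the function name `sin_objective` and the starting point here are illustrative, not from this page):

```matlab
% Minimize f(x) = sum(sin(x)) from a starting point using Poblano's
% nonlinear conjugate gradient method; lbfgs and tn share this interface.
x0  = pi / 4;
out = ncg(@sin_objective, x0);
out.X   % stationary point found (gradient approximately zero)
out.F   % objective value at that point

% Poblano objective functions return the value f and the gradient g.
function [f, g] = sin_objective(x)
    f = sum(sin(x));
    g = cos(x);
end
```

Since only the gradient is supplied, the quasi-Newton and truncated Newton methods build their second-order information internally, as described above.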

== Starting Points ==