Class reference
ParOpt is an interior point optimizer
- paropt.ParOpt.getOptionsInfo()
Get a dictionary that contains all of the option values and information that are used in any of the ParOpt optimizers.
- paropt.ParOpt.printOptionSummary()
Print a summary of all the options available within all ParOpt optimizers.
- paropt.ParOpt.unpack_checkpoint(filename)
Convert the checkpoint file to usable Python objects
- paropt.ParOpt.unpack_mma_output(filename)
Unpack the parameters from an MMA output file
- paropt.ParOpt.unpack_output(filename)
Unpack the parameters from the paropt output file and return them in a list of numpy arrays. This also returns a small string description from the file itself. This code relies on the fixed-width nature of the file, which is guaranteed.
- paropt.ParOpt.unpack_tr_2nd_output(filename)
Unpack the actual/predicted reduction of the function and constraints. Note that this only works when output_level > 0, which must be checked by the user.
- paropt.ParOpt.unpack_tr_output(filename)
Unpack the parameters from the paropt output file and return them in a list of numpy arrays. This also returns a small string description from the file itself. This code relies on the fixed-width nature of the file, which is guaranteed.
- class paropt.ParOpt.Optimizer
- getOptimizedPoint()
Get the optimized solution in PVec form for interpolation purposes
- class paropt.ParOpt.PVec
- copyValues(vec)
Copy values from the provided PVec
- dot(vec)
Compute the dot product with the provided PVec
- l1norm()
Compute the l1 norm of the vector
- maxabs()
Compute the linfty norm of the vector
- norm()
Compute the l2 norm of the vector
- zeroEntries()
Zero the values
- class paropt.ParOpt.InteriorPoint
- getOptimizedPoint()
Get the optimized solution in PVec form for interpolation purposes
- getOptimizedSlacks()
Get the optimized slack variables from the problem
- class paropt.ParOpt.TrustRegion
- getOptimizedPoint()
Get the optimized solution in PVec form for interpolation purposes
-
class ParOptProblem : public ParOptBase
Subclassed by CyParOptBlockProblem, ParOptInfeasSubproblem, ParOptMMA, ParOptSparseProblem, ParOptTrustRegionSubproblem
Public Functions
-
ParOptProblem(MPI_Comm _comm)
Create an empty ParOptProblem class without defining the problem layout.
- Parameters:
_comm -- is the MPI communicator
-
virtual ParOptVec *createDesignVec()
Create a new distributed design vector
- Returns:
a new distributed design vector
-
virtual ParOptVec *createConstraintVec()
Create a new distributed sparse constraint vector
- Returns:
a new distributed sparse constraint vector
-
virtual ParOptQuasiDefMat *createQuasiDefMat() = 0
Create a new quasi-definite matrix object
- Returns:
a new quasi-definite matrix object
-
MPI_Comm getMPIComm()
Get the communicator for the problem
- Returns:
the MPI communicator for the problem
-
void setProblemSizes(int _nvars, int _ncon, int _nwcon)
Set the problem size
- Parameters:
_nvars -- the number of local design variables
_ncon -- the global number of dense constraints
_nwcon -- the local number of sparse separable constraints
-
void setNumInequalities(int _ninequality, int _nwinequality)
Set the number of sparse or dense inequalities
- Parameters:
_ninequality -- the number of inequality constraints
_nwinequality -- the block size of the separable constraints
-
void getProblemSizes(int *_nvars, int *_ncon, int *_nwcon)
Get the problem size
- Parameters:
_nvars -- the number of local design variables
_ncon -- the global number of dense constraints
_nwcon -- the local number of sparse separable constraints
-
void getNumInequalities(int *_ninequality, int *_nwinequality)
Get the number of inequalities
- Parameters:
_ninequality -- the number of dense inequality constraints
_nwinequality -- the block size of the separable constraints
-
virtual int isSparseInequality()
Are the sparse constraints inequalities? Default is true.
- Returns:
flag indicating if the sparse constraints are inequalities
-
virtual int useLowerBounds()
Indicate whether to use the lower variable bounds. Default is true.
- Returns:
flag indicating whether to use lower variable bound.
-
virtual int useUpperBounds()
Indicate whether to use the upper variable bounds. Default is true.
- Returns:
flag indicating whether to use upper variable bounds.
-
virtual void getVarsAndBounds(ParOptVec *x, ParOptVec *lb, ParOptVec *ub) = 0
Get the initial variable values and bounds for the problem
- Parameters:
x -- is a vector of the initial design variable values
lb -- are the lower variable bounds
ub -- are the upper variable bounds
-
virtual int evalObjCon(ParOptVec *x, ParOptScalar *fobj, ParOptScalar *cons) = 0
Evaluate the objective and constraints.
Given the design variables x, compute the scalar objective value and the dense constraints. The constraints and objective must be consistent across all processors.
- Parameters:
x -- is the design variable vector
fobj -- is the objective value at x
cons -- is the array of constraint values at x
- Returns:
zero on success, non-zero fail flag on error
-
virtual int evalObjConGradient(ParOptVec *x, ParOptVec *g, ParOptVec **Ac) = 0
Evaluate the objective and constraint gradients.
Given the design variables x, compute the gradient of the objective and dense constraint functions. This call is made only after a call to evaluate the objective and dense constraint functions.
- Parameters:
x -- is the design variable vector
g -- is the gradient of the objective at x
Ac -- are the gradients of the dense constraints at x
- Returns:
zero on success, non-zero fail flag on error
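A minimal sketch of a concrete problem may help tie the pure virtual methods together. The class below is hypothetical (the name ToyProblem, the problem data, and the use of ParOptVec::getArray are assumptions not documented on this page); it minimizes x0^2 + x1^2 subject to the single dense constraint x0 + x1 - 1 >= 0, with no sparse constraints, so createQuasiDefMat() simply returns NULL under the assumption that the matrix is not needed when _nwcon = 0.

#include "ParOptProblem.h"

// Hypothetical two-variable problem: minimize x0^2 + x1^2
// subject to the dense inequality constraint c(x) = x0 + x1 - 1 >= 0.
class ToyProblem : public ParOptProblem {
 public:
  ToyProblem(MPI_Comm comm) : ParOptProblem(comm) {
    // Two local design variables, one dense constraint, no sparse constraints
    setProblemSizes(2, 1, 0);
  }

  // No sparse constraints, so no quasi-definite matrix is provided
  ParOptQuasiDefMat *createQuasiDefMat() {
    return NULL;
  }

  // Initial point and variable bounds
  void getVarsAndBounds(ParOptVec *x, ParOptVec *lb, ParOptVec *ub) {
    ParOptScalar *xvals, *lbvals, *ubvals;
    x->getArray(&xvals);
    lb->getArray(&lbvals);
    ub->getArray(&ubvals);
    for (int i = 0; i < 2; i++) {
      xvals[i] = 0.5;
      lbvals[i] = -10.0;
      ubvals[i] = 10.0;
    }
  }

  // Objective and dense constraint values
  int evalObjCon(ParOptVec *x, ParOptScalar *fobj, ParOptScalar *cons) {
    ParOptScalar *xvals;
    x->getArray(&xvals);
    *fobj = xvals[0] * xvals[0] + xvals[1] * xvals[1];
    cons[0] = xvals[0] + xvals[1] - 1.0;
    return 0;
  }

  // Objective gradient and dense constraint gradients
  int evalObjConGradient(ParOptVec *x, ParOptVec *g, ParOptVec **Ac) {
    ParOptScalar *xvals, *gvals, *avals;
    x->getArray(&xvals);
    g->getArray(&gvals);
    Ac[0]->getArray(&avals);
    gvals[0] = 2.0 * xvals[0];
    gvals[1] = 2.0 * xvals[1];
    avals[0] = 1.0;
    avals[1] = 1.0;
    return 0;
  }
};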
-
virtual int evalHvecProduct(ParOptVec *x, ParOptScalar *z, ParOptVec *zw, ParOptVec *px, ParOptVec *hvec)
Evaluate the product of the Hessian with a given vector.
This function is only called if Hessian-vector products are requested by the optimizer. By default, no implementation is provided.
- Parameters:
x -- is the design variable vector
z -- is the array of multipliers for the dense constraints
zw -- is the vector of multipliers for the sparse constraints
px -- is the direction vector
hvec -- is the output vector hvec = H(x,z,zw)*px
- Returns:
zero on success, non-zero flag on error
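For the hypothetical ToyProblem sketched above, the Hessian of the Lagrangian is simply 2I (the constraint is linear), so a Hessian-vector product override reduces to a scaled copy. This is only a sketch and reuses the assumed getArray access:

// Member function added to the ToyProblem sketch above.
// hvec = H(x, z, zw)*px = 2*px, since H = 2*I and the constraint is linear.
int evalHvecProduct(ParOptVec *x, ParOptScalar *z, ParOptVec *zw,
                    ParOptVec *px, ParOptVec *hvec) {
  ParOptScalar *pvals, *hvals;
  px->getArray(&pvals);
  hvec->getArray(&hvals);
  for (int i = 0; i < 2; i++) {
    hvals[i] = 2.0 * pvals[i];
  }
  return 0;
}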
-
virtual int evalHessianDiag(ParOptVec *x, ParOptScalar *z, ParOptVec *zw, ParOptVec *hdiag)
Evaluate the diagonal of the Hessian.
This is only used by MMA.
-
virtual void computeQuasiNewtonUpdateCorrection(ParOptVec *x, ParOptScalar *z, ParOptVec *zw, ParOptVec *s, ParOptVec *y)
Compute a correction or modification of the quasi-Newton update.
The vectors s and y represent the step and gradient difference, respectively between iterations. By default, no correction or modification is performed. However, some problems may benefit by modifying the gradient difference term.
- Parameters:
x -- is the design variable vector
z -- is the array of multipliers for the dense constraints
zw -- is the vector of multipliers for the sparse constraints
s -- The step in the quasi-Newton update
y -- The gradient difference in the quasi-Newton update
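As one illustration of such a modification (this is not a scheme provided by ParOpt), a problem could guard against loss of positive curvature by shifting the gradient difference toward the step whenever s^T y is small. The dot and axpy operations and the real-valued ParOptScalar comparison below are assumptions about the ParOptVec interface and the build configuration:

// Illustrative correction, assuming a real-valued ParOptScalar build:
// if the curvature s^T y falls below 0.01*s^T s, shift y along s so that
// the quasi-Newton update retains positive curvature.
void computeQuasiNewtonUpdateCorrection(ParOptVec *x, ParOptScalar *z,
                                        ParOptVec *zw, ParOptVec *s,
                                        ParOptVec *y) {
  ParOptScalar sTy = s->dot(y);
  ParOptScalar sTs = s->dot(s);
  if (sTs > 0.0 && sTy < 0.01 * sTs) {
    // After the shift, s^T y = 0.01*s^T s
    y->axpy((0.01 * sTs - sTy) / sTs, s);
  }
}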
-
virtual void evalSparseCon(ParOptVec *x, ParOptVec *out)
Evaluate the sparse constraints.
Given the design variable vector x, compute the sparse constraints.
- Parameters:
x -- is the design variable vector
out -- is the sparse constraint vector
-
virtual void addSparseJacobian(ParOptScalar alpha, ParOptVec *x, ParOptVec *px, ParOptVec *out)
Compute the Jacobian-vector product of the sparse constraints.
This code computes the action of the Jacobian of the sparse constraint matrix on the input vector px, to compute out = alpha*J(x)*px.
- Parameters:
alpha -- is a scalar factor
x -- is the design variable vector
px -- is the input direction vector
out -- is the sparse product vector
-
virtual void addSparseJacobianTranspose(ParOptScalar alpha, ParOptVec *x, ParOptVec *pzw, ParOptVec *out)
Compute the transpose Jacobian-vector product of the sparse constraints.
This code computes the action of the transpose Jacobian of the sparse constraint matrix on the input vector pzw, to compute out = alpha*J(x)^{T}*pzw.
- Parameters:
alpha -- is a scalar factor
x -- is the design variable vector
pzw -- is the input direction vector
out -- is the sparse product vector
-
virtual void addSparseInnerProduct(ParOptScalar alpha, ParOptVec *x, ParOptVec *cvec, ParOptScalar *A)
Add the scaled inner product of the sparse constraint Jacobian to the matrix, such that A += alpha*J(x)*cvec*J(x)^{T}, where cvec is a diagonal matrix
- Parameters:
alpha -- is a scalar factor
x -- is the design variable vector
cvec -- are input components of the diagonal matrix
A -- is the output block-diagonal matrix
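The three sparse-constraint routines above only require the action of the Jacobian, never an assembled matrix. As an illustration, suppose each sparse constraint were simply cw_i(x) = x_i, so that J(x) = I with a block size of one; the products then reduce to scaled copies. The member functions below are a sketch under that assumption (copyValues, axpy and getArray are assumed from the vector interface, and the layout of A assumes one entry per local sparse constraint):

// Member functions a problem with cw(x) = x (J(x) = I, block size 1)
// could add. These are illustrative only.
void evalSparseCon(ParOptVec *x, ParOptVec *out) {
  out->copyValues(x);  // cw(x) = x
}

void addSparseJacobian(ParOptScalar alpha, ParOptVec *x,
                       ParOptVec *px, ParOptVec *out) {
  out->axpy(alpha, px);  // accumulate alpha*J(x)*px = alpha*px
}

void addSparseJacobianTranspose(ParOptScalar alpha, ParOptVec *x,
                                ParOptVec *pzw, ParOptVec *out) {
  out->axpy(alpha, pzw);  // accumulate alpha*J(x)^{T}*pzw = alpha*pzw
}

void addSparseInnerProduct(ParOptScalar alpha, ParOptVec *x,
                           ParOptVec *cvec, ParOptScalar *A) {
  // A += alpha*J(x)*cvec*J(x)^{T}: with J = I and block size 1, this is
  // the scaled diagonal of cvec.
  ParOptScalar *cvals;
  int size = cvec->getArray(&cvals);
  for (int i = 0; i < size; i++) {
    A[i] += alpha * cvals[i];
  }
}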
-
void checkGradients(double dh, ParOptVec *x = NULL, int check_hvec_product = 0)
Check the objective and constraint gradients for this problem instance
- Parameters:
dh -- Finite difference step size used for verification
x -- The design vector at which point to check the gradient (can be NULL)
check_hvec_product -- Flag indicating whether to also check the Hessian-vector product implementation
-
class ParOptInteriorPoint : public ParOptBase
Public Functions
-
ParOptInteriorPoint(ParOptProblem *_prob, ParOptOptions *_options = NULL)
ParOpt interior point optimization constructor.
This function allocates and initializes the data that is required for parallel optimization. This includes initialization of the variables, allocation of the matrices and the BFGS approximate Hessian. This code also sets the default parameters for optimization. These parameters can be modified through member functions.
- Parameters:
_prob -- the optimization problem
_options -- the options object
-
~ParOptInteriorPoint()
Free the data allocated during the creation of the object
-
ParOptOptions *getOptions()
Get the options object associated with the interior point method
- Returns:
The options object
-
void resetProblemInstance(ParOptProblem *_prob)
Reset the problem instance.
The new problem instance must have the same number of constraints, the same number of design variables, and the same design vector distribution as the original problem.
- Parameters:
_prob -- the new ParOptProblem instance
-
int optimize(const char *checkpoint = NULL)
Perform the optimization.
This is the main function that performs the actual optimization. The optimization uses an interior-point method. The barrier parameter (mu/barrier_param) is controlled using a monotone approach where successive barrier problems are solved and the barrier parameter is subsequently reduced.
The method uses a quasi-Newton method where the Hessian is approximated using a limited-memory BFGS approximation. The special structure of the Hessian approximation is used to compute the updates. This computation relies on there being relatively few dense global inequality constraints (e.g. < 100).
The code also has the capability to handle very sparse linear constraints with the special structure that the rows of the constraints are nearly orthogonal. This capability is still under development.
- Parameters:
checkpoint -- the name of the checkpoint file (NULL if not needed)
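A typical driver ties these calls together: construct the problem, optionally verify its gradients, run the optimizer, and query the result through getOptimizedPoint() (documented below). The sketch reuses the hypothetical ToyProblem class from the ParOptProblem section; the header names are assumed from the class names, and object cleanup is omitted:

#include <mpi.h>
#include "ParOptInteriorPoint.h"

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);

  // Hypothetical problem class from the earlier ParOptProblem sketch
  ToyProblem *prob = new ToyProblem(MPI_COMM_WORLD);

  // Verify the objective/constraint gradients with finite differences
  prob->checkGradients(1e-6);

  // Create the interior point optimizer with the default options
  ParOptInteriorPoint *opt = new ParOptInteriorPoint(prob, NULL);

  // Run the optimization without writing checkpoint files
  opt->optimize();

  // Retrieve the optimized design variables and multipliers
  ParOptVec *x, *zw, *zl, *zu;
  ParOptScalar *z;
  opt->getOptimizedPoint(&x, &z, &zw, &zl, &zu);

  // (object cleanup is omitted in this sketch)
  MPI_Finalize();
  return 0;
}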
-
void getProblemSizes(int *_nvars, int *_ncon, int *_nwcon)
Retrieve the problem sizes from the underlying problem class
- Parameters:
_nvars -- the local number of variables
_ncon -- the number of global constraints
_nwcon -- the number of sparse constraints
-
void getOptimizedPoint(ParOptVec **_x, ParOptScalar **_z, ParOptVec **_zw, ParOptVec **_zl, ParOptVec **_zu)
Retrieve the values of the design variables and multipliers.
This call can be made during the course of an optimization, but changing the values in x/zw/zl/zu is not recommended and the behavior after doing so is not defined. Note that inputs that are NULL are not assigned. If no output is available, for instance if use_lower == False, then NULL is assigned to the output.
- Parameters:
_x -- the design variable values
_z -- the dense constraint multipliers
_zw -- the sparse constraint multipliers
_zl -- the lower bound multipliers
_zu -- the upper bound multipliers
-
void getOptimizedSlacks(ParOptScalar **_s, ParOptScalar **_t, ParOptVec **_sw, ParOptVec **_tw)
Retrieve the values of the optimized slack variables.
Note that the dense inequality constraints are formulated as
c(x) = s - t
where s, t > 0. And the sparse inequality constraints are formulated as:
cw(x) = sw - tw
where sw, tw > 0. When equality rather than inequality constraints are present, sw and tw may be NULL.
- Parameters:
_s -- the positive slack for the dense constraints
_t -- the negative slack for the dense constraints
_sw -- the positive slack variable vector for the sparse constraints
_tw -- the negative slack variable vector for the sparse constraints
-
void setPenaltyGamma(double gamma)
Set the penalty parameter for the l1 penalty function.
- Parameters:
gamma -- is the value of the penalty parameter
-
void setPenaltyGamma(const double *gamma)
Set the individual penalty parameters for the l1 penalty function.
- Parameters:
gamma -- is the array of penalty parameter values.
-
void setBFGSUpdateType(ParOptBFGSUpdateType bfgs_update)
Set the type of BFGS update.
- Parameters:
bfgs_update -- is the type of BFGS update to use
-
double getBarrierParameter()
Retrieve the barrier parameter
- Returns:
the barrier parameter value.
-
ParOptScalar getComplementarity()
Get the average of the complementarity products at the current point. This function call is collective on all processors.
- Returns:
the current complementarity.
-
void setQuasiNewton(ParOptCompactQuasiNewton *_qn)
Set the quasi-Newton update object.
- Parameters:
_qn -- the compact quasi-Newton approximation
-
void resetQuasiNewtonHessian()
Reset the Quasi-Newton Hessian approximation if it is used.
-
void resetDesignAndBounds()
Reset the design variables and bounds.
-
int writeSolutionFile(const char *filename)
Write out the design variables, Lagrange multipliers and slack variables to a binary file in parallel.
- Parameters:
filename -- is the name of the file to write
-
int readSolutionFile(const char *filename)
Read in the design variables, Lagrange multipliers and slack variables from a binary file.
This function requires the same problem structure as the original problem.
- Parameters:
filename -- is the name of the input file
Public Static Functions
-
static void addDefaultOptions(ParOptOptions *options)
Add the default options to the input options object
- Parameters:
options -- The input options argument
-
class ParOptTrustRegion : public ParOptBase
Public Functions
-
ParOptTrustRegion(ParOptTrustRegionSubproblem *_subproblem, ParOptOptions *_options = NULL)
A parallel optimizer with trust region globalization
The following code implements a trust region method. The algorithm uses an l1 penalty function for the constraints with an l-infinity (box) constraint for the trust region. The trust region sub-problems are solved at each step by an instance of the ParOptInteriorPoint optimizer.
- Parameters:
_subproblem -- the ParOptTrustRegionSubproblem class
_options -- The trust region options
-
~ParOptTrustRegion()
Delete the trust region object
-
void initialize()
Initialize the problem
-
void setPenaltyGamma(double gamma)
Set the penalty parameter for the l1 penalty function.
- Parameters:
gamma -- is the value of the penalty parameter
-
void setPenaltyGamma(const double *gamma)
Set the individual penalty parameters for the l1 penalty function.
- Parameters:
gamma -- is the array of penalty parameter values.
-
int getPenaltyGamma(const double **gamma)
Retrieve the penalty parameter values.
- Parameters:
gamma -- is the array of penalty parameter values.
-
void optimize(ParOptInteriorPoint *optimizer)
Perform the optimization
This performs all steps in the optimization: initialization, trust-region updates and quasi-Newton updates. It should be called once.
- Parameters:
optimizer -- the instance of the ParOptInteriorPoint optimizer
-
void getOptimizedPoint(ParOptVec **_x)
Get the optimized point from the subproblem class
- Parameters:
_x -- the values of the design variables at the optimized point
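A trust-region driver follows a similar pattern, except that the problem is first wrapped in a ParOptTrustRegionSubproblem that is handed to both the trust-region object and the interior-point solver used for the subproblems. The concrete subproblem class (ParOptQuadraticSubproblem), the quasi-Newton object (ParOptLBFGS) and the header names used below are assumptions about parts of the library not documented on this page, so they should be checked against the headers:

#include <mpi.h>
#include "ParOptInteriorPoint.h"
#include "ParOptTrustRegion.h"

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);

  // Hypothetical problem class from the earlier ParOptProblem sketch
  ToyProblem *prob = new ToyProblem(MPI_COMM_WORLD);

  // Wrap the problem in a trust-region subproblem. The quasi-Newton object
  // and ParOptQuadraticSubproblem are assumptions (not documented here).
  ParOptLBFGS *qn = new ParOptLBFGS(prob, 10);
  ParOptQuadraticSubproblem *subproblem =
      new ParOptQuadraticSubproblem(prob, qn);

  // Trust-region object and the interior-point solver used for each
  // trust-region subproblem, both with the default options
  ParOptTrustRegion *tr = new ParOptTrustRegion(subproblem, NULL);
  ParOptInteriorPoint *ip = new ParOptInteriorPoint(subproblem, NULL);

  // Initialize and run the optimization, then retrieve the optimized point
  tr->initialize();
  tr->optimize(ip);
  ParOptVec *x;
  tr->getOptimizedPoint(&x);

  // (object cleanup is omitted in this sketch)
  MPI_Finalize();
  return 0;
}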