# Learning the Nonlinear Optimization Solver IPOPT
[TOC]
An introduction to the nonlinear optimization solver IPOPT, its installation, and its official C++ interface.
## Overview
Full name: Interior Point Optimizer.

It solves nonlinear programs of the form

$$
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g^L \le g(x) \le g^U, \\
& x^L \le x \le x^U
\end{aligned}
$$

License: EPL (Eclipse Public License), an open-source license. Commercial use is free of charge, but modifications to the IPOPT source code must be published.
## Installation Prerequisites
- BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage)
- A linear solver for sparse symmetric indefinite matrices; exploiting sparse symmetric structure greatly improves performance
- A C++ compiler
- The HSL package (linear solvers tailored for IPOPT): the basic MA27 solver (no parallel support) is totally free, while the more advanced solvers require an academic or commercial license
## Usage
IPOPT offers C++, C, Java, and Fortran interfaces, an AMPL modeling interface, and interfaces for several scripting languages (Python, MATLAB, etc.). It is supported on Linux/Unix environments and on Windows using Msys2/MinGW.
Major third-party interfaces:

- ADOL-C (automatic differentiation): facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++.
- APMonitor: MATLAB, Python, and web interfaces to Ipopt for Android, Linux, macOS, and Windows.
- CppAD (automatic differentiation): given a C++ algorithm that computes function values, CppAD generates an algorithm that computes the corresponding derivative values (of arbitrary order, using either forward or reverse mode).
## Installation (Ubuntu 16.04)
- apt packages:

```shell
sudo apt-get install gcc g++ gfortran git patch wget pkg-config liblapack-dev libmetis-dev
```
- BLAS and LAPACK: provided by the `liblapack-dev` package installed above.
- HSL (Harwell Subroutine Library)

  Register on the official website to download the source. Also clone the repository below; unpack the downloaded HSL package, rename it to `coinhsl` (e.g. `coinhsl-archive-2021.05.05` => `coinhsl`), copy it into the `ThirdParty-HSL` folder, and then build and install.

```shell
git clone https://github.com/coin-or-tools/ThirdParty-HSL.git
```
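The clone above is typically followed by the standard coin-or autotools sequence; a hedged sketch (assuming the renamed `coinhsl` folder has already been copied in):

```shell
# Hedged sketch: typical coin-or autotools build for ThirdParty-HSL.
# Assumes the HSL sources were unpacked and renamed to ./coinhsl first.
cd ThirdParty-HSL
./configure
make
sudo make install
```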
- Install IPOPT

  Download and unpack the source, then create a build directory:

```shell
mkdir build
```
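The full out-of-source build typically looks like this (a hedged sketch of the standard autotools sequence, run from the unpacked IPOPT source directory):

```shell
# Hedged sketch: standard out-of-source autotools build for IPOPT,
# run from the unpacked source directory (e.g. Ipopt-releases-3.14.2).
mkdir -p build
cd build
../configure
make
make test
sudo make install
```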
The output of `make test` should look like:

```shell
Running unitTests...
```
- Checking that the installation succeeded

  Go to the following location in the IPOPT source tree and test with the official example:

```shell
cd Ipopt-releases-3.14.2/build/examples/Cpp_example
```
## The Official C++ Interface
IPOPT needs the following information:

- Problem dimensions
  - number of variables
  - number of constraints
- Problem bounds
  - variable bounds
  - constraint bounds
- Initial starting point
  - Initial values for the primal $x$ variables.
  - Initial values for the multipliers (only required for the warm start option), e.g. the multipliers $\lambda_i$ of the constraints $g(x)$ that appear in the Hessian of the Lagrangian.
- Problem structure (sparse; IPOPT uses the triplet format, i.e. each nonzero entry is a row index, a column index, and a value packed together in one data unit)
  - number of nonzeros in the Jacobian of the constraints
  - number of nonzeros in the Hessian of the Lagrangian function
  - sparsity structure of the Jacobian of the constraints
  - sparsity structure of the Hessian of the Lagrangian function
- Evaluation of problem functions, at a given point ($x$, $\lambda$, $\sigma_f$) coming from Ipopt:
  - Objective function, $f(x)$.
  - Gradient of the objective, $\nabla f(x)$.
  - Constraint function values, $g(x)$.
  - Jacobian of the constraints, $\nabla g(x)^T$.
  - Hessian of the Lagrangian function, $\sigma_f \nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x)$ (not required if a quasi-Newton option is chosen to approximate the second derivatives, at some cost in performance).
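To make the Lagrangian Hessian formula above concrete, here is a minimal self-contained sketch; the objective, constraint, and evaluation point are invented for illustration and are not part of IPOPT:

```cpp
#include <array>

// Toy problem (hypothetical, for illustration only):
//   f(x)  = x0^2 * x1   => Hess f  = [[2*x1, 2*x0], [2*x0, 0]]
//   g0(x) = x0^2 + x1^2 => Hess g0 = [[2, 0], [0, 2]]
using Mat2 = std::array<std::array<double, 2>, 2>;

// Assemble sigma_f * Hess(f) + lambda0 * Hess(g0) at the point x.
Mat2 lagrangian_hessian(const std::array<double, 2>& x,
                        double sigma_f, double lambda0)
{
    Mat2 hf = {{{2.0 * x[1], 2.0 * x[0]}, {2.0 * x[0], 0.0}}};
    Mat2 hg = {{{2.0, 0.0}, {0.0, 2.0}}};
    Mat2 h{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            h[i][j] = sigma_f * hf[i][j] + lambda0 * hg[i][j];
    return h;
}
```

With $\sigma_f = 1$ and $\lambda_0 = 0.5$ at $x = (1, 2)$, this yields $[[5, 2], [2, 1]]$: the objective curvature plus half of the constraint curvature.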
### Three Main Classes

- Problem representation -> the pure virtual base class `Ipopt::TNLP`.
- The executable (the main function) -> the `Ipopt::IpoptApplication` class.
- Memory management (object deletion) -> the `Ipopt::SmartPtr` class.
Definition of the number of nonzeros of a sparse matrix: the number of nonzeros is the total number of elements that may ever be nonzero, not just those that are nonzero at the starting point.
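As a quick illustration of the triplet convention (the struct and function names here are mine, not IPOPT's), declaring the full lower triangle of a symmetric $n \times n$ Hessian as structurally nonzero looks like this:

```cpp
#include <vector>

// Triplet (coordinate) format: one (row, col) pair per structural nonzero.
struct SparsityPattern {
    std::vector<int> iRow;
    std::vector<int> jCol;
};

// Declare the full lower triangle of an n x n symmetric matrix as
// structurally nonzero (0-based C-style indices), as one would for
// nnz_h_lag = n*(n+1)/2 in get_nlp_info.
SparsityPattern lower_triangle_structure(int n)
{
    SparsityPattern p;
    for (int row = 0; row < n; ++row)
        for (int col = 0; col <= row; ++col) {
            p.iRow.push_back(row);
            p.jCol.push_back(col);
        }
    return p;
}
```

Entries that happen to be zero at the starting point are still declared, matching the "may ever be nonzero" rule above.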
### Functions

See the official reference: https://coin-or.github.io/Ipopt/INTERFACES.html
#### Ipopt::TNLP::get_nlp_info (problem dimensions)

```cpp
virtual bool get_nlp_info(
   Index&          n,
   Index&          m,
   Index&          nnz_jac_g,
   Index&          nnz_h_lag,
   IndexStyleEnum& index_style
) = 0;
```

Parameters:

- `n` (out) Storage for the number of variables $x$.
- `m` (out) Storage for the number of constraints $g(x)$.
- `nnz_jac_g` (out) Storage for the number of nonzero entries in the Jacobian.
- `nnz_h_lag` (out) Storage for the number of nonzero entries in the Hessian.
- `index_style` (out) Storage for the index style (0-based C style or 1-based Fortran style).
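A sketch of what an implementation might look like for a hypothetical problem with 4 variables, 2 constraints, a dense 2×4 Jacobian, and a dense lower-triangular Hessian. The `Index`/`IndexStyleEnum` stubs only stand in for IPOPT's own types so the snippet is self-contained; real code would derive from `Ipopt::TNLP` and use its headers:

```cpp
// Stub typedefs standing in for Ipopt's own types (illustration only).
using Index = int;
enum IndexStyleEnum { C_STYLE = 0, FORTRAN_STYLE = 1 };

// get_nlp_info for a made-up problem: 4 variables, 2 constraints,
// dense Jacobian (2*4 = 8 nonzeros) and dense lower-triangular
// Hessian (4*5/2 = 10 nonzeros).
bool get_nlp_info(Index& n, Index& m,
                  Index& nnz_jac_g, Index& nnz_h_lag,
                  IndexStyleEnum& index_style)
{
    n = 4;
    m = 2;
    nnz_jac_g = 8;
    nnz_h_lag = 10;
    index_style = C_STYLE; // 0-based row/column indices
    return true;
}
```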
#### Ipopt::TNLP::get_bounds_info (variable and constraint bounds)

An equality constraint is expressed by setting its lower and upper bounds equal.

```cpp
virtual bool get_bounds_info(
   Index   n,
   Number* x_l,
   Number* x_u,
   Index   m,
   Number* g_l,
   Number* g_u
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem.
- `x_l` (out) the lower bounds $x^L$ for the variables $x$.
- `x_u` (out) the upper bounds $x^U$ for the variables $x$.
- `m` (in) the number of constraints $g(x)$ in the problem.
- `g_l` (out) the lower bounds $g^L$ for the constraints $g(x)$.
- `g_u` (out) the upper bounds $g^U$ for the constraints $g(x)$.
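A sketch with the same kind of stub typedefs, showing how an equality constraint is written as equal bounds. The bound values mirror IPOPT's well-known HS071 example ($1 \le x_i \le 5$, $g_0(x) \ge 25$, $g_1(x) = 40$), and a very large upper bound (IPOPT treats values at or above `nlp_upper_bound_inf`, default $10^{19}$, as $+\infty$) leaves $g_0$ unbounded above:

```cpp
// Stub typedefs standing in for Ipopt's own types (illustration only).
using Index = int;
using Number = double;

// Bounds for a made-up 4-variable, 2-constraint problem:
//   1 <= x_i <= 5 for all variables,
//   g_0(x) >= 25 (inequality), g_1(x) = 40 (equality via g_l == g_u).
bool get_bounds_info(Index n, Number* x_l, Number* x_u,
                     Index m, Number* g_l, Number* g_u)
{
    (void)m;
    for (Index i = 0; i < n; ++i) {
        x_l[i] = 1.0;
        x_u[i] = 5.0;
    }
    g_l[0] = 25.0;
    g_u[0] = 2.0e19;  // treated as +infinity by IPOPT
    g_l[1] = 40.0;    // equality constraint: lower bound ...
    g_u[1] = 40.0;    // ... equals upper bound
    return true;
}
```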
#### Ipopt::TNLP::get_starting_point (initial point)

```cpp
virtual bool get_starting_point(
   Index   n,
   bool    init_x,
   Number* x,
   bool    init_z,
   Number* z_L,
   Number* z_U,
   Index   m,
   bool    init_lambda,
   Number* lambda
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `init_x` (in) if true, this method must provide an initial value for $x$.
- `x` (out) the initial values for the primal variables $x$.
- `init_z` (in) if true, this method must provide an initial value for the bound multipliers $z^L$ and $z^U$.
- `z_L` (out) the initial values for the bound multipliers $z^L$.
- `z_U` (out) the initial values for the bound multipliers $z^U$.
- `m` (in) the number of constraints $g(x)$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `init_lambda` (in) if true, this method must provide an initial value for the constraint multipliers $\lambda$.
- `lambda` (out) the initial values for the constraint multipliers $\lambda$.
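A cold-start sketch (stub typedefs again; the initial guess is arbitrary): only `x` is initialized, and the method refuses warm-start requests it does not support:

```cpp
// Stub typedefs standing in for Ipopt's own types (illustration only).
using Index = int;
using Number = double;

// Starting point for a made-up 4-variable problem. For a cold start
// IPOPT asks only for x (init_z and init_lambda are false); multiplier
// estimates are needed only for the warm start option.
bool get_starting_point(Index n, bool init_x, Number* x,
                        bool init_z, Number* z_L, Number* z_U,
                        Index m, bool init_lambda, Number* lambda)
{
    (void)z_L; (void)z_U; (void)m; (void)lambda;
    if (init_z || init_lambda)
        return false;  // this sketch supports cold starts only
    if (init_x)
        for (Index i = 0; i < n; ++i)
            x[i] = 1.0;  // arbitrary initial guess
    return true;
}
```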
#### Ipopt::TNLP::eval_f (objective function)

In general `new_x` can be ignored (it is false when the point was already seen): the eval_* results are cached and are not recomputed for identical inputs. The same holds for the methods below.

```cpp
virtual bool eval_f(
   Index         n,
   const Number* x,
   bool          new_x,
   Number&       obj_value
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) the values for the primal variables $x$ at which the objective function $f(x)$ is to be evaluated.
- `new_x` (in) false if any evaluation method (eval_*) was previously called with the same values in $x$, true otherwise. This can be helpful when users have efficient implementations that calculate multiple outputs at once. Ipopt internally caches results from the TNLP, and generally this flag can be ignored.
- `obj_value` (out) storage for the value of the objective function $f(x)$.
#### Ipopt::TNLP::eval_grad_f (gradient of the objective)

Method to request the gradient of the objective function.

```cpp
virtual bool eval_grad_f(
   Index         n,
   const Number* x,
   bool          new_x,
   Number*       grad_f
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) the values for the primal variables $x$ at which the gradient $\nabla f(x)$ is to be evaluated.
- `new_x` (in) false if any evaluation method (eval_*) was previously called with the same values in `x`, true otherwise; see also `TNLP::eval_f`.
- `grad_f` (out) array to store values of the gradient of the objective function $\nabla f(x)$. The gradient array is in the same order as the $x$ variables (i.e., the gradient of the objective with respect to `x[2]` should be put in `grad_f[2]`).

Returns: true if success, false otherwise.
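A sketch of a matching `eval_f`/`eval_grad_f` pair for an invented objective $f(x) = x_0^2 + 2x_1^2$, illustrating that `grad_f` uses the same ordering as `x` (stub typedefs, illustration only):

```cpp
// Stub typedefs standing in for Ipopt's own types (illustration only).
using Index = int;
using Number = double;

// Toy objective f(x) = x0^2 + 2*x1^2 (hypothetical).
bool eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
    (void)n; (void)new_x;  // new_x can generally be ignored
    obj_value = x[0] * x[0] + 2.0 * x[1] * x[1];
    return true;
}

// Gradient in the same ordering as x: grad_f[i] = df/dx_i.
bool eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
    (void)n; (void)new_x;
    grad_f[0] = 2.0 * x[0];
    grad_f[1] = 4.0 * x[1];
    return true;
}
```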
#### Ipopt::TNLP::eval_g (constraint function values)

Method to request the constraint values.

```cpp
virtual bool eval_g(
   Index         n,
   const Number* x,
   bool          new_x,
   Index         m,
   Number*       g
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) the values for the primal variables $x$ at which the constraint functions $g(x)$ are to be evaluated.
- `new_x` (in) false if any evaluation method (eval_*) was previously called with the same values in `x`, true otherwise; see also `TNLP::eval_f`.
- `m` (in) the number of constraints $g(x)$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `g` (out) array to store constraint function values $g(x)$; do not add or subtract the bound values $g^L$ or $g^U$.

Returns: true if success, false otherwise.
#### Ipopt::TNLP::eval_jac_g (Jacobian of the constraints)

The Jacobian is passed in the sparse triplet format. On the first call, `values` is NULL and this method must fill in the row and column indices (the sparsity structure); on later calls, `iRow` and `jCol` are NULL and the method must fill in the values.

```cpp
virtual bool eval_jac_g(
   Index         n,
   const Number* x,
   bool          new_x,
   Index         m,
   Index         nele_jac,
   Index*        iRow,
   Index*        jCol,
   Number*       values
) = 0;
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) first call: NULL; later calls: the values for the primal variables $x$ at which the constraint Jacobian $\nabla g(x)^T$ is to be evaluated.
- `new_x` (in) false if any evaluation method (eval_*) was previously called with the same values in `x`, true otherwise; see also `TNLP::eval_f`.
- `m` (in) the number of constraints $g(x)$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `nele_jac` (in) the number of nonzero elements in the Jacobian; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `iRow` (out) first call: array of length nele_jac to store the row indices of entries in the Jacobian of the constraints; later calls: NULL.
- `jCol` (out) first call: array of length nele_jac to store the column indices of entries in the Jacobian of the constraints; later calls: NULL.
- `values` (out) first call: NULL; later calls: array of length nele_jac to store the values of the entries in the Jacobian of the constraints.

Returns: true if success, false otherwise.
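The two-phase protocol can be sketched for an invented single constraint $g_0(x) = x_0 x_1$ (dense $1 \times 2$ Jacobian, two nonzeros): the structure is returned when `values` is NULL, the numbers otherwise (stub typedefs, illustration only):

```cpp
#include <cstddef>

// Stub typedefs standing in for Ipopt's own types (illustration only).
using Index = int;
using Number = double;

// Jacobian of the single constraint g0(x) = x0 * x1 (hypothetical):
// dg0/dx0 = x1, dg0/dx1 = x0, so 2 structural nonzeros in row 0.
bool eval_jac_g(Index n, const Number* x, bool new_x,
                Index m, Index nele_jac,
                Index* iRow, Index* jCol, Number* values)
{
    (void)n; (void)new_x; (void)m; (void)nele_jac;
    if (values == nullptr) {
        // First call: return only the sparsity structure (x is NULL).
        iRow[0] = 0; jCol[0] = 0;
        iRow[1] = 0; jCol[1] = 1;
    } else {
        // Later calls: iRow/jCol are NULL, fill in the values at x.
        values[0] = x[1];
        values[1] = x[0];
    }
    return true;
}
```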
#### Ipopt::TNLP::eval_h (Hessian of the Lagrangian of $f(x)$ and $g(x)$, given values for $x$, $\sigma_f$, and $\lambda$)

```cpp
virtual bool eval_h(
   Index         n,
   const Number* x,
   bool          new_x,
   Number        obj_factor,
   Index         m,
   const Number* lambda,
   bool          new_lambda,
   Index         nele_hess,
   Index*        iRow,
   Index*        jCol,
   Number*       values
)
```

Parameters:

- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) first call: NULL; later calls: the values for the primal variables $x$ at which the Hessian is to be evaluated.
- `new_x` (in) false if any evaluation method (eval_*) was previously called with the same values in `x`, true otherwise; see also `TNLP::eval_f`.
- `obj_factor` (in) factor $\sigma_f$ in front of the objective term in the Hessian.
- `m` (in) the number of constraints $g(x)$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `lambda` (in) the values for the constraint multipliers $\lambda$ at which the Hessian is to be evaluated.
- `new_lambda` (in) false if any evaluation method was previously called with the same values in lambda, true otherwise.
- `nele_hess` (in) the number of nonzero elements in the Hessian; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `iRow` (out) first call: array of length nele_hess to store the row indices of entries in the Hessian; later calls: NULL.
- `jCol` (out) first call: array of length nele_hess to store the column indices of entries in the Hessian; later calls: NULL.
- `values` (out) first call: NULL; later calls: array of length nele_hess to store the values of the entries in the Hessian.

Returns: true if success, false otherwise.
#### Ipopt::TNLP::finalize_solution (called at the end of the optimization to report the final solution and solver status)

```cpp
virtual void finalize_solution(
   SolverReturn               status,
   Index                      n,
   const Number*              x,
   const Number*              z_L,
   const Number*              z_U,
   Index                      m,
   const Number*              g,
   const Number*              lambda,
   Number                     obj_value,
   const IpoptData*           ip_data,
   IpoptCalculatedQuantities* ip_cq
) = 0;
```

Parameters:

- `status` (in) gives the status of the algorithm:
  - SUCCESS: Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (which can be specified by options).
  - MAXITER_EXCEEDED: Maximum number of iterations exceeded (can be specified by an option).
  - CPUTIME_EXCEEDED: Maximum number of CPU seconds exceeded (can be specified by an option).
  - STOP_AT_TINY_STEP: Algorithm proceeds with very little progress.
  - STOP_AT_ACCEPTABLE_POINT: Algorithm stopped at a point that was converged, not to "desired" tolerances, but to "acceptable" tolerances (see the acceptable-… options).
  - LOCAL_INFEASIBILITY: Algorithm converged to a point of local infeasibility. Problem may be infeasible.
  - USER_REQUESTED_STOP: The user call-back function TNLP::intermediate_callback returned false, i.e., the user code requested a premature termination of the optimization.
  - DIVERGING_ITERATES: It seems that the iterates diverge.
  - RESTORATION_FAILURE: Restoration phase failed, algorithm doesn't know how to proceed.
  - ERROR_IN_STEP_COMPUTATION: An unrecoverable error occurred while Ipopt tried to compute the search direction.
  - INVALID_NUMBER_DETECTED: Algorithm received an invalid number (such as NaN or Inf) from the NLP; see also option check_derivatives_for_nan_inf.
  - INTERNAL_ERROR: An unknown internal error occurred.
- `n` (in) the number of variables $x$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `x` (in) the final values for the primal variables.
- `z_L` (in) the final values for the lower bound multipliers.
- `z_U` (in) the final values for the upper bound multipliers.
- `m` (in) the number of constraints $g(x)$ in the problem; it will have the same value that was specified in `TNLP::get_nlp_info`.
- `g` (in) the final values of the constraint functions.
- `lambda` (in) the final values of the constraint multipliers.
- `obj_value` (in) the final value of the objective function.
- `ip_data` (in) provided for expert users.
- `ip_cq` (in) provided for expert users.