KASKADE 7 development version
An iterative solver for small instances of a particular class of quadratic programs.
#include <qp.hh>
An iterative solver for small instances of a particular class of quadratic programs.
The solver addresses problems of the form
\[ \min_x \frac{1}{2} x^T Ax + c^T x \quad \text{s.t. } Bx \le b, \]
where \( A \) is positive semidefinite, and approximates the solution by minimizing the Powell-Hestenes-Rockafellar augmented Lagrangian
\[ \min_{x} \frac{1}{2} x^T Ax + c^T x + \frac{\gamma}{2}\left\|Bx-\Big(b-\frac{\lambda}{\gamma}\Big)\right\|_+^2 \]
using a semismooth Newton method with exact linesearch.
The penalty parameter \( \gamma \) affects both solution quality and solution effort. For \( \gamma\to\infty \), the original problem is approximated better, but the conditioning of the penalized problem deteriorates.
A good multiplier estimate \(\lambda\) improves the solution accuracy significantly. Compared to the standard augmented Lagrangian, where a linear term \( \lambda^T (Bx-b) \) is added, the PHR version with shifted constraints has the advantage that strongly active constraints remain strongly active in the inner bound constrained QPs that have to be solved. In the standard formulation, they become weakly active in the course of the AL iteration, which can induce frequent switching of constraints between active and inactive.
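A minimal usage sketch for the dense case is given below. It assumes that QPStructure is a member of namespace Kaskade and that the matrices \( A \) and \( B \) have already been assembled with matching Dune::FieldMatrix entry blocks; these details are assumptions, not part of this reference.

// Minimal usage sketch for the dense case. Assumptions (not taken from this page):
// QPStructure is a member of namespace Kaskade, and A and B have been assembled
// elsewhere with matching Dune::FieldMatrix entry blocks.
#include <qp.hh>

#include <tuple>

using Solver = Kaskade::QPPenalizedSolver<2, Kaskade::QPStructure::DENSE, double>;

Solver::VectorX solveSmallQP(Solver::MatrixA const& A, Solver::MatrixB const& B,
                             Solver::VectorX const& c, Solver::VectorB const& b)
{
  Solver qp(A, B);       // A and B are copied internally
  qp.setPenalty(1e4);    // larger gamma: better approximation of the QP, worse conditioning

  // First solve without a multiplier estimate; the start iterate is x = 0.
  auto [x0, steps0] = qp.solve(c, b, 1e-8);

  // Re-estimate the multiplier from the approximate solution and solve again,
  // warm-started with x0.
  auto lambda = qp.multiplierEstimate(x0, c);
  auto [x, lambdaNew, steps] = qp.solve(c, b, lambda, 1e-10, x0);

  return x;
}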
Template Parameters

d | the dimension of the (vector valued) optimization variables
sparsity | the type of linear algebra data structures to use, either DENSE or SPARSE
Real | the real field type of matrix/vector entries. Explicit instantiations are provided for float and double.
Public Types

using VectorX = Dune::BlockVector< Dune::FieldVector< Real, d > >
using VectorB = Dune::BlockVector< Dune::FieldVector< Real, 1 > >
using MatrixA = std::conditional_t< sparsity==QPStructure::DENSE, DynamicMatrix< AEntry >, NumaBCRSMatrix< AEntry > >
using MatrixB = std::conditional_t< sparsity==QPStructure::DENSE, DynamicMatrix< BEntry >, NumaBCRSMatrix< BEntry > >

Public Member Functions

QPPenalizedSolver (MatrixA const &A, MatrixB const &B)
    Constructs the solver, providing the matrices \( A \) and \( B \).
QPPenalizedSolver (MatrixA const &A, MatrixB const &B, Real gamma)
    Constructs the solver, providing the matrices \( A \) and \( B \).
QPPenalizedSolver< d, sparsity, Real > & setMinSteps (int i)
    Sets the minimum number of projected gradient steps to perform.
QPPenalizedSolver< d, sparsity, Real > & setMaxSteps (int i)
    Sets the maximum number of projected gradient steps to perform.
QPPenalizedSolver< d, sparsity, Real > & setPenalty (Real gamma)
    Sets the penalty parameter \( \gamma \).
Real getPenalty () const
    Returns the used penalty parameter \( \gamma \).
std::tuple< VectorX, VectorB, int > solve (VectorX const &c, VectorB const &b, VectorB const &lambda, double tol) const
    Solves the QP approximately for the given right hand side vectors.
std::tuple< VectorX, VectorB, int > solve (VectorX const &c, VectorB const &b, VectorB lambda, double tol, VectorX const &x) const
    Solves the QP approximately for the given right hand side vectors.
std::tuple< VectorX, int > solve (VectorX const &c, VectorB const &b, double tol, VectorX const &x) const
std::tuple< VectorX, int > solve (VectorX const &c, VectorB const &b, double tol) const
VectorB multiplierEstimate (VectorX const &x, VectorX c) const
    Computes a least squares multiplier estimate.
Static Public Attributes

static constexpr QPStructure sparse = sparsity
    Gives info on whether dense or sparse arithmetic is used (i.e. the value of the sparsity template parameter used for this class).
using Kaskade::QPPenalizedSolver< d, sparsity, Real >::MatrixA = std::conditional_t<sparsity==QPStructure::DENSE, DynamicMatrix<AEntry>, NumaBCRSMatrix<AEntry> > |
using Kaskade::QPPenalizedSolver< d, sparsity, Real >::MatrixB = std::conditional_t<sparsity==QPStructure::DENSE, DynamicMatrix<BEntry>, NumaBCRSMatrix<BEntry> > |
using Kaskade::QPPenalizedSolver< d, sparsity, Real >::VectorB = Dune::BlockVector<Dune::FieldVector<Real,1> > |
using Kaskade::QPPenalizedSolver< d, sparsity, Real >::VectorX = Dune::BlockVector<Dune::FieldVector<Real,d> > |
Kaskade::QPPenalizedSolver< d, sparsity, Real >::QPPenalizedSolver ( MatrixA const & A, MatrixB const & B )
Constructs the solver, providing the matrices \( A \) and \( B \).
The argument matrices are copied internally, and are not referenced later.
Kaskade::QPPenalizedSolver< d, sparsity, Real >::QPPenalizedSolver ( MatrixA const & A, MatrixB const & B, Real gamma )
Constructs the solver, providing the matrices \( A \) and \( B \).
The argument matrices are copied internally, and are not referenced later.
VectorB Kaskade::QPPenalizedSolver< d, sparsity, Real >::multiplierEstimate ( VectorX const & x, VectorX c ) const
Computes a least squares multiplier estimate.
From the adjoint equation
\[ Ax+c - B^T \lambda = 0 \]
and a given approximate solution \( x \), the multiplier can be estimated by minimizing the adjoint equation residual. This results in
\[ BB^T \lambda = B(Ax+c). \]
This estimate, projected back to \( \lambda \le 0 \), is computed here.
The least squares multiplier estimate is well suited for obtaining an initial multiplier for the augmented Lagrangian method if only a good guess of the solution is available. Examples of this are discretized inequality constraints, where the multiplier structure can change, rendering previously computed multipliers useless (or at least requiring a suitable prolongation).
In contrast, the first order multiplier update works only if \( x \) is a minimizer of the penalized problem with known multiplier.
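As a concrete illustration of this use case, the following sketch re-estimates the multiplier after a change of discretization. The prolongation helper and the solver object for the new discretization are hypothetical placeholders, not part of this interface.

// Sketch: warm start after the discretization (and hence the constraint structure)
// has changed. prolongate is a hypothetical callable that transfers the old primal
// iterate to the new discretization; qpNew is a QPPenalizedSolver built from the
// new matrices A and B.
template <class Solver, class Prolongation>
auto warmStartAfterRefinement(Solver const& qpNew,
                              typename Solver::VectorX const& c,
                              typename Solver::VectorB const& b,
                              typename Solver::VectorX const& xOld,
                              Prolongation const& prolongate)
{
  typename Solver::VectorX x0 = prolongate(xOld);  // good guess of the new solution
  auto lambda0 = qpNew.multiplierEstimate(x0, c);  // least squares estimate, projected to lambda <= 0
  return qpNew.solve(c, b, lambda0, 1e-10, x0);    // returns (x, lambda, iteration count)
}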
QPPenalizedSolver< d, sparsity, Real > & Kaskade::QPPenalizedSolver< d, sparsity, Real >::setMaxSteps ( int i )
Sets the maximum number of projected gradient steps to perform.
QPPenalizedSolver< d, sparsity, Real > & Kaskade::QPPenalizedSolver< d, sparsity, Real >::setMinSteps ( int i )
Sets the minimum number of projected gradient steps to perform.
QPPenalizedSolver< d, sparsity, Real > & Kaskade::QPPenalizedSolver< d, sparsity, Real >::setPenalty ( Real gamma )
Sets the penalty parameter \( \gamma \).
The default on construction is \( \gamma = 10^3 \).
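The following sketch shows one possible way to combine setPenalty with the multiplier updates returned by solve in a simple continuation loop. The iteration count, the factor 10, and the tolerances are illustrative choices, not functionality provided by the class.

#include <tuple>

// Sketch of an outer augmented Lagrangian loop with penalty continuation.
template <class Solver>
typename Solver::VectorX augmentedLagrangianLoop(Solver& qp,
                                                 typename Solver::VectorX const& c,
                                                 typename Solver::VectorB const& b)
{
  typename Solver::VectorX x(c.size());
  typename Solver::VectorB lambda(b.size());
  x = 0.0;
  lambda = 0.0;

  for (int outer = 0; outer < 10; ++outer)
  {
    // The returned multiplier already contains the first order update, so it can be
    // fed directly into the next penalized solve.
    std::tie(x, lambda, std::ignore) = qp.solve(c, b, lambda, 1e-10, x);

    // Tighten the approximation of the original QP.
    qp.setPenalty(10 * qp.getPenalty());
  }
  return x;
}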
std::tuple< VectorX, int > Kaskade::QPPenalizedSolver< d, sparsity, Real >::solve ( VectorX const & c, VectorB const & b, double tol ) const
std::tuple< VectorX, int > Kaskade::QPPenalizedSolver< d, sparsity, Real >::solve ( VectorX const & c, VectorB const & b, double tol, VectorX const & x ) const
std::tuple< VectorX, VectorB, int > Kaskade::QPPenalizedSolver< d, sparsity, Real >::solve ( VectorX const & c, VectorB const & b, VectorB const & lambda, double tol ) const
Solves the QP approximately for the given right hand side vectors.
The starting point is \( x = 0 \).
Parameters
c | the linear term of the objective
b | the right hand side of the constraints
lambda | the multiplier estimate. Due to the problem structure, \( \lambda \le 0 \) holds.
tol | the tolerance for termination
The iteration is terminated as soon as the projected gradient \( s \) is small, i.e. \( \|s\|_2 \le \mathsf{tol}\). Together with the solution vector, a multiplier estimate based on the first order update \( \lambda \gets \lambda - (Bx-s) \) is computed.
If the QP is infeasible, an InfeasibleProblemException is thrown.
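If infeasibility is a realistic outcome, the call can be guarded as in the sketch below. Only the exception's name is taken from this documentation; its namespace qualification is an assumption.

#include <optional>
#include <tuple>

// Sketch: turning the infeasibility exception into an empty result instead of
// letting it propagate. The namespace qualification of InfeasibleProblemException
// is an assumption.
template <class Solver>
std::optional<typename Solver::VectorX>
trySolve(Solver const& qp,
         typename Solver::VectorX const& c,
         typename Solver::VectorB const& b,
         typename Solver::VectorB const& lambda,
         double tol)
{
  try
  {
    return std::get<0>(qp.solve(c, b, lambda, tol));  // keep only the approximate solution
  }
  catch (Kaskade::InfeasibleProblemException const&)
  {
    return std::nullopt;  // the constraint system Bx <= b admits no feasible point
  }
}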
std::tuple< VectorX, VectorB, int > Kaskade::QPPenalizedSolver< d, sparsity, Real >::solve ( VectorX const & c, VectorB const & b, VectorB lambda, double tol, VectorX const & x ) const
Solves the QP approximately for the given right hand side vectors.
Parameters
x | the start iterate
c | the linear term of the objective
b | the right hand side of the constraints
lambda | the multiplier estimate. Due to the problem structure, \( \lambda \le 0 \) holds.
tol | the tolerance for termination
The iteration is terminated as soon as the projected gradient \( s \) is small, i.e. \( \|s\|_2 \le \mathsf{tol}\). Together with the solution vector, a multiplier estimate based on the first order update \( \lambda \gets \lambda - (Bx-s) \) is computed.
If the QP is infeasible, an InfeasibleProblemException is thrown.