Constrained least squares


In constrained least squares one solves a linear least squares problem with an additional constraint on the solution.[1][2] That is, the unconstrained equation $\mathbf{X}\boldsymbol\beta = \mathbf{y}$ must be fit as closely as possible (in the least squares sense) while ensuring that some other property of $\boldsymbol\beta$ is maintained.

There are often special-purpose algorithms for solving such problems efficiently. Some examples of constraints are given below; a short code sketch illustrating several of them follows the list:

  • Equality-constrained least squares: the elements of $\boldsymbol\beta$ must exactly satisfy $\mathbf{L}\boldsymbol\beta = \mathbf{d}$ (see Ordinary least squares).
  • Stochastic (linearly) constrained least squares: the elements of $\boldsymbol\beta$ must satisfy $\mathbf{L}\boldsymbol\beta = \mathbf{d} + \boldsymbol\nu$, where $\boldsymbol\nu$ is a vector of random variables such that $\operatorname{E}(\boldsymbol\nu) = \mathbf{0}$ and $\operatorname{E}(\boldsymbol\nu \boldsymbol\nu^T) = \tau^2 \mathbf{I}$. This effectively imposes a prior distribution for $\boldsymbol\beta$ and is therefore equivalent to Bayesian linear regression.[3]
  • Regularized least squares: the elements of $\boldsymbol\beta$ must satisfy $\|\mathbf{L}\boldsymbol\beta - \mathbf{y}\| \le \alpha$ (choosing $\alpha$ in proportion to the noise standard deviation of $\mathbf{y}$ prevents over-fitting).
  • Non-negative least squares (NNLS): the vector $\boldsymbol\beta$ must satisfy the vector inequality $\boldsymbol\beta \ge \mathbf{0}$ defined componentwise; that is, each component must be either positive or zero.
  • Box-constrained least squares: the vector $\boldsymbol\beta$ must satisfy the vector inequalities $\mathbf{b}_\ell \le \boldsymbol\beta \le \mathbf{b}_u$, each of which is defined componentwise.
  • Integer-constrained least squares: all elements of $\boldsymbol\beta$ must be integers (instead of real numbers).
  • Phase-constrained least squares: all elements of $\boldsymbol\beta$ must be real numbers, or multiplied by the same complex number of unit modulus.
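
To make several of these concrete, the sketch below solves small instances of the equality-constrained, non-negative, and box-constrained problems using NumPy and SciPy's general-purpose routines (numpy.linalg.solve, scipy.optimize.nnls, scipy.optimize.lsq_linear). The data, constraint matrix, and bounds here are made up purely for illustration; real applications would typically use the special-purpose algorithms mentioned above.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))  # design matrix (illustrative data)
y = rng.standard_normal(20)       # observations

# Equality-constrained least squares via the Lagrange (KKT) system:
# minimize ||X b - y||^2 subject to L b = d.
L = np.array([[1.0, 1.0, 1.0]])   # example constraint: coefficients sum to 1
d = np.array([1.0])
k = L.shape[0]
kkt = np.block([[X.T @ X, L.T],
                [L, np.zeros((k, k))]])
rhs = np.concatenate([X.T @ y, d])
beta_eq = np.linalg.solve(kkt, rhs)[:X.shape[1]]

# Non-negative least squares: minimize ||X b - y|| subject to b >= 0.
beta_nnls, _ = nnls(X, y)

# Box-constrained least squares: minimize ||X b - y|| subject to
# -1 <= b <= 1, componentwise.
beta_box = lsq_linear(X, y, bounds=(-1.0, 1.0)).x
```

The bordered (KKT) system assumes $\mathbf{X}$ has full column rank and $\mathbf{L}$ has full row rank, so that the system is nonsingular; the inequality-constrained problems, by contrast, generally require iterative solvers such as the two used here.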

If the constraint only applies to some of the variables, the mixed problem may be solved using separable least squares by letting $\mathbf{X} = [\mathbf{X}_1 \ \mathbf{X}_2]$ and $\boldsymbol\beta^T = [\boldsymbol\beta_1^T \ \boldsymbol\beta_2^T]$ represent the unconstrained (1) and constrained (2) components. Then substituting the least-squares solution for $\boldsymbol\beta_1$, i.e.,

$$\hat{\boldsymbol\beta}_1 = \mathbf{X}_1^+ (\mathbf{y} - \mathbf{X}_2 \boldsymbol\beta_2)$$

(where $^+$ indicates the Moore–Penrose pseudoinverse) back into the original expression gives (following some rearrangement) an equation that can be solved as a purely constrained problem in $\boldsymbol\beta_2$:

$$\mathbf{P}\mathbf{X}_2 \boldsymbol\beta_2 = \mathbf{P}\mathbf{y},$$

where $\mathbf{P} := \mathbf{I} - \mathbf{X}_1 \mathbf{X}_1^+$ is the projection matrix onto the orthogonal complement of the column space of $\mathbf{X}_1$. Following the constrained estimation of $\hat{\boldsymbol\beta}_2$, the vector $\hat{\boldsymbol\beta}_1$ is obtained from the expression above.
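
A minimal numerical sketch of this reduction, assuming NumPy and SciPy are available, with a box constraint on $\boldsymbol\beta_2$ standing in for the constrained subproblem and purely illustrative data:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
X1 = rng.standard_normal((30, 2))   # unconstrained block of the design
X2 = rng.standard_normal((30, 2))   # constrained block of the design
y = rng.standard_normal(30)

# P = I - X1 X1^+ projects onto the orthogonal complement of col(X1).
X1_pinv = np.linalg.pinv(X1)
P = np.eye(len(y)) - X1 @ X1_pinv

# Solve the reduced, purely constrained problem P X2 b2 = P y
# (here with an illustrative box constraint 0 <= b2 <= 1).
beta2 = lsq_linear(P @ X2, P @ y, bounds=(0.0, 1.0)).x

# Back-substitute to recover the unconstrained block:
# beta1 = X1^+ (y - X2 beta2).
beta1 = X1_pinv @ (y - X2 @ beta2)
```

Any of the constrained solvers from the earlier sketch could replace lsq_linear here, since the reduction leaves the constraint on $\boldsymbol\beta_2$ untouched.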

References

  1. Amemiya, Takeshi (1985). "Model 1 with Linear Constraints". Advanced Econometrics. Oxford: Basil Blackwell. pp. 20–26. ISBN 0-631-15583-X. 
  2. Boyd, Stephen; Vandenberghe, Lieven (2018). Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares. Cambridge University Press. ISBN 978-1-316-51896-0. https://books.google.com/books?id=IApaDwAAQBAJ&q=%22Constrained+least+squares%22. 
  3. Fomby, Thomas B.; Hill, R. Carter; Johnson, Stanley R. (1988). "Use of Prior Information". Advanced Econometric Methods (Corrected softcover ed.). New York: Springer-Verlag. pp. 80–121. ISBN 0-387-96868-7.