Alternant matrix


In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if $f_1, f_2, \dots, f_n$ are functions from a set $X$ to a field $F$, and $\alpha_1, \alpha_2, \dots, \alpha_m \in X$, then the alternant matrix has size $m \times n$ and is defined by

$$M = \begin{bmatrix}
f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\
f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\
f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3) \\
\vdots & \vdots & \ddots & \vdots \\
f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m)
\end{bmatrix}$$

or, more compactly, $M_{ij} = f_j(\alpha_i)$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_j(\alpha) = \alpha^{j-1}$, and Moore matrices, for which $f_j(\alpha) = \alpha^{q^{j-1}}$.
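The definition $M_{ij} = f_j(\alpha_i)$ translates directly into code. Below is a minimal sketch in Python (the helper name `alternant` is my own); in the Vandermonde case $f_j(\alpha) = \alpha^{j-1}$ it agrees with NumPy's built-in `np.vander`.

```python
import numpy as np

def alternant(funcs, alphas):
    """Alternant matrix M with M[i, j] = funcs[j](alphas[i])."""
    return np.array([[f(a) for f in funcs] for a in alphas], dtype=float)

# Vandermonde special case: f_j(x) = x**(j-1), here with n = 3 columns.
alphas = [2.0, 3.0, 5.0]
funcs = [lambda x, j=j: x ** j for j in range(3)]
M = alternant(funcs, alphas)

# Same matrix as NumPy's Vandermonde constructor (increasing powers).
assert np.allclose(M, np.vander(alphas, increasing=True))
```

The Vandermonde determinant here is $\prod_{i<j}(\alpha_j - \alpha_i) = (3-2)(5-2)(5-3) = 6$, which `np.linalg.det(M)` reproduces up to rounding.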

Properties

  • The alternant can be used to check the linear independence of the functions $f_1, f_2, \dots, f_n$ in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi/2$. Then the alternant is the matrix $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ and the alternant determinant is $-1 \neq 0$. Therefore $M$ is invertible and the vectors $\{\sin(x), \cos(x)\}$ form a basis for their span: in particular, $\sin(x)$ and $\cos(x)$ are linearly independent.
  • Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi$. Then the alternant is $\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}$ and the alternant determinant is 0, but we have already seen that $\sin(x)$ and $\cos(x)$ are linearly independent.
  • Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers $A$ and $B$ for which $\frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)}$. Choosing $f_1(x) = \frac{1}{x+1}$, $f_2(x) = \frac{1}{x+2}$, $f_3(x) = \frac{1}{(x+1)(x+2)}$ and $(\alpha_1, \alpha_2, \alpha_3) = (1, 2, 3)$, we obtain the alternant $\begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \\ 1/4 & 1/5 & 1/20 \end{bmatrix}$, which row-reduces to $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$. Therefore, $(1, -1, -1)$ is in the nullspace of the matrix: that is, $f_1 - f_2 - f_3 = 0$. Moving $f_3$ to the other side of the equation gives the partial fraction decomposition $A = 1$, $B = -1$.
  • If $n = m$ and $\alpha_i = \alpha_j$ for some $i \neq j$, then the alternant determinant is zero (as a row is repeated).
  • If $n = m$ and the functions $f_j(x)$ are all polynomials, then $(\alpha_j - \alpha_i)$ divides the alternant determinant for all $1 \leq i < j \leq n$. In particular, if $V$ is a Vandermonde matrix, then $\prod_{i<j} (\alpha_j - \alpha_i) = \det V$ divides such polynomial alternant determinants. The ratio $\frac{\det M}{\det V}$ is therefore a polynomial in $\alpha_1, \dots, \alpha_n$ called the bialternant. The Schur polynomial $s_{(\lambda_1, \dots, \lambda_n)}$ is classically defined as the bialternant of the polynomials $f_j(x) = x^{\lambda_j}$.
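The two trigonometric examples above can be checked numerically. A short sketch:

```python
import numpy as np

def alternant(funcs, alphas):
    # M[i, j] = funcs[j](alphas[i])
    return np.array([[f(a) for f in funcs] for a in alphas])

funcs = [np.sin, np.cos]

# Sample points 0 and pi/2: determinant is -1, so sin and cos
# are linearly independent.
M1 = alternant(funcs, [0.0, np.pi / 2])
print(np.linalg.det(M1))   # close to -1

# Sample points 0 and pi: determinant is 0 even though the
# functions are independent -- a singular alternant proves nothing.
M2 = alternant(funcs, [0.0, np.pi])
print(np.linalg.det(M2))   # close to 0
```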
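The partial-fraction example can likewise be reproduced numerically: the right-singular vector for the zero singular value of the alternant spans its nullspace and recovers the dependence $f_1 - f_2 - f_3 = 0$. A sketch (variable names are my own):

```python
import numpy as np

f1 = lambda x: 1 / (x + 1)
f2 = lambda x: 1 / (x + 2)
f3 = lambda x: 1 / ((x + 1) * (x + 2))

alphas = [1.0, 2.0, 3.0]
M = np.array([[f(a) for f in (f1, f2, f3)] for a in alphas])

# Singular values come back in descending order, so the last row of
# Vt spans the (one-dimensional) nullspace of M.
_, s, Vt = np.linalg.svd(M)
v = Vt[-1]
v = v / v[0]           # scale so the first coordinate is 1
print(np.round(v, 6))  # proportional to (1, -1, -1)
```

The recovered vector $(1, -1, -1)$ is exactly the coefficient vector of the dependence $f_1 - f_2 - f_3 = 0$ derived above.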
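As a concrete instance of the bialternant, one common convention shifts the partition by the staircase, taking exponents $\lambda_j + n - j$, so that for $\lambda = (1, 0)$ and $n = 2$ the ratio is the Schur polynomial $s_{(1,0)}(x_1, x_2) = x_1 + x_2$. A numerical sketch, assuming that shifted-exponent convention (the helper name is my own):

```python
import numpy as np

def bialternant(lam, xs):
    """det(x_i ** (lam_j + n - j)) / det(x_i ** (n - j)), j = 1..n."""
    n = len(lam)
    exps = [lam[j] + n - 1 - j for j in range(n)]  # lambda_j + n - j, 0-based j
    num = np.array([[x ** e for e in exps] for x in xs])
    den = np.array([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return np.linalg.det(num) / np.linalg.det(den)

# s_(1,0)(x1, x2) = x1 + x2, so at (2, 3) the value is 5.
print(bialternant([1, 0], [2.0, 3.0]))  # close to 5.0
```

Both determinants use the same (decreasing) column ordering, so any sign from reordering cancels in the ratio.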
