Elimination Matrix

The elimination matrix $\mathbf{L}$, first mentioned by Tracy and Singh (1972) and later by Vetter (1975), was studied in depth by Magnus and Neudecker [1]. This matrix has properties that are crucial for converting the standard Kronecker product vector into the unique Kronecker product vector, as well as for converting non-redundant polynomial operators into their redundant counterparts.

Definition

The elimination matrix $\mathbf{L}_n$ is a unique matrix of size $\left(\tfrac{n(n+1)}{2} \times n^2\right)$ that extracts the unique elements from the vectorization of a symmetric matrix. For any symmetric matrix $\mathbf{S} \in \mathbb{R}^{n \times n}$, the vectorization $\mathrm{vec}(\mathbf{S})$ contains redundant elements due to symmetry. The elimination matrix $\mathbf{L}_n$ is defined such that:

\[ \mathrm{vech}(\mathbf{S}) = \mathbf{L}_n \mathrm{vec}(\mathbf{S})\]

where:

  • the operator $\mathrm{vec}(\mathbf{S})$ stacks all columns of $\mathbf{S}$ into a single vector of length $n^2$,
  • the operator $\mathrm{vech}(\mathbf{S})$ stacks only the elements on and below the main diagonal into a vector of length $\tfrac{n(n+1)}{2}$.

The elimination matrix effectively "eliminates" the redundant elements in $\mathrm{vec}(\mathbf{S})$, preserving only the unique components.
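
As a concrete illustration, the following minimal sketch verifies $\mathrm{vech}(\mathbf{S}) = \mathbf{L}_n \mathrm{vec}(\mathbf{S})$ for a $2 \times 2$ symmetric matrix, assuming that the package's elimat function (documented below) realizes the classical $\mathbf{L}_n$ when called with order 2:

using UniqueKronecker

n = 2
S = [1.0 2.0; 2.0 3.0]   # symmetric 2×2 matrix
L = elimat(n, 2)          # 3×4 elimination matrix L_2
L * vec(S)                # → [1.0, 2.0, 3.0], i.e., vech(S)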

Properties of the Elimination Matrix:

  • Idempotency: $\mathbf{L}_n \mathbf{L}_n^\top$ is idempotent; in fact, because the rows of $\mathbf{L}_n$ are distinct standard unit vectors, $\mathbf{L}_n \mathbf{L}_n^\top = \mathbf{I}$.
  • Construction: Each row of $\mathbf{L}_n$ contains a single one and zeros elsewhere, mapping elements from $\mathrm{vec}(\mathbf{S})$ to $\mathrm{vech}(\mathbf{S})$.
Note

Unfortunately, the elimination matrix is not formally defined in the literature for Kronecker products beyond order 2. However, this package generalizes the construction to higher orders in its implementation.

Converting Between Kronecker Products

In polynomial dynamical systems, we often deal with Kronecker powers of vectors, which can be redundant due to symmetric terms. The elimination matrix facilitates the conversion between the standard Kronecker product vector and the unique Kronecker product vector.

Standard Kronecker Power:

Given a vector $\mathbf{x} \in \mathbb{R}^n$, the $k$-th order Kronecker power is:

\[ \mathbf{x}^{[k]} = \underbrace{\mathbf{x} \otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x}}_{k \text{ times}} \in \mathbb{R}^{n^k}\]
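
In plain Julia, the $k$-th Kronecker power can be formed by repeatedly applying kron; the short sketch below uses only standard-library functions and an arbitrary small example:

using LinearAlgebra

x = [1.0, 2.0]
k = 3
xk = reduce(kron, fill(x, k))    # x ⊗ x ⊗ x
length(xk) == length(x)^k        # true: n^k = 2^3 = 8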

Unique Kronecker Power:

The unique Kronecker power, denoted by $\mathbf{x}^{\langle k \rangle}$, contains only the unique monomials of degree $k$ formed from the elements of $\mathbf{x}$. It has a length of $\tbinom{n + k - 1}{k}$.

Conversion Using the Elimination Matrix:

The relationship between the standard and unique Kronecker powers is given by:

\[ \mathbf{x}^{\langle k \rangle} = \mathbf{L}_{n,k} \mathbf{x}^{[k]}\]

where $\mathbf{L}_{n,k}$ is the elimination matrix corresponding to the $k$-th Kronecker power, of size $\left(\tbinom{n + k - 1}{k} \times n^k\right)$.
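
The dimensions can be checked directly. The sketch below assumes that elimat (documented later on this page) accepts orders $p > 2$, as stated in the note above:

using UniqueKronecker

n, k = 3, 3
Lnk = elimat(n, k)
size(Lnk) == (binomial(n + k - 1, k), n^k)   # true: (10, 27)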

Example:

For $n = 2$ and $k = 2$, let $\mathbf{x} = [x_1, ~x_2]^\top$. Then:

  • Standard Kronecker Power:

    \[ \mathbf{x}^{[2]} = [x_1^2, ~x_1 x_2, ~x_2 x_1, ~x_2^2]^\top\]

  • Unique Kronecker Power:

    \[ \mathbf{x}^{\langle 2 \rangle} = [x_1^2, ~x_1 x_2, ~x_2^2]^\top\]

The elimination matrix $\mathbf{L}_{2,2}$ maps $\mathbf{x}^{[2]}$ to $\mathbf{x}^{\langle 2 \rangle}$ by keeping a single representative of each repeated term (since $x_1 x_2$ and $x_2 x_1$ are the same):

\[ \mathbf{x}^{\langle 2 \rangle} = \mathbf{L}_{2,2} \mathbf{x}^{[2]}\]

UniqueKronecker.elimat — Function
elimat(n::Int, p::Int) -> Lp::SparseMatrixCSC{Int}

Create an elimination matrix of order p for a vector of length n (Magnus and Neudecker [1]).

Arguments

  • n::Int: The length of the vector.
  • p::Int: The order of the elimination matrix, e.g., p = 2 for x ⊗ x.

Output

  • Lp::SparseMatrixCSC{Int}: The elimination matrix of order p.

Example

julia> elimat(2,2)
3×4 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
 1  ⋅  ⋅  ⋅
 ⋅  1  ⋅  ⋅
 ⋅  ⋅  ⋅  1
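
Putting the pieces together, the following sketch reproduces the $n = 2$, $k = 2$ example above numerically, assuming elimat together with the standard kron from LinearAlgebra:

using UniqueKronecker, LinearAlgebra

x = [2.0, 3.0]
x2  = kron(x, x)      # standard Kronecker power: [4.0, 6.0, 6.0, 9.0]
L22 = elimat(2, 2)    # elimination matrix L_{2,2}
L22 * x2              # unique Kronecker power: [4.0, 6.0, 9.0]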

Benefits:

  • Efficiency: Reduces computational complexity by working with smaller vectors.
  • Simplicity: Simplifies expressions and computations involving symmetric terms.
  • Clarity: Makes the structure of polynomial terms more transparent.

Converting Between Operators

The elimination matrix also plays a crucial role in converting between redundant and non-redundant polynomial operators in dynamical systems.

Redundant Operators:

In the polynomial system:

\[ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{A}_2 \mathbf{x}^{[2]}(t) + \mathbf{A}_3 \mathbf{x}^{[3]}(t) + \cdots\]

the matrices $\mathbf{A}_k \in \mathbb{R}^{n \times n^k}$ operate on the full Kronecker powers, which include redundant terms.

Unique Operators:

To eliminate redundancy, we define unique operators $\mathbf{A}_{ku} \in \mathbb{R}^{n \times \tbinom{n + k - 1}{k}}$ such that:

\[ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{A}_{2u} \mathbf{x}^{\langle 2 \rangle}(t) + \mathbf{A}_{3u} \mathbf{x}^{\langle 3 \rangle}(t) + \cdots\]

Conversion Using the Elimination Matrix:

The redundant operator $\mathbf{A}_k$ and the unique operator $\mathbf{A}_{ku}$ are related through the elimination matrix:

\[ \mathbf{A}_{k} = \mathbf{A}_{ku} \mathbf{L}_{n,k}\]

which, despite the elimination matrix's name, redistributes the entries of the non-redundant operator over the redundant coordinates (the repeated monomial slots receive zeros). Thus, though seemingly counter-intuitive, it is the elimination matrix that converts the unique operator $\mathbf{A}_{ku}$ into the redundant operator $\mathbf{A}_k$.
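
A small sketch of this relationship, with a made-up unique operator for $n = 2$ (the values are arbitrary and only for illustration):

using UniqueKronecker, LinearAlgebra

A2u = [1.0 2.0 3.0;
       4.0 5.0 6.0]           # hypothetical unique quadratic operator (2 × 3)
L22 = elimat(2, 2)
A2  = A2u * L22                # redundant quadratic operator (2 × 4)

x = [1.0, 2.0]
A2 * kron(x, x) ≈ A2u * (L22 * kron(x, x))   # true: both evaluate the quadratic term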

Example:

Consider a system with $n = 2$:

  • Redundant operator: $\mathbf{A}_2 \in \mathbb{R}^{2 \times 4}$
  • Unique operator: $\mathbf{A}_{2u} \in \mathbb{R}^{2 \times 3}$

The elimination matrix $\mathbf{L}_{2,2}$ maps $\mathbf{x}^{[2]}$ to $\mathbf{x}^{\langle 2 \rangle}$:

\[ \mathbf{x}^{\langle 2 \rangle} = \mathbf{L}_{2,2} \mathbf{x}^{[2]}\]

Given this convention (i.e., $\mathbf{A}_2 = \mathbf{A}_{2u}\mathbf{L}_{2,2}$), the unique operator is recovered by right-multiplication with $\mathbf{L}_{2,2}^\top$, since $\mathbf{L}_{2,2}\mathbf{L}_{2,2}^\top = \mathbf{I}$:

\[ \mathbf{A}_{2u} = \mathbf{A}_2 \mathbf{L}_{2,2}^\top\]
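
Continuing the sketch above (with the same hypothetical A2u), this inversion can be checked directly. Note that for a redundant operator whose coefficients are instead split symmetrically over the repeated monomials, the conversion uses the duplication matrix, as in the duplicate example below:

using UniqueKronecker

A2u = [1.0 2.0 3.0;
       4.0 5.0 6.0]       # same hypothetical unique operator as above
L22 = elimat(2, 2)
A2  = A2u * L22
A2 * L22' ≈ A2u           # true, because L22 * L22' is the identity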

UniqueKronecker.duplicate — Function
duplicate(A::AbstractArray, p::Int)

Convert a matrix A containing a unique (non-redundant) set of polynomial coefficients into the corresponding matrix with redundant coefficients, i.e., the operator acting on the full Kronecker power.

Arguments

  • A::AbstractArray: A matrix with non-redundant polynomial coefficients.
  • p::Int: The order of the polynomial, e.g., p = 2 for x ⊗ x.

Returns

  • The matrix with redundant polynomial coefficients.

Example

julia> n = 2; P = rand(n,n); P *= P'; p = vec(P)
4-element Vector{Float64}:
 0.5085988756090203
 0.7704767970682769
 0.7704767970682769
 1.310279680309927

julia> Q = rand(n,n); Q *= Q'; q = vec(Q)
4-element Vector{Float64}:
 0.40940214810208353
 0.2295272821417254
 0.2295272821417254
 0.25503767587483905

julia> A2 = [p'; q']
2×4 Matrix{Float64}:
 0.508599  0.770477  0.770477  1.31028
 0.409402  0.229527  0.229527  0.255038

julia> D2 = dupmat(2,2)
4×3 SparseMatrixCSC{Int64, Int64} with 4 stored entries:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> A2 * D2
2×3 Matrix{Float64}:
 0.508599  1.54095   1.31028
 0.409402  0.459055  0.255038

julia> duplicate(A2 * D2, 2)
2×4 Matrix{Float64}:
 0.508599  1.54095   0.0  1.31028
 0.409402  0.459055  0.0  0.255038