API

All APIs of LiftAndLearn, listed in an unstructured manner.

LiftAndLearn.choose_roMethod
choose_ro(Σ::Vector; en_low=-15) → r_all, en

Choose the reduced order (ro) that preserves an acceptable amount of energy.

Arguments

  • Σ::Vector: singular value vector from the SVD of some Hankel matrix
  • en_low: minimum level for energy preservation

Returns

  • r_all: vector of reduced orders
  • en: vector of energy values
source
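As a rough illustration of the selection criterion, here is a Python sketch (the package itself is Julia) that picks, for each energy tolerance 10^-p down to 10^en_low, the smallest reduced order whose residual singular-value energy falls below the tolerance. The exact convention choose_ro uses for en_low is an assumption here.

```python
def choose_ro(sigma, en_low=-15):
    # Residual energy after keeping the first r singular values:
    # 1 - (sum of sigma[1:r]^2) / (sum of all sigma^2)
    total = sum(s * s for s in sigma)
    residual, acc = [], 0.0
    for s in sigma:
        acc += s * s
        residual.append(1.0 - acc / total)
    r_all, en = [], []
    for p in range(1, -en_low + 1):   # tolerances 1e-1 .. 10^en_low
        tol = 10.0 ** (-p)
        r = next((i + 1 for i, e in enumerate(residual) if e < tol), len(sigma))
        r_all.append(r)
        en.append(residual[r - 1])
    return r_all, en

r_all, en = choose_ro([10.0, 1.0, 0.1, 1e-4], en_low=-4)
```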
LiftAndLearn.compute_all_errorsMethod
compute_all_errors(Xf, Yf, Xint, Yint, Xinf, Yinf, Vr) → PE, ISE, IOE, OSE, OOE

Compute all projection, state, and output errors

Arguments

  • Xf: reference state data
  • Yf: reference output data
  • Xint: intrusive model state data
  • Yint: intrusive model output data
  • Xinf: inferred model state data
  • Yinf: inferred model output data
  • Vr: POD basis

Return

  • PE: projection error
  • ISE: intrusive state error
  • IOE: intrusive output error
  • OSE: operator inference state error
  • OOE: operator inference output error
source
LiftAndLearn.deltaMethod
delta(v::Int, w::Int) → Float64

Another auxiliary function used in constructing the F matrix

Arguments

  • v: first index
  • w: second index

Returns

  • coefficient of 1.0 or 0.5
source
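For reference, the convention suggested by the docstring (a coefficient of 1.0 on the diagonal and 0.5 off it, as used when assembling the nonredundant quadratic F matrix) can be sketched in Python; that these are the package's exact values is an assumption:

```python
def delta(v, w):
    # 1.0 when the two indices coincide, 0.5 otherwise (assumed convention)
    return 1.0 if v == w else 0.5
```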
LiftAndLearn.ep_constraint_residualFunction
ep_constraint_residual(X, r)
ep_constraint_residual(X, r, redundant; with_moment)

Compute the constraint residual, i.e., the residual of the energy-preserving constraint

\[\sum \left| \hat{h}_{ijk} + \hat{h}_{jik} + \hat{h}_{kji} \right| \quad 1 \leq i,j,k \leq r\]

Arguments

  • X::AbstractArray: the matrix to compute the constraint residual
  • r::Real: the dimension of the system
  • redundant::Bool: redundant or nonredundant operator
  • with_moment::Bool: whether to compute the moment of the constraint residual

Returns

  • ϵX: the constraint residual
  • mmt: the moment which is the sum of the constraint residual without absolute value
source
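The summation in the formula above translates directly into code. A Python sketch with the quadratic operator stored as an r × r × r nested list (the package works with matricized operators, so this is only illustrative):

```python
def ep_constraint_residual(H, r):
    eps, mmt = 0.0, 0.0
    for i in range(r):
        for j in range(r):
            for k in range(r):
                # h_ijk + h_jik + h_kji, as in the docstring formula
                s = H[i][j][k] + H[j][i][k] + H[k][j][i]
                eps += abs(s)   # residual accumulates absolute values
                mmt += s        # moment accumulates signed values
    return eps, mmt
```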
LiftAndLearn.ep_constraint_violationFunction
ep_constraint_violation(Data, X)
ep_constraint_violation(Data, X, redundant)

Compute the constraint violation, i.e., the violation of the energy-preserving constraint

\[\sum \langle \mathbf{x}, \mathbf{H}(\mathbf{x}\otimes\mathbf{x})\rangle \quad \forall \mathbf{x} \in \mathcal{D}\]

Arguments

  • Data::AbstractArray: the data
  • X::AbstractArray: the matrix to compute the constraint violation
  • redundant::Bool: redundant or nonredundant operator

Returns

  • viol: the constraint violation
source
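Likewise, the quantity ⟨x, H(x⊗x)⟩ summed over the data can be sketched in Python with H as an r × r × r nested list (illustrative only; the package stores H in matricized form):

```python
def ep_constraint_violation(data, H):
    r = len(H)
    viol = 0.0
    for x in data:                      # each snapshot x in the data set
        for i in range(r):
            for j in range(r):
                for k in range(r):
                    # accumulate <x, H(x kron x)>
                    viol += x[i] * H[i][j][k] * x[j] * x[k]
    return viol
```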
LiftAndLearn.ephec_opinfMethod
ephec_opinf(D, Rt, dims, operators_symbols, options, IG)

Energy-preserving (hard equality constraint) operator inference optimization (EPHEC)

Arguments

  • D: data matrix
  • Rt: transpose of the derivative matrix (or residual matrix)
  • dims: dimensions of the operators
  • operators_symbols: symbols of the operators
  • options: options for the operator inference set by the user
  • IG: Initial Guesses

Returns

  • Inferred operators

Note

  • This is currently implemented for linear + quadratic operators only
source
LiftAndLearn.epopinfMethod
epopinf(X, Vn, options; U, Xdot, IG)

Energy-preserving Operator Inference (EPOpInf) optimization problem.

source
LiftAndLearn.epp_opinfMethod
epp_opinf(D, Rt, dims, operators_symbols, options, IG)

Energy-preserving penalty operator inference optimization (EPP)

Arguments

  • D: data matrix
  • Rt: transpose of the derivative matrix (or residual matrix)
  • dims: dimensions of the operators
  • operators_symbols: symbols of the operators
  • options: options for the operator inference set by the user
  • IG: Initial Guesses

Returns

  • Inferred operators

Note

  • This is currently implemented for linear + quadratic operators only
source
LiftAndLearn.epsic_opinfMethod
epsic_opinf(D, Rt, dims, operators_symbols, options, IG)

Energy-preserving (soft inequality constraint) operator inference optimization (EPSIC)

Arguments

  • D: data matrix
  • Rt: transpose of the derivative matrix (or residual matrix)
  • dims: dimensions of the operators
  • operators_symbols: symbols of the operators
  • options: options for the operator inference set by the user
  • IG: Initial Guesses

Returns

  • Inferred operators

Note

  • This is currently implemented for linear + quadratic operators only
source
LiftAndLearn.fat2tallMethod
fat2tall(A::AbstractArray)

Convert a fat matrix to a tall matrix by taking the transpose if the number of rows is less than the number of columns.

Arguments

  • A::AbstractArray: input matrix

Returns

  • A::AbstractArray: output matrix
source
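The operation is just a conditional transpose; a Python sketch on list-of-rows matrices:

```python
def fat2tall(A):
    # Transpose only when A is fat (fewer rows than columns)
    if len(A) < len(A[0]):
        return [list(col) for col in zip(*A)]
    return A
```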
LiftAndLearn.fidxMethod
fidx(n::Int, j::Int, k::Int) → Int

Auxiliary function for the F matrix indexing.

Arguments

  • n: row dimension of the F matrix
  • j: row index
  • k: col index

Returns

  • index corresponding to the F matrix
source
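One common convention enumerates the nonredundant pairs (j, k) with j ≤ k in the order (1,1)…(1,n), (2,2)…(2,n), and so on; whether fidx uses exactly this ordering is an assumption. A Python sketch:

```python
def fidx(n, j, k):
    # 1-based column index of the pair (j, k), j <= k, in the F matrix,
    # assuming row-major enumeration of the upper-triangular pairs
    assert 1 <= j <= k <= n
    return (j - 1) * (n + 1) - (j - 1) * j // 2 + (k - j + 1)
```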
LiftAndLearn.get_data_matrixMethod
get_data_matrix(Xhat, Xhat_t, Ut, options; verbose)

Get the data matrix for the regression problem

Arguments

  • Xhat::AbstractArray: projected data matrix
  • Xhat_t::AbstractArray: projected data matrix (transposed)
  • Ut::AbstractArray: input data matrix (transposed)
  • options::AbstractOption: options for the operator inference set by the user
  • verbose::Bool=false: verbose mode returning the dimension breakdown and operator symbols

Returns

  • D: data matrix for the regression problem
  • dims: dimension breakdown of the data matrix
  • operator_symbols: operator symbols corresponding to dims for the regression problem
source
LiftAndLearn.isenergypreservingFunction
isenergypreserving(X)
isenergypreserving(X, redundant; tol)

Check if the matrix is energy-preserving.

Arguments

  • X::AbstractArray: the matrix to check if it is energy-preserving
  • redundant::Bool: redundant or nonredundant operator
  • tol::Real: the tolerance

Returns

  • Bool: whether the matrix is energy-preserving
source
LiftAndLearn.leastsquares_solveMethod
leastsquares_solve(D::AbstractArray, Rt::AbstractArray, Y::AbstractArray, Xhat_t::AbstractArray, 
         dims::AbstractArray, operator_symbols::AbstractArray, options::AbstractOption)

Solve the standard Operator Inference with/without regularization

Arguments

  • D::AbstractArray: data matrix
  • Rt::AbstractArray: derivative data matrix (tall)
  • Y::AbstractArray: output data matrix (tall)
  • Xhat_t::AbstractArray: projected data matrix (tall)
  • dims::AbstractArray: dimensions of the operators
  • operator_symbols::AbstractArray: symbols of the operators
  • options::AbstractOption: options for the operator inference set by the user

Returns

  • operators::Operators: All learned operators
source
LiftAndLearn.lifted_basisMethod
lifted_basis(W, Nl, gp, ro) → Vr

Create the block-diagonal POD basis for the new lifted system data

Arguments

  • W: lifted data matrix
  • Nl: number of variables of the lifted state dynamics
  • gp: number of grid points for each variable
  • ro: vector of the reduced orders for each basis

Return

  • Vr: block diagonal POD basis
source
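The block-diagonal assembly itself is generic: one POD basis per lifted variable, stacked so each basis acts on its own block of grid points. A Python sketch of the stacking step (list-of-rows matrices; illustrative only):

```python
def block_diag(blocks):
    nrows = sum(len(b) for b in blocks)
    ncols = sum(len(b[0]) for b in blocks)
    out = [[0.0] * ncols for _ in range(nrows)]
    r0 = c0 = 0
    for b in blocks:          # copy each block onto the diagonal
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                out[r0 + i][c0 + j] = v
        r0 += len(b)
        c0 += len(b[0])
    return out
```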
LiftAndLearn.opinfMethod
opinf(X::AbstractArray, Vn::AbstractArray, options::AbstractOption; 
    U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1),
    Xdot::AbstractArray=[]) → op::Operators

Infer the operators with derivative data given. NOTE: make sure the data is constructed such that the rows are the state variables and the columns are the time steps.

Arguments

  • X::AbstractArray: state data matrix
  • Vn::AbstractArray: POD basis
  • options::AbstractOption: options for the operator inference defined by the user
  • U::AbstractArray: input data matrix
  • Y::AbstractArray: output data matrix
  • Xdot::AbstractArray: derivative data matrix

Returns

  • op::Operators: inferred operators
source
LiftAndLearn.opinfMethod
opinf(X::AbstractArray, Vn::AbstractArray, full_op::Operators, options::AbstractOption;
    U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1)) → op::Operators

Infer the operators with the reprojection method (dispatch). NOTE: make sure the data is constructed such that the rows are the state variables and the columns are the time steps.

Arguments

  • X::AbstractArray: state data matrix
  • Vn::AbstractArray: POD basis
  • full_op::Operators: full order model operators
  • options::AbstractOption: options for the operator inference defined by the user
  • U::AbstractArray: input data matrix
  • Y::AbstractArray: output data matrix
  • return_derivative::Bool=false: return the derivative matrix (or residual matrix)

Returns

  • op::Operators: inferred operators
source
LiftAndLearn.opinfMethod
opinf(W::AbstractArray, Vn::AbstractArray, lm::lifting, full_op::Operators,
        options::AbstractOption; U::AbstractArray=zeros(1,1), 
        Y::AbstractArray=zeros(1,1), IG::Operators=Operators()) → op::Operators

Infer the operators for Lift And Learn from reprojected data (dispatch). NOTE: make sure the data is constructed such that the row dimension is the state dimension and the column dimension is the time dimension.

Arguments

  • W::AbstractArray: state data matrix
  • Vn::AbstractArray: POD basis
  • lm::lifting: struct of the lift map
  • full_op::Operators: full order model operators
  • options::AbstractOption: options for the operator inference defined by the user
  • U::AbstractArray: input data matrix
  • Y::AbstractArray: output data matrix

Returns

  • op::Operators: inferred operators

Note

  • You can opt to use the unlifted opinf function instead of this dispatch when reprojecting the data is unnecessary.
source
LiftAndLearn.podMethod
pod(op, Vr, sys_struct; nonredundant_operators)

Perform intrusive model reduction using Proper Orthogonal Decomposition (POD). This implementation is limited to

  • state: up to 4th order
  • input: only B matrix
  • output: only C and D matrices
  • state-input-coupling: bilinear
  • constant term: K matrix

Arguments

  • op: operators of the target system
  • Vr: POD basis
  • sys_struct: structure of the target system
  • nonredundant_operators: whether the operators are in nonredundant form

Return

  • op_new: new operator projected onto the basis
source
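For the linear term, intrusive POD reduction is the Galerkin projection Â = Vrᵀ A Vr (quadratic and higher-order terms additionally involve Kronecker products of Vr). A minimal Python sketch of the linear case:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def galerkin_linear(A, Vr):
    # Ahat = Vr' * A * Vr  (Galerkin projection of the linear operator)
    Vrt = [list(r) for r in zip(*Vr)]
    return matmul(Vrt, matmul(A, Vr))
```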
LiftAndLearn.proj_errorMethod
proj_error(Xf, Vr) → PE

Compute the projection error

Arguments

  • Xf: reference state data
  • Vr: POD basis

Return

  • PE: projection error
source
LiftAndLearn.rel_output_errorMethod
rel_output_error(Yf, Y) → OE

Compute relative output error

Arguments

  • Yf: reference output data
  • Y: testing output data

Return

  • OE: output error
source
LiftAndLearn.rel_state_errorMethod
rel_state_error(Xf, X, Vr) → SE

Compute the relative state error

Arguments

  • Xf: reference state data
  • X: testing state data
  • Vr: POD basis

Return

  • SE: state error
source
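The error metrics above are relative Frobenius-norm errors; assuming SE = ‖Xf − Vr·X‖_F / ‖Xf‖_F, a Python sketch:

```python
import math

def fro(A):
    # Frobenius norm of a list-of-rows matrix
    return math.sqrt(sum(v * v for row in A for v in row))

def rel_state_error(Xf, X, Vr):
    # SE = ||Xf - Vr*X||_F / ||Xf||_F  (assumed definition)
    VrX = [[sum(Vr[i][k] * X[k][j] for k in range(len(X)))
            for j in range(len(X[0]))] for i in range(len(Vr))]
    diff = [[Xf[i][j] - VrX[i][j] for j in range(len(Xf[0]))]
            for i in range(len(Xf))]
    return fro(diff) / fro(Xf)
```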
LiftAndLearn.reprojectMethod
reproject(Xhat::AbstractArray, V::AbstractArray, Ut::AbstractArray,
    op::Operators, options::AbstractOption) → Rhat::AbstractArray

Reproject the data to reduce the error caused by the truncated (missing) orders of the POD basis

Arguments

  • Xhat::AbstractArray: state data matrix projected onto the basis
  • V::AbstractArray: POD basis
  • Ut::AbstractArray: input data matrix (tall)
  • op::Operators: full order model operators
  • options::AbstractOption: options for the operator inference defined by the user

Return

  • Rhat::AbstractArray: R matrix (transposed) for the regression problem
source
LiftAndLearn.reprojectMethod
reproject(Xhat::Matrix, V::Union{VecOrMat,BlockDiagonal}, U::VecOrMat,
    lm::lifting, op::Operators, options::AbstractOption) → Rhat::Matrix

Reproject the lifted data

Arguments

  • Xhat::AbstractArray: state data matrix projected onto the basis
  • V::AbstractArray: POD basis
  • U::VecOrMat: input data matrix
  • lm::lifting: struct of the lift map
  • op::Operators: full order model operators
  • options::AbstractOption: options for the operator inference defined by the user

Returns

  • Rhat::Matrix: R matrix (transposed) for the regression problem
source
LiftAndLearn.tall2fatMethod
tall2fat(A::AbstractArray)

Convert a tall matrix to a fat matrix by taking the transpose if the number of rows is greater than the number of columns.

Arguments

  • A::AbstractArray: input matrix

Returns

  • A::AbstractArray: output matrix
source
LiftAndLearn.tikhonovMethod
tikhonov(b::AbstractArray, A::AbstractArray, Γ::AbstractMatrix, tol::Real;
    flag::Bool=false)

Tikhonov regression

Arguments

  • b::AbstractArray: right hand side of the regression problem
  • A::AbstractArray: left hand side of the regression problem
  • Γ::AbstractMatrix: Tikhonov matrix
  • tol::Real: tolerance for the singular values
  • flag::Bool: whether to apply the singular-value tolerance tol

Returns

  • regression solution
source
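Tikhonov regression solves min ‖Ax − b‖² + ‖Γx‖², i.e. x = (AᵀA + ΓᵀΓ)⁻¹Aᵀb. A Python sketch for the single-column case (the package solves the general matrix problem, with an SVD tolerance controlled by tol):

```python
def tikhonov_1d(b, a, gamma):
    # x = (a'a + gamma^2)^(-1) a'b for a single-column A
    ata = sum(ai * ai for ai in a)
    atb = sum(ai * bi for ai, bi in zip(a, b))
    return atb / (ata + gamma * gamma)
```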
LiftAndLearn.tikhonovMatrix!Method
tikhonovMatrix!(Γ::AbstractArray, dims::Dict, options::AbstractOption)

Construct the Tikhonov matrix

Arguments

  • Γ::AbstractArray: Tikhonov matrix (pass by reference)
  • dims::Dict: dimensions of the operators
  • options::AbstractOption: options for the operator inference set by the user

Returns

  • Γ: Tikhonov matrix (pass by reference)
source
LiftAndLearn.time_derivative_approxMethod
time_derivative_approx(X, options)

Approximate the derivative values of the data with different integration schemes

Arguments

  • X::VecOrMat: data matrix
  • options::AbstractOption: operator inference options

Returns

  • dXdt: derivative data
  • idx: index for the specific integration scheme (important for later use)
source
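As one example of such a scheme, a forward-difference approximation on column-snapshot data can be sketched in Python (which scheme options selects, and the exact index bookkeeping, are package details not shown here):

```python
def forward_difference(X, dt):
    # dXdt[:, i] = (X[:, i+1] - X[:, i]) / dt; idx marks which snapshot
    # columns the derivatives correspond to
    m = len(X[0])
    dXdt = [[(row[i + 1] - row[i]) / dt for i in range(m - 1)] for row in X]
    idx = list(range(m - 1))
    return dXdt, idx
```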
LiftAndLearn.unpack_operators!Method
unpack_operators!(operators, O, Yt, Xhat_t, dims, operator_symbols, options)

Unpack the operators from the operator matrix O including the output.

source