API
All APIs of LiftAndLearn, listed in an unstructured manner.
LiftAndLearn.LiftAndLearn — Module
LiftAndLearn package main module
LiftAndLearn.choose_ro — Method
choose_ro(Σ::Vector; en_low=-15) → r_all, en
Choose the reduced order (ro) that preserves an acceptable amount of energy.
Arguments
- Σ::Vector: Singular value vector from the SVD of some Hankel Matrix
- en_low: minimum threshold for energy preservation
Returns
- r_all: vector of reduced orders
- en: vector of energy values
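For reference, a minimal usage sketch based on the signature above; the random snapshot matrix and its SVD are purely illustrative stand-ins for your own data:

```julia
using LinearAlgebra
using LiftAndLearn

# Singular values of an arbitrary snapshot/Hankel matrix (illustrative only)
Σ = svd(randn(100, 50)).S

# Candidate reduced orders and the energy each one preserves
r_all, en = LiftAndLearn.choose_ro(Σ; en_low=-15)
```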
LiftAndLearn.compute_all_errors — Method
compute_all_errors(Xf, Yf, Xint, Yint, Xinf, Yinf, Vr) → PE, ISE, IOE, OSE, OOE
Compute all projection, state, and output errors.
Arguments
- Xf: reference state data
- Yf: reference output data
- Xint: intrusive model state data
- Yint: intrusive model output data
- Xinf: inferred model state data
- Yinf: inferred model output data
- Vr: POD basis
Return
- PE: projection error
- ISE: intrusive state error
- IOE: intrusive output error
- OSE: operator inference state error
- OOE: operator inference output error
LiftAndLearn.delta — Method
delta(v::Int, w::Int) → Float64
Another auxiliary function for the F matrix.
Arguments
- v: first index
- w: second index
Returns
- coefficient of 1.0 or 0.5
LiftAndLearn.ep_constraint_residual — Function
ep_constraint_residual(X, r)
ep_constraint_residual(X, r, redundant; with_moment)
Compute the residual of the energy-preserving constraint
\[\sum \left| \hat{h}_{ijk} + \hat{h}_{jik} + \hat{h}_{kji} \right| \quad 1 \leq i,j,k \leq r\]
Arguments
- X::AbstractArray: the matrix to compute the constraint residual
- r::Real: the dimension of the system
- redundant::Bool: redundant or nonredundant operator
- with_moment::Bool: whether to compute the moment of the constraint residual
Returns
- ϵX: the constraint residual
- mmt: the moment, i.e., the sum of the constraint residual terms without the absolute value
LiftAndLearn.ep_constraint_violation — Function
ep_constraint_violation(Data, X)
ep_constraint_violation(Data, X, redundant)
Compute the violation of the energy-preserving constraint over the data
\[\sum \langle \mathbf{x}, \mathbf{H}(\mathbf{x}\otimes\mathbf{x})\rangle \quad \forall \mathbf{x} \in \mathcal{D}\]
Arguments
- Data::AbstractArray: the data
- X::AbstractArray: the matrix to compute the constraint violation
- redundant::Bool: redundant or nonredundant operator
Returns
- viol: the constraint violation
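A hedged sketch exercising both ep_constraint_residual and ep_constraint_violation. The redundant (r × r²) operator layout, the snapshot shape, and passing `true` for the redundancy flag are assumptions made for illustration:

```julia
using LiftAndLearn

r = 5
H    = randn(r, r^2)     # hypothetical quadratic operator in redundant (Kronecker) form
Data = randn(r, 200)     # hypothetical reduced-state snapshots, one column per snapshot

# Residual of the energy-preserving constraint (redundant operator => pass `true`)
ϵX = LiftAndLearn.ep_constraint_residual(H, r, true)

# Violation of the constraint evaluated over the snapshot data
viol = LiftAndLearn.ep_constraint_violation(Data, H, true)
```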
LiftAndLearn.ephec_opinf — Method
ephec_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving hard equality constraint operator inference optimization (EPHEC)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: Initial Guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.epopinf — Method
epopinf(X, Vn, options; U, Xdot, IG)
Energy-preserving Operator Inference (EPOpInf) optimization problem.
LiftAndLearn.epp_opinf — Method
epp_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving penalty operator inference optimization (EPP)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: Initial Guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.epsic_opinf — Method
epsic_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving soft inequality constraint operator inference optimization (EPSIC)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: Initial Guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.fat2tall — Method
fat2tall(A::AbstractArray)
Convert a fat matrix to a tall matrix by taking the transpose if the number of rows is less than the number of columns.
Arguments
- A::AbstractArray: input matrix
Returns
- A::AbstractArray: output matrix
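A quick illustration of the convention (shapes only, using random matrices): the transpose is taken only when the input is fat, so tall inputs pass through with their shape unchanged.

```julia
using LiftAndLearn

A_fat = randn(3, 10)                     # 3 × 10 (fat)
size(LiftAndLearn.fat2tall(A_fat))       # (10, 3) after transposing

A_tall = randn(10, 3)                    # already tall
size(LiftAndLearn.fat2tall(A_tall))      # (10, 3), unchanged
```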
LiftAndLearn.fidx — Method
fidx(n::Int, j::Int, k::Int) → Int
Auxiliary function for the F matrix indexing.
Arguments
- n: row dimension of the F matrix
- j: row index
- k: col index
Returns
- index corresponding to the F matrix
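Since fidx only maps an index triple to a column index of the F matrix, a one-line call is enough to show the signature; the specific arguments below are arbitrary:

```julia
using LiftAndLearn

# Column index into the F matrix for n = 4 states and the (j, k) = (2, 3) pair
LiftAndLearn.fidx(4, 2, 3)
```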
LiftAndLearn.get_data_matrix — Method
get_data_matrix(Xhat, Xhat_t, Ut, options; verbose)
Get the data matrix for the regression problem
Arguments
- Xhat::AbstractArray: projected data matrix
- Xhat_t::AbstractArray: projected data matrix (transposed)
- Ut::AbstractArray: input data matrix (transposed)
- options::AbstractOption: options for the operator inference set by the user
- verbose::Bool=false: verbose mode returning the dimension breakdown and operator symbols
Returns
- D: data matrix for the regression problem
- dims: dimension breakdown of the data matrix
- operator_symbols: operator symbols corresponding to dims for the regression problem
LiftAndLearn.get_data_matrix — Method
get_data_matrix(Xhat, Ut, options)
LiftAndLearn.isenergypreserving — Function
isenergypreserving(X)
isenergypreserving(X, redundant; tol)
Check if the matrix is energy-preserving.
Arguments
- X::AbstractArray: the matrix to check if it is energy-preserving
- redundant::Bool: redundant or nonredundant operator
- tol::Real: the tolerance
Returns
- Bool: whether the matrix is energy-preserving
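A small sanity-check sketch, assuming the redundant (r × r²) layout signalled by passing `true`; the zero quadratic operator satisfies the energy-preserving constraint trivially, which makes it a convenient test input:

```julia
using LiftAndLearn

r = 4
H_zero = zeros(r, r^2)   # zero quadratic operator in the (assumed) redundant layout

# The zero operator preserves energy exactly, so this should return true
LiftAndLearn.isenergypreserving(H_zero, true)
```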
LiftAndLearn.leastsquares_solve — Method
leastsquares_solve(D::AbstractArray, Rt::AbstractArray, Y::AbstractArray, Xhat_t::AbstractArray, 
         dims::AbstractArray, operator_symbols::AbstractArray, options::AbstractOption)
Solve the standard operator inference problem with or without regularization.
Arguments
- D::AbstractArray: data matrix
- Rt::AbstractArray: derivative data matrix (tall)
- Yt::AbstractArray: output data matrix (tall)
- Xhat_t::AbstractArray: projected data matrix (tall)
- dims::AbstractArray: dimensions of the operators
- operator_symbols::AbstractArray: symbols of the operators
- options::AbstractOption: options for the operator inference set by the user
Returns
- operators::Operators: All learned operators
LiftAndLearn.lifted_basis — Method
lifted_basis(W, Nl, gp, ro) → Vr
Create the block-diagonal POD basis for the new lifted system data.
Arguments
- W: lifted data matrix
- Nl: number of variables of the lifted state dynamics
- gp: number of grid points for each variable
- ro: vector of the reduced orders for each basis
Return
- Vr: block diagonal POD basis
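A minimal sketch, assuming the lifted snapshot matrix stacks Nl variables of gp grid points each (so W has Nl*gp rows) and that gp is a single grid size shared by all lifted variables; both layout assumptions are made for illustration only:

```julia
using LiftAndLearn

Nl, gp, K = 3, 64, 200
W  = randn(Nl * gp, K)    # hypothetical lifted snapshot data
ro = [6, 6, 4]            # reduced order chosen for each lifted variable

Vr = LiftAndLearn.lifted_basis(W, Nl, gp, ro)
size(Vr)                  # (Nl*gp, sum(ro)) under these assumptions
```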
LiftAndLearn.opinf — Method
opinf(X::AbstractArray, Vn::AbstractArray, options::AbstractOption; 
    U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1),
    Xdot::AbstractArray=[]) → op::Operators
Infer the operators with derivative data given. NOTE: Make sure the data is constructed so that each row is a state variable and each column is a time step.
Arguments
- X::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
- Xdot::AbstractArray: derivative data matrix
Returns
- op::Operators: inferred operators
LiftAndLearn.opinf — Method
opinf(X::AbstractArray, Vn::AbstractArray, full_op::Operators, options::AbstractOption;
    U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1)) → op::Operators
Infer the operators with the reprojection method (dispatch). NOTE: Make sure the data is constructed so that each row is a state variable and each column is a time step.
Arguments
- X::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- full_op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
- return_derivative::Bool=false: return the derivative matrix (or residual matrix)
Returns
- op::Operators: inferred operators
LiftAndLearn.opinf — Method
opinf(W::AbstractArray, Vn::AbstractArray, lm::lifting, full_op::Operators,
        options::AbstractOption; U::AbstractArray=zeros(1,1), 
        Y::AbstractArray=zeros(1,1), IG::Operators=Operators()) → op::Operators
Infer the operators for Lift And Learn from reprojected data (dispatch). NOTE: Make sure the data is constructed so that the row dimension is the state dimension and the column dimension is the time dimension.
Arguments
- W::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- lm::lifting: struct of the lift map
- full_op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
Returns
- op::Operators: inferred operators
Note
- You can opt to use the unlifted version of the opinf function instead of this dispatch if it is not necessary to reproject the data.
LiftAndLearn.pod — Method
pod(op, Vr, sys_struct; nonredundant_operators)
Perform intrusive model reduction using Proper Orthogonal Decomposition (POD). This implementation is limited to
- state: up to 4th order
- input: only B matrix
- output: only C and D matrices
- state-input-coupling: bilinear
- constant term: K matrix
Arguments
- op: operators of the target system
- Vr: POD basis
- options: options for the operator inference
Return
- op_new: new operator projected onto the basis
LiftAndLearn.proj_error — Method
proj_error(Xf, Vr) → PE
Compute the projection error.
Arguments
- Xf: reference state data
- Vr: POD basis
Return
- PE: projection error
LiftAndLearn.rel_output_error — Method
rel_output_error(Yf, Y) → OE
Compute the relative output error.
Arguments
- Yf: reference output data
- Y: testing output data
Return
- OE: output error
LiftAndLearn.rel_state_error — Method
rel_state_error(Xf, X, Vr) → SE
Compute the relative state error.
Arguments
- Xf: reference state data
- X: testing state data
- Vr: POD basis
Return
- SE: state error
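A hedged sketch of the three error helpers above (proj_error, rel_output_error, rel_state_error), assuming the ROM state is given in reduced coordinates and lifted back through Vr inside rel_state_error; the shapes below reflect that assumption:

```julia
using LinearAlgebra
using LiftAndLearn

n, r, K = 100, 10, 50
Xf = randn(n, K)                 # reference full-order states
Vr = svd(Xf).U[:, 1:r]           # orthonormal basis standing in for a POD basis
Xr = Vr' * Xf                    # hypothetical ROM states in reduced coordinates
Yf = randn(2, K)                 # reference outputs
Y  = Yf .+ 1e-3 .* randn(2, K)   # perturbed outputs standing in for ROM outputs

PE = LiftAndLearn.proj_error(Xf, Vr)
SE = LiftAndLearn.rel_state_error(Xf, Xr, Vr)
OE = LiftAndLearn.rel_output_error(Yf, Y)
```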
LiftAndLearn.reproject — Method
reproject(Xhat::AbstractArray, V::AbstractArray, Ut::AbstractArray,
    op::Operators, options::AbstractOption) → Rhat::AbstractArray
Reproject the data to minimize the error caused by the missing orders of the POD basis.
Arguments
- Xhat::AbstractArray: state data matrix projected onto the basis
- V::AbstractArray: POD basis
- Ut::AbstractArray: input data matrix (tall)
- op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
Return
- Rhat::AbstractArray: R matrix (transposed) for the regression problem
LiftAndLearn.reproject — Method
reproject(Xhat::Matrix, V::Union{VecOrMat,BlockDiagonal}, U::VecOrMat,
    lm::lifting, op::Operators, options::AbstractOption) → Rhat::Matrix
Reproject the lifted data.
Arguments
- Xhat::AbstractArray: state data matrix projected onto the basis
- V::AbstractArray: POD basis
- Ut::AbstractArray: input data matrix (tall)
- lm::lifting: struct of the lift map
- op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
Returns
- Rhat::Matrix: R matrix (transposed) for the regression problem
LiftAndLearn.tall2fat — Method
tall2fat(A::AbstractArray)
Convert a tall matrix to a fat matrix by taking the transpose if the number of rows is greater than the number of columns.
Arguments
- A::AbstractArray: input matrix
Returns
- A::AbstractArray: output matrix
LiftAndLearn.tikhonov — Method
tikhonov(b::AbstractArray, A::AbstractArray, Γ::AbstractMatrix, tol::Real;
    flag::Bool=false)
Tikhonov regression
Arguments
- b::AbstractArray: right hand side of the regression problem
- A::AbstractArray: left hand side of the regression problem
- Γ::AbstractMatrix: Tikhonov matrix
- tol::Real: tolerance for the singular values
- flag::Bool: flag for the tolerance
Returns
- regression solution
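A self-contained sketch of the regularized least-squares solve on a small synthetic problem; the scaled-identity regularizer and the singular-value tolerance are illustrative choices:

```julia
using LinearAlgebra
using LiftAndLearn

A = randn(50, 8)                       # "left hand side" data matrix
b = A * randn(8) + 1e-3 .* randn(50)   # "right hand side" with a little noise
Γ = 1e-2 .* Matrix(1.0I, 8, 8)         # Tikhonov regularization matrix

xhat = LiftAndLearn.tikhonov(b, A, Γ, 1e-12)   # tol filters out tiny singular values
```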
LiftAndLearn.tikhonovMatrix! — Method
tikhonovMatrix!(Γ::AbstractArray, dims::Dict, options::AbstractOption)
Construct the Tikhonov matrix.
Arguments
- Γ::AbstractArray: Tikhonov matrix (pass by reference)
- dims::Dict: dimensions of the operators
- options::AbstractOption: options for the operator inference set by the user
Returns
- Γ: Tikhonov matrix (pass by reference)
LiftAndLearn.time_derivative_approx — Method
time_derivative_approx(X, options)
Approximate the time derivatives of the data using different integration schemes.
Arguments
- X::VecOrMat: data matrix
- options::AbstractOption: operator inference options
Returns
- dXdt: derivative data
- idx: index for the specific integration scheme (important for later use)
LiftAndLearn.unpack_operators! — Method
unpack_operators!(
    operators,
    O,
    Yt,
    Xhat_t,
    dims,
    operator_symbols,
    options
)
Unpack the operators from the operator matrix O including the output.
LiftAndLearn.unpack_operators! — Method
unpack_operators!(operators, O, dims, operator_symbols)
Unpack the operators from the operator matrix O.