API
All APIs of LiftAndLearn, listed in an unstructured manner.
LiftAndLearn.LiftAndLearn
— Module
Main module of the LiftAndLearn package.
LiftAndLearn.choose_ro
— Method choose_ro(Σ::Vector; en_low=-15) → r_all, en
Choose the reduced order (ro) that preserves an acceptable amount of energy.
Arguments
- Σ::Vector: singular value vector from the SVD of some Hankel matrix
- en_low: minimum size for energy preservation
Returns
- r_all: vector of reduced orders
- en: vector of energy values
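A minimal usage sketch (the snapshot matrix here is synthetic, only to produce a singular value vector):

    using LinearAlgebra, LiftAndLearn

    X = rand(100, 60)                       # synthetic snapshot matrix (state × time)
    Σ = svd(X).S                            # singular values
    r_all, en = LiftAndLearn.choose_ro(Σ; en_low=-12)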
LiftAndLearn.compute_all_errors
— Method compute_all_errors(Xf, Yf, Xint, Yint, Xinf, Yinf, Vr) → PE, ISE, IOE, OSE, OOE
Compute all projection, state, and output errors.
Arguments
- Xf: reference state data
- Yf: reference output data
- Xint: intrusive model state data
- Yint: intrusive model output data
- Xinf: inferred model state data
- Yinf: inferred model output data
- Vr: POD basis
Returns
- PE: projection error
- ISE: intrusive state error
- IOE: intrusive output error
- OSE: operator inference state error
- OOE: operator inference output error
LiftAndLearn.delta
— Method delta(v::Int, w::Int) → Float64
Another auxiliary function for the F matrix.
Arguments
- v: first index
- w: second index
Returns
- coefficient of 1.0 or 0.5
LiftAndLearn.ep_constraint_residual
— Function ep_constraint_residual(X, r)
ep_constraint_residual(X, r, redundant; with_moment)
Compute the constraint residual, i.e., the residual of the energy-preserving constraint
\[\sum \left| \hat{h}_{ijk} + \hat{h}_{jik} + \hat{h}_{kji} \right| \quad 1 \leq i,j,k \leq r\]
Arguments
- X::AbstractArray: the matrix for which to compute the constraint residual
- r::Real: the dimension of the system
- redundant::Bool: redundant or nonredundant operator
- with_moment::Bool: whether to compute the moment of the constraint residual
Returns
- ϵX: the constraint residual
- mmt: the moment, i.e., the sum of the constraint residual terms without the absolute value
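A usage sketch (the r × r² layout assumed for the redundant quadratic operator below is for illustration only):

    using LiftAndLearn

    r = 4
    H = rand(r, r^2)                        # generic, non-energy-preserving quadratic operator
    res = LiftAndLearn.ep_constraint_residual(H, r, true; with_moment=true)
    # per the docstring, this yields the residual ϵX and the moment mmt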
LiftAndLearn.ep_constraint_violation
— Function ep_constraint_violation(Data, X)
ep_constraint_violation(Data, X, redundant)
Compute the constraint violation, i.e., the violation of the energy-preserving constraint
\[\sum \langle \mathbf{x}, \mathbf{H}(\mathbf{x}\otimes\mathbf{x})\rangle \quad \forall \mathbf{x} \in \mathcal{D}\]
Arguments
- Data::AbstractArray: the data
- X::AbstractArray: the matrix for which to compute the constraint violation
- redundant::Bool: redundant or nonredundant operator
Returns
- viol: the constraint violation
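A similar sketch for the violation over a data set (the array layouts are again assumptions for illustration):

    using LiftAndLearn

    r = 4
    Data = rand(r, 200)                     # reduced-state snapshots (r × number of samples)
    H = rand(r, r^2)                        # quadratic operator
    viol = LiftAndLearn.ep_constraint_violation(Data, H, true)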
LiftAndLearn.ephec_opinf
— Method ephec_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving (hard equality constraint) operator inference optimization (EPHEC)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: initial guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.epopinf
— Method epopinf(X, Vn, options; U, Xdot, IG)
Energy-preserving Operator Inference (EPOpInf) optimization problem.
LiftAndLearn.epp_opinf
— Method epp_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving penalty operator inference optimization (EPP)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: initial guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.epsic_opinf
— Method epsic_opinf(D, Rt, dims, operators_symbols, options, IG)
Energy-preserving (soft inequality constraint) operator inference optimization (EPSIC)
Arguments
- D: data matrix
- Rt: transpose of the derivative matrix (or residual matrix)
- dims: dimensions of the operators
- operators_symbols: symbols of the operators
- options: options for the operator inference set by the user
- IG: initial guesses
Returns
- Inferred operators
Note
- This is currently implemented for linear + quadratic operators only
LiftAndLearn.fat2tall
— Method fat2tall(A::AbstractArray)
Convert a fat matrix to a tall matrix by taking the transpose if the number of rows is less than the number of columns.
Arguments
- A::AbstractArray: input matrix
Returns
- A::AbstractArray: output matrix
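For example:

    using LiftAndLearn

    A = rand(3, 10)                         # fat: 3 rows, 10 columns
    LiftAndLearn.fat2tall(A)                # returns the 10 × 3 transpose
    LiftAndLearn.fat2tall(rand(10, 3))      # already tall, returned unchanged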
LiftAndLearn.fidx
— Method fidx(n::Int, j::Int, k::Int) → Int
Auxiliary function for the F matrix indexing.
Arguments
- n: row dimension of the F matrix
- j: row index
- k: col index
Returns
- index corresponding to the F matrix
LiftAndLearn.get_data_matrix
— Method get_data_matrix(Xhat, Xhat_t, Ut, options; verbose)
Get the data matrix for the regression problem.
Arguments
- Xhat::AbstractArray: projected data matrix
- Xhat_t::AbstractArray: projected data matrix (transposed)
- Ut::AbstractArray: input data matrix (transposed)
- options::AbstractOption: options for the operator inference set by the user
- verbose::Bool=false: verbose mode returning the dimension breakdown and operator symbols
Returns
- D: data matrix for the regression problem
- dims: dimension breakdown of the data matrix
- operator_symbols: operator symbols corresponding to dims for the regression problem
LiftAndLearn.get_data_matrix
— Method get_data_matrix(Xhat, Ut, options)
LiftAndLearn.isenergypreserving
— Function isenergypreserving(X)
isenergypreserving(X, redundant; tol)
Check if the matrix is energy-preserving.
Arguments
- X::AbstractArray: the matrix to check for energy preservation
- redundant::Bool: redundant or nonredundant operator
- tol::Real: the tolerance
Returns
- Bool: whether the matrix is energy-preserving
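A quick check sketch (the redundant r × r² layout and the explicit tolerance are assumptions; a generic random operator will almost surely not satisfy the constraint):

    using LiftAndLearn

    H = rand(4, 16)                                      # random quadratic operator (assumed 4 × 4² layout)
    LiftAndLearn.isenergypreserving(H, true; tol=1e-8)   # expected to return false for random H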
LiftAndLearn.leastsquares_solve
— Method leastsquares_solve(D::AbstractArray, Rt::AbstractArray, Y::AbstractArray, Xhat_t::AbstractArray,
dims::AbstractArray, operator_symbols::AbstractArray, options::AbstractOption)
Solve the standard operator inference problem with or without regularization.
Arguments
- D::AbstractArray: data matrix
- Rt::AbstractArray: derivative data matrix (tall)
- Yt::AbstractArray: output data matrix (tall)
- Xhat_t::AbstractArray: projected data matrix (tall)
- dims::AbstractArray: dimensions of the operators
- operator_symbols::AbstractArray: symbols of the operators
- options::AbstractOption: options for the operator inference set by the user
Returns
- operators::Operators: all learned operators
LiftAndLearn.lifted_basis
— Method lifted_basis(W, Nl, gp, ro) → Vr
Create the block-diagonal POD basis for the new lifted system data.
Arguments
- W: lifted data matrix
- Nl: number of variables of the lifted state dynamics
- gp: number of grid points for each variable
- ro: vector of the reduced orders for each basis
Returns
- Vr: block-diagonal POD basis
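A usage sketch, assuming three lifted variables on a common grid and a snapshot matrix that stacks the variables in blocks of gp rows:

    using LiftAndLearn

    Nl, gp = 3, 128
    W  = rand(Nl * gp, 100)                              # lifted snapshots (Nl·gp states × 100 snapshots)
    Vr = LiftAndLearn.lifted_basis(W, Nl, gp, [6, 6, 4]) # per-variable reduced orders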
LiftAndLearn.opinf
— Method opinf(X::AbstractArray, Vn::AbstractArray, options::AbstractOption;
U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1),
Xdot::AbstractArray=[]) → op::Operators
Infer the operators when derivative data is given. NOTE: make sure the data is constructed such that the rows correspond to the state vector and the columns to time.
Arguments
- X::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
- Xdot::AbstractArray: derivative data matrix
Returns
- op::Operators: inferred operators
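For orientation, the inference step amounts to a least-squares problem of the schematic form below, where the data matrix D collects the projected states, their Kronecker products, and inputs according to the operators selected in the options; this is only a generic statement, and the actual objective may include regularization or constraints:
\[\min_{\mathbf{O}} \left\| \mathbf{D}\mathbf{O}^\top - \dot{\hat{\mathbf{X}}}^\top \right\|_F^2\]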
LiftAndLearn.opinf
— Method opinf(X::AbstractArray, Vn::AbstractArray, full_op::Operators, options::AbstractOption;
U::AbstractArray=zeros(1,1), Y::AbstractArray=zeros(1,1)) → op::Operators
Infer the operators with the reprojection method (dispatch). NOTE: make sure the data is constructed such that the rows correspond to the state vector and the columns to time.
Arguments
- X::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- full_op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
- return_derivative::Bool=false: return the derivative matrix (or residual matrix)
Returns
- op::Operators: inferred operators
LiftAndLearn.opinf
— Method opinf(W::AbstractArray, Vn::AbstractArray, lm::lifting, full_op::Operators,
options::AbstractOption; U::AbstractArray=zeros(1,1),
Y::AbstractArray=zeros(1,1), IG::Operators=Operators()) → op::Operators
Infer the operators for Lift And Learn from reprojected data (dispatch). NOTE: make sure that the data is constructed such that the row dimension is the state dimension and the column dimension is the time dimension.
Arguments
- W::AbstractArray: state data matrix
- Vn::AbstractArray: POD basis
- lm::lifting: struct of the lift map
- full_op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
- U::AbstractArray: input data matrix
- Y::AbstractArray: output data matrix
Returns
- op::Operators: inferred operators
Note
- You can opt to use the unlifted version of the opinf function instead of this dispatch if it is not necessary to reproject the data.
LiftAndLearn.pod
— Method pod(op, Vr, sys_struct; nonredundant_operators)
Perform intrusive model reduction using Proper Orthogonal Decomposition (POD). This implementation is limited to
- state: up to 4th order
- input: only B matrix
- output: only C and D matrices
- state-input-coupling: bilinear
- constant term: K matrix
Arguments
- op: operators of the target system
- Vr: POD basis
- options: options for the operator inference
Returns
- op_new: new operator projected onto the basis
LiftAndLearn.proj_error
— Method proj_error(Xf, Vr) → PE
Compute the projection error.
Arguments
- Xf: reference state data
- Vr: POD basis
Returns
- PE: projection error
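A minimal sketch using an SVD-based POD basis:

    using LinearAlgebra, LiftAndLearn

    Xf = rand(100, 200)                     # reference snapshots (state × time)
    Vr = svd(Xf).U[:, 1:10]                 # 10-mode POD basis
    PE = LiftAndLearn.proj_error(Xf, Vr)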
LiftAndLearn.rel_output_error
— Method rel_output_error(Yf, Y) → OE
Compute relative output error.
Arguments
- Yf: reference output data
- Y: testing output data
Returns
- OE: output error
LiftAndLearn.rel_state_error
— Method rel_state_error(Xf, X, Vr) → SE
Compute the relative state error.
Arguments
- Xf: reference state data
- X: testing state data
- Vr: POD basis
Returns
- SE: state error
LiftAndLearn.reproject
— Method reproject(Xhat::AbstractArray, V::AbstractArray, Ut::AbstractArray,
op::Operators, options::AbstractOption) → Rhat::AbstractArray
Reproject the data to minimize the error introduced by the missing orders of the POD basis.
Arguments
- Xhat::AbstractArray: state data matrix projected onto the basis
- V::AbstractArray: POD basis
- Ut::AbstractArray: input data matrix (tall)
- op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
Returns
- Rhat::AbstractArray: R matrix (transposed) for the regression problem
LiftAndLearn.reproject
— Method reproject(Xhat::Matrix, V::Union{VecOrMat,BlockDiagonal}, U::VecOrMat,
lm::lifting, op::Operators, options::AbstractOption) → Rhat::Matrix
Reproject the lifted data.
Arguments
- Xhat::AbstractArray: state data matrix projected onto the basis
- V::AbstractArray: POD basis
- Ut::AbstractArray: input data matrix (tall)
- lm::lifting: struct of the lift map
- op::Operators: full order model operators
- options::AbstractOption: options for the operator inference defined by the user
Returns
- Rhat::Matrix: R matrix (transposed) for the regression problem
LiftAndLearn.tall2fat
— Method tall2fat(A::AbstractArray)
Convert a tall matrix to a fat matrix by taking the transpose if the number of rows is greater than the number of columns.
Arguments
- A::AbstractArray: input matrix
Returns
- A::AbstractArray: output matrix
LiftAndLearn.tikhonov
— Method tikhonov(b::AbstractArray, A::AbstractArray, Γ::AbstractMatrix, tol::Real;
flag::Bool=false)
Tikhonov regression.
Arguments
- b::AbstractArray: right-hand side of the regression problem
- A::AbstractArray: left-hand side of the regression problem
- Γ::AbstractMatrix: Tikhonov matrix
- tol::Real: tolerance for the singular values
- flag::Bool: flag for the tolerance
Returns
- regression solution
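In the standard formulation, Tikhonov-regularized least squares solves
\[\min_{\mathbf{x}} \left\| \mathbf{A}\mathbf{x} - \mathbf{b} \right\|_2^2 + \left\| \boldsymbol{\Gamma}\mathbf{x} \right\|_2^2\]
A minimal call sketch (the uniform Γ below is only an illustration):

    using LinearAlgebra, LiftAndLearn

    A = rand(200, 10); b = rand(200)
    Γ = Matrix(0.1I, 10, 10)                # simple uniform Tikhonov matrix
    x = LiftAndLearn.tikhonov(b, A, Γ, 1e-12)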
LiftAndLearn.tikhonovMatrix!
— Method tikhonovMatrix!(Γ::AbstractArray, dims::Dict, options::AbstractOption)
Construct the Tikhonov matrix.
Arguments
- Γ::AbstractArray: Tikhonov matrix (pass by reference)
- options::AbstractOption: options for the operator inference set by the user
Returns
- Γ: Tikhonov matrix (pass by reference)
LiftAndLearn.time_derivative_approx
— Method time_derivative_approx(X, options)
Approximate the derivative values of the data with different integration schemes.
Arguments
- X::VecOrMat: data matrix
- options::AbstractOption: operator inference options
Returns
- dXdt: derivative data
- idx: index for the specific integration scheme (important for later use)
LiftAndLearn.unpack_operators!
— Method unpack_operators!(operators, O, Yt, Xhat_t, dims, operator_symbols, options)
Unpack the operators from the operator matrix O including the output.
LiftAndLearn.unpack_operators!
— Method unpack_operators!(operators, O, dims, operator_symbols)
Unpack the operators from the operator matrix O.