Reference of internal functions

This reference gives a detailed overview of internal functions. They are documented here mainly to support development of the package; they are not part of the public API and may change without notice.

CommonSolve.solve - Method
solve(prob::AbstractHybridProblem, solver::HybridPosteriorSolver; ...)

Perform the inversion of the HVI problem.

Optional keyword arguments

  • scenario: Scenario to query prob, defaults to Val(()).
  • rng: Random generator, defaults to Random.default_rng().
  • gdevs: NamedTuple (;gdev_M, gdev_P) of functions that move computation and data of the ML model and the PBM, respectively, to the GPU (e.g. gpu_device()) or the CPU (identity). Defaults to get_gdev_MP(scenario).
  • θmean_quant: defaults to 0.0; deprecated.
  • is_inferred: set to Val(true) to activate type stability checks

Returns a NamedTuple of

  • probo: A copy of the HybridProblem, with updated optimized parameters
  • interpreters: TODO
  • ϕ: the optimized HVI parameters: a ComponentVector with entries
    • μP: ComponentVector of the mean global PBM parameters at unconstrained scale
    • ϕg: the ML-model parameter vector
    • unc: ComponentVector of further uncertainty parameters
  • θP: ComponentVector of the mean global PBM parameters at constrained scale
  • resopt: the structure returned by Optimization.solve. It can contain more information on convergence.
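
A minimal usage sketch, assuming prob and solver have already been constructed (their construction is case-specific and not covered here); the keyword values shown are the documented defaults:

using HybridVariationalInference, Random

# prob::AbstractHybridProblem and solver::HybridPosteriorSolver are assumed to be
# set up elsewhere; their construction is not part of this docstring.
res = solve(prob, solver;
    scenario = Val(()),            # default scenario
    rng = Random.default_rng())    # default random generator

res.probo    # copy of the HybridProblem with updated optimized parameters
res.θP       # mean global PBM parameters at constrained scale
res.ϕ.unc    # further uncertainty parameters
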
source
HybridVariationalInference.compose_axes - Method
compose_axes(axtuples::NamedTuple)

Create a new 1d-axis that combines several named axes-tuples, such as entries of the form key = getaxes(::AbstractComponentArray).

The new axis consists of several ViewAxes. If an axis-tuple consists of only one axis, that axis is used for the view. Otherwise, a ShapedAxis is created from the lengths of the axes, essentially dropping component information that might be present within the dimensions.
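
A hedged usage sketch, assuming the entries of axtuples come from getaxes of ComponentArrays; since compose_axes is internal, the call itself is shown commented:

using ComponentArrays

cvP = ComponentVector(k1 = 1.0, k2 = 2.0)
cvM = ComponentVector(r0 = [0.1, 0.2], K = 0.5)
axtuples = (P = getaxes(cvP), M = getaxes(cvM))   # NamedTuple of axes-tuples
# ax = HybridVariationalInference.compose_axes(axtuples)
# ComponentVector(vcat(getdata(cvP), getdata(cvM)), ax)  # one vector with views :P and :M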

source
HybridVariationalInference.generate_ζ - Method

Generate samples of (inv-transformed) model parameters, ζ, and the vector of standard deviations, σ, i.e. the diagonal of the Cholesky factor.

Adds the multivariate normally distributed residuals, retrieved by sample_ζresid_norm, to the means extracted from the parameters and predicted by the machine learning model.

The output shape of (n_site x n_par x n_MC) is tailored to iterating over the MC samples and then transforming each parameter as a block across sites.
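
A shape-only sketch of consuming output of this layout; the sizes and the exp transformation are purely illustrative:

n_site, n_par, n_MC = 5, 3, 4
ζs = randn(n_site, n_par, n_MC)       # stands in for the generated samples
for i_MC in axes(ζs, 3)
    ζ = view(ζs, :, :, i_MC)          # (n_site, n_par) block of one MC sample
    θ1 = exp.(view(ζ, :, 1))          # transform parameter 1 across all sites at once
end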

source
HybridVariationalInference.get_loss_elbo - Method

Create a loss function for parameter vector ϕ, given

  • g(x, ϕ): machine learning model
  • transPMS: transformation from unconstrained space to parameter space
  • f(θMs, θP): mechanistic model
  • interpreters: assigning structure to pure vectors, see neg_elbo_gtf
  • n_MC: number of Monte-Carlo samples used to approximate the expected value across the distribution
  • pbm_covars: tuple of symbols of process-based parameters provided to the ML model
  • θP: ComponentVector as a template to select indices of pbm_covars

In addition to ϕ, the loss function takes data that changes with each minibatch (see the sketch after this list):

  • rng: random generator
  • xM: matrix of covariates, sites in columns
  • xP: drivers for the process model: an iterator of size n_site
  • y_o, y_unc: matrix of observations and uncertainties, sites in columns
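
For illustration, minimal stand-ins for the callables g and f listed above, together with a sketch of building and calling the loss; whether these are positional or keyword arguments is an assumption, so those lines are commented out:

g = (xM, ϕg) -> ϕg[1] .* xM                        # hypothetical ML model: covariates -> θMs
f = (θMs, θP) -> vec(sum(θMs; dims = 1)) .+ θP[1]  # hypothetical mechanistic model
# loss = get_loss_elbo(g, transPMS, f, interpreters;
#     n_MC = 3, pbm_covars = (:k1,), θP = θP_template)
# loss(ϕ, rng, xM, xP, y_o, y_unc)                 # minibatch data passed per call
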
source
HybridVariationalInference.neg_elbo_ζtf - Method

Compute the neg_elbo for each sampled parameter vector (last dimension of ζs).

  • Transform and compute log-jac
  • call forward model
  • compute the negative log-density of the joint distribution of predictions and unconstrained parameters, nLjoint, and its components
    • nLy: the negative log-likelihood of the data given the parameters
    • neg_log_prior: the negative log-prior of the parameters at constrained scale
    • logjac: the negative logarithm of the absolute value of the determinant of the Jacobian of the transformation θ = T(ζ)
  • loss_penalty: additional loss terms from floss_penalty
  • compute entropy of transformation
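
Putting the listed components together, a rough sketch of the quantity (reconstructed from the component names above, not a verbatim formula from the code):

neg_elbo ≈ (1 / n_MC) * Σ_i ( nLy_i + neg_log_prior_i + logjac_i ) - entropy + loss_penalty
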
source
HybridVariationalInference.ones_similar_x - Function
ones_similar_x(x::AbstractArray, size_ret = size(x))

Return ones(eltype(x), size_ret). Overload this method for specific AbstractGPUArrays to return the correct container type. See e.g. HybridVariationalInferenceCUDAExt, which calls CUDA.fill to return a CuArray rather than an Array.
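
A sketch of what such a GPU overload might look like; the actual extension code may differ:

using CUDA, HybridVariationalInference

# Assumed shape of the overload; HybridVariationalInferenceCUDAExt provides something similar.
function HybridVariationalInference.ones_similar_x(x::CUDA.CuArray, size_ret = size(x))
    CUDA.fill(one(eltype(x)), size_ret...)
end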

source
HybridVariationalInference.sample_ζresid_norm - Method

Extract relevant parameters from ζ and return n_MC generated multivariate normal draws together with the vector of standard deviations, σ: (ζP_resids, ζMs_parfirst_resids, σ). The output shape (nθ, n_site?, n_MC) is tailored to adding ζMs_parfirst_resids to ML-model predictions of size (nθM, n_site).

Arguments

  • int_unc: interprets the vector as a ComponentVector with components ρsP, ρsM, logσ2_ζP, and coef_logσ2_ζMs (intercept + slope).
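
A shape-only sketch of how the returned Ms residuals combine with ML-model predictions by broadcasting over the MC dimension; the sizes are hypothetical:

nθM, n_site, n_MC = 2, 5, 3
pred_Ms = randn(nθM, n_site)          # stands in for ML-model predictions of the means
resids  = randn(nθM, n_site, n_MC)    # stands in for the returned Ms residual draws
ζMs = pred_Ms .+ resids               # (nθM, n_site, n_MC) samples
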
source
HybridVariationalInference.transformU_block_cholesky1 - Method
transformU_block_cholesky1(v::AbstractVector, cor_ends)

Transform a parameterization v of a block-diagonal of upper triangular matrices into this matrix. cor_ends is an AbstractVector of Integers specifying the last column of each block. E.g. for a matrix with a 3x3, a 2x2, and another single-entry block, the blocks end at columns (3, 5, 6), i.e. cor_ends = [3, 5, 6]. It defaults to a single block spanning the entire matrix.

source
HybridVariationalInference.transformU_cholesky1 - Method

Takes a vector of parameters for a UnitUpperTriangular matrix and transforms it into an UpperTriangular matrix U that satisfies diag(U' * U) = 1.

This can be used to fit parameters that yield an upper Cholesky factor of a covariance matrix.

It uses the upper triangular matrix rather than the lower one because it involves a sum across columns, whereas the lower-triangular alternative uses a sum across rows. Summing across columns is often faster because the entries of a column are contiguous in memory.
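
A conceptual sketch of the column normalization implied by diag(U' * U) = 1; this is not necessarily the package's exact implementation:

using LinearAlgebra

V = Matrix(UnitUpperTriangular([1.0 0.3 0.2; 0.0 1.0 0.4; 0.0 0.0 1.0]))
U = UpperTriangular(V ./ sqrt.(sum(abs2, V; dims = 1)))  # scale each column to unit norm
diag(U' * U)  # ≈ [1.0, 1.0, 1.0]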

source
HybridVariationalInference.transpose_mPMs_sitefirst - Method

Transforms each row of a matrix (n_MC x n_par) with site parameters Ms inside n_par, of the form (n_par x n_site), to Ms of the form (n_site x n_par), i.e. neighboring entries (inside a column) belong to the same parameter.

This format, with n_par as the last dimension, helps transforming the parameters block-wise.
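
A small sketch of the reordering for the Ms block of one row; the sizes are hypothetical:

n_par, n_site = 3, 4
ms_parfirst = reshape(1:(n_par * n_site), n_par, n_site)  # layout (n_par, n_site)
ms_sitefirst = permutedims(ms_parfirst)                   # layout (n_site, n_par)
vec(ms_sitefirst)  # flattened row with entries of the same parameter now contiguous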

source
HybridVariationalInference.vectuptotupvec - Method
vectuptotupvec(vectup)
vectuptotupvec_allowmissing(vectup)

Type-safe conversion from a Vector of Tuples to a Tuple of Vectors. The first variant does not allow for missing in vectup. The second variant allows for missing, but all components of the returned Tuple have eltype Union{Missing, ...}, even when there were no missings in vectup.

Arguments

  • vectup: a Vector of Tuples of identical type

Examples

vectup = [(1, 1.01, "string 1"), (2, 2.02, "string 2")]
HybridVariationalInference.vectuptotupvec_allowmissing(vectup) ==
  ([1, 2], [1.01, 2.02], ["string 1", "string 2"])
source