MeasureBase API
MeasureBase.IntegerLike — Type
Equivalent to Union{Integer,Static.StaticInteger}.
MeasureBase.AbstractMeasure — Method
(m::AbstractMeasure)(s)
Convenience method for massof(m, s). To make a user-defined measure callable in this way, users should add the corresponding massof method.
MeasureBase.Density — Type
struct Density{M,B} <: AbstractDensity
    μ::M
    base::B
end
For measures μ and ν, Density(μ,ν) represents the density function dμ/dν, also called the Radon-Nikodym derivative: https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem#Radon%E2%80%93Nikodym_derivative
Instead of calling this directly, users should call density_rel(μ, ν) or its abbreviated form, 𝒹(μ,ν).
MeasureBase.DensityMeasure — Type
struct DensityMeasure{F,B} <: AbstractDensityMeasure
    density :: F
    base :: B
end
A DensityMeasure is a measure defined by a density or log-density with respect to some other "base" measure.
Users should not call DensityMeasure directly, but should instead call ∫(f, base) (if f is a density function or DensityInterface.IsDensity object) or ∫exp(f, base) (if f is a log-density function).
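As a rough sketch of how this is used (assuming LebesgueBase() as the base measure; the exact base-measure names available may vary by version):
using MeasureBase

# An unnormalized Gaussian, defined via its log-density over Lebesgue measure
m = ∫exp(x -> -x^2 / 2, LebesgueBase())

logdensityof(m, 0.0)   # ≈ 0.0, since the log-density at x = 0 is -0²/2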
MeasureBase.Likelihood — Type
Likelihood(k::AbstractTransitionKernel, x)
"Observe" a value x, yielding a function from the parameters to ℝ.
Likelihoods are most commonly used in conjunction with an existing prior measure to yield a new measure, the posterior. In Bayes's Law, we have
$P(θ|x) ∝ P(θ) P(x|θ)$
Here $P(θ)$ is the prior. If we consider $P(x|θ)$ as a function on $θ$, then it is called a likelihood.
Since measures are most commonly manipulated using density and logdensity, it's awkward to commit a (log-)likelihood to using one or the other. To evaluate a Likelihood, we therefore use density or logdensity, depending on the circumstances. In the latter case, it is of course acting as a log-density.
For example,
julia> ℓ = Likelihood(Normal{(:μ,)}, 2.0)
Likelihood(Normal{(:μ,), T} where T, 2.0)
julia> density_def(ℓ, (μ=2.0,))
1.0
julia> logdensity_def(ℓ, (μ=2.0,))
-0.0
If, as above, the measure includes the parameter information, we can optionally leave it out of the second argument in the call to density or logdensity.
julia> density_def(ℓ, 2.0)
1.0
julia> logdensity_def(ℓ, 2.0)
-0.0
With several parameters, things work as expected:
julia> ℓ = Likelihood(Normal{(:μ,:σ)}, 2.0)
Likelihood(Normal{(:μ, :σ), T} where T, 2.0)
julia> logdensity_def(ℓ, (μ=2, σ=3))
-1.0986122886681098
julia> logdensity_def(ℓ, (2,3))
-1.0986122886681098
julia> logdensity_def(ℓ, [2, 3])
-1.0986122886681098
Likelihood(M<:ParameterizedMeasure, constraint::NamedTuple, x)
In some cases the measure might have several parameters, and we may want the (log-)likelihood with respect to some subset of them. In this case, we can use the three-argument form, where the second argument is a constraint. For example,
julia> ℓ = Likelihood(Normal{(:μ,:σ)}, (σ=3.0,), 2.0)
Likelihood(Normal{(:μ, :σ), T} where T, (σ = 3.0,), 2.0)
Similarly to the above, we have
julia> density_def(ℓ, (μ=2.0,))
0.3333333333333333
julia> logdensity_def(ℓ, (μ=2.0,))
-1.0986122886681098
julia> density_def(ℓ, 2.0)
0.3333333333333333
julia> logdensity_def(ℓ, 2.0)
-1.0986122886681098
Finally, let's return to the expression for Bayes's Law,
$P(θ|x) ∝ P(θ) P(x|θ)$
The product on the right side is computed pointwise. To work with this in MeasureBase, we have a "pointwise product" ⊙, which takes a measure and a likelihood, and returns a new measure, that is, the unnormalized posterior that has density $P(θ) P(x|θ)$ with respect to the base measure of the prior.
For example, say we have
μ ~ Normal()
x ~ Normal(μ,σ)
σ = 1
and we observe x=3. We can compute the posterior measure on μ as
julia> post = Normal() ⊙ Likelihood(Normal{(:μ, :σ)}, (σ=1,), 3)
Normal() ⊙ Likelihood(Normal{(:μ, :σ), T} where T, (σ = 1,), 3)
julia> logdensity_def(post, 2)
-2.5
MeasureBase.LogDensity — Type
struct LogDensity{M,B} <: AbstractDensity
    μ::M
    base::B
end
For measures μ and ν, LogDensity(μ,ν) represents the log-density function log(dμ/dν), the logarithm of the Radon-Nikodym derivative: https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem#Radon%E2%80%93Nikodym_derivative
Instead of calling this directly, users should call logdensity_rel(μ, ν) or its abbreviated form, log𝒹(μ,ν).
MeasureBase.NoArgCheck — Type
MeasureBase.NoArgCheck{MU,T}
Indicates that there is no way to check whether values of type T are variates of measures of type MU.
MeasureBase.NoDOF — Type
MeasureBase.NoDOF{MU}
Indicates that there is no way to compute degrees of freedom of a measure of type MU with the given information, e.g. because the DOF are not a global property of the measure.
MeasureBase.NoTransport — Type
struct MeasureBase.NoTransport{NU,MU} end
Indicates that no transformation from a measure of type MU to a measure of type NU could be found.
MeasureBase.NoTransportOrigin — Type
struct MeasureBase.NoTransportOrigin{NU}
Indicates that no (default) pullback measure is available for measures of type NU.
MeasureBase.NoVolCorr — Type
NoVolCorr()
Indicate that density calculations should ignore the volume element of variate transformations. Should only be used in special cases in which the volume element has already been taken into account in a different way.
MeasureBase.PowerMeasure — Type
struct PowerMeasure{M,...} <: AbstractProductMeasure
A power measure is a product of a measure with itself. The number of elements in the product determines the dimensionality of the resulting support.
Note that power measures are only well-defined for integer powers.
The nth power of a measure μ can be written μ^n.
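A minimal sketch (assuming StdUniform, one of the standard measures mentioned below, and that rand is supported for its powers):
using MeasureBase

μ = StdUniform()^3     # three i.i.d. standard-uniform coordinates
x = rand(μ)            # a length-3 variate
logdensityof(μ, x)     # 0.0, since each coordinate has log-density zero on the unit interval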
MeasureBase.PrimitiveMeasure — Type
abstract type PrimitiveMeasure <: AbstractMeasure end
In the MeasureTheory ecosystem, a primitive measure is a measure for which the definition and construction do not depend on any other measure. Primitive measures satisfy the following laws:
basemeasure(μ::PrimitiveMeasure) = μ
logdensity_def(μ::PrimitiveMeasure, x) = 0.0
logdensity_def(μ::M, ν::M, x) where {M<:PrimitiveMeasure} = 0.0
MeasureBase.PushforwardMeasure — Type
struct PushforwardMeasure{F,I,M,VC<:TransformVolCorr} <: AbstractPushforward
    f :: F
    finv :: I
    origin :: M
    volcorr :: VC
end
Users should not call `PushforwardMeasure` directly. Instead, call or add methods to `pushfwd`.
MeasureBase.SuperpositionMeasure — Type
struct SuperpositionMeasure{NT} <: AbstractMeasure
    components :: NT
end
Superposition of measures is analogous to mixture distributions, but (because measures need not be normalized) requires no scaling. The superposition of two measures μ and ν can be more concisely written as μ + ν. Superposition measures satisfy
basemeasure(μ + ν) == basemeasure(μ) + basemeasure(ν)
\[ \begin{aligned}\frac{\mathrm{d}(\mu+\nu)}{\mathrm{d}(\alpha+\beta)} & =\frac{f\,\mathrm{d}\alpha+g\,\mathrm{d}\beta}{\mathrm{d}\alpha+\mathrm{d}\beta}\\ & =\frac{f\,\mathrm{d}\alpha}{\mathrm{d}\alpha+\mathrm{d}\beta}+\frac{g\,\mathrm{d}\beta}{\mathrm{d}\alpha+\mathrm{d}\beta}\\ & =\frac{f}{1+\frac{\mathrm{d}\beta}{\mathrm{d}\alpha}}+\frac{g}{\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}+1}\\ & =\frac{f}{1+\left(\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}\right)^{-1}}+\frac{g}{\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}+1}\ . \end{aligned}\]
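A small sketch of the operation (assuming the standard measures StdUniform and StdExponential; the base-measure identity in the comment simply restates the law quoted above):
using MeasureBase

μ = StdExponential()
ν = StdUniform()
m = μ + ν      # a SuperpositionMeasure with two components

# the base-measure law stated above
basemeasure(m) == basemeasure(μ) + basemeasure(ν)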
MeasureBase.TransformVolCorr — Type
abstract type TransformVolCorr
Provides control over density correction by transform volume element. Either NoVolCorr() or WithVolCorr().
MeasureBase.TransportFunction — Type
struct TransportFunction <: Function
Transforms a variate from one measure to a variate of another.
In general, TransportFunction should not be called directly; call transport_to instead.
MeasureBase.UnknownFiniteMass — Type
struct UnknownFiniteMass <: AbstractUnknownMass end
See massof.
MeasureBase.UnknownMass — Type
struct UnknownMass <: AbstractUnknownMass end
See massof.
MeasureBase.WithVolCorr — Type
WithVolCorr()
Indicate that density calculations should take the volume element of variate transformations into account (typically via the log-abs-det-Jacobian of the transform).
Base.:| — Method
(m::AbstractMeasure) | constraint
Return a new measure by constraining m to satisfy constraint.
Note that the form of constraint will vary depending on the structure of a given measure. For example, a measure over NamedTuples may allow NamedTuple constraints, while another may require constraint to be a predicate or a function returning a real number (in which case the constraint could be considered as the zero-set of that function).
At the time of this writing, invariants required of this function are not yet settled. Specifically, there's the question of normalization. It's common for conditional distributions to be normalized, but this can often not be expressed in closed form, and can be very expensive to compute. For more general measures, the notion of normalization may not even make sense.
Because of this, this interface is not yet stable, and users should expect upcoming changes.
DensityInterface.logdensityof — Method
logdensityof(m::AbstractMeasure, x)
Compute the log-density of the measure m at x. Density is always relative, but DensityInterface.jl does not account for this. For compatibility with this, logdensityof for a measure is always implicitly relative to rootmeasure(m).
logdensityof works by first computing insupport(m, x). If this is true, then unsafe_logdensityof is called. If insupport(m, x) is known to be true, it can be a little faster to directly call unsafe_logdensityof(m, x).
To compute log-density relative to basemeasure(m) or define a log-density (relative to basemeasure(m) or another measure given explicitly), see logdensity_def.
To compute a log-density relative to a specific base measure, see logdensity_rel.
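A brief sketch of how these functions relate (assuming StdExponential; the value of logdensity_def depends on the measure's base-measure decomposition):
using MeasureBase

m = StdExponential()
x = 1.0

insupport(m, x)             # true
logdensityof(m, x)          # log-density relative to rootmeasure(m); here -1.0
unsafe_logdensityof(m, x)   # the same value, but skips the insupport check
logdensity_def(m, x)        # log-density relative to basemeasure(m)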
MeasureBase.:↣ — Method
If
- μ is an AbstractMeasure or satisfies the Measure interface, and
- k is a function taking values from the support of μ and returning a measure,
then μ ↣ k is a measure, called a monadic bind. In a probabilistic programming language like Soss.jl, this could be expressed as
bind = @model μ,k begin
    x ~ μ
    y ~ k(x)
    return y
end
Note that bind is usually written >>=, but this symbol is unavailable in Julia.
See also bind and Bind.
MeasureBase.:⊗ — Method
⊗(μs::AbstractMeasure...)
⊗ is a binary operator for building product measures. This satisfies the law
basemeasure(μ ⊗ ν) == basemeasure(μ) ⊗ basemeasure(ν)
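For example (a minimal sketch, assuming rand is supported for both factors):
using MeasureBase

μ = StdUniform() ⊗ StdExponential()   # a product measure over pairs
xy = rand(μ)
logdensityof(μ, xy)                   # sum of the factors' log-densities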
MeasureBase.basekernel — Function
For any k::TransitionKernel, basekernel is expected to satisfy
basekernel(k)(p) == (basemeasure ∘ k)(p)
The main purpose of basekernel is to make it efficient to compute
basemeasure(d::ProductMeasure) == productmeasure(basekernel(d.f), d.xs)
MeasureBase.basemeasure_sequence — Method
basemeasure_sequence(m)
Construct the longest Tuple starting with m, having each term as the base measure of the previous term, and with no repeated entries.
MeasureBase.check_dof — Function
MeasureBase.check_dof(ν, μ)::Nothing
Check whether ν and μ have the same effective number of degrees of freedom according to MeasureBase.getdof.
MeasureBase.checked_arg — Function
MeasureBase.checked_arg(μ::MU, x::T)::T
Return x if x is a valid variate of μ, throw an ArgumentError if not, and return NoArgCheck{MU,T}() if no check can be performed.
MeasureBase.commonbase — Method
commonbase(μ, ν, T) -> Tuple{StaticInt{i}, StaticInt{j}}
Find minimal (with respect to their sum) i and j such that there is a method
logdensity_def(basemeasure_sequence(μ)[i], basemeasure_sequence(ν)[j], ::T)
This is used in logdensity_rel to help make that function efficient.
MeasureBase.fill_with — Function
MeasureBase.fill_with(x, sz::NTuple{N,<:IntegerLike}) where N
Creates an array of size sz filled with x. Returns an instance of FillArrays.Fill.
MeasureBase.from_origin — Function
MeasureBase.from_origin(ν, x)
Push x from MeasureBase.transport_origin(ν) forward to ν.
MeasureBase.getdof — Function
getdof(μ)
Returns the effective number of degrees of freedom of variates of measure μ.
The effective NDOF may differ from the length of the variates. For example, the effective NDOF for a Dirichlet distribution with variates of length n is n - 1.
Also see check_dof.
MeasureBase.insupport — Function
insupport(m, x)
insupport(m)
insupport(m, x) computes whether x is in the support of m.
insupport(m) returns a function, and satisfies
insupport(m)(x) == insupport(m, x)
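For example (a sketch using StdUniform, whose support is the unit interval):
using MeasureBase

insupport(StdUniform(), 0.5)        # true
insupport(StdUniform(), 2.0)        # false

in_unit = insupport(StdUniform())   # curried form
in_unit(0.5)                        # true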
MeasureBase.isnormalized — Function
isnormalized(x, p::Real=2)
Check whether norm(x, p) == 1.
MeasureBase.isnormalized — Method
isnormalized(m::AbstractMeasure)
Checks whether the measure m is normalized, that is, whether massof(m) == 1.
For convenience, we also provide a method on non-measures that only depends on norm.
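For example, using the norm-based method on plain vectors:
using MeasureBase

isnormalized([1.0, 0.0])        # true: the 2-norm is exactly 1
isnormalized([1.0, 1.0])        # false: the 2-norm is √2
isnormalized([0.5, 0.5], 1)     # true: the 1-norm is 1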
MeasureBase.kernel — Function
A kernel is a function that returns a measure.
k1 = kernel() do x
Normal(x, x^2)
end
k2 = kernel(Normal) do x
(μ = x, σ = x^2)
end
k3 = kernel(Normal; μ = identity, σ = abs2)
k4 = kernel(Normal; μ = first, σ = last) do x
(x, x^2)
end
x = randn(); k1(x) == k2(x) == k3(x) == k4(x)
This function is not exported, because "kernel" can have so many other meanings. See for example https://github.com/JuliaGaussianProcesses/KernelFunctions.jl for another common use of this term.
Reference
- https://en.wikipedia.org/wiki/Markov_kernel
MeasureBase.likelihood_ratio — Method
likelihood_ratio(ℓ::Likelihood, p, q)
Compute the likelihood ratio, in order to compare two choices for parameters. This is equal to
density_rel(ℓ.k(p), ℓ.k(q), ℓ.x)
but is computed using LogarithmicNumbers.jl to avoid underflow and overflow. Since density_rel can leave the common base measure unevaluated, this can be more efficient than
logdensityof(ℓ.k(p), ℓ.x) - logdensityof(ℓ.k(q), ℓ.x)
MeasureBase.likelihoodof — Function
likelihoodof(k::AbstractTransitionKernel, x; constraints...)
likelihoodof(k::AbstractTransitionKernel, x, constraints::NamedTuple)
A likelihood is not a measure. Rather, a likelihood acts on a measure, through the "pointwise product" ⊙, yielding another measure.
MeasureBase.log_likelihood_ratio — Method
log_likelihood_ratio(ℓ::Likelihood, p, q)
Compute the log of the likelihood ratio, in order to compare two choices for parameters. This is computed as
logdensity_rel(ℓ.k(p), ℓ.k(q), ℓ.x)
Since logdensity_rel can leave the common base measure unevaluated, this can be more efficient than
logdensityof(ℓ.k(p), ℓ.x) - logdensityof(ℓ.k(q), ℓ.x)
MeasureBase.logdensity_def — Function
logdensity_def is the standard way to define a log-density for a new measure. Note that this definition does not include checking for membership in the support; this is instead checked using insupport. logdensity_def is a low-level function, and should typically not be called directly. See logdensityof for more information and other alternatives.
logdensity_def(m, x)
Compute the log-density of the measure m at the point x, relative to basemeasure(m), and assuming insupport(m, x).
logdensity_def(m1, m2, x)
Compute the log-density of m1 relative to m2 at the point x, assuming insupport(m1, x) and insupport(m2, x).
MeasureBase.logdensity_rel — Method
logdensity_rel(m1, m2, x)
Compute the log-density of m1 relative to m2 at x. This function checks whether x is in the support of m1 or m2 (or both, or neither). If x is known to be in the support of both, it can be more efficient to call unsafe_logdensity_rel.
MeasureBase.log𝒹 — Method
log𝒹(μ, base)
Compute the log-density (the log of the Radon-Nikodym derivative) of μ with respect to base. This is a shorthand form for logdensity_rel(μ, base).
MeasureBase.massof — Method
massof(m)
Get the mass of a measure, that is, integrate the measure over its support.
massof(m, dom)
Integrate the measure m over the "domain" dom. Note that domains are not defined universally, but may be specific to a given measure. If m is <:AbstractMeasure, users can also write m(dom). For new measures, users should not add new "call" methods, but instead extend MeasureBase.massof.
For example, for many univariate measures m with rootmeasure(m) == LebesgueBase(), users can call massof(m, a_b) where a_b::IntervalSets.Interval.
massof often returns a Real. But in many cases we may only know the mass is finite, or we may know nothing at all about it. For these cases, it will return UnknownFiniteMass or UnknownMass, respectively. When no massof method exists, it defaults to UnknownMass.
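A sketch of the interval form (assuming IntervalSets for the a..b syntax and a measure rooted in LebesgueBase(); whether this particular method exists depends on the measures loaded):
using MeasureBase
using IntervalSets

m = StdExponential()
massof(m)          # total mass; 1 (possibly as a static number) for a probability measure
massof(m, 0..1)    # mass of the interval, here 1 - exp(-1), if this method is defined for m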
MeasureBase.maybestatic_length — Method
MeasureBase.maybestatic_length(x)::IntegerLike
Returns the length of x as a dynamic or static integer.
MeasureBase.maybestatic_size — Method
MeasureBase.maybestatic_size(x)::Tuple{Vararg{IntegerLike}}
Returns the size of x as a tuple of dynamic or static integers.
MeasureBase.one_to — Method
MeasureBase.one_to(n::IntegerLike)
Creates a range from one to n. Returns an instance of Base.OneTo or Static.SOneTo, depending on the type of n.
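A quick sketch of these helpers (assuming Static.jl is available for the static-integer case):
using MeasureBase
using Static: static

MeasureBase.one_to(3)                # Base.OneTo(3)
MeasureBase.one_to(static(3))        # a static range (Static.SOneTo)

MeasureBase.fill_with(1.5, (2, 3))   # a 2×3 FillArrays.Fill of 1.5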
MeasureBase.paramnames — Function
paramnames(μ)
returns the names of the parameters of μ. This is equivalent to
paramnames(μ) == (keys ∘ params)(μ)
but depends only on the type. In particular, the default implementation is
paramnames(μ::M) where {M} = paramnames(M)
New ParameterizedMeasures will automatically have a paramnames method. For other measures, this method is optional, but can be added by defining
paramnames(::Type{M}) where {M} = ...
See also params.
MeasureBase.params — Function
params(μ)
returns the parameters of a measure μ, as a NamedTuple. The default method is
params(μ) = NamedTuple()
See also paramnames.
MeasureBase.proxy — Function
function proxy end
It's often useful to delegate methods like logdensity and basemeasure to those of a different measure. For example, a Normal{(:μ,:σ)} is equivalent to an affine transformation of a Normal{()}.
We could just have calls like Normal(μ=2,σ=4) directly construct a transformed measure, but this would make dispatch awkward.
MeasureBase.pullback — Function
pullback(f, μ, volcorr = WithVolCorr())
A pullback is a dual concept to a pushforward. While a pushforward needs a map from the support of a measure, a pullback requires a map into the support of a measure. The log-density is then computed through function composition, together with a volume correction as needed.
This can be useful, since the log-density of a PushforwardMeasure is computed in terms of the inverse function; the "forward" function is not used at all. In some cases, we may be focusing on log-density (and not, for example, sampling).
To manually specify an inverse, call pullback(InverseFunctions.setinverse(f, finv), μ, volcorr).
MeasureBase.pushfwd — Function
pushfwd(f, μ, volcorr = WithVolCorr())
Return the pushforward measure from μ under the measurable function f.
To manually specify an inverse, call pushfwd(InverseFunctions.setinverse(f, finv), μ, volcorr).
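A minimal sketch (assuming InverseFunctions is available; exp already has a registered inverse, so no setinverse is needed for it):
using MeasureBase
using InverseFunctions: setinverse

μ = StdExponential()

ν = pushfwd(exp, μ)          # pushforward of μ through exp
logdensityof(ν, 2.0)         # log-density of the pushforward at 2.0

# For a map without a registered inverse, supply one explicitly:
f = setinverse(x -> 2x + 1, y -> (y - 1) / 2)
ν2 = pushfwd(f, μ)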
MeasureBase.rebase — Method
rebase(μ, ν)
Express μ in terms of a density over ν. Satisfies
basemeasure(rebase(μ, ν)) == ν
density(rebase(μ, ν)) == 𝒹(μ,ν)
MeasureBase.require_insupport — Function
MeasureBase.require_insupport(μ, x)::Nothing
Checks whether x is in the support of the distribution/measure μ, and throws an ArgumentError if not.
MeasureBase.rootmeasure — Method
rootmeasure(μ::AbstractMeasure)
It's sometimes important to be able to find the fixed point of a measure under basemeasure. That is, to start with some measure and apply basemeasure repeatedly until there's no change. That's what this does.
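Conceptually (a hypothetical re-implementation for illustration only, not the library's actual definition):
using MeasureBase

# Hypothetical sketch: iterate basemeasure until it stops changing the measure.
function rootmeasure_sketch(μ)
    ν = basemeasure(μ)
    while ν !== μ            # primitive measures return themselves from basemeasure
        μ, ν = ν, basemeasure(ν)
    end
    return μ
end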
MeasureBase.schema — Function
schema(::Type)
schema turns a type into a value that's easier to work with. Example:
julia> nt = (a=(b=[1,2],c=(d=[3,4],e=[5,6])),f=[7,8]);

julia> NT = typeof(nt)
NamedTuple{(:a, :f),Tuple{NamedTuple{(:b, :c),Tuple{Array{Int64,1},NamedTuple{(:d, :e),Tuple{Array{Int64,1},Array{Int64,1}}}}},Array{Int64,1}}}

julia> schema(NT)
(a = (b = Array{Int64,1}, c = (d = Array{Int64,1}, e = Array{Int64,1})), f = Array{Int64,1})
MeasureBase.smf — Function
smf(μ, x::Real)::Real
Compute the Stieltjes measure function (SMF) of the measure μ at the point x.
The SMF is the measure-theoretic generalization of the cumulative distribution function (CDF) from probability theory. An SMF F(x) = smf(μ, x) must have the following properties:
- F is nondecreasing
- F is right-continuous: F(x) should be the same as lim_{δ→0} F(x + |δ|).
- μ((a,b]) = F(b) - F(a)
Note that unlike the CDF, an SMF is only determined up to addition of a constant. For many applications, this leads to a need to evaluate an SMF at -∞. It's therefore important that smf(μ, -Inf) be fast. In practice, this will usually be called as smf(μ, static(-Inf)). It's then easy to ensure speed and avoid complex control flow by adding a method smf(μ::M, ::StaticFloat64{-Inf}).
Users who pronounce sinh as "sinch" are advised to pronounce smf as "smurf".
MeasureBase.to_origin — Function
MeasureBase.to_origin(ν, y)
Pull y from ν back to MeasureBase.transport_origin(ν).
MeasureBase.transport_def — Function
transport_def(ν, μ, x)
Transforms a value x distributed according to μ to a value y distributed according to ν.
If no specialized transport_def(::MU, ::NU, ...) is available, then the default implementation of transport_def(ν, μ, x) uses the following strategy:
- Evaluate transport_origin for μ and ν. Transform between each and its origin, if available, and use the origin(s) as intermediate measures for another transformation.
- If all else fails, try to transform from μ to a standard multivariate uniform measure and then to ν.
See transport_to.
MeasureBase.transport_origin — Function
MeasureBase.transport_origin(ν)
Default measure to pull back to, resp. push forward from, when transforming between ν and another measure.
MeasureBase.transport_to — Function
f = transport_to(ν, μ)
Generates a measurable function f that transforms a value x distributed according to measure μ to a value y = f(x) distributed according to a measure ν.
The pushforward measure from μ under f is equivalent to ν.
In terms of random values, this implies that f(rand(μ)) is equivalent to rand(ν) (if rand(μ) and rand(ν) are supported).
The resulting function f should support ChangesOfVariables.with_logabsdet_jacobian(f, x) if mathematically well-defined, so that densities of ν can be derived from densities of μ via f (using appropriate base measures).
Returns NoTransport{typeof(ν),typeof(μ)}() if no transformation from μ to ν can be found.
To add transformation rules for a measure type MyMeasure, specialize
MeasureBase.transport_def(ν::SomeStdMeasure, μ::CustomMeasure, x) = ...
MeasureBase.transport_def(ν::MyMeasure, μ::SomeStdMeasure, x) = ...
and/or
MeasureBase.transport_origin(ν::MyMeasure) = SomeMeasure(...)
MeasureBase.from_origin(μ::MyMeasure, x) = y
MeasureBase.to_origin(μ::MyMeasure, y) = x
and ensure MeasureBase.getdof(μ::MyMeasure) is defined correctly.
A standard measure type like StdUniform, StdExponential or StdLogistic may also be used as the source or target of the transform:
transport_to(StdUniform, μ)
transport_to(ν, StdUniform)
Depending on getdof(μ) (resp. ν), an instance of the standard distribution itself or a power of it (e.g. StdUniform() or StdUniform()^dof) will be chosen as the transformation partner.
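A brief sketch (assuming rand is supported for the measures involved):
using MeasureBase

# Map standard-exponential variates to standard-uniform variates.
f = transport_to(StdUniform(), StdExponential())

x = rand(StdExponential())
y = f(x)                      # distributed according to StdUniform()
insupport(StdUniform(), y)    # true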
MeasureBase.transport_to — Method
transport_to(ν, μ, x)
Transport x from the measure μ to the measure ν.
MeasureBase.unsafe_logdensity_rel — Method
unsafe_logdensity_rel(m1, m2, x)
Compute the log-density of m1 relative to m2 at x, assuming x is known to be in the support of both m1 and m2.
See also logdensity_rel.
MeasureBase.unsafe_logdensityof — Method
unsafe_logdensityof(m, x)
Compute the log-density of the measure m at x relative to rootmeasure(m). This is "unsafe" because it does not check insupport(m, x).
See also logdensityof.
MeasureBase.∫ — Method
∫(f, base::AbstractMeasure)
Define a new measure in terms of a density f over some measure base.
MeasureBase.∫exp — Method
∫exp(f, base::AbstractMeasure)
Define a new measure in terms of a log-density f over some measure base.
MeasureBase.𝒹 — Method
𝒹(μ, base)
Compute the density (Radon-Nikodym derivative) of μ with respect to base. This is a shorthand form for density_rel(μ, base).