MeasureBase

Documentation for MeasureBase.

MeasureBase.Density (Type)
struct Density{M,B}
    μ::M
    base::B
end

For measures μ and ν with μ≪ν, the density of μ with respect to ν (also called the Radon-Nikodym derivative dμ/dν) is a function f defined on the support of ν with the property that for any measurable a ⊂ supp(ν), μ(a) = ∫ₐ f dν.

Because this function is often difficult to express in closed form, there are many different ways of computing it. We therefore provide a formal representation to allow computational flexibility.
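
For instance (a hypothetical sketch; it assumes Normal from MeasureTheory and the Lebesgue base measure exported by MeasureBase), the struct just records the pair of measures, deferring any computation:

d = MeasureBase.Density(Normal(), Lebesgue(ℝ))   # formal dμ/dν; nothing is computed yet
d.μ      # Normal()
d.base   # Lebesgue(ℝ)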

MeasureBase.DensityMeasure (Type)
struct DensityMeasure{F,B} <: AbstractMeasure
    density :: F
    base    :: B
end

A DensityMeasure is a measure defined by a density with respect to some other "base" measure.
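
For example (a sketch; it assumes that ∫, documented below, returns a DensityMeasure over the given base):

ν = ∫(x -> exp(-abs(x)), Lebesgue(ℝ))   # density exp(-|x|) with respect to Lebesgue measure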

MeasureBase.Likelihood (Type)
Likelihood(M<:ParameterizedMeasure, x)

"Observe" a value x, yielding a function from the parameters to ℝ.

Likelihoods are most commonly used in conjunction with an existing prior measure to yield a new measure, the posterior. In Bayes's Law, we have

$P(θ|x) ∝ P(θ) P(x|θ)$

Here $P(θ)$ is the prior. If we consider $P(x|θ)$ as a function of $θ$, then it is called a likelihood.

Since measures are most commonly manipulated using density and logdensity, it's awkward to commit a (log-)likelihood to using one or the other. To evaluate a Likelihood, we therefore use density or logdensity, depending on the circumstances. In the latter case, it is of course acting as a log-density.

For example,

julia> ℓ = Likelihood(Normal{(:μ,)}, 2.0)
Likelihood(Normal{(:μ,), T} where T, 2.0)

julia> density(ℓ, (μ=2.0,))
1.0

julia> logdensity(ℓ, (μ=2.0,))
-0.0

If, as above, the measure type carries the parameter names, we can optionally drop the names and pass a bare value as the second argument to density or logdensity.

julia> density(ℓ, 2.0)
1.0

julia> logdensity(ℓ, 2.0)
-0.0

With several parameters, things work as expected:

julia> ℓ = Likelihood(Normal{(:μ,:σ)}, 2.0)
Likelihood(Normal{(:μ, :σ), T} where T, 2.0)

julia> logdensity(ℓ, (μ=2, σ=3))
-1.0986122886681098

julia> logdensity(ℓ, (2,3))
-1.0986122886681098

julia> logdensity(ℓ, [2, 3])
-1.0986122886681098

Likelihood(M<:ParameterizedMeasure, constraint::NamedTuple, x)

In some cases the measure might have several parameters, and we may want the (log-)likelihood with respect to some subset of them. In this case, we can use the three-argument form, where the second argument is a constraint. For example,

julia> ℓ = Likelihood(Normal{(:μ,:σ)}, (σ=3.0,), 2.0)
Likelihood(Normal{(:μ, :σ), T} where T, (σ = 3.0,), 2.0)

Similarly to the above, we have

julia> density(ℓ, (μ=2.0,))
0.3333333333333333

julia> logdensity(ℓ, (μ=2.0,))
-1.0986122886681098

julia> density(ℓ, 2.0)
0.3333333333333333

julia> logdensity(ℓ, 2.0)
-1.0986122886681098

Finally, let's return to the expression for Bayes's Law,

$P(θ|x) ∝ P(θ) P(x|θ)$

The product on the right side is computed pointwise. To work with this in MeasureBase, we have a "pointwise product" ⊙, which takes a measure and a likelihood and returns a new measure: the unnormalized posterior, with density $P(θ) P(x|θ)$ with respect to the base measure of the prior.

For example, say we have

σ = 1
μ ~ Normal()
x ~ Normal(μ, σ)

and we observe x=3. We can compute the posterior measure on μ as

julia> post = Normal() ⊙ Likelihood(Normal{(:μ, :σ)}, (σ=1,), 3)
Normal() ⊙ Likelihood(Normal{(:μ, :σ), T} where T, (σ = 1,), 3)

julia> logdensity(post, 2)
-2.5
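
As a consistency check (a sketch using only the calls shown above): the posterior log-density should be the sum of the prior's log-density and the likelihood's, both relative to the prior's base measure.

julia> logdensity(Normal(), 2) + logdensity(Likelihood(Normal{(:μ, :σ)}, (σ=1,), 3), 2)
-2.5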

MeasureBase.SuperpositionMeasure (Type)
struct SuperpositionMeasure{X,NT} <: AbstractMeasure
    components :: NT
end

Superposition of measures is analogous to mixture distributions, but (because measures need not be normalized) requires no scaling. The superposition of two measures μ and ν can be written more concisely as μ + ν. Superposition measures satisfy

basemeasure(μ + ν) == basemeasure(μ) + basemeasure(ν)

Writing α = basemeasure(μ) and β = basemeasure(ν), with densities f = dμ/dα and g = dν/dβ, the density of the superposition follows from

\[ \begin{aligned}\frac{\mathrm{d}(\mu+\nu)}{\mathrm{d}(\alpha+\beta)} & =\frac{f\,\mathrm{d}\alpha+g\,\mathrm{d}\beta}{\mathrm{d}\alpha+\mathrm{d}\beta}\\ & =\frac{f\,\mathrm{d}\alpha}{\mathrm{d}\alpha+\mathrm{d}\beta}+\frac{g\,\mathrm{d}\beta}{\mathrm{d}\alpha+\mathrm{d}\beta}\\ & =\frac{f}{1+\frac{\mathrm{d}\beta}{\mathrm{d}\alpha}}+\frac{g}{\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}+1}\\ & =\frac{f}{1+\left(\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}\right)^{-1}}+\frac{g}{\frac{\mathrm{d}\alpha}{\mathrm{d}\beta}+1}\ . \end{aligned}\]
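
A small sketch of the base-measure identity (it assumes MeasureTheory's Normal; + on measures denotes superposition):

μ = Normal()
ν = Normal(2, 1)
s = μ + ν                                            # a SuperpositionMeasure
basemeasure(s) == basemeasure(μ) + basemeasure(ν)    # expected to hold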

MeasureBase.For (Method)
For(f, base...)

For provides a convenient way to construct a ProductMeasure. There are several options for the base. With Julia's do notation, this can look very similar to a standard for loop, while maintaining a semantic structure that's easier to work with.


For(f, base::Int...)

When one or several Int values are passed for base, the result is treated as depending on CartesianIndices(base).

julia> For(3) do λ Exponential(λ) end |> marginals
3-element mappedarray(MeasureBase.var"#17#18"{var"#15#16"}(var"#15#16"()), ::CartesianIndices{1, Tuple{Base.OneTo{Int64}}}) with eltype Exponential{(:λ,), Tuple{Int64}}:
 Exponential(λ = 1,)
 Exponential(λ = 2,)
 Exponential(λ = 3,)
julia> For(4,3) do μ,σ Normal(μ,σ) end |> marginals
4×3 mappedarray(MeasureBase.var"#17#18"{var"#11#12"}(var"#11#12"()), ::CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}) with eltype Normal{(:μ, :σ), Tuple{Int64, Int64}}:
 Normal(μ = 1, σ = 1)  Normal(μ = 1, σ = 2)  Normal(μ = 1, σ = 3)
 Normal(μ = 2, σ = 1)  Normal(μ = 2, σ = 2)  Normal(μ = 2, σ = 3)
 Normal(μ = 3, σ = 1)  Normal(μ = 3, σ = 2)  Normal(μ = 3, σ = 3)
 Normal(μ = 4, σ = 1)  Normal(μ = 4, σ = 2)  Normal(μ = 4, σ = 3)

For(f, base::AbstractArray...)

In this case, base behaves as if the arrays are zipped together before applying the map.

julia> For(randn(3)) do x Exponential(x) end |> marginals
3-element mappedarray(x->Main.Exponential(x), ::Vector{Float64}) with eltype Exponential{(:λ,), Tuple{Float64}}:
 Exponential(λ = -0.268256,)
 Exponential(λ = 1.53044,)
 Exponential(λ = -1.08839,)
julia> For(1:3, 1:3) do μ,σ Normal(μ,σ) end |> marginals
3-element mappedarray((:μ, :σ)->Main.Normal(μ, σ), ::UnitRange{Int64}, ::UnitRange{Int64}) with eltype Normal{(:μ, :σ), Tuple{Int64, Int64}}:
 Normal(μ = 1, σ = 1)
 Normal(μ = 2, σ = 2)
 Normal(μ = 3, σ = 3)

For(f, base::Base.Generator)

For Generators, the function maps over the values of the generator:

julia> For(eachrow(rand(4,2))) do x Normal(x[1], x[2]) end |> marginals |> collect
4-element Vector{Normal{(:μ, :σ), Tuple{Float64, Float64}}}:
 Normal(μ = 0.255024, σ = 0.570142)
 Normal(μ = 0.970706, σ = 0.0776745)
 Normal(μ = 0.731491, σ = 0.505837)
 Normal(μ = 0.563112, σ = 0.98307)

MeasureBase.basekernel (Function)

For any k::Kernel, basekernel is expected to satisfy

basemeasure(k(p)) == basekernel(k)(p)

The main purpose of basekernel is to make it efficient to compute

basemeasure(d::ProductMeasure) = productmeasure(basekernel(d.f), d.pars)

MeasureBase.kernel (Function)
kernel(f, M)
kernel((f1, f2, ...), M)

A kernel κ = kernel(f, M) returns a wrapper around a function f giving the parameters for a measure of type M, such that κ(x) = M(f(x)...), or κ(x) = M(f1(x), f2(x), ...) for the tuple form.

If the argument is a named tuple (a=f1, b=f2), κ(x) is defined as M(a=f1(x), b=f2(x)).
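
For example (a sketch; it assumes MeasureTheory's Normal, and also illustrates the basekernel law from the previous entry):

κ = kernel(x -> (x, 1.0), Normal)             # κ(x) = Normal(x, 1.0)
κ(2.0)                                        # Normal(μ = 2.0, σ = 1.0)
basemeasure(κ(2.0)) == basekernel(κ)(2.0)     # expected to hold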

Reference

  • https://en.wikipedia.org/wiki/Markov_kernel

MeasureBase.logdensity (Function)
logdensity(μ::AbstractMeasure{X}, x::X)

Compute the logdensity of the measure μ at the point x. This is the standard way to define logdensity for a new measure. The base measure is implicit here, and is understood to be basemeasure(μ).

Methods for computing densities relative to other base measures are provided separately; see 𝒹 below.
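
For example (matching the convention in the Likelihood examples above, where normalization constants live in the base measure):

julia> logdensity(Normal(), 2.0)
-2.0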

MeasureBase.rootmeasure (Method)
rootmeasure(μ::AbstractMeasure)

It's sometimes important to be able to find the fixed point of a measure under basemeasure. That is, to start with some measure and apply basemeasure repeatedly until there's no change. That's what rootmeasure does.
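
For example (a sketch; it assumes MeasureTheory's Normal, whose chain of base measures ends at Lebesgue measure):

julia> rootmeasure(Normal())
Lebesgue(ℝ)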

MeasureBase.∫ (Method)
∫(f, base::AbstractMeasure)

Define a new measure in terms of a density f over some measure base.
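
For example (Lebesgue and ℝ are exported by MeasureBase):

ν = ∫(x -> exp(-x^2), Lebesgue(ℝ))   # the measure with density exp(-x²) over Lebesgue measure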

MeasureBase.∫exp (Method)
∫exp(f, base::AbstractMeasure; log=false)

Define a new measure in terms of a log-density f over some measure base.
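
For example, this defines the same measure as the ∫ example above, but in terms of the log-density:

ν = ∫exp(x -> -x^2, Lebesgue(ℝ))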

MeasureBase.𝒹 (Method)
𝒹(μ::AbstractMeasure, base::AbstractMeasure; log=false)

Compute the Radon-Nikodym derivative (or its log, if log=true) of μ with respect to base.
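
For example (a sketch; it assumes the returned density object is callable):

f = 𝒹(∫exp(x -> -x^2, Lebesgue(ℝ)), Lebesgue(ℝ); log=true)
f(1.0)   # expected ≈ -1.0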

MeasureBase.@domain (Macro)
@domain(name, T)

Defines a new singleton struct T, along with a value name as its instance.

For example, @domain ℝ RealNumbers is equivalent to

struct RealNumbers <: AbstractDomain end

export ℝ

ℝ = RealNumbers()

Base.show(io::IO, ::RealNumbers) = print(io, "ℝ")

MeasureBase.@half (Macro)
@half dist([paramnames])

Starting from a symmetric univariate measure dist ≪ Lebesgue(ℝ), create a new measure Halfdist ≪ Lebesgue(ℝ₊). For example,

@half Normal()

creates HalfNormal(), and

@half StudentT(ν)

creates HalfStudentT(ν).

MeasureBase.@parameterized (Macro)
@parameterized <declaration>

The <declaration> gives a measure and its default parameters, and specifies its relation to its base measure. For example,

@parameterized Normal(μ,σ)

declares that Normal is a measure with default parameters μ and σ. The result is equivalent to

struct Normal{N,T} <: ParameterizedMeasure{N}
    par :: NamedTuple{N,T}
end

KeywordCalls.@kwstruct Normal(μ,σ)

Normal(μ,σ) = Normal((μ=μ, σ=σ))

See KeywordCalls.jl for details on @kwstruct.
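
Given this declaration, construction works by keyword or by position (a sketch; the positional method is the last line of the expansion above):

Normal(μ = 1.0, σ = 2.0)   # keyword form, via @kwstruct
Normal(1.0, 2.0)           # positional form: Normal((μ = 1.0, σ = 2.0))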
