SG++-Doxygen-Documentation
python.uq.dists.Dist.Dist Class Reference
Inheritance diagram for python.uq.dists.Dist.Dist:

## Public Member Functions

def cdf (self, p, args, kws)

def corrcoeff (self, covMatrix=None)

def cov (self)

def crossEntropy (self, samples)

def fromJson (cls, jsonObject)

def getBounds (self)

def getDim (self)

def klDivergence (self, dist, testSamplesUnit=None, testSamplesProb=None, n=1e4)

def l2error (self, dist, testSamplesUnit=None, testSamplesProb=None, n=1e4, dtype=SampleType.ACTIVEPROBABILISTIC)

def mean (self)

def pdf (self, p, args, kws)

def ppf (self, p, args, kws)

def rvs (self, n=1)

def std (self)

def var (self)

## Detailed Description

The Dist class is the superclass for all
distributions in this package.
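The interface listed above can be illustrated with a stand-alone sketch. The class below is hypothetical, modeled only on the member list of this page (the real subclasses live in `python.uq.dists`), and implements a one-dimensional uniform distribution:

```python
import numpy as np

class Uniform:
    """Illustrative stand-alone distribution exposing the same method
    names as python.uq.dists.Dist.Dist (sketch, not the SG++ class)."""

    def __init__(self, a=0.0, b=1.0):
        self.a, self.b = a, b

    def pdf(self, p):
        # constant density on [a, b], zero outside
        return 1.0 / (self.b - self.a) if self.a <= p <= self.b else 0.0

    def cdf(self, p):
        # linear ramp, clamped to [0, 1]
        return min(max((p - self.a) / (self.b - self.a), 0.0), 1.0)

    def ppf(self, p):
        # inverse of the cdf
        return self.a + p * (self.b - self.a)

    def mean(self):
        return 0.5 * (self.a + self.b)

    def var(self):
        return (self.b - self.a) ** 2 / 12.0

    def std(self):
        return np.sqrt(self.var())

    def rvs(self, n=1):
        # shape [n, dim], as documented for Dist.rvs
        return np.random.uniform(self.a, self.b, size=(n, 1))

    def getDim(self):
        return 1

    def getBounds(self):
        # shape [dim, 2], as documented for Dist.getBounds
        return np.array([[self.a, self.b]])
```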

## ◆ cdf()

 def python.uq.dists.Dist.Dist.cdf ( self, p, args, kws )
Cumulative distribution function
@param p: (tuple) of floats
@return: cumulative distribution value

## ◆ corrcoeff()

 def python.uq.dists.Dist.Dist.corrcoeff ( self, covMatrix = None )

## ◆ cov()

 def python.uq.dists.Dist.Dist.cov ( self )
Get covariance matrix

Referenced by python.uq.dists.Dist.Dist.corrcoeff().

## ◆ crossEntropy()

 def python.uq.dists.Dist.Dist.crossEntropy ( self, samples )
Computes the cross entropy with respect to an unknown
probability distribution from which only samples are available.
Minimizing the cross entropy also minimizes the KL divergence.

@param samples: numpy array
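A plain Monte Carlo version of this estimator can be sketched as follows; the function name and signature are illustrative, not the SG++ API. The cross entropy H(p, q) = -E_p[log q(X)] is estimated from samples drawn from the unknown distribution p:

```python
import numpy as np

def cross_entropy(samples, log_q):
    """Estimate H(p, q) = -E_p[log q(X)] from samples drawn from the
    unknown distribution p; log_q is the log-density of the model q."""
    return -np.mean(log_q(samples))

rng = np.random.default_rng(42)
samples = rng.standard_normal(200_000)  # samples from p = N(0, 1)

# With q = N(0, 1) as well, H(p, q) equals the entropy of N(0, 1),
# i.e. 0.5 * log(2 * pi * e) ~ 1.4189 nats
log_q = lambda x: -0.5 * (x**2 + np.log(2 * np.pi))
h = cross_entropy(samples, log_q)
```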

## ◆ fromJson()

 def python.uq.dists.Dist.Dist.fromJson ( cls, jsonObject )

## ◆ getBounds()

 def python.uq.dists.Dist.Dist.getBounds ( self )
Get the distribution's intervals
@return: numpy array [dim, 2]

Referenced by python.uq.dists.J.J.discretize(), and python.uq.dists.Dist.Dist.l2error().

## ◆ getDim()

 def python.uq.dists.Dist.Dist.getDim ( self )
Get number of marginal distributions
@return: int number of marginal distributions

## ◆ klDivergence()

 def python.uq.dists.Dist.Dist.klDivergence ( self, dist, testSamplesUnit = None, testSamplesProb = None, n = 1e4 )
Computes the KL divergence of this distribution p with respect to dist q,

D_{KL}(p \| q) \approx \frac{1}{n} \sum_{i = 1}^{n} p(x_i) \log_2 \frac{p(x_i)}{q(x_i)}

and for samples y_i obtained via importance sampling it holds

D_{KL}(p \| q) \approx \frac{1}{n} \sum_{i = 1}^{n} \log_2 \frac{p(y_i)}{q(y_i)}
= \frac{1}{n} \sum_{i = 1}^{n} \left( \log_2 p(y_i) - \log_2 q(y_i) \right)
= \left[ \frac{1}{n} \sum_{i = 1}^{n} \log_2 p(y_i) \right] - \left[ \frac{1}{n} \sum_{i = 1}^{n} \log_2 q(y_i) \right]
= \mathrm{mean}(\log_2 p(y_i)) - \mathrm{mean}(\log_2 q(y_i))

@param dist: Dist
@param testSamplesUnit: numpy array
@param testSamplesProb: numpy array

Referenced by python.uq.dists.Dist.Dist.corrcoeff().
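The importance-sampling identity above reduces to a difference of two sample means, which can be sketched directly (function name and signature are illustrative, not the SG++ API):

```python
import numpy as np

def kl_divergence(log2_p, log2_q, samples):
    """Monte Carlo estimate of D_KL(p || q) in bits, following the
    identity above: mean(log2 p(y_i)) - mean(log2 q(y_i)), y_i ~ p."""
    return np.mean(log2_p(samples)) - np.mean(log2_q(samples))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 500_000)  # samples from p = N(0, 1)

# log2-density of N(mu, 1): nats divided by ln(2)
log2_norm = lambda mu: (lambda x: (-0.5 * (x - mu) ** 2
                                   - 0.5 * np.log(2 * np.pi)) / np.log(2))

# Analytically, D_KL(N(0,1) || N(1,1)) = 0.5 nats ~ 0.7213 bits
d = kl_divergence(log2_norm(0.0), log2_norm(1.0), y)
```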

## ◆ l2error()

 def python.uq.dists.Dist.Dist.l2error ( self, dist, testSamplesUnit = None, testSamplesProb = None, n = 1e4, dtype = SampleType.ACTIVEPROBABILISTIC )
Mean squared error, defined as

\| p - p_n \|^2 = \int (p(x) - p_n(x))^2 \, p(x) \, dx
\approx \frac{1}{n} \sum_{i = 1}^{n} (p(x_i) - p_n(x_i))^2

for x_i drawn from p.
@param dist: Dist
@param testSamplesUnit: numpy array
@param testSamplesProb: numpy array
@param n: int; if no test samples are given, select them
uniformly within the range of the distribution

Referenced by python.uq.dists.Dist.Dist.crossEntropy().
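The sample-based approximation above can be sketched in a few lines; the helper and the two densities below are illustrative stand-ins, not the SG++ implementation:

```python
import numpy as np

def l2_error(p_pdf, pn_pdf, samples):
    """Sample estimate of || p - p_n ||^2 = E_p[(p(X) - p_n(X))^2]
    using samples x_i drawn from p."""
    diff = p_pdf(samples) - pn_pdf(samples)
    return np.mean(diff ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 10_000)  # samples from p = U(0, 1)

p = lambda x: np.ones_like(x)          # true density on [0, 1]
p_n = lambda x: np.full_like(x, 0.9)   # a deliberately biased estimate

err = l2_error(p, p_n, x)  # constant gap, so err ~ (1.0 - 0.9)^2 = 0.01
```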

## ◆ mean()

 def python.uq.dists.Dist.Dist.mean ( self )

## ◆ pdf()

 def python.uq.dists.Dist.Dist.pdf ( self, p, args, kws )
Probability density function
@param p: (tuple) of floats
@return: probability density value

## ◆ ppf()

 def python.uq.dists.Dist.Dist.ppf ( self, p, args, kws )
Percent point function (inverse of the CDF)
@param p: (tuple) of floats
@return: percent point value

## ◆ rvs()

 def python.uq.dists.Dist.Dist.rvs ( self, n = 1 )
Generates n random numbers w.r.t. the marginal distributions
@param n: int number of random values
@return: numpy array [n, dim]
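The documented [n, dim] layout can be illustrated with a stand-in for rvs built from two independent marginals (not the real API, just the array shape convention):

```python
import numpy as np

rng = np.random.default_rng(7)
n, dim = 1_000, 2

# Stand-in for dist.rvs(n): one column per marginal, one row per draw
samples = np.column_stack([rng.uniform(0.0, 1.0, n),   # marginal 1: U(0, 1)
                           rng.normal(0.0, 1.0, n)])   # marginal 2: N(0, 1)
```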

## ◆ std()

 def python.uq.dists.Dist.Dist.std ( self )
@return: standard deviation