entropy - Entropy calculation
entropy: Return entropy estimate and uncertainty from a random sample.
Estimate entropy after a fit.
The entropy() method computes the entropy directly from a set of MCMC samples, normalized by a scale factor computed from the kernel density estimate at a subset of the points. [1]
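The idea can be sketched as follows. This is a hypothetical illustration, not the library's implementation: the posterior entropy is \(-E[\log P(M|D)]\), and the unknown normalization \(\log P(D)\) is estimated by comparing the unnormalized log posterior to a kernel density estimate of the sample at a subset of the points. The function name `entropy_sketch` and its argument handling are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_sketch(points, logp, n_norm=2500, n_entropy=10000, rng=None):
    """Sketch of a KDE-normalized entropy estimate from an MCMC sample.

    points : (n, d) array of draws; logp : (n,) unnormalized log posterior.
    Hypothetical illustration only; the library's estimator may differ.
    """
    rng = np.random.default_rng(rng)
    n, d = points.shape
    # KDE of the sample approximates the *normalized* posterior density.
    kde = gaussian_kde(points.T)
    # Estimate log P(D) at a subset of k points:
    #   log P(D) ~ mean( logp_i - log kde(theta_i) )
    idx = rng.choice(n, size=min(n_norm, n), replace=False)
    log_norm = np.mean(logp[idx] - np.log(kde(points[idx].T)))
    # Entropy in nats: S = -E[log P(M|D)] = log P(D) - E[logp]
    idx2 = rng.choice(n, size=min(n_entropy, n), replace=False)
    return log_norm - np.mean(logp[idx2])
```

For a sample drawn from a standard normal with `logp` set to the exact log density, this returns a value near the true entropy \(\tfrac12\log(2\pi e) \approx 1.419\) nats.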
The cov_entropy() method computes the entropy associated with the covariance matrix. This covariance matrix can be estimated during the fitting procedure (BFGS, for example, updates an estimate of the Hessian matrix), or computed by estimating derivatives when the fit is complete.
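For a multivariate normal with covariance \(C\) in \(d\) dimensions, the differential entropy has the closed form \(S = \tfrac12 \log\big((2\pi e)^d \det C\big)\). A minimal sketch of this calculation (the function name is hypothetical, and it reports nats; the library may report a different unit such as bits):

```python
import numpy as np

def cov_entropy_sketch(C):
    """Entropy (nats) of a multivariate normal with covariance C.

    Uses S = 1/2 * log( (2*pi*e)^d * det(C) ), computed with slogdet
    for numerical stability.  Hypothetical illustration only.
    """
    C = np.atleast_2d(np.asarray(C, dtype=float))
    d = C.shape[0]
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)
```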
The MVNEntropy class estimates the covariance from an MCMC sample and uses this covariance to estimate the entropy. This gives a better estimate of the entropy than the equivalent direct calculation, which requires many more samples for a good kernel density estimate. The reject_normal attribute is True if the MCMC sample is significantly different from normal.
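Combining the two ideas above, a multivariate-normal entropy estimate from a sample can be sketched as: fit the sample covariance, apply the Gaussian entropy formula, and flag the result if the sample looks non-normal. The function name, the per-marginal Kolmogorov-Smirnov check, and the significance level are assumptions for illustration; the library's normality test may differ.

```python
import numpy as np
from scipy.stats import kstest

def mvn_entropy_sketch(points, alpha=0.05):
    """Entropy (nats) of a normal fitted to an (n, d) sample.

    Returns (entropy, reject_normal).  reject_normal is a crude flag
    from per-dimension KS tests on standardized marginals; this is a
    hypothetical stand-in for the library's normality test.
    """
    points = np.asarray(points, dtype=float)
    C = np.atleast_2d(np.cov(points.T))
    d = C.shape[0]
    _, logdet = np.linalg.slogdet(C)
    entropy = 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)
    # Standardize each marginal and test it against N(0, 1).
    z = (points - points.mean(axis=0)) / points.std(axis=0, ddof=1)
    reject = any(kstest(z[:, j], 'norm').pvalue < alpha
                 for j in range(points.shape[1]))
    return entropy, reject
```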
[1] Kramer, A., Hasenauer, J., Allgower, F., Radde, N., 2010. Computation of the posterior entropy in a Bayesian framework for parameter estimation in biological networks, in: 2010 IEEE International Conference on Control Applications (CCA), pp. 493-498. doi:10.1109/CCA.2010.5611198
bumps.dream.entropy.entropy(points, logp, N_entropy=10000, N_norm=2500)

Return entropy estimate and uncertainty from a random sample.
points is a set of draws from an underlying distribution, as returned by a Markov chain Monte Carlo process for example.
logp is the log-likelihood for each draw.
N_norm is the number of points \(k\) used to estimate the posterior density normalization factor \(P(D) = \hat N\), converting from \(\log( P(D|M) P(M) )\) to \(\log( P(D|M)P(M)/P(D) )\). The relative uncertainty \(\Delta\hat S/\hat S\) scales with \(1/\sqrt{k}\); the default N_norm=2500 corresponds to 2% relative uncertainty. Computation cost is \(O(nk)\), where \(n\) is the number of points in the draw.
N_entropy is the number of points used to estimate the entropy \(\hat S = - \int P(M|D) \log P(M|D) \, dM\) from the normalized log likelihood values.
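The scaling claim above can be checked with a one-line helper (the function name is made up for this example): with relative uncertainty \(1/\sqrt{k}\), the default N_norm=2500 gives \(1/50 = 2\%\).

```python
import math

def norm_uncertainty(k):
    # Relative uncertainty in the normalization estimate scales as 1/sqrt(k).
    # Hypothetical helper illustrating the N_norm trade-off described above.
    return 1.0 / math.sqrt(k)
```

Quadrupling N_norm therefore halves the relative uncertainty, at a proportional increase in the \(O(nk)\) cost.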