Learn¶
Example:
import probpy as pp
prior = pp.normal.med(mu=1.0, sigma=2.0)        # prior over the unknown mean mu
likelihood = pp.normal.med(sigma=1.0)           # normal likelihood with known sigma, free mu
data = pp.normal.sample(mu=5.0, sigma=2.0, size=1000)   # observed data
prior = pp.parameter_posterior(data, likelihood=likelihood, priors=prior)  # posterior RandomVariable over mu
-
probpy.learn.posterior.posterior.parameter_posterior(data: typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray]], likelihood: typing.Union[probpy.core.RandomVariable, typing.Callable[[typing.Tuple[numpy.ndarray]], numpy.ndarray]], priors: probpy.core.RandomVariable, mode='mcmc', **kwargs) → probpy.core.RandomVariable¶
Estimate the posterior distribution of the likelihood parameters, given data and priors. The estimate is computed with a conjugate update, MCMC, or search. If the likelihood is paired with its conjugate prior, the mode argument is ignored and a conjugate update is performed, since it is much faster.
Parameters: - data – data for likelihood
- likelihood – likelihood function / distribution
- priors – prior or list of priors
- mode – mcmc or search
- kwargs – arguments passed to mcmc / search
Returns: RandomVariable
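The returned RandomVariable can itself be used as a prior, so a posterior can be refined batch by batch. A minimal sketch of this sequential use, assuming (as in the example above) that the conjugate normal/normal update returns a normal RandomVariable that parameter_posterior accepts again as a prior:

import probpy as pp

prior = pp.normal.med(mu=0.0, sigma=10.0)     # prior over the unknown mean
likelihood = pp.normal.med(sigma=1.0)         # normal likelihood with known sigma

first_batch = pp.normal.sample(mu=5.0, sigma=1.0, size=500)
second_batch = pp.normal.sample(mu=5.0, sigma=1.0, size=500)

posterior = pp.parameter_posterior(first_batch, likelihood=likelihood, priors=prior)
# the first posterior becomes the prior for the second batch
posterior = pp.parameter_posterior(second_batch, likelihood=likelihood, priors=posterior)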
-
probpy.learn.posterior.mcmc.mcmc(data: typing.Tuple[numpy.ndarray], likelihood: typing.Union[probpy.core.RandomVariable, typing.Callable[[typing.Tuple[numpy.ndarray]], numpy.ndarray]], prior: probpy.core.RandomVariable, samples: int = 1000, mixing: int = 0, energy: float = 0.5, batch: int = 5, match_moments_for: probpy.core.Distribution = None, normalize: bool = True, density: probpy.core.Density = None)¶
Don't call this function directly; always use parameter_posterior with mode="mcmc".
Parameters: - data – data passed to likelihood
- likelihood – likelihood function / distribution
- prior – prior distribution
- samples – number of mcmc samples to generate
- mixing – number of initial samples to discard (burn-in)
- energy – variance in exploration
- batch – number of particles to run concurrently
- match_moments_for – distributions to force the posterior into, using moment matching
- normalize – normalize the resulting density
- density – density estimator
Returns: RandomVariable
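These keyword arguments are supplied through parameter_posterior. A sketch of a non-conjugate setup that exercises the MCMC backend; the uniform prior and its keywords (a, b) are assumptions for illustration, not part of the API documented above:

import probpy as pp

likelihood = pp.normal.med(sigma=1.0)       # known sigma, unknown mean mu
prior = pp.uniform.med(a=-10.0, b=10.0)     # not the conjugate prior for a normal mean (keywords assumed)
data = pp.normal.sample(mu=5.0, sigma=1.0, size=1000)

# with a non-conjugate prior the mode argument takes effect,
# and the remaining keywords are forwarded to mcmc
posterior = pp.parameter_posterior(
    data, likelihood=likelihood, priors=prior,
    mode="mcmc", samples=5000, mixing=500, energy=0.3, batch=10,
)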
-
probpy.learn.posterior.search.search(data: typing.Tuple[numpy.ndarray], likelihood: typing.Union[probpy.core.RandomVariable, typing.Callable[[typing.Tuple[numpy.ndarray]], numpy.ndarray]], prior: probpy.core.RandomVariable, samples: int = 1000, energy: float = 0.5, batch=5, volume=10.0, normalize: bool = False, density: probpy.core.Density = None, **ubrk_args)¶
Don't call this function directly; always use parameter_posterior with mode="search".
Parameters: - data – data passed to likelihood
- likelihood – likelihood function / distribution
- prior – prior or list of priors
- samples – number of samples in the search estimate
- energy – variance in exploration
- batch – number of samples run concurrently
- volume – volume of elements
- normalize – normalize the resulting posterior
- density – density estimator
- ubrk_args – arguments passed to ubrk (the default density estimator)
Returns: RandomVariable
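As with mcmc, these keyword arguments are passed through parameter_posterior. A sketch with mode="search", under the same assumptions as the MCMC sketch above (the uniform prior and its keywords a and b are illustrative, not documented here):

import probpy as pp

likelihood = pp.normal.med(sigma=1.0)       # known sigma, unknown mean mu
prior = pp.uniform.med(a=-10.0, b=10.0)     # non-conjugate prior (keywords assumed)
data = pp.normal.sample(mu=5.0, sigma=1.0, size=1000)

# the remaining keywords are forwarded to the search backend
posterior = pp.parameter_posterior(
    data, likelihood=likelihood, priors=prior,
    mode="search", samples=2000, energy=0.5, batch=10, volume=10.0,
)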