# fitters - Wrappers for various optimization algorithms

- `BFGSFit`: BFGS quasi-newton optimizer.
- `CheckpointMonitor`: Periodically save fit state so that it can be resumed later.
- `ConsoleMonitor`: Display fit progress on the console.
- `DEFit`: Classic Storn and Price differential evolution optimizer.
- `DreamFit`
- `DreamModel`: DREAM wrapper for fit problems.
- `FitBase`: Defines the interface from bumps models to the various fitting engines available within bumps.
- `FitDriver`
- `LevenbergMarquardtFit`: Levenberg-Marquardt optimizer.
- `MPFit`: MPFit optimizer.
- `MonitorRunner`: Adaptor which allows solvers to accept progress monitors.
- `MultiStart`: Multi-start monte carlo fitter.
- `PSFit`: Particle swarm optimizer.
- `PTFit`: Parallel tempering optimizer.
- `RLFit`: Random lines optimizer.
- `Resampler`
- `SimplexFit`: Nelder-Mead simplex optimizer.
- `SnobFit`
- `StepMonitor`: Collect information at every step of the fit and save it to a file.
- `fit`: Simplified fit interface.
- `load_history`: Load fitter details from a history file.
- `parse_tolerance`
- `register`: Register a new fitter with bumps, if it is not already there.
- `save_history`: Save fitter details to a history file as JSON.

Interfaces to various optimizers.

class bumps.fitters.BFGSFit(problem)[source]

BFGS quasi-newton optimizer.

BFGS estimates the Hessian and its Cholesky decomposition, but initial tests give uncertainties quite different from the directly computed Jacobian in Levenberg-Marquardt or the Hessian estimated at the minimum by numdifftools.

To use the internal ‘H’ and ‘L’ estimates and save some computation time, use:

C = lsqerror.chol_cov(fit.result['L'])
stderr = lsqerror.stderr(C)

id = 'newton'
name = 'Quasi-Newton BFGS'
settings = [('steps', 3000), ('starts', 1), ('ftol', 1e-06), ('xtol', 1e-12)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.CheckpointMonitor(checkpoint, progress=1800)[source]

Periodically save fit state so that it can be resumed later.

checkpoint = None

Function to call at each checkpoint.

config_history(history)

Indicate which fields are needed by the monitor and for what duration.

show_improvement(history)[source]
show_progress(history)[source]
class bumps.fitters.ConsoleMonitor(problem, progress=1, improvement=30)[source]

Display fit progress on the console

config_history(history)

Indicate which fields are needed by the monitor and for what duration.

show_improvement(history)[source]
show_progress(history)[source]
class bumps.fitters.DEFit(problem)[source]

Classic Storn and Price differential evolution optimizer.

id = 'de'
load(input_path)[source]
name = 'Differential Evolution'
save(output_path)[source]
settings = [('steps', 1000), ('pop', 10), ('CR', 0.9), ('F', 2.0), ('ftol', 1e-08), ('xtol', 1e-06)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.DreamFit(problem)[source]
entropy(**kw)[source]
error_plot(figfile)[source]
id = 'dream'
load(input_path)[source]
name = 'DREAM'
plot(output_path)[source]
save(output_path)[source]
settings = [('samples', 10000), ('burn', 100), ('pop', 10), ('init', 'eps'), ('thin', 1), ('alpha', 0.01), ('outliers', 'none'), ('trim', False), ('steps', 0)]
show()[source]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
stderr()[source]

Approximate standard error as 1/2 the 68% interval of the sample, which is a more robust measure than the standard deviation of the sample for non-normal distributions.
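As a sketch of that robust estimate (illustrative only, not the bumps implementation), the 68% interval can be read off the sample percentiles:

```python
def robust_stderr(samples):
    """Half the central 68% interval of a sample.

    For a normal distribution this matches the standard deviation,
    but it is far less sensitive to heavy tails and outliers.
    """
    s = sorted(samples)
    n = len(s)
    lo = s[round(0.16 * (n - 1))]  # ~16th percentile
    hi = s[round(0.84 * (n - 1))]  # ~84th percentile
    return (hi - lo) / 2
```

For a roughly normal sample this approximates the usual one-sigma error, while a few wild outliers barely move it.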

class bumps.fitters.DreamModel(problem=None, mapper=None)[source]

DREAM wrapper for fit problems.

bounds = None
labels = None
log_density(x)[source]
map(pop)[source]
nllf(x)[source]

Negative log likelihood of seeing models given x

plot(x)
class bumps.fitters.FitBase(problem)[source]

Bases: object

FitBase defines the interface from bumps models to the various fitting engines available within bumps.

Each engine is defined in its own class with a specific set of attributes and methods.

The name attribute is the name of the optimizer. This is just a simple string.

The settings attribute is a list of pairs (name, default), where the names are defined as fields in FitOptions. A best attempt should be made to map the fit options for the optimizer to the standard fit options, since each of these becomes a new command line option when running bumps. If that is not possible, then a new option should be added to FitOptions. A plugin architecture might be appropriate here, if there are reasons why specific problem domains might need custom fitters, but this is not yet supported.

Each engine takes a fit problem in its constructor.

The solve() method runs the fit. It accepts a monitor to track updates, a mapper to distribute work and key-value pairs defining the settings.

There are a number of optional methods for the fitting engines. Basically, all the methods in FitDriver first check if they are specialized in the fit engine before performing a default action.

The load/save methods load and save the fitter state in a given directory with a specific base file name. The fitter can choose a file extension to add to the base name. Some care is needed to be sure that the extension doesn’t collide with other extensions such as .mon for the fit monitor.

The plot method shows any plots that help understand the performance of the fitter, such as a convergence plot showing the range of values in the population over time, as well as plots of the parameter uncertainty if available. The plot method is given a figure canvas to work with.

The stderr/cov methods should provide summary statistics for the parameter uncertainties. Some fitters, such as MCMC, will compute these directly from the population. Others, such as BFGS, will produce an estimate of the uncertainty as they go along. If the fitter does not provide these estimates, then they will be computed from numerical derivatives at the minimum in the FitDriver method.
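The shape of this interface can be sketched with a toy engine (the class and problem names here are illustrative, not part of bumps):

```python
import random

class RandomSearchFit:
    """Toy engine following the FitBase pattern: a name, an id, a
    settings list of (option, default) pairs, a constructor taking the
    fit problem, and a solve() returning the best (value, point)."""
    name = "Random Search"
    id = "rand"
    settings = [("steps", 100)]

    def __init__(self, problem):
        self.problem = problem

    def solve(self, monitors=None, mapper=None, **options):
        steps = options.get("steps", 100)
        best_x, best_f = None, float("inf")
        for _ in range(steps):
            # Draw a candidate uniformly within the problem bounds.
            x = [random.uniform(lo, hi) for lo, hi in self.problem.bounds]
            f = self.problem.nllf(x)
            if f < best_f:
                best_x, best_f = x, f
        return best_f, best_x
```

A real engine would also report progress through the monitors and farm out evaluations through the mapper; this sketch only shows the attribute and method contract that FitDriver relies on.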

solve(monitors=None, mapper=None, **options)[source]
class bumps.fitters.FitDriver(fitclass=None, problem=None, monitors=None, abort_test=None, mapper=None, **options)[source]

Bases: object

chisq()[source]
clip()[source]

Force parameters within bounds so constraints are finite.

The problem is updated with the new parameter values.

Returns a list of parameter names that were clipped.

cov()[source]

Return an estimate of the covariance of the fit.

Depending on the fitter and the problem, this may be computed from existing evaluations within the fitter, or from numerical differentiation around the minimum.

If the problem uses $$\chi^2/2$$ as its nllf, then the covariance is derived from the Jacobian:

x = fit.problem.getp()
J = bumps.lsqerror.jacobian(fit.problem, x)
cov = bumps.lsqerror.jacobian_cov(J)


Otherwise, the numerical differentiation will use the Hessian estimated from nllf:

x = fit.problem.getp()
H = bumps.lsqerror.hessian(fit.problem, x)
cov = bumps.lsqerror.hessian_cov(H)

entropy(method=None)[source]
fit(resume=None)[source]
load(input_path)[source]
plot(output_path, view=None)[source]
save(output_path)[source]
show()[source]
show_cov()[source]
show_entropy(method=None)[source]
show_err()[source]

Display the error approximation from the numerical derivative.

Warning: cost grows as the cube of the number of parameters.

stderr()[source]

Return an estimate of the standard error of the fit.

Depending on the fitter and the problem, this may be computed from existing evaluations within the fitter, or from numerical differentiation around the minimum.

stderr_from_cov()[source]

Return an estimate of standard error of the fit from covariance matrix.

Unlike stderr, which uses the estimate from the underlying fitter (DREAM uses the MCMC sample for this), stderr_from_cov estimates the error from the diagonal of the covariance matrix. Here, the covariance matrix may have been estimated by the fitter instead of the Hessian.
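The diagonal relationship is simple; a sketch (not the bumps code) of pulling standard errors from a covariance matrix:

```python
import math

def stderr_from_cov(cov):
    """Standard error of parameter i is the square root of the i-th
    diagonal element of the covariance matrix."""
    return [math.sqrt(cov[i][i]) for i in range(len(cov))]
```

The off-diagonal elements carry the parameter correlations, which the standard errors alone do not capture.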

class bumps.fitters.LevenbergMarquardtFit(problem)[source]

Levenberg-Marquardt optimizer.

cov()[source]
id = 'scipy.leastsq'
name = 'Levenberg-Marquardt (scipy.leastsq)'
settings = [('steps', 200), ('ftol', 1.5e-08), ('xtol', 1.5e-08)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.MPFit(problem)[source]

MPFit optimizer.

id = 'lm'
name = 'Levenberg-Marquardt'
settings = [('steps', 200), ('ftol', 1e-10), ('xtol', 1e-10)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.MonitorRunner(monitors, problem)[source]

Bases: object

Adaptor which allows solvers to accept progress monitors.

class bumps.fitters.MultiStart(fitter)[source]

Multi-start monte carlo fitter.

This fitter wraps a local optimizer, restarting it a number of times to give it a chance to find a different local minimum. If the keep_best option is True, then restart near the best fit, otherwise restart at random.
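A sketch of that restart logic (illustrative only; the starts and keep_best names mirror the options described above, the rest is hypothetical):

```python
def multistart(local_fit, random_point, perturb, starts=10, keep_best=True):
    """Run a local optimizer from several starting points and return the
    best (value, point) found.  With keep_best, each restart perturbs the
    current best point; otherwise each restart begins at a random point."""
    best_f, best_x = float("inf"), None
    for _ in range(starts):
        if keep_best and best_x is not None:
            x0 = perturb(best_x)
        else:
            x0 = random_point()
        f, x = local_fit(x0)
        if f < best_f:
            best_f, best_x = f, x
    return best_f, best_x
```

Restarting near the best point refines a promising basin; random restarts trade that refinement for a wider search of the parameter space.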

name = 'Multistart Monte Carlo'
settings = [('starts', 100)]
solve(monitors=None, mapper=None, **options)[source]
class bumps.fitters.PSFit(problem)[source]

Particle swarm optimizer.

id = 'ps'
name = 'Particle Swarm'
settings = [('steps', 3000), ('pop', 1)]
solve(monitors=None, mapper=None, **options)[source]
class bumps.fitters.PTFit(problem)[source]

Parallel tempering optimizer.

id = 'pt'
name = 'Parallel Tempering'
settings = [('steps', 400), ('nT', 24), ('CR', 0.9), ('burn', 100), ('Tmin', 0.1), ('Tmax', 10)]
solve(monitors=None, mapper=None, **options)[source]
class bumps.fitters.RLFit(problem)[source]

Random lines optimizer.

id = 'rl'
name = 'Random Lines'
settings = [('steps', 3000), ('starts', 20), ('pop', 0.5), ('CR', 0.9)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.Resampler(fitter)[source]
solve(**options)[source]
class bumps.fitters.SimplexFit(problem)[source]

id = 'amoeba'
name = 'Nelder-Mead Simplex'
settings = [('steps', 1000), ('starts', 1), ('radius', 0.15), ('xtol', 1e-06), ('ftol', 1e-08)]
solve(monitors=None, abort_test=None, mapper=None, **options)[source]
class bumps.fitters.SnobFit(problem)[source]
id = 'snobfit'
name = 'SNOBFIT'
settings = [('steps', 200)]
solve(monitors=None, mapper=None, **options)[source]
class bumps.fitters.StepMonitor(problem, fid, fields=['step', 'time', 'value', 'point'])[source]

Collect information at every step of the fit and save it to a file.

fid is the file to save the information to; fields is the list of “step|time|value|point” fields to save.

The point field should be last in the list.
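The ordering matters because point is a parameter vector that expands into a variable number of columns; keeping it last lets a reader parse the fixed scalar fields and treat the remainder of the line as the point. A sketch of such a line format (illustrative, not the actual bumps file format):

```python
def format_step(step, time, value, point):
    """One log line: fixed scalar fields first, then the
    variable-length point vector at the end of the line."""
    scalars = [str(step), "%g" % time, "%g" % value]
    return " ".join(scalars + ["%g" % v for v in point])
```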

FIELDS = ['step', 'time', 'value', 'point']
config_history(history)[source]

Indicate which fields are needed by the monitor and for what duration.

bumps.fitters.fit(problem, method='amoeba', verbose=False, **options)[source]

Simplified fit interface.

Given a fit problem, the name of a fitter and the fitter options, it will run the fit and return the best value and standard error of the parameters. If verbose is true, then the console monitor will be enabled, showing progress through the fit and showing the parameter standard error at the end of the fit, otherwise it is completely silent.

Returns an OptimizeResult object containing “x” and “dx”. The dream fitter also includes the “state” object, allowing for more detailed uncertainty analysis. Optimizer information such as the stopping condition and the number of function evaluations are not yet included.

To run in parallel (with multiprocessing and dream):

from bumps.mapper import MPMapper
mapper = MPMapper.start_mapper(problem, None, cpu=0) #cpu=0 for all CPUs
result = fit(problem, method="dream", mapper=mapper)

bumps.fitters.load_history(path)[source]

Load fitter details from a history file.

bumps.fitters.parse_tolerance(options)[source]
bumps.fitters.register(fitter, active=True)[source]

Register a new fitter with bumps, if it is not already there.

active is False if you don’t want it showing up in the GUI selector.

bumps.fitters.save_history(path, state)[source]

Save fitter details to a history file as JSON.

The content of the details are fitter specific.