fitters - Wrappers for various optimization algorithms¶
BFGSFit 
BFGS quasi-Newton optimizer.
CheckpointMonitor 
Periodically save fit state so that it can be resumed later. 
ConsoleMonitor 
Display fit progress on the console 
DEFit 
Classic Storn and Price differential evolution optimizer. 
DreamFit 

DreamModel 
DREAM wrapper for fit problems. 
FitBase 
FitBase defines the interface from bumps models to the various fitting engines available within bumps. 
FitDriver 

LevenbergMarquardtFit 
Levenberg-Marquardt optimizer.
MPFit 
MPFit optimizer. 
MonitorRunner 
Adaptor which allows solvers to accept progress monitors. 
MultiStart 
Multistart Monte Carlo fitter.
PSFit 
Particle swarm optimizer. 
PTFit 
Parallel tempering optimizer. 
RLFit 
Random lines optimizer. 
Resampler 

SimplexFit 
Nelder-Mead simplex optimizer.
SnobFit 

StepMonitor 
Collect information at every step of the fit and save it to a file. 
fit 
Simplified fit interface. 
load_history 
Load fitter details from a history file. 
parse_tolerance 

register 
Register a new fitter with bumps, if it is not already there. 
save_history 
Save fitter details to a history file as JSON. 
Interfaces to various optimizers.

class
bumps.fitters.
BFGSFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
BFGS quasi-Newton optimizer.
BFGS estimates the Hessian and its Cholesky decomposition, but initial tests give uncertainties quite different from the directly computed Jacobian in Levenberg-Marquardt or the Hessian estimated at the minimum by numdifftools.
To use the internal ‘H’ and ‘L’ and save some computation time, use:
C = lsqerror.chol_cov(fit.result['L'])
stderr = lsqerror.stderr(C)

id
= 'newton'¶

name
= 'QuasiNewton BFGS'¶

settings
= [('steps', 3000), ('starts', 1), ('ftol', 1e-06), ('xtol', 1e-12)]¶


class
bumps.fitters.
CheckpointMonitor
(checkpoint, progress=1800)[source]¶ Bases:
bumps.monitor.TimedUpdate
Periodically save fit state so that it can be resumed later.

checkpoint
= None¶ Function to call at each checkpoint.

config_history
(history)¶ Indicate which fields are needed by the monitor and for what duration.
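The checkpoint attribute above is a callable invoked at each checkpoint interval. As a rough illustration of the pattern (the callback factory and the state dictionary here are made up for the example, not the bumps API), a checkpoint function can simply serialize whatever state it is handed so that a fit can later be resumed:

```python
import os
import pickle
import tempfile

# Toy checkpoint callback: serialize the current state to disk so a
# resumed run could pick up where the fit left off.
def make_checkpoint(path):
    def checkpoint(state):
        with open(path, "wb") as fd:
            pickle.dump(state, fd)
    return checkpoint

path = os.path.join(tempfile.mkdtemp(), "fit.ckpt")
save = make_checkpoint(path)
save({"step": 42, "best": [1.0, 2.0]})   # called periodically by a monitor

with open(path, "rb") as fd:
    state = pickle.load(fd)
print(state["step"])  # → 42
```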


class
bumps.fitters.
ConsoleMonitor
(problem, progress=1, improvement=30)[source]¶ Bases:
bumps.monitor.TimedUpdate
Display fit progress on the console

config_history
(history)¶ Indicate which fields are needed by the monitor and for what duration.


class
bumps.fitters.
DEFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Classic Storn and Price differential evolution optimizer.

id
= 'de'¶

name
= 'Differential Evolution'¶

settings
= [('steps', 1000), ('pop', 10), ('CR', 0.9), ('F', 2.0), ('ftol', 1e-08), ('xtol', 1e-06)]¶


class
bumps.fitters.
DreamFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase

id
= 'dream'¶

name
= 'DREAM'¶

settings
= [('samples', 10000), ('burn', 100), ('pop', 10), ('init', 'eps'), ('thin', 1), ('alpha', 0.01), ('outliers', 'none'), ('trim', False), ('steps', 0)]¶


class
bumps.fitters.
DreamModel
(problem=None, mapper=None)[source]¶ Bases:
bumps.dream.model.MCMCModel
DREAM wrapper for fit problems.

bounds
= None¶

labels
= None¶

plot
(x)¶


class
bumps.fitters.
FitBase
(problem)[source]¶ Bases:
object
FitBase defines the interface from bumps models to the various fitting engines available within bumps.
Each engine is defined in its own class with a specific set of attributes and methods.
The name attribute is the name of the optimizer. This is just a simple string.
The settings attribute is a list of pairs (name, default), where the names are defined as fields in FitOptions. A best attempt should be made to map the fit options for the optimizer to the standard fit options, since each of these becomes a new command line option when running bumps. If that is not possible, then a new option should be added to FitOptions. A plugin architecture might be appropriate here, if there are reasons why specific problem domains might need custom fitters, but this is not yet supported.
Each engine takes a fit problem in its constructor.
The
solve()
method runs the fit. It accepts a monitor to track updates, a mapper to distribute work and key-value pairs defining the settings. There are a number of optional methods for the fitting engines. Basically, all the methods in
FitDriver
first check if they are specialized in the fit engine before performing a default action. The load/save methods load and save the fitter state in a given directory with a specific base file name. The fitter can choose a file extension to add to the base name. Some care is needed to be sure that the extension doesn’t collide with other extensions such as .mon for the fit monitor.
The plot method shows any plots to help understand the performance of the fitter, such as a convergence plot showing the range of values in the population over time, as well as plots of the parameter uncertainty if available. The plot method is given a figure canvas to work with.
The stderr/cov methods should provide summary statistics for the parameter uncertainties. Some fitters, such as MCMC, will compute these directly from the population. Others, such as BFGS, will produce an estimate of the uncertainty as they go along. If the fitter does not provide these estimates, then they will be computed from numerical derivatives at the minimum in the FitDriver method.
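As a sketch of the engine interface just described (id, name and settings attributes, a problem in the constructor, and a solve() method), here is a toy engine. The random-search body and the ToyProblem stand-in are purely illustrative, and the problem methods bounds() and nllf() are assumptions for the example, not a guaranteed bumps signature:

```python
import numpy as np

class RandomSearchFit:
    """Hypothetical engine following the interface described above."""
    id = "random"
    name = "Random Search"
    settings = [("steps", 100)]

    def __init__(self, problem):
        self.problem = problem

    def solve(self, monitors=None, mapper=None, **options):
        steps = options.get("steps", 100)
        rng = np.random.default_rng(0)
        lo, hi = self.problem.bounds()     # assumed (lo, hi) arrays
        best_x, best_f = None, np.inf
        for _ in range(steps):
            x = rng.uniform(lo, hi)
            f = self.problem.nllf(x)       # assumed negative log likelihood
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f

# Toy stand-in for a fit problem, just to exercise the engine.
class ToyProblem:
    def bounds(self):
        return np.array([-5.0, -5.0]), np.array([5.0, 5.0])
    def nllf(self, x):
        return float(np.sum((x - 1.0) ** 2))

x, fx = RandomSearchFit(ToyProblem()).solve(steps=2000)
print(x, fx)
```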

class
bumps.fitters.
FitDriver
(fitclass=None, problem=None, monitors=None, abort_test=None, mapper=None, **options)[source]¶ Bases:
object

clip
()[source]¶ Force parameters within bounds so constraints are finite.
The problem is updated with the new parameter values.
Returns a list of parameter names that were clipped.

cov
()[source]¶ Return an estimate of the covariance of the fit.
Depending on the fitter and the problem, this may be computed from existing evaluations within the fitter, or from numerical differentiation around the minimum.
If the problem uses \(\chi^2/2\) as its nllf, then the covariance is derived from the Jacobian:
x = fit.problem.getp()
J = bumps.lsqerror.jacobian(fit.problem, x)
cov = bumps.lsqerror.jacobian_cov(J)
Otherwise, the numerical differentiation will use the Hessian estimated from nllf:
x = fit.problem.getp()
H = bumps.lsqerror.hessian(fit.problem, x)
cov = bumps.lsqerror.hessian_cov(H)

show_err
()[source]¶ Display the error approximation from the numerical derivative.
Warning: cost grows as the cube of the number of parameters.

stderr
()[source]¶ Return an estimate of the standard error of the fit.
Depending on the fitter and the problem, this may be computed from existing evaluations within the fitter, or from numerical differentiation around the minimum.

stderr_from_cov
()[source]¶ Return an estimate of standard error of the fit from covariance matrix.
Unlike stderr, which uses the estimate from the underlying fitter (DREAM uses the MCMC sample for this), stderr_from_cov estimates the error from the diagonal of the covariance matrix. Here, the covariance matrix may have been estimated by the fitter instead of the Hessian.
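The diagonal relationship mentioned above is simply the square root of each diagonal element of the covariance matrix. A minimal numpy sketch with a made-up 2x2 covariance matrix (not the bumps implementation):

```python
import numpy as np

# Standard error from a covariance matrix: sqrt of the diagonal.
# The covariance values here are purely illustrative.
cov = np.array([[4.0, 0.6],
                [0.6, 0.25]])
stderr = np.sqrt(np.diag(cov))
print(stderr)  # → [2.  0.5]
```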


class
bumps.fitters.
LevenbergMarquardtFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Levenberg-Marquardt optimizer.

id
= 'scipy.leastsq'¶

name
= 'Levenberg-Marquardt (scipy.leastsq)'¶

settings
= [('steps', 200), ('ftol', 1.5e-08), ('xtol', 1.5e-08)]¶


class
bumps.fitters.
MPFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
MPFit optimizer.

id
= 'lm'¶

name
= 'Levenberg-Marquardt'¶

settings
= [('steps', 200), ('ftol', 1e-10), ('xtol', 1e-10)]¶


class
bumps.fitters.
MonitorRunner
(monitors, problem)[source]¶ Bases:
object
Adaptor which allows solvers to accept progress monitors.

class
bumps.fitters.
MultiStart
(fitter)[source]¶ Bases:
bumps.fitters.FitBase
Multistart Monte Carlo fitter.
This fitter wraps a local optimizer, restarting it a number of times to give it a chance to find a different local minimum. If the keep_best option is True, then restart near the best fit, otherwise restart at random.

name
= 'Multistart Monte Carlo'¶

settings
= [('starts', 100)]¶
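The restart strategy described above can be illustrated without bumps: run a local optimizer from several random starting points and keep the best result. This is a conceptual sketch of the multistart idea using a toy shrinking-step local search, not the MultiStart class itself:

```python
import numpy as np

def local_minimize(f, x0, step=0.1, iters=200):
    """Crude shrinking-step local search, for illustration only."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        for dx in (step, -step):
            if f(x + dx) < fx:
                x, fx = x + dx, f(x + dx)
        step *= 0.95
    return x, fx

# Objective with a local minimum near x=-2 and the global minimum at x=3.
f = lambda x: (x - 3.0) ** 2 * (x + 2.0) ** 2 + 0.1 * (x - 3.0) ** 2

# Restart the local search from 20 random points; keep the best result.
rng = np.random.default_rng(1)
x_best, f_best = min(
    (local_minimize(f, x0) for x0 in rng.uniform(-6, 6, size=20)),
    key=lambda r: r[1],
)
print(x_best, f_best)
```

A single local search started near x=-2 would stall in the local minimum; the restarts give the optimizer a chance to land in the global basin, which is the same rationale given for wrapping a local fitter above.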


class
bumps.fitters.
PSFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Particle swarm optimizer.

id
= 'ps'¶

name
= 'Particle Swarm'¶

settings
= [('steps', 3000), ('pop', 1)]¶


class
bumps.fitters.
PTFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Parallel tempering optimizer.

id
= 'pt'¶

name
= 'Parallel Tempering'¶

settings
= [('steps', 400), ('nT', 24), ('CR', 0.9), ('burn', 100), ('Tmin', 0.1), ('Tmax', 10)]¶


class
bumps.fitters.
RLFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Random lines optimizer.

id
= 'rl'¶

name
= 'Random Lines'¶

settings
= [('steps', 3000), ('starts', 20), ('pop', 0.5), ('CR', 0.9)]¶


class
bumps.fitters.
Resampler
(fitter)[source]¶ Bases:
bumps.fitters.FitBase

class
bumps.fitters.
SimplexFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase
Nelder-Mead simplex optimizer.

id
= 'amoeba'¶

name
= 'Nelder-Mead Simplex'¶

settings
= [('steps', 1000), ('starts', 1), ('radius', 0.15), ('xtol', 1e-06), ('ftol', 1e-08)]¶


class
bumps.fitters.
SnobFit
(problem)[source]¶ Bases:
bumps.fitters.FitBase

id
= 'snobfit'¶

name
= 'SNOBFIT'¶

settings
= [('steps', 200)]¶


class
bumps.fitters.
StepMonitor
(problem, fid, fields=['step', 'time', 'value', 'point'])[source]¶ Bases:
bumps.monitor.Monitor
Collect information at every step of the fit and save it to a file.
fid is the file to save the information to. fields is the list of “step|time|value|point” fields to save.
The point field should be last in the list.

FIELDS
= ['step', 'time', 'value', 'point']¶
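The row layout described above, with the variable-length point written last, can be mimicked without bumps. This toy writer only illustrates the step/time/value/point field order; it is not the StepMonitor implementation, and the formatting choices are assumptions:

```python
import io
import time

# Toy per-step log in "step time value point" order, with the point
# (parameter vector) last so its length can vary between problems.
def log_step(fid, step, start, value, point):
    fields = [str(step), "%.3f" % (time.time() - start), "%g" % value]
    fields.extend("%g" % v for v in point)
    fid.write(" ".join(fields) + "\n")

fid = io.StringIO()
start = time.time()
log_step(fid, 1, start, 12.5, [0.1, 0.2])
log_step(fid, 2, start, 11.0, [0.15, 0.18])
print(fid.getvalue())
```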


bumps.fitters.
fit
(problem, method='amoeba', verbose=False, **options)[source]¶ Simplified fit interface.
Given a fit problem, the name of a fitter and the fitter options, it will run the fit and return the best value and standard error of the parameters. If verbose is true, then the console monitor will be enabled, showing progress through the fit and showing the parameter standard error at the end of the fit, otherwise it is completely silent.
Returns an OptimizeResult object containing “x” and “dx”. The dream fitter also includes the “state” object, allowing for more detailed uncertainty analysis. Optimizer information such as the stopping condition and the number of function evaluations are not yet included.
To run in parallel (with multiprocessing and dream):
from bumps.mapper import MPMapper
mapper = MPMapper.start_mapper(problem, None, cpu=0)  # cpu=0 for all CPUs
result = fit(problem, method="dream", mapper=mapper)