interalg


Introduction

interalg (interval algorithm) is a free (even for commercial purposes) solver with user-specifiable accuracy (| f - f* | < user_defined_fTol) and the ability to handle general constraints (linear / nonlinear, equality / inequality), for

  • searching the (preferably global) minimum/maximum of nonlinear problems: NLP, NSP, GLP (example)
  • (since v. 0.38) searching the global extremum of nonlinear problems with some discrete variables (MINLP)
  • (since v. 0.38) searching a full cover of the Pareto front for MOP (multi-objective optimization problems), subject to the required tolerances on the objective functions, (optionally) using parallel computations (see the MOP page for examples)
  • getting a solution (or ALL solutions) of a nonlinear equation / system of equations (SNLE), see some examples
  • numerical integration, see the IP page for examples
  • ordinary differential equations dy/dt = f(t) (ODE)
  • future plans include general ODEs dy/dt = f(y, t), handling of GDP (Generalized Disjunctive Programming), further speedup and parallel computations for problems other than MOP

interalg is written in Python and NumPy by Dmitrey and was initially released in March 2011.

A successful completion guarantees the specified accuracy, assuming that rounding errors are negligible. Uncertainties in the result due to roundoff errors are thus not accounted for yet (this is in the future plans, to be addressed as soon as the FuncDesigner interval analysis engine becomes capable of taking them into account, as is done in some other interval analysis software).

interalg requires finite box bounds on the variables (lb_i <= x_i <= ub_i), but the bounds can be very large (e.g. +/- 10^15).

Only FuncDesigner models are handled. Convergence of the underlying algorithm has been proved (which was quite easy to do, unlike for ralg).
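
For illustration, here is a minimal sketch of a FuncDesigner model solved by interalg; the objective, bounds and tolerance below are made-up values for this example only, not from any official benchmark:

    from FuncDesigner import *
    from openopt import GLP

    x, y = oovars('x', 'y')
    # an objective with several local minima
    f = (x - 0.5)**2 + 5*sin(10*y) + y**2

    # interalg requires finite box bounds on every variable
    constraints = [x > -10, x < 10, y > -10, y < 10]

    startPoint = {x: 0, y: 0}
    p = GLP(f, startPoint, constraints=constraints, fTol=0.005)
    r = p.minimize('interalg')
    print(r(x, y), r.ff)  # coordinates of the found minimum and its objective value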


There are many excellent approximate approaches for searching the global extremum of nonlinear functions (GLP): differential evolution, annealing, ant colony, particle swarm, multistart local optimization, etc. Unfortunately, almost none of them can guarantee that you haven't stopped too far from the desired solution, and sometimes that matters.

There is much less software for searching the exact global extremum with a required tolerance. First of all I would mention:

  • Direct and intsolver (they require commercial MATLAB)
  • Commercial BARON and LGO

Their number is much smaller in comparison with inexact solvers because their speed and memory requirements make them absolutely inappropriate for sufficiently large problems, which in general are NP-hard, for example even for a quadratic objective with a single negative eigenvalue.

interalg has been written as an absolutely free alternative to the solvers mentioned above.

Unlike BARON (which cannot handle even sin and cos), interalg is capable of handling a larger set of functions: +, -, *, /, pow (**), sin, cos, arcsin, arccos, arctan, sinh, cosh, exp, sqrt, abs, log, log2, log10, floor, ceil, sum, min, max, sorted monotone 1-d splines (R->R) of order 1 or 3; since v. 0.38: arcsinh, arccosh, tanh, arctanh; future plans: general 1-d splines, N-dimensional splines, erf, tan, cotan, maybe others (some monotone or unimodal functions, or functions with all local extrema known, could be connected on demand).

Unlike LGO and BARON, interalg doesn't require functions to be Lipschitz-continuous (e.g. sqrt(x) is not Lipschitz-continuous if its domain includes zero). Moreover, even some discontinuous functions can be handled (floor and ceil have been tried).

Numerical tests show that interalg greatly outperforms Direct and intsolver, even though the current Python version has no JIT yet (unlike MATLAB). Only a small number of comparisons with BARON has been performed, using the NEOS server (I have neither their license nor enough time, they require input in GAMS or AMPL, which I'm not skilled in, and they use other default tolerances), so I can't say for sure, but it seems that on sufficiently difficult tasks (where the Lipschitz constant is huge or absent altogether, e.g. log(x), sqrt(x), x**alpha with alpha < 1 on an interval (epsilon, some_value)) interalg outperforms even them. You're welcome to publish your own benchmark results in our forum.

It seems the ideas implemented in interalg can also be useful for solving ODEs and PDEs with guaranteed precision. This is in our FuturePlans.

Parameters

  • maxActiveNodes (default: 150; valid for: all problems)
    You should definitely try modifying this parameter to speed up calculations. It is too hardware-dependent to be adjusted automatically (it depends on the speed and size of your CPU cache levels 1, 2, 3 and on RAM speed), and may also depend on the size of the problem involved. You can set it from 1 to inf (recommended values to try: 5, 15, 25, 50, 100, 250, 500, 1000, 2500, 5000, ...). I have some ideas on how to set a more exact automatic default, but haven't had time to try them.
  • maxNodes (default: 150000; valid for: NLP, NSP, GLP, SNLE, MINLP; future plans: IP)
    If the number of nodes exceeds this value (which may happen with large tasks), the set of nodes is truncated and global optimality of the solution is no longer guaranteed. (unestablished) You can check r.extras['isRequiredPrecisionReached'] to make sure you have found the global optimum subject to the required fTol. Fortunately, sometimes even after truncation interalg can yield a solution with the required precision and, moreover, the solver can sometimes determine that the required fTol has surely been achieved. If p.iprint >= 0, a relevant message will be shown; you may also be interested in (unestablished) r.extras['extremumBounds'], a pair (extremum_inf, extremum_sup) such that extremum_inf < f* < extremum_sup (f* may fall outside this region only due to rounding errors for rather noisy functions). Maybe in the future maxNodes will be replaced by a maximum amount of memory the solver is allowed to occupy.
  • maxSolutions (default: 1; valid for: SNLE; future plans: NLP, NSP, GLP)
    Maximal number of stored solutions (currently valid for systems of nonlinear equations only). You can use positive integers (as well as real numbers like "1e5" - they will be converted to integers), numpy.inf, or the string value 'all'.
  • (unestablished) dataHandling (default: 'auto'; valid for: NLP, NSP, GLP, MINLP, SNLE)
    Not important for the SNLE "all solutions" mode. Valid (string) values: 'sorted', 'raw'. If the objective function or a constraint can become very large inside the involved box domain lb_i <= x_i <= ub_i (for example, 1/a inside the domain [10^-6, 1]), 'sorted' usually works better; otherwise 'raw' usually works better. This parameter is intended to be removed in future interalg versions, when its engine will be essentially reworked. For MOP only the 'raw' mode works.
  • (unestablished) sigma (default: 0.1; valid for: MOP)
    Allowed range [0...1]; increasing this parameter yields fewer puny solutions, but slows down your calculation (possibly quite significantly).
  • fStart (default: none; valid for: NLP, NSP, GLP, MINLP)
    A good estimate of an upper bound on the objective value at the minimum, i.e. the lowest known value with Value >= f* (for sure). It can speed up the algorithm and reduce peak memory. If you know a point with objective value equal to fStart, don't use this parameter; pass that point as the start point instead. Also, this parameter hasn't been properly tested with maximization problems. Pay attention that fStart is a solver (not prob) parameter: use oosolver('interalg', fStart = some_val) or p.minimize(..., fStart = some_val), but not p.fStart = some_val or p = NLP(..., fStart = some_val). See the sketch after this list.
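
A hedged sketch of passing these solver (not prob) parameters; the problem, the fStart value and the other numbers below are arbitrary illustrations, and the (unestablished) r.extras fields are the ones mentioned in the maxNodes entry:

    from FuncDesigner import oovar, sin
    from openopt import NLP, oosolver

    x = oovar('x')
    f = (x - 3)**2 + sin(5*x)
    p = NLP(f, {x: 0}, constraints=[x > -100, x < 100], fTol=1e-4)

    # fStart, maxNodes and maxActiveNodes are solver (not prob) parameters,
    # so pass them via oosolver (or directly to p.minimize)
    solver = oosolver('interalg', fStart=2.0, maxNodes=200000, maxActiveNodes=50)
    r = p.minimize(solver)

    # (unestablished) fields described in the maxNodes entry above
    print(r.extras['isRequiredPrecisionReached'])
    print(r.extras['extremumBounds'])  # (extremum_inf, extremum_sup)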

Parallel computations and other speedups

  • Using a good start point can significantly speed up interalg and decrease peak memory usage (a sketch of obtaining one by presolving follows this list)
  • For MOP you can change the parameter p.nProc (default: 1) to perform some computations in parallel mode
  • The algorithm itself allows very effective parallelization, but this hasn't been done yet (and I'll hardly bother with it without a real task for it). Also, you may speed up your computations by linking NumPy against AMD ACML, Intel MKL, or possibly other BLAS/LAPACK implementations with parallel calculations
  • You could somehow estimate the objective function value at the optimal point and run several interalg instances with fStart, fStart - fTol, fStart - 2*fTol, etc.
  • You could try to use Unladen Swallow or wait until PyPy (with dynamic compilation) gets full NumPy support.
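
As mentioned in the first item of this list, a hedged sketch of obtaining a good start point by presolving with an inexact solver; the choice of the 'de' solver and all numbers are only illustrative (see also the last item of the Notices section below):

    from FuncDesigner import oovar
    from openopt import GLP

    x = oovar('x')
    f = x**4 - 10*x**2 + 3*x
    box = [x > -10, x < 10]

    # 1) presolve with an inexact GLP solver to obtain a good start point
    p0 = GLP(f, {x: 0}, constraints=box, maxFunEvals=10000)
    r0 = p0.minimize('de')

    # 2) rerun interalg from that point; a good start point can reduce
    #    running time and peak memory
    p = GLP(f, {x: r0(x)}, constraints=box, fTol=1e-4)
    r = p.minimize('interalg')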

Notices

  • sometimes for problems with constraints other than box bounds, other solvers may yield a value F* that differs from the interalg f* by more than the required fTol. Pay attention to the obtained residual r.rf for that solver - if it is non-zero, the gap is probably due to a small constraint violation within the allowed tolerance for that constraint; interalg probably cut away the piece of space containing that point because it was detected as an infeasible region.
  • the interalg description and algorithm will probably be published in an upcoming issue of one of the Ukrainian journals, and then it will become available on the webpage
  • you may encounter difficulties on functions like f1/f2 where the interval of f2 contains zero, or, even worse, where the intervals of both f1 and f2 contain zero
  • you should try to avoid duplicated usage of variables (to make interval calculations more precise; see the sketch after this list):
    • using a*(b+c) is better than a*b + a*c
    • using 1+1/a is better than (a+1)/a, etc.
  • Presolving the problem with an inexact solver to get a good start point can be very helpful. Depending on the problem type, you could pay attention to the following GLP solvers: mlsl for smooth, differentiable multiextremum problems; pswarm, de (or interalg with a small maxNodes, like 10-100) for any other.
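
A tiny sketch of the variable-duplication point above (pure FuncDesigner expressions; the variable names are arbitrary):

    from FuncDesigner import oovars

    a, b, c = oovars('a', 'b', 'c')

    # every extra occurrence of a variable tends to widen the computed interval,
    # so prefer forms where each variable appears as few times as possible
    f_bad  = a*b + a*c     # 'a' appears twice
    f_good = a*(b + c)     # mathematically equivalent, 'a' appears once

    g_bad  = (a + 1)/a     # 'a' appears twice
    g_good = 1 + 1/a       # mathematically equivalent, 'a' appears once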

Future plans

See here for details.

interalg benchmark with intsolver, Direct, commercial BARON

Known issues

  • interalg is relatively new software (initially released in March 2011) with already ~1500 lines of Python source code (which means in C or Fortran it would be several times more), plus more relatively new code in the FuncDesigner interval analysis engine, thus you may encounter a bug (especially for problems with general constraints). The work on revealing and fixing them is going on, along with writing new functionality and speedups.
  • if you've got another solution from an alternative solver and f_alt is better than f_interalg by more than fTol, it may be due to a small constraint residual (within the required contol), while interalg could have cut the region containing the point x_alt with the value f_alt. Also, if your problem has discrete variables, sometimes some solvers yield a better value due to a small discrtol violation (difference between the solution and the required discrete domain value), while interalg yields only exact values from the domain. If the problem is only box-bounded and x_alt is strictly inside the domain (lb <= x <= ub), then it's an interalg bug (except for issues with roundoff effects), and you should submit a bug report to the openopt forum.

Poll

You can vote for the most required interalg development direction; see here for details.
