
Basin hopping optimization in Python

Basin hopping is a global optimization algorithm. 

It was developed to solve problems in chemical physics, although it is also an effective algorithm for nonlinear objective functions with multiple optima. 

In this guide, you will discover the basin hopping global optimization algorithm. 

After completing this guide, you will know: 

  • Basin hopping is a global optimization algorithm that uses random perturbations to jump between basins and a local search algorithm to optimize each basin. 
  • How to use the basin hopping optimization algorithm API in Python. 
  • Examples of using basin hopping to solve global optimization problems with multiple optima. 

Tutorial Overview 

This tutorial is divided into three parts: 

1. Basin Hopping Optimization 

2. Basin Hopping API 

3. Basin Hopping Examples 

  • Multimodal optimization with local optima
  • Multimodal optimization with multiple global optima

Basin Hopping Optimization 

Basin hopping is a global optimization algorithm developed for use in the field of chemical physics. 

Basin hopping (BH), also known as Monte Carlo minimization (MCM), is among the most reliable algorithms in chemical physics for finding the lowest-energy structure of atomic clusters and macromolecular systems. 

Local optimization refers to optimization algorithms intended to locate an optimum of an objective function, or to operate in a region where an optimum is believed to be present. Global optimization algorithms, on the other hand, are designed to locate the single global optimum among potentially many local (non-global) optima. 

Basin hopping was described by David Wales and Jonathan Doye in their 1997 paper titled “Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms.” 

The algorithm cycles between two steps: a random perturbation of good candidate solutions, and the application of a local search to the perturbed solution. 

Basin hopping converts the complicated energy landscape into a collection of basins and explores them by hopping, which is accomplished via random Monte Carlo moves and acceptance/rejection using the Metropolis criterion. 

The perturbation allows the search to jump to new regions of the search space and possibly locate a new basin leading to a different optimum, hence the name “basin hopping.” 

The local search then allows the algorithm to descend the new basin to its optimum. 

The new optimum may be kept as the basis for further random perturbations; otherwise, it is discarded. The decision to keep the new solution is controlled by a stochastic decision function with a “temperature” variable, much like simulated annealing. 

Temperature is adjusted as a function of the number of iterations of the algorithm. This allows arbitrary solutions to be accepted early in the run when the temperature is high, with a stricter policy of only accepting better-quality solutions later in the search when the temperature is low. 
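As an illustration, a minimal sketch of a Metropolis-style acceptance test is shown below. The function and variable names (accept_move, f_new, f_old, temperature) are hypothetical and chosen for illustration only; they are not part of the SciPy API.

# metropolis-style acceptance test (illustrative sketch; names are hypothetical)
from numpy import exp
from numpy.random import rand

def accept_move(f_new, f_old, temperature):
    # always accept an improvement; otherwise accept with a probability that
    # shrinks as the candidate gets worse or as the temperature drops
    if f_new <= f_old:
        return True
    return rand() < exp(-(f_new - f_old) / temperature)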

In this way, the algorithm is much like an iterated local search with different (perturbed) starting points. 

The algorithm runs for a set number of iterations or function evaluations, and it can be run multiple times to increase confidence that the global optimum was found or that a relatively good solution was located. 
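To make the cycle of perturbation, local search, and acceptance concrete, the listing below sketches the overall loop in plain Python. It is a simplified illustration under assumed defaults, not the SciPy implementation, and basin_hopping_sketch is a hypothetical name.

# simplified basin hopping loop (illustrative sketch, not the SciPy implementation)
from numpy import exp
from numpy.random import rand, randn
from scipy.optimize import minimize

def basin_hopping_sketch(objective, pt, niter=100, stepsize=0.5, temperature=1.0):
    # descend to the bottom of the starting basin with a local search
    current = minimize(objective, pt, method='L-BFGS-B').x
    current_eval = objective(current)
    for i in range(niter):
        # random perturbation to jump to a (possibly) new basin
        candidate = current + randn(len(current)) * stepsize
        # local search to the bottom of that basin
        candidate = minimize(objective, candidate, method='L-BFGS-B').x
        candidate_eval = objective(candidate)
        # metropolis-style acceptance: keep improvements, sometimes keep worse moves
        diff = candidate_eval - current_eval
        if diff <= 0.0 or rand() < exp(-diff / temperature):
            current, current_eval = candidate, candidate_eval
    return current, current_eval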

Now that we are familiar with the basin hopping algorithm at a high level, let’s look at the API for basin hopping in Python. 

Basin Hopping API 

Basin hopping is available in Python through the basinhopping() SciPy function. 

The function takes the name of the objective function to be minimized and the initial starting point. 

# perform the basin hopping search 
result = basinhopping(objective, pt) 

 

Another important hyperparameter is the number of iterations of the search, set via the “niter” argument, which defaults to 100. 

This can be set to thousands of iterations or more. 

 

# perform the basin hopping search 
result = basinhopping(objective, pt, niter=10000) 

 

The amount of perturbation applied to the candidate solution can be controlled via the “stepsize” argument, which defines the maximum amount of change relative to the bounds of the problem domain. By default, this is set to 0.5, but it should be set to something reasonable for the domain that allows the search to find a new basin. 

For example, if the reasonable bounds of a search space were -100 to 100, then perhaps a step size of 5.0 or 10.0 units would be appropriate (e.g. 2.5% or 5% of the domain). 

 

# perform the basin hopping search 
result = basinhopping(objective, pt, stepsize=10.0) 

 

By default, the local search algorithm used is the “L-BFGS-B” algorithm. 

This can be changed by setting the “minimizer_kwargs” argument to a dictionary with a “method” key whose value is the name of the local search algorithm to use, such as “nelder-mead.” Any of the local search algorithms provided by the SciPy library can be used. 

 

# perform the basin hopping search 
result = basinhopping(objective, pt, minimizer_kwargs={'method': 'nelder-mead'}) 

 

The result of the search is an OptimizeResult object whose attributes can be accessed like a dictionary. The success (or not) of the search can be accessed via the ‘success’ or ‘message’ keys. 

The total number of function evaluations is available via ‘nfev’, and the best input found by the search is available via the ‘x’ key. 
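For example, a returned result might be inspected as follows (a sketch assuming result holds the return value of a completed basinhopping() call):

# inspect the result of a completed basin hopping search
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
print('Best Input: %s' % result['x'])
print('Best Evaluation: %.5f' % result['fun'])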

Now that we are familiar with the basin hopping API in Python, let’s look at some worked examples. 

Basin Hopping Examples 

In this section, we will look at some examples of using the basin hopping algorithm on multimodal objective functions. 

Multimodal objective functions are those with multiple optima, such as one global optimum and several local optima, or several global optima with the same objective function value. 

We will look at examples of basin hopping on both kinds of function. 

Multimodal Optimization with Local Optima 

The Ackley function is an example of an objective function with a single global optimum and many local optima in which a local search might get stuck. 

As such, a global optimization strategy is needed. It is a two-dimensional objective function with a global optimum at [0, 0], which evaluates to 0.0. 

The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optimum and the many local optima. 

 

# ackley multimodal function 
from numpy import arange 
from numpy import exp 
from numpy import sqrt 
from numpy import cos 
from numpy import e 
from numpy import pi 
from numpy import meshgrid 
from matplotlib import pyplot 
from mpl_toolkits.mplot3d import Axes3D 

# objective function 
def objective(x, y): 
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20 

# define range for input 
r_min, r_max = -5.0, 5.0 
# sample input range uniformly at 0.1 increments 
xaxis = arange(r_min, r_max, 0.1) 
yaxis = arange(r_min, r_max, 0.1) 
# create a mesh from the axis 
x, y = meshgrid(xaxis, yaxis) 
# compute targets 
results = objective(x, y) 
# create a surface plot with the jet color scheme 
figure = pyplot.figure() 
axis = figure.add_subplot(projection='3d') 
axis.plot_surface(x, y, results, cmap='jet') 
# show the plot 
pyplot.show() 

 

Running the example creates the surface plot of the Ackley function, showing the large number of local optima. 

We can now apply basin hopping to the Ackley objective function.

In this case, we will begin the search with a random point drawn from the input domain between -5 and 5.

# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)

 

We will use a step size of 0.5, 200 iterations, and the default local search algorithm. This configuration was chosen after a little trial and error.

 

# perform the basin hopping search
result = basinhopping(objective, pt, stepsize=0.5, niter=200)

 

After the search completes, it will report the status of the search and the total number of function evaluations, as well as the best result found and its evaluation.

 

# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))

 

Tying this together, the complete example of applying basin hopping to the Ackley objective function is listed below.

# basin hopping global optimization for the ackley multimodal objective function
from scipy.optimize import basinhopping
from numpy.random import rand
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the basin hopping search
result = basinhopping(objective, pt, stepsize=0.5, niter=200)
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))

 

Running the example performs the optimization and then reports the results.

Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average result.
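If repeatable runs are wanted for comparison, both the random starting point and the search itself can be seeded. The snippet below is a sketch that uses NumPy's seed() together with the basinhopping "seed" argument; the availability and name of that argument may vary across SciPy versions.

# sketch: seed the random starting point and the search for repeatable runs
from numpy.random import seed, rand
seed(1)
pt = r_min + rand(2) * (r_max - r_min)
result = basinhopping(objective, pt, stepsize=0.5, niter=200, seed=1)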

In this case, we can see that the algorithm located the optimum with inputs very close to zero and an objective function evaluation that is practically zero.

We can also see that 200 iterations of the algorithm resulted in 86,020 function evaluations.

 

Status: ['requested number of basinhopping iterations completed successfully']
Total Evaluations: 86020
Solution: f([ 5.29778873e-10 -2.29022817e-10]) = 0.00000

 

Multimodal Optimization with Multiple Global Optima

The Himmelblau function is an example of an objective function with multiple global optima. Specifically, it has four optima, each with the same objective function evaluation. It is a two-dimensional objective function with global optima at [3.0, 2.0], [-2.805118, 3.131312], [-3.779310, -3.283186], and [3.584428, -1.848126].

This means each run of a global optimization algorithm may find a different global optimum.

The example below implements the Himmelblau function and creates a three-dimensional surface plot to give an intuition for the objective function.

# himmelblau multimodal test function
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# objective function
def objective(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()

 

Running the example creates the surface plot of the Himmelblau function, showing the four global optima as dark blue basins.

We can now apply the basin hopping algorithm to the Himmelblau objective function.

As in the previous example, we will begin the search with a random point drawn from the input domain between -5 and 5.

We will use a step size of 0.5, 200 iterations, and the default local search algorithm. At the end of the search, we will report the input for the best optimum located.

# basin hopping global optimization for the himmelblau multimodal objective function
from scipy.optimize import basinhopping
from numpy.random import rand

# objective function
def objective(v):
    x, y = v
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the basin hopping search
result = basinhopping(objective, pt, stepsize=0.5, niter=200)
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))

 

Running the example performs the optimization and then reports the results.

In this case, we can see that the algorithm located an optimum at approximately [3.0, 2.0].

We can also see that 200 iterations of the algorithm resulted in 7,660 function evaluations.

 

Status : ['requested number of basinhopping iterations completed successfully']
Total Evaluations: 7660
Solution: f([3. 1.99999999]) = 0.00000

 

If we run the search again, we might expect a different global optimum to be located.

For example, below, we can see an optimum located at approximately [-2.805118, 3.131312], different from the previous run.

 

Status : ['requested number of basinhopping iterations completed successfully']
Total Evaluations: 7636
Solution: f([-2.80511809 3.13131252]) = 0.00000
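Because each run may settle in a different basin, one way to survey all four optima is to repeat the search from several random starting points and collect the solutions. The sketch below assumes the objective, r_min, and r_max defined in the complete example above.

# sketch: repeat the search from random starting points to survey the optima
from numpy.random import rand
from scipy.optimize import basinhopping

for i in range(10):
    # draw a new random starting point in [-5, 5] for each restart
    pt = r_min + rand(2) * (r_max - r_min)
    result = basinhopping(objective, pt, stepsize=0.5, niter=200)
    print('Run %d: f(%s) = %.5f' % (i, result['x'], result['fun']))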

 

Further Reading

This section provides additional resources on the topic if you want to go deeper.

Papers

  • Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms, 1997
  • Basin hopping with occasional jumping, 2004.

Books

  • Energy Landscapes: Applications to Clusters, Biomolecules and Glasses, 2004.

APIs

  • optimize.basinhopping API
  • optimize.OptimizeResult API

Articles

  • Global optimization, Wikipedia
  • Basin-hopping, Wikipedia

Conclusion

In this guide, you discovered the basin hopping global optimization algorithm.

Specifically, you learned:

  • Basin hopping is a global optimization algorithm that uses random perturbations to jump between basins and a local search algorithm to optimize each basin.
  • How to use the basin hopping optimization algorithm API in Python.
  • Examples of using basin hopping to solve global optimization problems with multiple optima.