
Iterated local search from the ground up in Python

Iterated local search is a stochastic global optimization algorithm.

It involves the repeated application of a local search to modified versions of a good solution identified previously. In this fashion, it is like a smarter variant of the stochastic hill climbing with random restarts algorithm.

The intuition behind the algorithm is that random restarts can assist in locating several local optima in a problem, and that better local optima are often close to other local optima. Thus, modest perturbations to the current local optimum may locate better, or even globally optimal, solutions to an optimization problem.

In this guide, you will find out how to implement the iterated local search algorithm from scratch.

After going through this guide, you will know:

  • Iterated local search is a stochastic global search optimization algorithm that is an intelligent variant of stochastic hill climbing with random restarts.
  • How to implement stochastic hill climbing with random restarts from scratch.
  • How to implement and apply the iterated local search algorithm to a nonlinear objective function.

Tutorial Overview

This tutorial is divided into five parts, which are:

1. What is Iterated Local Search

2. Ackley Objective Function

3. Stochastic Hill Climbing Algorithm

4. Stochastic Hill Climbing with Random Restarts

5. Iterated Local Search Algorithm

What is Iterated Local Search

Iterated Local Search, or ILS for short, is a stochastic global search optimization algorithm.

It is connected to, or an extension of, stochastic hill climbing and stochastic hill climbing with random restarts.

It is basically a more clever version of Hill-Climbing with Random Restarts.

Stochastic hill climbing is a local search algorithm that involves making random modifications to a current solution and accepting the alteration only if it produces better results than the current working solution.

Local search algorithms in general can get stuck in local optima. One strategy to tackle this issue is to restart the search from a new, randomly chosen starting point. The restart procedure can be carried out several times and might be triggered after a fixed number of function evaluations or if no further improvement is observed for a given number of algorithm iterations. This algorithm is referred to as stochastic hill climbing with random restarts.

The simplest way to improve upon a solution identified by a local search is to repeat the search from another starting point.

Iterated local search is like stochastic hill climbing with random restarts, except rather than choosing a random starting point for every restart, a point is chosen on the basis of a modified variant of the best point identified thus far during the wider search.

The perturbation of the best solution thus far is like a big jump in the search space to a fresh region, while the perturbations made by the stochastic hill climbing algorithm are much smaller, confined to a particular region of the search space.

The heuristic here is that you can often identify better local optima near to the one you’re currently in, and walking from local optimum to local optimum in this fashion often outperforms just attempting new locations totally at random.

This enables the search to be carried out at two levels. The hill climbing algorithm is the local search for obtaining the most out of a particular candidate solution or region of the search space, and the restart approach enables differing regions of the search space to be explored.

In this fashion, the iterated local search algorithm explores several local optima in the search space, increasing the likelihood of locating the global optimum.

Iterated local search was proposed for combinatorial optimization problems, like the travelling salesman problem (TSP), although it can be applied to continuous function optimization by leveraging differing step sizes in the search space: smaller steps for the hill climbing and bigger steps for the random restart.
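Before building it up in full, the two-level structure can be sketched in a few lines of Python on a toy bowl-shaped objective. This is only a schematic: the local_search() and perturb() helpers here are simplified stand-ins for the components developed through the rest of the tutorial.

```python
from random import gauss, uniform, seed

def local_search(objective, point, step_size=0.1, n_iter=200):
    # simplified stand-in: small Gaussian steps, keeping any improvement
    best, best_eval = point, objective(point)
    for _ in range(n_iter):
        cand = [x + gauss(0, step_size) for x in best]
        cand_eval = objective(cand)
        if cand_eval <= best_eval:
            best, best_eval = cand, cand_eval
    return best, best_eval

def perturb(point, p_size=1.0):
    # simplified stand-in: a larger Gaussian jump away from the point
    return [x + gauss(0, p_size) for x in point]

def iterated_local_search(objective, start, n_restarts=30):
    # level 1: a local search from the initial point
    best, best_eval = local_search(objective, start)
    # level 2: restart each local search from a perturbed version of the best point
    for _ in range(n_restarts):
        solution, solution_eval = local_search(objective, perturb(best))
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
    return best, best_eval

seed(1)
# toy bowl-shaped objective with its minimum of 0.0 at the origin
start = [uniform(-5, 5), uniform(-5, 5)]
best, score = iterated_local_search(lambda p: sum(x**2 for x in p), start)
print('best score:', score)
```

The search only ever accepts a restart result that improves on the best point so far, so the best score can never get worse across restarts.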

Now that we are acquainted with the iterated local search algorithm, let’s look into how to implement the algorithm from the ground up.

Ackley Objective Function

To start with, let's define a challenging optimization problem as the foundation for implementing the iterated local search algorithm.

The Ackley function is an instance of a multimodal objective function that has a single global optimum and several local optima in which a local search might get stuck.

As such, a global optimization strategy is needed. It is a two-dimensional objective function that has its global optimum at [0,0], which evaluates to 0.0.
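We can confirm this value numerically with the same formula implemented below: at the origin the first exponential term cancels the constant 20 and the second cancels the constant e.

```python
from numpy import exp, sqrt, cos, e, pi

# objective function (same formula as used throughout the tutorial)
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# evaluate at the global optimum [0, 0]
print(objective(0.0, 0.0))  # ~0.0 (up to floating-point error)
```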

The example below implements the Ackley function and creates a 3D surface plot displaying the global optimum and several local optima.

 


# ackley multimodal function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()

 

Running the example creates a surface plot of the Ackley function, displaying the large number of local optima.

We will leverage this as the foundation for implementing and contrasting a simple stochastic hill climbing algorithm, stochastic hill climbing with arbitrary restarts, and finally iterated local search.

We would expect a stochastic hill climbing algorithm to get stuck very easily in local optima. We would expect stochastic hill climbing with restarts to identify several local minima, and we would expect iterated local search to have better performance than either method on this problem if configured appropriately.

Stochastic Hill Climbing Algorithm

Fundamental to the iterated local search algorithm is a local search, and in this guide, we will leverage the stochastic hill climbing algorithm for this purpose.

The stochastic hill climbing algorithm involves first producing a random starting point as the current working solution, then producing perturbed variants of the current working solution and accepting them if they are better than the current working solution.

Given that we are operating on a continuous optimization problem, a solution is a vector of values to be assessed by the objective function, in this scenario a point in two-dimensional space bounded by -5 and 5.

We can produce a random point by sampling the search space with a uniform probability distribution. For instance:

 


# generate a random point in the search space
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])

 

We can generate perturbed versions of the current working solution leveraging a Gaussian probability distribution with the mean of the current values in the solution and a standard deviation controlled by a hyperparameter that manages how far the search is allowed to explore from the current working solution.

We will refer to this hyperparameter as "step_size", for instance:

 


# generate a perturbed version of a current working solution

candidate = solution + randn(len(bounds)) * step_size

 

Critically, we must check that produced solutions are within the search space.

This can be accomplished with a custom function named in_bounds() that takes a candidate solution and the bounds of the search space and returns True if the point is in the search space, False otherwise.

 


# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

 

This function can then be referenced during the hill climb to validate that new points are within the bounds of the search space, and if not, new points can be produced.
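For instance, using the in_bounds() function above with the bounds leveraged later in this tutorial, a point inside the box is accepted and a point outside is rejected:

```python
from numpy import asarray

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    for d in range(len(bounds)):
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# bounds of the two-dimensional search space
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
print(in_bounds(asarray([0.0, 0.0]), bounds))  # True
print(in_bounds(asarray([6.0, 0.0]), bounds))  # False
```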

Connecting this together, the hillclimbing() function below implements the stochastic hill climbing local search algorithm. It takes the name of the objective function, the bounds of the problem, the number of iterations, and the step size as arguments and returns the best solution and its evaluation.

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

 

We can evaluate this algorithm on the Ackley function.

We will fix the seed for the pseudorandom number generator to make sure we obtain the same outcomes every time the code is executed.
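As a quick illustration of why fixing the seed works, drawing random numbers twice after the same seed yields identical values:

```python
from numpy.random import seed, rand

# same seed, same sequence of pseudorandom numbers
seed(1)
first = rand(2)
seed(1)
second = rand(2)
print((first == second).all())  # True
```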

The algorithm will be executed for 1,000 iterations, and a step size of 0.05 units will be leveraged; both hyperparameters were selected after a bit of trial and error.

At the conclusion of the run, we will report the best solution identified.

 


# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))

 

Connecting this together, the complete example of applying the stochastic hill climbing algorithm to the Ackley objective function is detailed below.


# hill climbing search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))

 

Running the example performs the stochastic hill climbing search on the objective function. Each improvement found during the search is reported, and the best solution is then reported at the conclusion of the search.

Your outcomes may vary given the stochastic nature of the algorithm or evaluation procedure, or variations in numerical precision. Consider running the example a few times and comparing the average outcome.

In this scenario, we can observe about 13 improvements during the search and a final solution of about f(-0.981, 1.965), resulting in an evaluation of approximately 5.381, which is far from f(0.0, 0.0) = 0.0.

 


>0 f([-0.85618854 2.1495965 ]) = 6.46986

>1 f([-0.81291816 2.03451957]) = 6.07149

>5 f([-0.82903902 2.01531685]) = 5.93526

>7 f([-0.83766043 1.97142393]) = 5.82047

>9 f([-0.89269139 2.02866012]) = 5.68283

>12 f([-0.8988359 1.98187164]) = 5.55899

>13 f([-0.9122303 2.00838942]) = 5.55566

>14 f([-0.94681334 1.98855174]) = 5.43024

>15 f([-0.98117198 1.94629146]) = 5.39010

>23 f([-0.97516403 1.97715161]) = 5.38735

>39 f([-0.98628044 1.96711371]) = 5.38241

>362 f([-0.9808789 1.96858459]) = 5.38233

>629 f([-0.98102417 1.96555308]) = 5.38194

Done!

f([-0.98102417 1.96555308]) = 5.381939

 

Next, we will modify the algorithm to carry out random restarts and see if we can accomplish improved results.

Stochastic Hill Climbing with Random Restarts

The stochastic hill climbing with random restarts algorithm involves the repeated running of the stochastic hill climbing algorithm and keeping track of the best solution identified.

To start with, let's modify the hillclimbing() function to take the starting point of the search instead of producing it randomly. This will be beneficial when we implement the iterated local search algorithm later.

 


# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

 

Next, we can implement the random restart algorithm by repeatedly calling the hillclimbing() function a fixed number of times.

On every call, we will produce a new randomly selected starting point for the hill climbing search.

 


# generate a random initial point for the search
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
# perform a stochastic hill climbing search
solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)

 

We can then inspect the outcome and keep it if it is better than any outcome of the search we have observed so far.

 


# check for new best
if solution_eval < best_eval:
    best, best_eval = solution, solution_eval
    print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))

 

Connecting this together, the random_restarts() function below implements the stochastic hill climbing algorithm with random restarts.


# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

 

We can then apply this algorithm to the Ackley objective function. In this scenario, we will limit the number of random restarts to 30, chosen arbitrarily.

The complete example is detailed below:

 


# hill climbing search with random restarts of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
step_size = 0.05
# total number of random restarts
n_restarts = 30
# perform the hill climbing search
best, score = random_restarts(objective, bounds, n_iter, step_size, n_restarts)
print('Done!')
print('f(%s) = %f' % (best, score))

 

Running the example performs a stochastic hill climbing with random restarts search of the Ackley objective function. Every time an improved overall solution is attained, it is reported, and the final best solution identified by the search is summarized.

Your outcomes may vary given the stochastic nature of the algorithm or evaluation procedure, or variations in numerical precision. Consider running the example a few times and comparing the average outcome.

In this scenario, we can observe three improvements during the search and that the best solution identified was approximately f(0.002, 0.002), which evaluated to about 0.009, much better than a single run of the hill climbing algorithm.

 


Restart 0, best: f([-0.98102417 1.96555308]) = 5.38194

Restart 2, best: f([1.96522236 0.98120013]) = 5.38191

Restart 4, best: f([0.00223194 0.00258853]) = 0.00998

Done!

f([0.00223194 0.00258853]) = 0.009978

 

Next, let's observe how we can implement the iterated local search algorithm.

Iterated Local Search Algorithm

The iterated local search algorithm is a modified variant of the stochastic hill climbing with random restarts algorithm.

The critical difference is that the starting point for every application of the stochastic hill climbing algorithm is a perturbed variant of the best point identified thus far.

We can implement this algorithm by leveraging the random_restarts() function as a starting point. On every restart iteration, we can produce a modified version of the best solution identified thus far rather than a random starting point.

This can be accomplished by leveraging a step size hyperparameter, much like it is leveraged in the stochastic hill climber. In this scenario, a bigger step size value will be leveraged, given the need for larger perturbations in the search space.

 


# generate an initial point as a perturbed version of the last best
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = best + randn(len(bounds)) * p_size

 

Connecting this together, the iterated_local_search() function is defined below.

 


# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

 

We can then apply the algorithm to the Ackley objective function. In this scenario, we will leverage a bigger step size value of 1.0 for the random restarts, chosen after a bit of trial and error.

The complete example is detailed below:

 


# iterated local search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
s_size = 0.05
# total number of random restarts
n_restarts = 30
# perturbation step size
p_size = 1.0
# perform the iterated local search
best, score = iterated_local_search(objective, bounds, n_iter, s_size, n_restarts, p_size)
print('Done!')
print('f(%s) = %f' % (best, score))

 

Running the example performs an iterated local search of the Ackley objective function.

Every time an improved overall solution is identified, it is reported, and the final best solution identified by the search is summarized at the conclusion of the run.

Your outcomes may vary given the stochastic nature of the algorithm or evaluation procedure, or variations in numerical precision. Consider running the example a few times and comparing the average outcome.

In this scenario, we can observe four improvements during the search and that the best solution identified was two very small inputs close to zero, which evaluated to about 0.0003, better than either a single run of the hill climber or the hill climber with restarts.

 


Restart 0, best: f([-0.96775653 0.96853129]) = 3.57447

Restart 3, best: f([-4.50618519e-04 9.51020713e-01]) = 2.57996

Restart 5, best: f([ 0.00137423 -0.00047059]) = 0.00416

Restart 22, best: f([ 1.16431936e-04 -3.31358206e-06]) = 0.00033

Done!

f([ 1.16431936e-04 -3.31358206e-06]) = 0.000330

Further Reading

This section furnishes additional resources on the subject if you are seeking to delve deeper.

Books

Essentials of Metaheuristics, 2011

Handbook of Metaheuristics, 3rd edition, 2019

Articles

Hill climbing, Wikipedia

Iterated local search, Wikipedia

 


 

Conclusion

In this guide, you found out how to implement the iterated local search algorithm from scratch.

Particularly, you learned:

  • Iterated local search is a stochastic global search optimization algorithm that is a more intelligent variant of stochastic hill climbing with random restarts.
  • How to implement stochastic hill climbing with random restarts from scratch.
  • How to implement and apply the iterated local search algorithm to a nonlinear objective function.