
Random Search and Grid Search for Function Optimization

Function optimization requires selecting an algorithm that efficiently samples the search space and locates a good or optimal solution.

There are many algorithms to choose from, but it is critical to first establish a baseline for what kinds of solutions are feasible or possible for a problem. This can be accomplished with a naïve optimization algorithm, such as random search or grid search.

The results achieved by a naïve optimization algorithm are computationally cheap to produce and provide a point of comparison for more sophisticated optimization algorithms. At times, naïve algorithms are found to achieve the best performance, particularly on problems that are noisy or non-smooth and on problems where domain expertise typically biases the choice of optimization algorithm.

In this guide, you will discover naïve algorithms for function optimization.

After completing this guide, you will know:

  • The role of naïve algorithms in function optimization projects.
  • How to generate and evaluate a random search for function optimization.
  • How to generate and evaluate a grid search for function optimization.

Tutorial Overview

This tutorial is divided into three parts; they are:

1. Naïve Function Optimization Algorithms

2. Random Search for Function Optimization

3. Grid Search for Function Optimization

Naïve Function Optimization Algorithms

There are many different algorithms you can use for optimization, but how do you know whether the results you obtain are any good?

One approach to this problem is to establish a baseline in performance using a naïve optimization algorithm.

A naïve optimization algorithm is an algorithm that makes no assumptions about the objective function that is being optimized.

It can be applied with very little effort, and the best result achieved by the algorithm can be used as a point of reference for comparing more sophisticated algorithms. If a more sophisticated algorithm cannot achieve a better result than a naïve algorithm on average, then it does not have skill on your problem and should be abandoned.

There are two naïve algorithms that can be used for function optimization; they are:

  • Random search
  • Grid search

These algorithms are referred to as “search” algorithms because, at its core, optimization can be framed as a search problem: for example, find the inputs that minimize or maximize the output of the objective function.

There is another algorithm, referred to as “exhaustive search,” that enumerates all possible inputs. It is rarely used in practice because enumerating all possible inputs is usually not feasible, e.g., it would take too long to run.

Nonetheless, if you find yourself working on an optimization problem for which all inputs can be enumerated and evaluated in reasonable time, this should be the default technique you use.
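As an illustration only, a minimal sketch of an exhaustive search over a small, discrete two-variable domain might look like the following; the candidate values are assumed purely for this example, and the objective function matches the one used in the grid search section below.

# minimal sketch of exhaustive search over a small discrete domain
from itertools import product

# objective function (same form as the grid search example below)
def objective(x, y):
    return x**2.0 + y**2.0

# candidate values for each variable (assumed for illustration)
candidates = [-2, -1, 0, 1, 2]
# enumerate every possible combination of inputs
grid = list(product(candidates, candidates))
# evaluate every combination
scores = [objective(x, y) for x, y in grid]
# locate and report the best (smallest) result
best_ix = 0
for i in range(len(grid)):
    if scores[i] < scores[best_ix]:
        best_ix = i
print('Best: f(%d,%d) = %.5f' % (grid[best_ix][0], grid[best_ix][1], scores[best_ix]))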

Let’s take a closer look at each one in turn.

Random Search for Function Optimization

Random search is also referred to as random optimization or random sampling.

Random search involves generating and evaluating random inputs to the objective function. It is effective because it makes no assumptions about the structure of the objective function. This can be beneficial for problems where there is a lot of domain expertise that might influence or bias the optimization strategy, allowing non-intuitive solutions to be discovered.

Random sampling simply draws m random samples over the design space using a pseudorandom number generator. To generate a random sample x, we can sample each variable independently from a distribution.

Random search may also be the best strategy for highly complex problems with noisy or non-smooth (discontinuous) regions of the search space, which can cause difficulty for algorithms that depend on reliable gradients.

We can generate a random sample from a domain using a pseudorandom number generator. Each variable requires a well-defined bound or range, and a uniformly random value can be sampled from the range, then evaluated.
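For a multi-variable problem, each variable is sampled independently within its bounds. A minimal sketch of this is shown below; the number of samples, the dimensionality, and the shared bounds are assumed for illustration.

# minimal sketch: m random samples for a d-dimensional design space
from numpy.random import rand

# number of samples and number of input variables (assumed for illustration)
m, d = 100, 3
# shared bounds for every variable (assumed for illustration)
r_min, r_max = -5.0, 5.0
# each variable is sampled independently and uniformly within [r_min, r_max]
samples = r_min + rand(m, d) * (r_max - r_min)
print(samples.shape)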

Generating random samples is computationally trivial and does not take up much memory, so it may be efficient to generate a large sample of inputs, then evaluate them. Each sample is independent, so samples can be evaluated in parallel if needed to accelerate the process.
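As a minimal sketch (not part of the example that follows), independent samples could be evaluated in parallel with Python's standard multiprocessing module; the objective function and bounds are the same as in the example below.

# minimal sketch: evaluate independent random samples in parallel
from multiprocessing import Pool
from numpy.random import rand

# objective function
def objective(x):
    return x**2.0

if __name__ == '__main__':
    # define range for input and draw a random sample
    r_min, r_max = -5.0, 5.0
    sample = r_min + rand(100) * (r_max - r_min)
    # evaluate each sample in a separate worker process
    with Pool() as pool:
        sample_eval = pool.map(objective, sample)
    # locate and report the best (smallest) result
    best_ix = min(range(len(sample)), key=lambda i: sample_eval[i])
    print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))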

The example below defines a simple one-dimensional minimization objective function, then generates and evaluates a random sample of 100 inputs. The input with the best performance is then reported.

 


# example of random search for function optimization
from numpy.random import rand

# objective function
def objective(x):
    return x**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a random sample from the domain
sample = r_min + rand(100) * (r_max - r_min)
# evaluate the sample
sample_eval = objective(sample)
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))

Running the example generates a random sample of input values, which are then evaluated. The best performing point is then identified and reported.

Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the result is very close to the optimal input of 0.0.

Best: f(-0.01762) = 0.00031

We can update the example to plot the objective function and show the sample and the best result. The complete example is listed below.


# example of random search for function optimization with plot
from numpy import arange
from numpy.random import rand
from matplotlib import pyplot

# objective function
def objective(x):
    return x**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a random sample from the domain
sample = r_min + rand(100) * (r_max - r_min)
# evaluate the sample
sample_eval = objective(sample)
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))
# sample input range uniformly at 0.1 increments
inputs = arange(r_min, r_max, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
# plot the sample
pyplot.scatter(sample, sample_eval)
# draw a vertical line at the best input
pyplot.axvline(x=sample[best_ix], ls='--', color='red')
# show the plot
pyplot.show()

Running the example again generates the random sample and reports the best result.

Best: f(0.01934) = 0.00037

A line plot is then created showing the shape of the objective function, the random sample, and a red vertical line for the best result found in the sample.

Grid Search for Function Optimization

Grid search is also referred to as grid sampling or full factorial sampling.

Grid search involves generating a uniform grid of inputs for an objective function. In one dimension, this would be inputs evenly spaced along a line. In two dimensions, this would be a lattice of evenly spaced points across the surface, and so on for higher dimensions.

The full factorial sampling plan places a grid of evenly spaced points over the search space. This approach is easy to implement, does not rely on randomness, and covers the space, but it uses a large number of points.

Much like random search, grid search can be particularly effective on problems where domain expertise is typically used to influence the choice of a specific optimization algorithm. The grid can help to quickly identify areas of the search space that may deserve more attention.

The grid of samples is typically uniform, although this does not have to be the case. For example, a log-10 scale could be used with a uniform spacing, allowing sampling to be performed across orders of magnitude.
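As a minimal sketch (the bounds and number of points are assumed for illustration), such a logarithmically spaced grid can be generated with NumPy's logspace function:

# minimal sketch: a log-10 scaled grid spanning several orders of magnitude
from numpy import logspace

# 7 points uniformly spaced on a log-10 scale from 10^-3 to 10^3
grid = logspace(-3, 3, num=7)
# values: 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0
print(grid)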

The downside is that the coarseness of the grid may step over entire regions of the search space where good solutions reside, an issue that worsens as the number of inputs (the dimensionality of the search space) increases.
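To get a sense of how quickly this worsens, a quick back-of-the-envelope calculation of grid size versus dimensionality might look like the following; the figure of roughly 101 points per variable (a 0.1 spacing over [-5.0, 5.0]) is assumed for illustration.

# minimal sketch: grid size grows exponentially with the number of input variables
points_per_variable = 101  # e.g., a 0.1 spacing over [-5.0, 5.0]
for dims in range(1, 6):
    print('%d dimension(s): %d grid points' % (dims, points_per_variable ** dims))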

A grid of samples can be generated by choosing a uniform separation between points, then enumerating each variable in turn and incrementing it by the chosen separation.

The example below defines a simple two-dimensional minimization objective function, then generates and evaluates a grid sample with a spacing of 0.1 for both input variables. The input with the best performance is then reported.

 


# example of grid search for function optimization
from numpy import arange

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a grid sample from the domain
sample = list()
step = 0.1
for x in arange(r_min, r_max+step, step):
    for y in arange(r_min, r_max+step, step):
        sample.append([x,y])
# evaluate the sample
sample_eval = [objective(x,y) for x,y in sample]
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f,%.5f) = %.5f' % (sample[best_ix][0], sample[best_ix][1], sample_eval[best_ix]))

Running the example generates a grid of input values, which are then evaluated. The best performing point is then identified and reported.

Your results may vary given differences in numerical precision; unlike random search, the grid sample itself is deterministic. Consider running the example a few times and comparing the outcome.

In this case, we can see that the result locates the optima exactly.

Best: f(-0.00000,-0.00000) = 0.00000

We can update the example to plot the objective function and show the sample and the best result. The complete example is listed below.


# example of grid search for function optimization with plot
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a grid sample from the domain
sample = list()
step = 0.5
for x in arange(r_min, r_max+step, step):
    for y in arange(r_min, r_max+step, step):
        sample.append([x,y])
# evaluate the sample
sample_eval = [objective(x,y) for x,y in sample]
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f,%.5f) = %.5f' % (sample[best_ix][0], sample[best_ix][1], sample_eval[best_ix]))
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a filled contour plot
pyplot.contourf(x, y, results, levels=50, cmap='jet')
# plot the sample as black circles
pyplot.plot([x for x,_ in sample], [y for _,y in sample], '.', color='black')
# draw the best result as a white star
pyplot.plot(sample[best_ix][0], sample[best_ix][1], '*', color='white')
# show the plot
pyplot.show()

Running the example again generates the grid sample and reports the best result.

Best: f(0.00000,0.00000) = 0.00000

A contour plot is then created showing the shape of the objective function, the grid sample as black dots, and a white star for the best result found in the sample.

Note that some of the black dots at the edge of the domain appear to fall slightly off the plot; this is simply an artifact of how we are choosing to draw the dots (e.g., not centered on the sample).

Further Reading

This section provides additional resources on the topic if you are looking to go deeper.

Books

Algorithms for Optimization, 2019.

Articles

Random search, Wikipedia

Hyperparameter optimization, Wikipedia

Brute-force search, Wikipedia

Conclusion

In this guide, you learned about naïve algorithms for function optimization.

Specifically, you learned:

  • The role of naïve algorithms in function optimization projects.
  • How to produce and evaluate a random search for function optimization.
  • How to generate and evaluate a grid search for function optimization.