An Introduction to Stochastic Optimization Algorithms
Stochastic optimization refers to the use of randomness in the objective function or in the optimization algorithm.
Challenging optimization problems, such as high-dimensional nonlinear objective functions, may contain multiple local optima in which deterministic optimization algorithms can get stuck.
Stochastic optimization algorithms provide an alternative approach that permits locally suboptimal decisions to be made within the search procedure, which increases the probability that the procedure locates the global optima of the objective function.
In this guide, you will get a brief introduction to stochastic optimization.
After completing this guide, you will know:
- Stochastic optimization algorithms make use of randomness as part of the search procedure.
- Examples of stochastic optimization algorithms, such as simulated annealing and genetic algorithms.
- Practical considerations when using stochastic optimization algorithms, such as repeated evaluations.
Tutorial Overview
This guide is divided into three parts; they are:
1. What Is Stochastic Optimization?
2. Stochastic Optimization Algorithms
3. Practical Considerations for Stochastic Optimization
What Is Stochastic Optimization?
Optimization refers to algorithms that seek the inputs to a function that result in the minimum or maximum of an objective function.
Stochastic optimization, or stochastic search, refers to an optimization task that involves randomness in some way, either in the objective function or in the optimization algorithm.
Stochastic search and optimization pertains to problems where there is random noise in the measurements provided to the algorithm and/or there is injected (Monte Carlo) randomness in the algorithm itself.
Randomness in the objective function means that the evaluation of candidate solutions involves some uncertainty or noise, and algorithms must be chosen that can make progress in the search in the presence of this noise.
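As a concrete illustration, here is a minimal sketch of a noisy objective function in Python. The one-dimensional objective, the function names, and the noise level are all illustrative assumptions, not a prescribed setup.

```python
from random import gauss

# a hypothetical one-dimensional objective: minimum at x=0.0
def objective(x):
    return x ** 2.0

# a noisy evaluation of the objective: each call adds Gaussian noise,
# so repeated evaluations of the same candidate return different values
def noisy_objective(x, noise_std=0.3):
    return objective(x) + gauss(0.0, noise_std)
```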
Randomness in the algorithm is used as a strategy, e.g. stochastic or probabilistic decisions. It is used as an alternative to deterministic decisions in an effort to improve the probability of locating the global optima, or a better local optima.
Conventional stochastic optimization methods are brittle, sensitive to step-size choice and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives.
When discussing stochastic optimization, it is more common to refer to an algorithm that makes use of randomness than to an objective function that contains noisy evaluations. This is because randomness in the objective function can be addressed by using randomness in the optimization algorithm. As such, stochastic optimization may be referred to as "robust optimization."
A deterministic algorithm may be misled (e.g. "deceived" or "confused") by the noisy evaluation of candidate solutions or noisy function gradients, causing the algorithm to bounce around the search space or get stuck (i.e. fail to converge).
Methods for stochastic optimization provide a means of coping with inherent system noise and coping with models or systems that are highly nonlinear, high-dimensional, or otherwise inappropriate for classical deterministic methods of optimization.
Using randomness in an optimization algorithm allows the search procedure to perform well on challenging optimization problems that may have a nonlinear response surface. This is achieved by the algorithm taking locally suboptimal steps or moves in the search space that allow it to escape local optima.
Randomness can help escape local optima and increase the chances of discovering a global optimum.
The randomness used in a stochastic optimization algorithm does not need to be true randomness; pseudorandomness is sufficient. A pseudorandom number generator is almost universally used in stochastic optimization.
The use of randomness in a stochastic optimization algorithm does not mean that the algorithm is random. Instead, it means that some decisions made during the search procedure involve some amount of randomness. For example, the move from the current point to the next point in the search space made by the algorithm may be made according to a probability distribution relative to the optimal move.
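A minimal sketch of such a stochastic decision, assuming a one-dimensional search space: the next point is not the single best neighboring move but a sample from a probability distribution centered on the current point. The step size here is an illustrative parameter.

```python
from random import gauss

# one stochastic move: the next point is drawn from a Gaussian
# distribution centered on the current point, rather than chosen
# deterministically as the best neighboring move
def stochastic_move(current, step_size=0.5):
    return current + gauss(0.0, step_size)
```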
Now that we are familiar with what stochastic optimization is, let's look at some examples of stochastic optimization algorithms.
Stochastic Optimization Algorithms
The use of randomness in these algorithms typically means that the techniques are referred to as "heuristic search," as they use a rough rule-of-thumb procedure that may or may not work to find the optima, instead of a precise procedure.
Many stochastic algorithms are inspired by a biological or natural process and may be referred to as "metaheuristics," as a higher-order procedure providing the conditions for a specific search of the objective function. They are also referred to as "black box" optimization algorithms.
Metaheuristics is a rather unfortunate term often used to describe a major subfield, indeed the primary subfield, of stochastic optimization.
There are many stochastic optimization algorithms.
Some examples of stochastic optimization algorithms include:
- Iterated Local Search
- Stochastic Hill Climbing (sketched below)
- Stochastic Gradient Descent
- Tabu Search
- Greedy Randomized Adaptive Search Procedure
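To make the first list concrete, here is a minimal sketch of stochastic hill climbing on a hypothetical one-dimensional objective. The objective, the start point, the iteration count, and the step size are all illustrative assumptions.

```python
from random import gauss

# hypothetical objective to minimize: global minimum at x=0.0
def objective(x):
    return x ** 2.0

# stochastic hill climbing: propose a random step each iteration and
# keep the new point only if it does not worsen the objective
def stochastic_hill_climbing(start, n_iterations=1000, step_size=0.1):
    best, best_eval = start, objective(start)
    for _ in range(n_iterations):
        candidate = best + gauss(0.0, step_size)
        candidate_eval = objective(candidate)
        if candidate_eval <= best_eval:
            best, best_eval = candidate, candidate_eval
    return best, best_eval

best, score = stochastic_hill_climbing(start=5.0)
print('Best: x=%.5f f(x)=%.5f' % (best, score))
```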
Some examples of stochastic optimization algorithms that are inspired by biological or physical processes include:
- Simulated Annealing (sketched below)
- Evolution Strategies
- Genetic Algorithm
- Differential Evolution
- Particle Swarm Optimization
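Similarly, here is a minimal sketch of simulated annealing on the same style of hypothetical one-dimensional objective; the temperature schedule and parameters are illustrative assumptions, not a definitive implementation. The acceptance test is what allows worse moves early in the search, which is the source of the algorithm's ability to escape local optima.

```python
from math import exp
from random import gauss, random

# hypothetical objective to minimize: global minimum at x=0.0
def objective(x):
    return x ** 2.0

# simulated annealing: worse moves may be accepted with a probability
# that shrinks as the temperature falls, allowing early exploration
def simulated_annealing(start, n_iterations=1000, step_size=0.1, init_temp=10.0):
    current, current_eval = start, objective(start)
    best, best_eval = current, current_eval
    for i in range(n_iterations):
        candidate = current + gauss(0.0, step_size)
        candidate_eval = objective(candidate)
        # track the best solution seen so far
        if candidate_eval < best_eval:
            best, best_eval = candidate, candidate_eval
        # temperature schedule: cools as the search proceeds
        t = init_temp / float(i + 1)
        # metropolis acceptance criterion: always accept improvements,
        # accept worse moves with probability exp(-diff / t)
        diff = candidate_eval - current_eval
        if diff < 0.0 or random() < exp(-diff / t):
            current, current_eval = candidate, candidate_eval
    return best, best_eval

best, score = simulated_annealing(start=5.0)
print('Best: x=%.5f f(x)=%.5f' % (best, score))
```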
Now that we are familiar with some examples of stochastic optimization algorithms, let's look at some practical considerations when using them.
Practical Considerations for Stochastic Optimization
There are important considerations when using stochastic optimization algorithms.
The stochastic nature of the procedure means that any single run of an algorithm will be different, given a different source of randomness used by the algorithm and, in turn, a different starting point for the search and different decisions made during the search.
The pseudorandom number generator used as the source of randomness can be seeded to ensure the same sequence of random numbers is provided on every run of the algorithm. This is good for small demonstrations and tutorials, although it is fragile, as it works against the inherent randomness of the algorithm.
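For example, seeding Python's built-in pseudorandom number generator makes a run reproducible:

```python
from random import seed, random

# seeding the generator fixes the sequence of pseudorandom numbers,
# so two runs with the same seed make identical "random" decisions
seed(1)
print(random(), random())

seed(1)
print(random(), random())  # prints the same pair as above
```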
Instead, a given algorithm can be executed many times to control for the randomness of the procedure.
This idea of multiple runs of the algorithm can be used in two key situations:
- Comparing algorithms
- Evaluating the final result
Algorithms may be compared based on the relative quality of the result found, the number of function evaluations performed, or some combination or derivation of these considerations. The result of any single run depends on the randomness used by the algorithm and alone cannot meaningfully represent the capability of the algorithm. Instead, a strategy of repeated evaluation should be used.
Any comparison between stochastic optimization algorithms will require the repeated evaluation of each algorithm with a different source of randomness, and the summarization of the probability distribution of the best results found, such as the mean and standard deviation of objective values. The mean result from each algorithm can then be compared.
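As a sketch of this repeated-evaluation procedure, the snippet below runs an illustrative stochastic search many times and summarizes the distribution of best scores. The stand-in one_run function and the choice of 30 repetitions are assumptions for demonstration, not a prescribed protocol.

```python
from random import gauss
from statistics import mean, stdev

# hypothetical objective to minimize
def objective(x):
    return x ** 2.0

# a single run of an illustrative stochastic search (a stand-in for
# whichever algorithm is being evaluated), returning its best score
def one_run(n_iterations=100, step_size=0.5):
    best = gauss(0.0, 10.0)  # random start point
    best_eval = objective(best)
    for _ in range(n_iterations):
        candidate = best + gauss(0.0, step_size)
        if objective(candidate) <= best_eval:
            best, best_eval = candidate, objective(candidate)
    return best_eval

# repeat the run many times, each with a different stream of randomness,
# then summarize the resulting distribution of best scores
scores = [one_run() for _ in range(30)]
print('mean=%.5f stdev=%.5f' % (mean(scores), stdev(scores)))
```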
In cases where multiple local minima are likely to exist, it can be beneficial to incorporate random restarts after our termination conditions are met, where we restart our local descent method from randomly selected initial points.
Likewise, any single run of a chosen optimization algorithm alone does not meaningfully represent the global optima of the objective function. Instead, a strategy of repeated evaluation should be used to develop a distribution of optimal solutions.
The maximum or minimum of the distribution can be taken as the final solution, and the distribution itself will provide a point of reference and confidence that the solution found is "relatively good" or "good enough" given the resources expended.
Multi-Restart: An approach for improving the likelihood of locating the global optima via the repeated application of a stochastic optimization algorithm to an optimization problem.
The repeated application of a stochastic optimization algorithm to an objective function is sometimes referred to as a multi-restart strategy, and may be built into the optimization algorithm itself or prescribed more generally as a procedure around a chosen stochastic optimization algorithm.
Every time you perform a random restart, the hill-climber then winds up in some (potentially new) local optimum.
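Putting this together, here is a minimal sketch of a multi-restart procedure around stochastic hill climbing, where the best solution across all restarts is kept as the final answer. The multimodal objective, restart count, and search bounds are illustrative assumptions.

```python
from math import cos
from random import gauss, uniform

# hypothetical multimodal objective: many local minima
def objective(x):
    return x ** 2.0 + 10.0 * cos(3.0 * x)

# a single run of stochastic hill climbing from a given start point
def hill_climb(start, n_iterations=200, step_size=0.2):
    best, best_eval = start, objective(start)
    for _ in range(n_iterations):
        candidate = best + gauss(0.0, step_size)
        if objective(candidate) <= best_eval:
            best, best_eval = candidate, objective(candidate)
    return best, best_eval

# multi-restart: repeat the search from random start points and keep
# the best solution found across all restarts
results = [hill_climb(uniform(-10.0, 10.0)) for _ in range(20)]
best, best_eval = min(results, key=lambda pair: pair[1])
print('Best: x=%.5f f(x)=%.5f' % (best, best_eval))
```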
Further Reading
This section provides additional resources on the topic if you are looking to go deeper.
Papers
Stochastic Optimization, 2011
The Importance of Better Models in Stochastic Optimization, 2019
Books
Algorithms for Optimization, 2019
Introduction to Stochastic Search and Optimization, 2003
Essentials of Metaheuristics, 2011
Articles
Stochastic optimization, Wikipedia
Heuristic (computer science), Wikipedia
Metaheuristic, Wikipedia
Conclusion
In this guide, you got a brief introduction to stochastic optimization.
Specifically, you learned:
- Stochastic optimization algorithms make use of randomness as part of the search procedure.
- Examples of stochastic optimization algorithms, such as simulated annealing and genetic algorithms.
- Practical considerations when using stochastic optimization algorithms, such as repeated evaluations.