Parallel Speed-Up of Monte Carlo Methods for Global Optimization

Author(s)
Van Vleck, Erik S.
Abstract
We introduce the notion of expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to Simulated Annealing, Genetic Algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically a geometric series, (1/s)/(1 - lambda), characterized by two parameters, s and lambda, related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel, IIP parallelism, exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is linearly sped up. Further, we observe that a serial Monte Carlo search can have infinite expected hitting time, while the same algorithm, when parallelized, can have finite expected hitting time. One consequence of the observed superlinear speedup is an improved uni-processor algorithm obtained by the technique of in-code parallelism.
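The superlinear-speedup claim for IIP parallelism can be illustrated with a small numeric sketch. Assuming (as the geometric series in the abstract suggests) a CHTD tail of the form P(T > t) = (1/s) * lambda**t, the expected hitting time is E[T] = (1/s)/(1 - lambda), and the minimum of N independent copies has tail ((1/s) * lambda**t)**N. The parameter values below are illustrative choices, not taken from the paper.

```python
def expected_hitting_time(s, lam, n_procs=1):
    """E[T] for the minimum of n_procs independent, identical searches.

    For N iid copies, P(min_i T_i > t) = ((1/s) * lam**t)**N; summing this
    geometric series over t >= 0 gives (1/s)**N / (1 - lam**N).
    """
    return (1.0 / s) ** n_procs / (1.0 - lam ** n_procs)

def speedup(s, lam, n_procs):
    """Parallel speedup S_N = E[T_1] / E[T_N] = s**(N-1) * (1 - lam**N)/(1 - lam)."""
    return expected_hitting_time(s, lam) / expected_hitting_time(s, lam, n_procs)

if __name__ == "__main__":
    # With s > 1 the factor s**(N-1) pushes the speedup above N (superlinear);
    # with s = 1 the speedup is (1 - lam**N)/(1 - lam) < N (at most linear),
    # consistent with the abstract's remark on equally likely search.
    print(speedup(s=1.2, lam=0.99, n_procs=4))  # exceeds 4
    print(speedup(s=1.0, lam=0.99, n_procs=4))  # just under 4
```

Under these assumed tail parameters, four processors with s = 1.2 yield a speedup well above 4, while s = 1 stays slightly below 4, matching the linear-speedup remark for equally likely search.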
Date
1990-08
Resource Type
Text
Resource Subtype
Pre-print