Title:
Algorithmic Framework for Improving Heuristics in Stochastic, Stage-Wise Optimization Problems
Author(s)
Choi, Jaein
Advisor(s)
Lee, Jay H.
Realff, Matthew J.
Abstract
Algorithmic Framework for Improving Heuristics in Stochastic, Stage-Wise Optimization Problems
Jaein Choi
172 Pages
Directed by Dr. Jay H. Lee and Dr. Matthew J. Realff
The goal of this thesis is the development of a computationally tractable solution method for stochastic, stage-wise optimization problems. To achieve this goal, we have developed a novel algorithmic framework, based on Dynamic Programming (DP), for improving heuristics. The proposed method represents a systematic way to take a family of solutions and patch them together into an improved solution. Notably, the patching is accomplished in state space rather than in solution space. Since the proposed approach utilizes simulation with heuristics to circumvent the curse of dimensionality of DP, it is named Dynamic Programming in Heuristically Restricted State Space.

The proposed algorithmic framework is applied to the stochastic Resource Constrained Project Scheduling problem, a real-world optimization problem with a high-dimensional state space and significant uncertainty, equivalent to billions of scenarios. The real-time decision-making policy obtained by the proposed approach outperforms the best heuristic applied in the simulation stage to form the policy. The approach is then extended with the idea of Q-Learning, which enables us to build empirical state transition rules through simulation, to handle stochastic optimization problems with complicated state transition rules.

Furthermore, the proposed framework is applied to a stochastic supply chain management problem, which has a high-dimensional action space as well as a high-dimensional state space, using a novel concept of an implicit sub-action space that efficiently restricts the action space for each state in the restricted state space. The resulting real-time policy responds to the time-varying demand for products by stitching together decisions made by the heuristics, and improves the overall performance of the supply chain. The proposed approach can be applied to any problem formulated as a stochastic DP, provided that reasonable heuristics are available for simulation.
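The core idea described above, simulating a few heuristics to collect the states they actually visit and then running backward DP restricted to those states, can be sketched on a toy problem. The inventory model, cost parameters, order-up-to heuristics, and the penalty for leaving the restricted set below are all illustrative assumptions, not the thesis's RCPSP or supply chain case studies.

```python
import random

# Toy stochastic, stage-wise problem (hypothetical): an inventory controlled
# over T stages under random demand. Costs and parameters are assumptions.
T = 4                    # number of decision stages
DEMANDS = [0, 1, 2]      # equally likely demand outcomes
MAX_INV = 10
ACTIONS = range(0, 4)    # order quantities

def step(inv, order, demand):
    """State transition: receive the order, serve demand; cost = holding + shortage."""
    new_inv = min(inv + order, MAX_INV)
    served = min(new_inv, demand)
    cost = 1.0 * (new_inv - served) + 5.0 * (demand - served)
    return new_inv - served, cost

def order_up_to(level):
    """A simple base-stock heuristic: order up to a fixed inventory level."""
    return lambda inv, t: max(0, min(level - inv, max(ACTIONS)))

HEURISTICS = [order_up_to(2), order_up_to(3)]

def restricted_states(n_sims=200, seed=0):
    """Simulate each heuristic to collect the states visited at each stage."""
    rng = random.Random(seed)
    visited = [set() for _ in range(T + 1)]
    for h in HEURISTICS:
        for _ in range(n_sims):
            inv = 0
            visited[0].add(inv)
            for t in range(T):
                inv, _ = step(inv, h(inv, t), rng.choice(DEMANDS))
                visited[t + 1].add(inv)
    return visited

def dp_policy(visited):
    """Backward DP over only the heuristically visited states; successors
    outside the restricted set get a large penalty so the policy stays inside."""
    V = [{s: 0.0 for s in visited[t]} for t in range(T + 1)]
    policy = [{} for _ in range(T)]
    for t in reversed(range(T)):
        for s in visited[t]:
            best = None
            for a in ACTIONS:
                exp_cost = 0.0
                for d in DEMANDS:
                    s2, c = step(s, a, d)
                    exp_cost += (c + V[t + 1].get(s2, 1e6)) / len(DEMANDS)
                if best is None or exp_cost < best:
                    best, policy[t][s] = exp_cost, a
            V[t][s] = best
    return V, policy

visited = restricted_states()
V, policy = dp_policy(visited)
print("expected cost from empty inventory:", V[0][0])
```

Because the DP tabulates values only over the states the heuristics reach, the resulting policy can mix and improve upon the heuristics' decisions without enumerating the full state space, which is the sense in which solutions are "patched together" in state space.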
Date Issued
2004-11-24
Extent
9149223 bytes
Resource Type
Text
Resource Subtype
Dissertation