## Risk neutral and risk averse stochastic optimization

##### Author(s)

Cheng, Yi

##### Advisor(s)

Shapiro, Alexander

Lan, Guanghui

##### Abstract

In this thesis, we focus on the modeling, computational methods, and applications of multistage and single-stage stochastic optimization, which entail risk aversion under certain circumstances. Chapters 2-4 concentrate on multistage stochastic programming, while Chapters 5-6 deal with a class of single-stage functional constrained stochastic optimization problems.
First, we investigate deterministic upper bounds for a Multistage Stochastic Linear Program (MSLP). We present the Dual SDDP algorithm, which solves the dynamic programming equations of the dual problem and computes a sequence of nonincreasing deterministic upper bounds for the optimal value of the problem, even in the absence of the Relatively Complete Recourse (RCR) condition. We show that optimal dual solutions can be obtained from Primal SDDP by computing the duals of the subproblems in the backward pass. As a byproduct, we study the sensitivity of the optimal value as a function of the involved problem parameters. In particular, we provide formulas for the derivatives of the value function with respect to the parameters and illustrate their application on an inventory problem. Next, we extend the approach to the infinite-horizon MSLP and show how to construct a deterministic upper bound (dual bound) via the proposed Periodical Dual SDDP. Finally, as a proof of concept of the developed tools, we present numerical results on (1) the sensitivity of the optimal value as a function of the demand process parameters, and (2) Dual SDDP applied to the inventory and Brazilian hydro-thermal planning problems under both finite-horizon and infinite-horizon settings.
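The derivative formulas for the MSLP value function are developed in the thesis itself; as a loose, self-contained illustration of the idea of value-function sensitivity (not the thesis's formulas), the sketch below differentiates the optimal expected cost of a toy single-stage newsvendor problem with respect to a demand parameter. All names and parameter values here are hypothetical; with demand uniform on $[0, \theta]$, holding cost $h$, and backorder cost $b$, the optimal value works out to $V(\theta) = hb\theta/(2(h+b))$, so a finite difference should recover the slope $hb/(2(h+b))$.

```python
def expected_cost(q, theta, h=1.0, b=3.0):
    # exact expected holding + backorder cost for order quantity q
    # when demand ~ Uniform(0, theta) and 0 <= q <= theta
    return h * q * q / (2 * theta) + b * (theta - q) ** 2 / (2 * theta)

def optimal_value(theta, n_grid=10_000):
    # crude grid search over the order quantity stands in for solving
    # the stage problem exactly
    return min(expected_cost(theta * i / n_grid, theta) for i in range(n_grid + 1))

# central finite difference approximates dV/dtheta; the analytic slope
# for h = 1, b = 3 is h*b / (2*(h + b)) = 0.375
eps, theta = 1e-3, 10.0
dV = (optimal_value(theta + eps) - optimal_value(theta - eps)) / (2 * eps)
```

The same finite-difference check is a cheap way to validate analytic sensitivity formulas before relying on them in larger experiments.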
Next, we discuss the sample complexity of solving stationary stochastic programs by the Sample Average Approximation (SAA) method. We investigate this in the framework of Stochastic Optimal Control (in discrete time). In particular, we derive a Central Limit Theorem type of asymptotics for the optimal values of the SAA problems. The main conclusion is that the sample size required to attain a given relative error of the SAA solution is not sensitive to the discount factor, even when the discount factor is very close to one. We consider both the risk neutral and risk averse settings. The presented numerical experiments confirm the theoretical analysis.
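As a minimal illustration of the SAA principle (a one-dimensional toy problem, not the discounted optimal-control setting analyzed in the thesis), the sketch below solves the SAA of $\min_x \mathbb{E}[(x - \xi)^2]$: the SAA optimizer is the sample mean and the SAA optimal value is the sample variance, whose error around the true value $\sigma^2$ is $O(1/\sqrt{n})$, consistent with a CLT for SAA optimal values. The distribution parameters are hypothetical.

```python
import random

def saa_solve(samples):
    # SAA of min_x E[(x - xi)^2]: the optimizer is the sample mean and
    # the optimal value is the (biased) sample variance
    x_hat = sum(samples) / len(samples)
    v_hat = sum((x_hat - s) ** 2 for s in samples) / len(samples)
    return x_hat, v_hat

random.seed(0)
xi = [random.gauss(2.0, 1.0) for _ in range(20_000)]
x_hat, v_hat = saa_solve(xi)
# true optimizer x* = 2.0, true optimal value sigma^2 = 1.0
```

Rerunning with different seeds and sample sizes makes the $O(1/\sqrt{n})$ fluctuation of `v_hat` around the true optimal value visible empirically.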
Further, we propose a novel projection-free method, referred to as the Level Conditional Gradient (LCG) method, for solving convex functional constrained optimization. Different from constraint-extrapolated conditional gradient type methods (CoexCG and CoexDurCG), LCG, as a primal method, does not assume the existence of an optimal dual solution, thus improving the convergence rate of CoexCG/CoexDurCG by eliminating the dependence on the magnitude of the optimal dual solution. Similar to existing level-set methods, LCG uses an approximate Newton method to solve a root-finding problem. In each approximate Newton update, LCG calls a conditional gradient oracle (CGO) to solve a saddle point subproblem. The CGO developed herein employs easily computable lower and upper bounds on these saddle point problems. We establish the iteration complexity of the CGO for solving a general class of saddle point optimization problems. Using these results, we show that the overall iteration complexity of the proposed LCG method is $\mathcal{O}\left(\frac{1}{\epsilon^2}\log(\frac{1}{\epsilon})\right)$ for finding an $\epsilon$-optimal and $\epsilon$-feasible solution of the considered problem. To the best of our knowledge, LCG is the first primal conditional gradient method for solving convex functional constrained optimization. For the nonconvex algorithms subsequently developed in this thesis, LCG can also serve as a subroutine or provide high-quality starting points that expedite the solution process.
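LCG builds on conditional gradient (Frank-Wolfe) steps, which replace projections with a linear minimization oracle. The thesis's LCG is considerably more involved (level sets, approximate Newton updates, CGO saddle point subproblems); the sketch below shows only the generic projection-free step on a probability simplex, where linear minimization reduces to picking one vertex. The target vector and problem data are hypothetical.

```python
def frank_wolfe_simplex(grad, x0, n_iter=200):
    # generic conditional gradient: the linear minimization oracle over
    # the simplex just picks the vertex with the smallest gradient entry
    x = list(x0)
    for k in range(n_iter):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])
        gamma = 2.0 / (k + 2.0)              # standard open-loop step size
        x = [(1.0 - gamma) * xj for xj in x]  # convex combination with e_i
        x[i] += gamma
    return x

y = [0.2, 0.5, 0.3]                           # hypothetical target in the simplex
grad = lambda x: [2.0 * (a - b) for a, b in zip(x, y)]
x = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0])  # minimizes ||x - y||^2
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is exactly the appeal of projection-free methods when projections are expensive.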
Last, to cope with nonconvex functional constrained optimization problems, we develop three approaches: the Level Exact Proximal Point (EPP-LCG) method, the Level Inexact Proximal Point (IPP-LCG) method, and the Direct Nonconvex Conditional Gradient (DNCG) method. The proposed EPP-LCG and IPP-LCG methods utilize the proximal point framework and solve a series of convex subproblems. In solving each subproblem, they leverage the proposed LCG method, thus averting the effect of large Lagrange multipliers. We show that the iteration complexity of these algorithms is bounded by $\mathcal{O}\left(\frac{1}{\epsilon^3}\log(\frac{1}{\epsilon})\right)$ for obtaining an (approximate) KKT point. However, the proximal-point type methods have a triple-layer structure and may not be easily implementable. To alleviate this issue, we also propose the DNCG method, which is the first single-loop projection-free algorithm in the literature for solving nonconvex functional constrained problems. This algorithm provides a drastically simpler framework, as it contains only three updates in one loop. We show that the iteration complexity to find an $\epsilon$-Wolfe point is bounded by $\mathcal{O}\big(1/\epsilon^4\big)$. To the best of our knowledge, all of these developments are new for projection-free methods in nonconvex optimization. We demonstrate the effectiveness of the proposed nonconvex projection-free methods on a portfolio selection problem and an intensity modulated radiation therapy treatment planning problem, and compare the results with those of the LCG method. The outcome of the numerical study shows that all methods are efficient in jointly minimizing risk while promoting sparsity, within a rather short computational time on real-world, large-scale datasets.
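The proximal point framework underlying EPP-LCG/IPP-LCG convexifies a nonconvex objective by adding a quadratic centered at the current iterate and (approximately) minimizing the resulting convex subproblem. The one-dimensional sketch below illustrates only that outer/inner structure, with plain gradient descent standing in for the LCG subproblem solver; the toy objective $f(x) = x^4/4 - x^2/2$ and all parameter values are hypothetical, and there are no functional constraints here.

```python
def grad_f(x):
    # gradient of the toy nonconvex objective f(x) = x**4/4 - x**2/2,
    # which is weakly convex with modulus 1 (f'' >= -1)
    return x ** 3 - x

def inexact_prox_point(x0, lam=2.0, outer=50, inner=100, lr=0.05):
    # each outer step approximately minimizes f(x) + lam*(x - x_k)**2,
    # which is strongly convex since lam exceeds the weak-convexity modulus;
    # the inner loop is a stand-in for the convex subproblem solver
    x_k = x0
    for _ in range(outer):
        x = x_k
        for _ in range(inner):
            x -= lr * (grad_f(x) + 2 * lam * (x - x_k))
        x_k = x
    return x_k

x_star = inexact_prox_point(0.5)  # drifts to the stationary point x = 1
```

Fixed points of the outer loop satisfy the stationarity condition of the original problem, which is why the scheme targets (approximate) KKT points despite each subproblem being convex.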

##### Date Issued

2022-12-07

##### Resource Type

Text

##### Resource Subtype

Dissertation