#### Stochastic Methods Based on VU-Decomposition Methods for Stochastic Convex Minimax Problems

Yuan Lu,1 Wei Wang,2 Shuang Chen,3 and Ming Huang3
1Normal College, Shenyang University, Shenyang 110044, China
2School of Mathematics, Liaoning Normal University, Dalian 116029, China
3School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
Received 6 August 2014; Revised 29 November 2014; Accepted 29 November 2014; Published 4 December 2014
Academic Editor: Hamid R. Karimi
Copyright © 2014 Yuan Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper applies the sample average approximation (SAA) method, based on VU-space decomposition theory, to solve stochastic convex minimax problems. Under moderate conditions, the SAA solution converges to its true counterpart with probability approaching one, and the convergence is exponentially fast as the sample size increases. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem.
1. Introduction
In this paper, the following stochastic convex minimax problem (SCMP) is considered:

min_{x ∈ R^n} f(x), (1)

where

f(x) = max{f_i(x) : i = 1, …, m}, f_i(x) = E[g_i(x, ξ)], (2)

the functions g_i(·, ξ): R^n → R, i = 1, …, m, are convex, and ξ is a random vector defined on a probability space (Ω, F, P); E denotes the mathematical expectation with respect to the distribution of ξ.
SCMP is a natural extension of the deterministic convex minimax problem (CMP for short). The CMP has a number of important applications in operations research, engineering, and economics. While many practical problems involve only deterministic data, there are important instances where the problem data contain uncertainties, and SCMP models are consequently proposed to reflect these uncertainties.
A blanket assumption is made that, for every x, the expected values f_i(x) = E[g_i(x, ξ)], i = 1, …, m, are well defined. Let ξ^1, …, ξ^N be a sampling of ξ. A well-known approach based on the sampling is the so-called SAA method, that is, using the sample average value of g_i(x, ξ) to approximate its expected value, because the classical law of large numbers for random functions ensures that the sample average value of g_i(x, ξ) converges with probability 1 to f_i(x) when the sampling is independent and identically distributed (i.i.d. for short). Specifically, we can write down the SAA of our SCMP (1) as follows:

min_{x ∈ R^n} f^N(x), (3)

where

f^N(x) = max{f_i^N(x) : i = 1, …, m}, f_i^N(x) = (1/N) Σ_{j=1}^N g_i(x, ξ^j). (4)

The problem (3) is called the SAA problem and (1) the true problem.
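To make the SAA construction concrete, the following sketch builds f^N for a small one-dimensional example with two hypothetical integrands g_1(x, ξ) = (x − ξ)² and g_2(x, ξ) = x² + ξx (stand-ins not taken from the paper); with ξ ~ N(0, 1), the true values are f_1(x) = x² + 1 and f_2(x) = x².

```python
import numpy as np

# Hypothetical integrands g_i(x, xi), convex in x for each sample xi:
#   g_1(x, xi) = (x - xi)^2,  g_2(x, xi) = x^2 + xi*x
def g(x, xi):
    return np.array([(x - xi) ** 2, x ** 2 + xi * x])

def f_saa(x, samples):
    """SAA objective f^N(x) = max_i f_i^N(x), f_i^N(x) = (1/N) sum_j g_i(x, xi_j)."""
    avg = np.mean([g(x, xi) for xi in samples], axis=0)  # the vector (f_1^N(x), f_2^N(x))
    return np.max(avg)

rng = np.random.default_rng(0)
samples = rng.normal(size=1000)   # i.i.d. sampling of xi ~ N(0, 1)
print(f_saa(0.5, samples))        # close to max(0.25 + 1, 0.25) = 1.25
```

For ξ ~ N(0, 1) the exact value at x = 0.5 is max(f_1, f_2) = 1.25, and the sample average approaches it as N grows, as the law of large numbers guarantees.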
The SAA method has been a hot topic of research in stochastic optimization. Pagnoncelli et al. [1] present the SAA method for chance constrained programming. Shapiro et al. [2] consider stochastic generalized equations by using the SAA method. Xu [3] studies the SAA method for a class of stochastic variational inequality problems. Liu et al. [4] give penalized SAA methods for stochastic mathematical programs with complementarity constraints. Chen et al. [5] discuss SAA methods based on the Newton method for the stochastic variational inequality problem with constraint conditions. Since the objective functions of the SAA problems in the references mentioned above are smooth, they can be solved by using the Newton method.
More recently, new conceptual schemes have been developed, which are based on the VU-theory introduced in [6]; see also [7–11]. The idea is to decompose R^n into two orthogonal subspaces U and V at a point x̄, where the nonsmoothness of f is concentrated essentially on V and the smoothness of f appears on the U subspace. More precisely, for a given g ∈ ∂f(x̄), where ∂f(x̄) denotes the subdifferential of f at x̄ in the sense of convex analysis, R^n can be decomposed into the direct sum of two orthogonal subspaces, that is, R^n = U ⊕ V, where V = span(∂f(x̄) − g) and U = V^⊥. As a result, an algorithm frame can be designed for the SAA problem that makes a step in the V space, followed by a U-Newton step, in order to obtain superlinear convergence. A VU-space decomposition method for solving a constrained nonsmooth convex program is presented in [12]. A decomposition algorithm based on a proximal bundle-type method with inexact data is presented for minimizing an unconstrained nonsmooth convex function in [13].
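The two subspaces are easy to compute numerically when f is a finite max of smooth functions: V is spanned by differences of active gradients and U is its orthogonal complement. The sketch below (with hypothetical gradient values, not from the paper) obtains orthonormal bases for V and U in R^3 via QR and SVD.

```python
import numpy as np

# Active gradients of a max-function at a kink point xbar (hypothetical values).
# V = span{grad_i - grad_1 : i >= 2}, U = orthogonal complement of V.
grads = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])          # two active gradients in R^3

diffs = (grads[1:] - grads[0]).T             # columns span V
V_basis, _ = np.linalg.qr(diffs)             # orthonormal basis of V
_, _, vt = np.linalg.svd(diffs.T)
U_basis = vt[diffs.shape[1]:].T              # null-space rows of SVD give U = V-perp

print(V_basis.shape, U_basis.shape)          # V_basis is (3, 1), U_basis is (3, 2)
print(np.allclose(U_basis.T @ V_basis, 0))   # orthogonality check: True
```

Here dim V = 1 (one gradient difference) and dim U = 2, so the nonsmoothness at x̄ is confined to a single direction while f behaves smoothly on a two-dimensional subspace.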
In this paper, the objective function in (1) is nonsmooth, but it has a structure that lends itself to VU-space decomposition. Based on the VU-theory, a superlinearly convergent VU-algorithm frame is designed to solve the SAA problem. The rest of the paper is organized as follows. In the next section, the SCMP is transformed into a nonsmooth problem, and it is proved that the approximate solution set converges to the true solution set in the sense of the Hausdorff distance. In Section 3, the VU-theory of the SAA problem is given. In the final section, the VU-decomposition algorithm frame for the SAA problem is designed.
2. Convergence Analysis of SAA Problem
In this section, we discuss the convergence of (3) to (1) as N increases. Specifically, we investigate how the solution of the SAA problem (3) converges to its true counterpart as N → ∞. Firstly, we give the basic assumptions for the SAA method.
Assumption 1. (a) Letting X ⊂ R^n be a nonempty compact set, for i = 1, …, m, the limits defining f_i(x) = E[g_i(x, ξ)] exist for every x ∈ X.
(b) For every x ∈ X, the moment-generating function of the random variable g_i(x, ξ) − f_i(x) is finite-valued for all t in a neighborhood of zero.
(c) There exists a measurable function κ(ξ) such that |g_i(x′, ξ) − g_i(x, ξ)| ≤ κ(ξ)‖x′ − x‖ for all ξ and all x′, x ∈ X.
(d) The moment-generating function M_κ(t) of κ(ξ) is finite-valued for all t in a neighborhood of zero, where M_κ(t) = E[exp(tκ(ξ))] is the moment-generating function of the random variable κ(ξ).
Theorem 2. Let S* and S_N denote the solution sets of (1) and (3), respectively. Assuming that both S* and S_N are nonempty, then one has D(S_N, S*) → 0 with probability 1 as N → ∞, where D(A, B) := sup_{a ∈ A} inf_{b ∈ B} ‖a − b‖ denotes the deviation of the set A from the set B.
Proof. For any points x_N ∈ S_N and x* ∈ S*, we have f^N(x_N) ≤ f^N(x*) and f(x*) ≤ f(x_N). From Assumption 1, we know that, for any ε > 0, there exists N_0 such that, if N ≥ N_0, then sup_{x ∈ X} |f^N(x) − f(x)| ≤ ε with probability 1. By letting N → ∞, we obtain f(x_N) → f(x*) with probability 1. This shows that dist(x_N, S*) → 0, which implies D(S_N, S*) → 0.
We now move on to discuss the exponential rate of convergence of the SAA problem (3) to the true problem (1) as the sample size increases.
Theorem 3. Let x_N be a solution to the SAA problem (3) and let S* be the solution set of the true problem (1). Suppose Assumption 1 holds. Then, for every ε > 0, there exist positive constants C(ε) and β(ε), independent of N, such that Prob{dist(x_N, S*) ≥ ε} ≤ C(ε) exp(−Nβ(ε)) for N sufficiently large.
Proof. Let ε be any small positive number. By Theorem 2 and the definition of D, we have dist(x_N, S*) → 0 with probability 1. Therefore, by Assumption 1, we have Prob{dist(x_N, S*) ≥ ε} ≤ C(ε) exp(−Nβ(ε)) for N sufficiently large. The proof is complete.
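The convergence of SAA solutions can also be observed numerically. The sketch below uses a hypothetical one-dimensional SCMP (integrands g_1(x, ξ) = (x − ξ)², g_2(x, ξ) = 0.5x² + ξ with ξ ~ N(1, 1), chosen for illustration only) whose true minimizer x* = 2 − √2 is the crossing point of f_1(x) = x² − 2x + 2 and f_2(x) = 0.5x² + 1; the SAA problem is solved by grid search for increasing N.

```python
import numpy as np

# Hypothetical integrands with xi ~ N(1, 1):
#   g_1(x, xi) = (x - xi)^2,  g_2(x, xi) = 0.5*x^2 + xi.
# True objective: f(x) = max(x^2 - 2x + 2, 0.5*x^2 + 1), minimized at the
# crossing point x* = 2 - sqrt(2).
rng = np.random.default_rng(1)
x_star = 2 - np.sqrt(2)

def f_saa(x, samples):
    f1 = np.mean((x - samples) ** 2)        # f_1^N(x)
    f2 = 0.5 * x ** 2 + np.mean(samples)    # f_2^N(x)
    return max(f1, f2)

grid = np.linspace(-2, 2, 4001)
errors = []
for n in (10, 100, 10000):
    samples = rng.normal(loc=1.0, size=n)
    x_n = min(grid, key=lambda x: f_saa(x, samples))  # SAA solution by grid search
    errors.append(abs(x_n - x_star))
print(errors)  # the distance to x* typically shrinks as N grows
```

This is a demonstration of the convergence behavior, not part of the proof; for individual random draws the error need not decrease monotonically, which is consistent with the probabilistic statement of Theorem 3.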
3. The VU-Theory of the SAA Problem
In the following sections, we give the VU-theory, the VU-decomposition algorithm frame, and the convergence analysis of the SAA problem.
The subdifferential of f^N at a point x can be computed in terms of the gradients of the structure functions f_i^N that are active at x. More precisely,

∂f^N(x) = conv{∇f_i^N(x) : i ∈ I(x)},

where I(x) := {i ∈ {1, …, m} : f_i^N(x) = f^N(x)} is the set of active indices at x. Let x̄ be a solution of (3). By continuity of the structure functions, there exists a ball B(x̄, ρ) such that I(x) ⊆ I(x̄) for all x ∈ B(x̄, ρ). For convenience, we assume that the cardinality of I(x̄) is m₁ and reorder the structure functions so that I(x̄) = {1, …, m₁}. From now on, we consider

f^N(x) = max{f_i^N(x) : i = 1, …, m₁}.

The following assumption will be used in the rest of this paper.
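In floating-point computation the active index set is determined up to a tolerance. A minimal helper (an illustration, not the paper's procedure) that identifies I(x) from the values of the smooth pieces:

```python
import numpy as np

# Identify active indices I(x) = {i : f_i^N(x) = f^N(x)} up to a tolerance,
# given the values f_i^N(x) of the smooth pieces of the SAA max-function.
def active_set(piece_values, tol=1e-8):
    fmax = np.max(piece_values)
    return [i for i, v in enumerate(piece_values) if fmax - v <= tol]

print(active_set([1.0, 3.0, 3.0 - 1e-12]))  # -> [1, 2]
```

The subdifferential ∂f^N(x) is then the convex hull of the gradients ∇f_i^N(x) over the returned indices.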
Assumption 4. The set {∇f_i^N(x̄) − ∇f_1^N(x̄) : i = 2, …, m₁} is linearly independent.
Theorem 5. Suppose Assumption 4 holds. Then R^n can be decomposed at x̄ as R^n = U ⊕ V, where

V = span{∇f_i^N(x̄) − ∇f_1^N(x̄) : i = 2, …, m₁}, U = V^⊥.
Proof. The proof follows directly from Assumption 4 and the definitions of the spaces U and V.
Given a subgradient g ∈ ∂f^N(x̄) with V-component g_V, the U-Lagrangian of f^N, depending on g_V, is defined by

L_U(u; g_V) := min_{v ∈ V} {f^N(x̄ + u ⊕ v) − ⟨g_V, v⟩}, u ∈ U.

The associated set of V-space minimizers is defined by

W(u; g_V) := argmin_{v ∈ V} {f^N(x̄ + u ⊕ v) − ⟨g_V, v⟩}.
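The smoothing effect of the U-Lagrangian can be seen on a toy max-function (a hypothetical example, not from the paper): f(x) = max(x₁ + x₂², −x₁ + x₂²) = |x₁| + x₂². At x̄ = 0 we have V = span{(1, 0)}, U = span{(0, 1)}, and for g = 0 ∈ ∂f(0) (so g_V = 0) the U-Lagrangian is L_U(u; 0) = min_v |v| + u² = u², a smooth function of u even though f is nonsmooth.

```python
import numpy as np

# f(x) = max(x1 + x2^2, -x1 + x2^2) = |x1| + x2^2 at xbar = 0:
# V = span{(1, 0)}, U = span{(0, 1)}; take g = 0, so g_V = 0.
def f(x):
    return max(x[0] + x[1] ** 2, -x[0] + x[1] ** 2)

def u_lagrangian(u, vs=np.linspace(-1, 1, 2001)):
    # L_U(u; 0) = min_v { f(xbar + u (+) v) - <g_V, v> }, minimized on a v-grid
    return min(f(np.array([v, u])) for v in vs)

# Along U the function is smooth: L_U(u; 0) = u^2 here.
print([round(u_lagrangian(u), 6) for u in (0.0, 0.1, 0.5)])  # [0.0, 0.01, 0.25]
```

The V-minimization absorbs the kink, leaving a C² function of u on which a Newton step makes sense; this is exactly what the U-step of the algorithm in Section 4 exploits.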
Theorem 6. Suppose Assumption 4 holds. Let χ(u) := x̄ + u ⊕ v(u) be a trajectory leading to x̄ and let g ∈ ri ∂f^N(x̄). Then, for all u ∈ U sufficiently small, the following hold: (i) the nonlinear system f_i^N(x̄ + u ⊕ v) − f_1^N(x̄ + u ⊕ v) = 0, i = 2, …, m₁, with variable v ∈ V and the parameter u, has a unique solution v = v(u), and v(·) is a C² function; (ii) χ(·) is a C²-function with χ(0) = x̄; (iii) L_U(u; g_V) = f^N(χ(u)) − ⟨g_V, v(u)⟩; (iv) ∇L_U(u; g_V) = (Jχ(u))ᵀ ḡ(u) for any ḡ(u) ∈ ∂f^N(χ(u)) with V-component g_V, where Jχ denotes the Jacobian of χ; (v) W(u; g_V) = {v(u)}.
Proof. Item (i) follows from the assumption that the f_i^N are C² and an application of a second-order implicit function theorem (see [14], Theorem 2.1). Since each f_i^N is C², v(·) is C² and the Jacobians Jv exist and are continuous. Differentiating the primal track χ(u) = x̄ + u ⊕ v(u) with respect to u, we obtain the expression of Jχ(u), and item (ii) follows.
(iii) By the definitions of L_U and W, we have L_U(u; g_V) = f^N(x̄ + u ⊕ v(u)) − ⟨g_V, v(u)⟩ = f^N(χ(u)) − ⟨g_V, v(u)⟩.
According to the second-order expansion of f^N along the track χ(u), and since χ(0) = x̄, v(0) = 0, Jv(0) = 0, and g_V ∈ V, item (iii) follows.
Similar to (iii), we obtain (iv). The conclusion of (v) follows from item (i) and the definition of W.
4. Algorithm and Convergence Analysis
Supposing that 0 ∈ ri ∂f^N(x̄), we give an algorithm frame for solving (3). This algorithm makes a step in the V-subspace, followed by a U-Newton step, in order to obtain a superlinear convergence rate.
Algorithm 7 (algorithm frame).
Step 0. Initialization: given ε > 0, choose a starting point x^0 close enough to x̄ and a subgradient g^0 ∈ ∂f^N(x^0), and set k = 0.
Step 1. Stop if ‖g^k‖ ≤ ε.
Step 2. Find the active index set I(x^k).
Step 3. Construct the VU-decomposition at x^k; that is, R^n = U_k ⊕ V_k, where

V_k = span{∇f_i^N(x^k) − ∇f_1^N(x^k) : i ∈ I(x^k), i ≠ 1}, U_k = V_k^⊥.
Step 4. Perform the V-step. Compute v^k, which denotes the minimizer in (22), and set x̃^k = x^k + v^k.

Step 5. Perform the U-step. Compute u^k from the Newton system H_k u^k = −U_k^T g^k, where H_k is such that it approximates the U-Hessian of L_U at x̃^k. Compute x^{k+1} = x̃^k + u^k.
Step 6. Update: set k := k + 1 and return to Step 1.
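As an illustration of the frame (a sketch under stated assumptions, not the paper's implementation), the code below applies the V-step/U-Newton pattern to a hypothetical two-piece max-function f(x) = max(x₁ + x₂², −x₁ + 2x₂²) with solution x̄ = 0, where V = span{e₁} and U = span{e₂}: the V-step restores activity of both pieces, and the U-step performs a finite-difference Newton step on the function restricted to the resulting track.

```python
import numpy as np

# Hypothetical SAA-style max-function with two smooth pieces:
#   f(x) = max(f1, f2),  f1 = x1 + x2^2,  f2 = -x1 + 2*x2^2.
# Solution xbar = (0, 0); V = span{e1}, U = span{e2} (Assumption 4 holds).
f1 = lambda x: x[0] + x[1] ** 2
f2 = lambda x: -x[0] + 2 * x[1] ** 2
f = lambda x: max(f1(x), f2(x))

def v_step(x):
    # Move along V = e1 so that both pieces are active: f1 = f2  =>  x1 = x2^2 / 2.
    return np.array([x[1] ** 2 / 2.0, x[1]])

def u_step(x, h=1e-4):
    # Newton step along U = e2 on f restricted to the track
    # (finite-difference first and second derivatives).
    phi = lambda u: f(v_step(np.array([x[0], u])))
    d1 = (phi(x[1] + h) - phi(x[1] - h)) / (2 * h)
    d2 = (phi(x[1] + h) - 2 * phi(x[1]) + phi(x[1] - h)) / h ** 2
    return np.array([x[0], x[1] - d1 / d2])

x = np.array([0.3, 0.4])     # starting point close to xbar
for k in range(3):
    x = v_step(x)            # V-step: restore activity
    x = u_step(x)            # U-Newton step
print(x)                     # approaches xbar = (0, 0)
```

On this example the restricted function is quadratic along U, so the Newton step is essentially exact and the iterates reach x̄ in very few steps, illustrating the fast local convergence the frame is designed for.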
Theorem 8. Suppose that the starting point x^0 is close enough to x̄ and that the U-Hessian matrices in Step 5 are invertible. Then the iteration points x^k generated by Algorithm 7 converge to x̄ and satisfy ‖x^{k+1} − x̄‖ = o(‖x^k − x̄‖).
Proof. It follows from Theorem 6(i) that x̃^k = x^k + v^k lies on the primal track χ(·). Since ∇²L_U exists and is continuous, we have from the definition of the U-Hessian matrix that the system in Step 5 is a Newton system for L_U. By virtue of (30), it follows from the hypothesis that the U-Hessian is invertible, and hence the U-Newton step is well defined. In consequence, one has ‖x^{k+1} − x̄‖ = o(‖x^k − x̄‖). The proof is completed by combining (33) and (35).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The research is supported by the National Natural Science Foundation of China under Project nos. 11301347, 11171138, and 11171049 and General Project of the Education Department of Liaoning Province no. L201242.
References

[1] B. K. Pagnoncelli, S. Ahmed, and A. Shapiro, "Sample average approximation method for chance constrained programming: theory and applications," Journal of Optimization Theory and Applications, vol. 142, no. 2, pp. 399–416, 2009.
[2] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, Pa, USA, 2009.
[3] H. Xu, "Sample average approximation methods for a class of stochastic variational inequality problems," Asia-Pacific Journal of Operational Research, vol. 27, no. 1, pp. 103–119, 2010.
[4] Y. Liu, H. Xu, and J. J. Ye, "Penalized sample average approximation methods for stochastic mathematical programs with complementarity constraints," Mathematics of Operations Research, vol. 36, no. 4, pp. 670–694, 2011.
[5] S. Chen, L.-P. Pang, F.-F. Guo, and Z.-Q. Xia, "Stochastic methods based on Newton method to the stochastic variational inequality problem with constraint conditions," Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 779–784, 2012.
[6] C. Lemaréchal, F. Oustry, and C. Sagastizábal, "The U-Lagrangian of a convex function," Transactions of the American Mathematical Society, vol. 352, no. 2, pp. 711–729, 2000.
[7] R. Mifflin and C. Sagastizábal, "VU-decomposition derivatives for convex max-functions," in Ill-Posed Variational Problems and Regularization Techniques, M. Théra and R. Tichatschke, Eds., vol. 477 of Lecture Notes in Economics and Mathematical Systems, pp. 167–186, Springer, Berlin, Germany, 1999.
[8] C. Lemaréchal and C. Sagastizábal, "More than first-order developments of convex functions: primal-dual relations," Journal of Convex Analysis, vol. 3, no. 2, pp. 255–268, 1996.
[9] R. Mifflin and C. Sagastizábal, "On VU-theory for functions with primal-dual gradient structure," SIAM Journal on Optimization, vol. 11, no. 2, pp. 547–571, 2000.
[10] R. Mifflin and C. Sagastizábal, "Functions with primal-dual gradient structure and U-Hessians," in Nonlinear Optimization and Related Topics, G. Di Pillo and F. Giannessi, Eds., vol. 36 of Applied Optimization, pp. 219–233, Kluwer Academic, 2000.
[11] R. Mifflin and C. Sagastizábal, "Primal-dual gradient structured functions: second-order results; links to epi-derivatives and partly smooth functions," SIAM Journal on Optimization, vol. 13, no. 4, pp. 1174–1194, 2003.
[12] Y. Lu, L.-P. Pang, F.-F. Guo, and Z.-Q. Xia, "A superlinear space decomposition algorithm for constrained nonsmooth convex program," Journal of Computational and Applied Mathematics, vol. 234, no. 1, pp. 224–232, 2010.
[13] Y. Lu, L.-P. Pang, J. Shen, and X.-J. Liang, "A decomposition algorithm for convex nondifferentiable minimization with errors," Journal of Applied Mathematics, vol. 2012, Article ID 215160, 15 pages, 2012.
[14] S. Lang, Real and Functional Analysis, vol. 142 of Graduate Texts in Mathematics, Springer, New York, NY, USA, 3rd edition, 1993.