Regularized gradient-projection methods for finding the minimum-norm solution of the constrained convex minimization problem

Journal of Inequalities and Applications, Jan 2017

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Assume that g is a real-valued convex function and the gradient ∇g is 1/L-ism with L > 0. Let 0 < λ < 2/(L + 2), 0 < βn < 1. We prove that the sequence {xn} generated by the iterative algorithm xn+1 = PC(I − λ(∇g + βnI))xn, ∀n ≥ 0, converges strongly to q ∈ U, where q = PU(0) is the minimum-norm solution of the constrained convex minimization problem, which also solves the variational inequality ⟨−q, p − q⟩ ≤ 0, ∀p ∈ U. Under suitable conditions, we obtain some strong convergence theorems. As an application, we apply our algorithm to the split feasibility problem in Hilbert spaces. MSC: 58E35, 47H09, 65J15.



Ming Tian and Hui-Fang Zhang

Keywords: regularized gradient-projection method; minimum-norm solution; constrained convex minimization problem; variational inequality

1 Introduction

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖, and let C be a nonempty closed convex subset of H. Let ℕ and ℝ denote the sets of positive integers and real numbers, respectively. A mapping T : C → C is nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C, and F(T) denotes the set of fixed points of T.

Firstly, consider the constrained convex minimization problem

min_{x∈C} g(x),   (1.1)

where g : C → ℝ is a real-valued convex function. Assume that the constrained convex minimization problem (1.1) is solvable, and let U denote its solution set. The gradient-projection algorithm (GPA) is an effective method for solving (1.1); it generates a sequence {xn} by the recursive formula

xn+1 = PC(xn − λn∇g(xn)), ∀n ≥ 0.   (1.2)

If ∇g is strongly monotone and Lipschitz continuous, then for suitable step sizes λn the sequence {xn} generated by (1.2) converges strongly to a minimizer of (1.1). However, if the gradient ∇g is only assumed to be 1/L-ism (inverse strongly monotone) with L > 0 and 0 < λn < 2/L, the sequence {xn} generated by (1.2) converges only weakly to a minimizer of (1.1).

Recently, many authors have combined the constrained convex minimization problem with fixed point problems and proposed composite iterative algorithms to find a solution of the constrained convex minimization problem [1-3, 13, 14].

In 2000, Moudafi [8] introduced the viscosity approximation method for nonexpansive mappings:

xn+1 = αnf(xn) + (1 − αn)Txn, ∀n ≥ 0,   (1.3)

where f is a contraction. In 2001, Yamada [9] introduced the so-called hybrid steepest-descent algorithm:

xn+1 = Txn − μαnF(Txn), ∀n ≥ 0,   (1.4)

where F is a Lipschitzian and strongly monotone operator. In 2006, Marino and Xu [10] considered a general iterative algorithm:

xn+1 = αnγf(xn) + (I − αnA)Txn, ∀n ≥ 0,   (1.5)

where A is a strongly positive operator. In 2010, Tian [11] combined the iterative algorithms (1.3) and (1.4) and proposed a new iterative algorithm:

xn+1 = αnγf(xn) + (I − μαnF)Txn, ∀n ≥ 0.   (1.6)

Later, Tian [12] generalized (1.6) and obtained the following iterative algorithm:

xn+1 = αnγVxn + (I − μαnF)Txn, ∀n ≥ 0,   (1.7)

where V is a Lipschitzian operator. Based on these iterative algorithms, some authors combined the GPA with averaged operators to solve the constrained convex minimization problem [1, 15]. In 2011, Ceng et al. [1] proposed a sequence {xn} generated by the following iterative algorithm:

xn+1 = PC[snγh(xn) + (I − snμF)Tnxn], ∀n ≥ 0,   (1.8)

where h : C → H is an l-Lipschitzian mapping with a constant l > 0, F : C → H is a k-Lipschitzian and η-strongly monotone operator with constants k, η > 0, and Tn is the nonexpansive mapping determined by

θn = (2 − λnL)/4,  PC(I − λn∇g) = θnI + (1 − θn)Tn, ∀n ≥ 0.

Then the sequence {xn} generated by (1.8) converges strongly to a minimizer of (1.1).
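For readers who want to experiment, the following minimal Python sketch (not part of the original article) illustrates the basic GPA recursion (1.2). The toy problem, the matrix A, the vector b, the box constraint C = [0, 1]^4, and the step size are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the GPA recursion (1.2) -- not the authors' code.
# Toy problem (an assumption for illustration): minimize
#   g(x) = 0.5 * ||A x - b||^2  over the box  C = [0, 1]^4,
# so grad g(x) = A^T (A x - b) is 1/L-ism with L = ||A||^2, and the
# projection P_C is a coordinatewise clip onto [0, 1].
import numpy as np

def gpa(A, b, lam, n_iters=1000):
    """x_{n+1} = P_C(x_n - lam * grad g(x_n)); convergence needs 0 < lam < 2/L."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)               # grad g(x) = A^T (A x - b)
        x = np.clip(x - lam * grad, 0.0, 1.0)  # P_C = projection onto [0, 1]^4
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad g
x_min = gpa(A, b, lam=1.0 / L)
```

In finite dimensions weak and strong convergence coincide, so the distinction the paper addresses only matters in infinite-dimensional H; the sketch merely fixes notation for the schemes discussed next.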
On the other hand, Xu [15] proposed that regularization can be used to find the minimum-norm solution of the minimization problem. Consider the regularized minimization problem

min_{x∈C} gβ(x) := g(x) + (β/2)‖x‖²,

where the regularization parameter β > 0, g is a convex function, and the gradient ∇g is 1/L-ism with L > 0. The sequence {xn} generated by

xn+1 = PC(I − λ∇gβn)xn = PC(I − λ(∇g + βnI))xn, ∀n ≥ 0,   (1.9)

where the regularization parameters satisfy 0 < βn < 1 and 0 < λ < 2/L, converges only weakly. But if a sequence {xn} is defined by

xn+1 = PC(I − λn∇gβn)xn = PC(I − λn(∇g + βnI))xn, ∀n ≥ 0,   (1.10)

where the initial guess x0 ∈ C and {λn}, {βn} satisfy the following conditions:
(i) 0 < λn ≤ βn/(L + βn)², ∀n ≥ 0,
(ii) βn → 0 (and λn → 0) as n → ∞,
(iii) ∑∞n=0 λnβn = ∞,
(iv) (|λn − λn−1| + |λnβn − λn−1βn−1|)/(λnβn)² → 0 as n → ∞,
then the sequence {xn} generated by (1.10) converges strongly to x*, which is the minimum-norm solution of (1.1) [15].

Secondly, Yu et al. [16] proposed a strong convergence theorem with a regularized-like method to find an element of the set of solutions of a monotone inclusion problem in a Hilbert space.

Theorem 1.1 ([16]) Let H be a real Hilbert space and C be a nonempty closed and convex subset of H. Let L > 0 and let F be a 1/L-ism mapping of C into H. Let B be a maximal monotone mapping on H and let G be a maximal monotone mapping on H such that the domains of B and G are included in C. Let Jρ = (I + ρB)⁻¹ and Tr = (I + rG)⁻¹ for each ρ > 0 and r > 0. Suppose that (F + B)⁻¹(0) ∩ G⁻¹(0) ≠ ∅. Let {xn} ⊂ H be defined by the regularized-type iteration of [16] for all n ≥ 0, where ρ ∈ (0, ∞), βn ∈ (0, 1), r ∈ (0, ∞). Assume that
(i) 0 < a ≤ ρ < 2/(2 + L);
(ii) limn→∞ βn = 0 and ∑∞n=0 βn = ∞.
Then the sequence {xn} converges strongly to x0, where x0 = P_{(F+B)⁻¹(0)∩G⁻¹(0)}(0).

From the article of Yu et al. [16] we obtain a new condition on the parameter ρ, namely 0 < ρ < 2/(L + 2), which is used throughout our article. Motivated and inspired by Yu, Lin and Chuang [16], we prove that, when 0 < λ < 2/(L + 2) and {βn} satisfies certain conditions, the sequence {xn} generated by the iterative algorithm

xn+1 = PC(I − λ(∇g + βnI))xn, ∀n ≥ 0,   (1.11)

converges strongly to a point q ∈ U, where q = PU(0) is the minimum-norm solution of the constrained convex minimization problem. Finally, we give a concrete example and numerical results to illustrate that our algorithm converges quickly.

2 Preliminaries

In this part we introduce some lemmas that will be used in the rest of the paper. Let H be a real Hilbert space and C be a nonempty closed convex subset of H. We use '→' to denote strong convergence of a sequence {xn} and '⇀' to denote weak convergence. Recall that the metric projection PC from H onto C assigns to each point x ∈ H the unique point PCx ∈ C satisfying

‖x − PCx‖ = min_{y∈C} ‖x − y‖.

PC has the following characteristics.

Lemma 2.1 ([17]) For a given x ∈ H:
(1) z = PCx ⟺ ⟨x − z, z − y⟩ ≥ 0, ∀y ∈ C;
(2) z = PCx ⟺ ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖², ∀y ∈ C;
(3) ⟨PCx − PCy, x − y⟩ ≥ ‖PCx − PCy‖², ∀x, y ∈ H.
From (3) we can derive that PC is nonexpansive and monotone.

Lemma 2.2 (Demiclosedness principle [19]) Let T : C → C be a nonexpansive mapping with F(T) ≠ ∅. If {xn} is a sequence in C weakly converging to x and if {(I − T)xn} converges strongly to y, then (I − T)x = y. In particular, if y = 0, then x ∈ F(T).

Lemma 2.3 ([5]) Let {an} be a sequence of nonnegative real numbers such that

an+1 ≤ (1 − αn)an + αnδn, n ≥ 0,

where {αn} ⊂ (0, 1) and {δn} is a sequence of real numbers such that
(i) ∑∞n=0 αn = ∞;
(ii) lim supn→∞ δn ≤ 0 or ∑∞n=0 αn|δn| < ∞.
Then limn→∞ an = 0.
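Before turning to the main results, it may help to see the diminishing-step regularized scheme (1.10) in computational form. Below is a minimal Python sketch (not from the paper), reusing the toy quadratic problem of the earlier GPA sketch; the schedule βn = (n + 1)^(−1/4) with λn = βn/(L + βn)² is an illustrative assumption chosen so that conditions (i)-(iv) hold.

```python
# Minimal sketch of Xu's regularized GPA (1.10) -- not the authors' code.
# Same toy problem as the GPA sketch above.  The schedules
#   beta_n = (n+1)^(-1/4),  lam_n = beta_n / (L + beta_n)^2
# are illustrative assumptions satisfying conditions (i)-(iv).
import numpy as np

def regularized_gpa_diminishing(A, b, n_iters=20000):
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for n in range(n_iters):
        beta = (n + 1) ** (-0.25)                  # (ii): beta_n -> 0
        lam = beta / (L + beta) ** 2               # (i):  0 < lam_n <= beta_n/(L+beta_n)^2
        grad_reg = A.T @ (A @ x - b) + beta * x    # grad of g(x) + (beta/2)||x||^2
        x = np.clip(x - lam * grad_reg, 0.0, 1.0)  # P_C = projection onto [0, 1]^4
    return x
```

Because λn → 0, the effective step size vanishes, which tends to make (1.10) slow in practice; this is part of the motivation for the fixed-step scheme analyzed in the next section.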
3 Main results

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Assume that g : C → ℝ is a real-valued convex function and the gradient ∇g is 1/L-ism with L > 0. Suppose that the minimization problem (1.1) is consistent, and let U denote its solution set. Let 0 < λ < 2/(L + 2) and 0 < βn < 1. Consider the mapping Gn on C defined by

Gnx := PC(I − λ(∇g + βnI))x, ∀x ∈ C, n ∈ ℕ.

For all x, y ∈ C we have

‖Gnx − Gny‖² ≤ ‖(1 − λβn)(x − y) − λ(∇g(x) − ∇g(y))‖²
= (1 − λβn)²‖x − y‖² − 2λ(1 − λβn)⟨x − y, ∇g(x) − ∇g(y)⟩ + λ²‖∇g(x) − ∇g(y)‖²
≤ (1 − λβn)²‖x − y‖² − λ[2(1 − λβn)/L − λ]‖∇g(x) − ∇g(y)‖²
≤ (1 − λβn)²‖x − y‖²,

since 0 < λ < 2/(L + 2) and 0 < βn < 1 give 2(1 − λβn)/L − λ > 0. Since 0 < 1 − λβn < 1, it follows that Gn is a contraction. Therefore, by the Banach contraction principle, Gn has a unique fixed point xn ∈ C:

xn = PC(I − λ(∇g + βnI))xn.

Next we prove that the sequence {xn} converges strongly to a point q ∈ U which also solves the variational inequality

⟨−q, p − q⟩ ≤ 0, ∀p ∈ U.   (3.1)

Equivalently, q = PU(0); that is, q is the minimum-norm solution of the constrained convex minimization problem.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let g : C → ℝ be a real-valued convex function and assume that the gradient ∇g is 1/L-ism with L > 0. Assume that U ≠ ∅. Let {xn} be the sequence generated by

xn = PC(I − λ(∇g + βnI))xn, ∀n ∈ ℕ,

where λ and {βn} satisfy the following conditions:
(i) 0 < λ < 2/(L + 2);
(ii) {βn} ⊂ (0, 1), limn→∞ βn = 0, ∑∞n=0 βn = ∞.
Then {xn} converges strongly to the point q ∈ U, where q = PU(0), which is the minimum-norm solution of the minimization problem (1.1) and also solves the variational inequality (3.1).

Proof Pick any p ∈ U; then p = PC(I − λ∇g)p, and

‖xn − p‖ = ‖PC(I − λ(∇g + βnI))xn − PC(I − λ∇g)p‖
≤ ‖(I − λ(∇g + βnI))xn − (I − λ(∇g + βnI))p‖ + λβn‖p‖
≤ (1 − λβn)‖xn − p‖ + λβn‖p‖.

Then we derive that ‖xn − p‖ ≤ ‖p‖, and hence {xn} is bounded. Moreover,

‖xn − PC(I − λ∇g)xn‖ = ‖PC(I − λ(∇g + βnI))xn − PC(I − λ∇g)xn‖ ≤ λβn‖xn‖ → 0.

Since ∇g is 1/L-ism, PC(I − λ∇g) is a nonexpansive self-mapping on C. As a matter of fact, for each x, y ∈ C,

‖PC(I − λ∇g)x − PC(I − λ∇g)y‖² ≤ ‖(I − λ∇g)x − (I − λ∇g)y‖²
= ‖x − y − λ(∇g(x) − ∇g(y))‖²
= ‖x − y‖² − 2λ⟨x − y, ∇g(x) − ∇g(y)⟩ + λ²‖∇g(x) − ∇g(y)‖²
≤ ‖x − y‖² − λ(2/L − λ)‖∇g(x) − ∇g(y)‖²
≤ ‖x − y‖².

Since {xn} is bounded, consider a subsequence {xni} of {xn}. Since {xni} is bounded, there exists a subsequence {xnij} of {xni} which converges weakly to z. Without loss of generality we may assume that xni ⇀ z. Then, by Lemma 2.2 applied to the nonexpansive mapping PC(I − λ∇g), we obtain z ∈ U. On the other hand,

‖xn − z‖² = ‖PC(I − λ(∇g + βnI))xn − PC(I − λ∇g)z‖²
≤ ⟨(I − λ(∇g + βnI))xn − (I − λ∇g)z, xn − z⟩
= ⟨(I − λ(∇g + βnI))xn − (I − λ(∇g + βnI))z, xn − z⟩ + λβn⟨−z, xn − z⟩
≤ (1 − λβn)‖xn − z‖² + λβn⟨−z, xn − z⟩,

so that ‖xn − z‖² ≤ ⟨−z, xn − z⟩. In particular,

‖xni − z‖² ≤ ⟨−z, xni − z⟩.

Since xni ⇀ z, the right-hand side tends to 0, and we derive that xni → z as i → ∞.

Let q be the minimum-norm element of U, that is, q = PU(0). Since {xn} is bounded, there exists a subsequence {xni} of {xn} such that xni ⇀ z; by the above proof, xni → z and z ∈ U. In the same way we derive

‖xn − q‖² = ‖PC(I − λ(∇g + βnI))xn − PC(I − λ∇g)q‖²
≤ ⟨(I − λ(∇g + βnI))xn − (I − λ∇g)q, xn − q⟩
= ⟨(I − λ(∇g + βnI))xn − (I − λ(∇g + βnI))q, xn − q⟩ + λβn⟨−q, xn − q⟩
≤ (1 − λβn)‖xn − q‖² + λβn⟨−q, xn − q⟩,

so that ‖xn − q‖² ≤ ⟨−q, xn − q⟩ and, in particular, ‖xni − q‖² ≤ ⟨−q, xni − q⟩. Since xni → z and z ∈ U, letting i → ∞ gives ‖z − q‖² ≤ ⟨−q, z − q⟩ ≤ 0, so z = q. From the arbitrariness of z ∈ U it follows that every weak cluster point of {xn} equals q. By the uniqueness of the solution of the variational inequality (3.1), we conclude that xn → q as n → ∞, where q = PU(0). This completes the proof. □
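The explicit counterpart of this scheme, analyzed in Theorem 3.2 below, simply iterates xn+1 = PC(I − λ(∇g + βnI))xn with a fixed step λ. The following minimal Python sketch (not the authors' code) illustrates it on the same toy quadratic problem as before; βn = 1/(n + 1) is an illustrative choice satisfying condition (ii) of Theorem 3.2.

```python
# Minimal sketch of the explicit scheme of Theorem 3.2 -- not the authors' code.
# Same toy problem as before; beta_n = 1/(n+1) satisfies condition (ii)
# (beta_n -> 0, sum beta_n = inf, sum |beta_{n+1} - beta_n| < inf).
import numpy as np

def regularized_gpa_fixed_step(A, b, n_iters=20000):
    L = np.linalg.norm(A, 2) ** 2
    lam = 1.9 / (L + 2.0)                          # (i): 0 < lam < 2/(L + 2), fixed
    x = np.zeros(A.shape[1])
    for n in range(n_iters):
        beta = 1.0 / (n + 1)
        grad_reg = A.T @ (A @ x - b) + beta * x
        x = np.clip(x - lam * grad_reg, 0.0, 1.0)  # P_C = projection onto [0, 1]^4
    return x                                       # tends to the minimum-norm minimizer q = P_U(0)
```

Unlike (1.10), the step size here does not vanish; only the regularization weight βn does, which is what selects the minimum-norm element of U.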
Theorem 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H, let g : C → ℝ be a real-valued convex function, and assume that the gradient ∇g is 1/L-ism with L > 0. Assume that U ≠ ∅. Let {xn} be a sequence generated by x0 ∈ C and

xn+1 = PC(I − λ(∇g + βnI))xn, ∀n ∈ ℕ,   (3.2)

where λ and {βn} satisfy the following conditions:
(i) 0 < λ < 2/(L + 2);
(ii) {βn} ⊂ (0, 1), limn→∞ βn = 0, ∑∞n=0 βn = ∞, ∑∞n=0 |βn+1 − βn| < ∞.
Then {xn} converges strongly to the point q ∈ U, where q = PU(0), which is the minimum-norm solution of the minimization problem (1.1) and also solves the variational inequality (3.1).

Proof First we claim that {xn} is bounded. Indeed, pick any p ∈ U; then, for any n ∈ ℕ,

‖xn+1 − p‖ ≤ ‖PC(I − λ(∇g + βnI))xn − PC(I − λ(∇g + βnI))p‖ + ‖PC(I − λ(∇g + βnI))p − PC(I − λ∇g)p‖
≤ (1 − λβn)‖xn − p‖ + λβn‖p‖.

By induction,

‖xn − p‖ ≤ max{‖x0 − p‖, ‖p‖},

and hence {xn} is bounded.

Next we show that ‖xn+1 − xn‖ → 0:

‖xn+1 − xn‖ = ‖PC(I − λ(∇g + βnI))xn − PC(I − λ(∇g + βn−1I))xn−1‖
≤ ‖(I − λ(∇g + βnI))xn − (I − λ(∇g + βnI))xn−1‖ + ‖(I − λ(∇g + βnI))xn−1 − (I − λ(∇g + βn−1I))xn−1‖
≤ (1 − λβn)‖xn − xn−1‖ + λ|βn − βn−1| · ‖xn−1‖
≤ (1 − λβn)‖xn − xn−1‖ + λ|βn − βn−1| · M,

where M = supn ‖xn‖ < ∞. By Lemma 2.3 and condition (ii), ‖xn+1 − xn‖ → 0.

Then we claim that ‖xn − PC(I − λ∇g)xn‖ → 0. Indeed,

‖xn − PC(I − λ∇g)xn‖ ≤ ‖xn − xn+1‖ + ‖xn+1 − PC(I − λ∇g)xn‖
= ‖xn − xn+1‖ + ‖PC(I − λ(∇g + βnI))xn − PC(I − λ∇g)xn‖
≤ ‖xn − xn+1‖ + λβn‖xn‖ → 0.

Next we show that lim supn→∞ ⟨−q, xn − q⟩ ≤ 0, where q = PU(0) is the minimum-norm element of U. Since {xn} is bounded, we may choose a subsequence {xnj} such that xnj ⇀ z and

lim supn→∞ ⟨−q, xn − q⟩ = limj→∞ ⟨−q, xnj − q⟩.

By the same argument as in the proof of Theorem 3.1 we have z ∈ U, and therefore

lim supn→∞ ⟨−q, xn − q⟩ = limj→∞ ⟨−q, xnj − q⟩ = ⟨−q, z − q⟩ ≤ 0.

It follows that

‖xn+1 − q‖² = ⟨PC(I − λ(∇g + βnI))xn − PC(I − λ(∇g + βnI))q, xn+1 − q⟩ + ⟨PC(I − λ(∇g + βnI))q − PC(I − λ∇g)q, xn+1 − q⟩,

and a short computation then gives

‖xn+1 − q‖² ≤ (1 − λβn)‖xn − q‖² + λβnδn,

where δn = ⟨−q, xn+1 − q⟩. It is easy to see that limn→∞ λβn = 0, ∑∞n=0 λβn = ∞, and lim supn→∞ δn ≤ 0. Hence, by Lemma 2.3, the sequence {xn} converges strongly to q, where q = PU(0). This completes the proof. □

4 Application

In this part we illustrate the practical value of our algorithm for the split feasibility problem. In 1994, Censor and Elfving [20] introduced the split feasibility problem (SFP), which is formulated as finding a point x* with the property

x* ∈ C, Ax* ∈ Q,   (4.1)

where C and Q are nonempty closed convex subsets of real Hilbert spaces H1 and H2, respectively, and A : H1 → H2 is a bounded linear operator.

Next, consider the constrained convex minimization problem

min_{x∈C} g(x) := (1/2)‖Ax − PQAx‖².   (4.2)

If x* is a solution of the SFP (4.1), then Ax* ∈ Q and ‖Ax* − PQAx*‖ = 0; that is, x* is a solution of the minimization problem (4.2). The gradient of g is ∇g = A*(I − PQ)A. Applying Theorem 3.2, we obtain the following theorem.

Theorem 4.1 Assume that the SFP (4.1) is consistent. Let C be a nonempty closed convex subset of a real Hilbert space H1, let A : H1 → H2 be a bounded linear operator, and assume W ≠ ∅, where W denotes the solution set of the SFP (4.1). Let {xn} be a sequence generated by x0 ∈ C and

xn+1 = PC(I − λ(A*(I − PQ)A + βnI))xn, ∀n ∈ ℕ,

where λ and {βn} satisfy the following conditions:
(i) 0 < λ < 2/(2 + ‖A‖²);
(ii) {βn} ⊂ (0, 1), limn→∞ βn = 0, ∑∞n=0 βn = ∞, ∑∞n=0 |βn+1 − βn| < ∞.
Then {xn} converges strongly to a point q ∈ W, where q = PW(0).

Proof We only need to show that ∇g is 1/‖A‖²-ism; Theorem 4.1 then follows from Theorem 3.2. Since PQ is firmly nonexpansive, PQ is a 1/2-averaged mapping, and hence I − PQ is 1-ism. For any x, y ∈ C we derive

⟨∇g(x) − ∇g(y), x − y⟩ = ⟨A*(I − PQ)Ax − A*(I − PQ)Ay, x − y⟩
= ⟨(I − PQ)Ax − (I − PQ)Ay, Ax − Ay⟩
≥ ‖(I − PQ)Ax − (I − PQ)Ay‖²
≥ (1/‖A‖²)‖A*(I − PQ)Ax − A*(I − PQ)Ay‖².

So ∇g is 1/‖A‖²-ism. □

5 Numerical result

In this part we use the algorithm of Theorem 4.1 to solve a system of linear equations; we calculate a 4 × 4 system.

Example 1 Let H1 = H2 = ℝ⁴. Take a 4 × 4 real matrix A and a vector b ∈ ℝ⁴, where C = ℝ⁴ and Q = {b}. That is, x* is the solution of the system of linear equations Ax = b, and

x* ∈ C, Ax* ∈ Q.

Since C = ℝ⁴ we have PC = I, and PQAx = b, so the iteration of Theorem 4.1 reduces to

xn+1 = xn − λ(A*Axn − A*b + βnxn).

Table 1 Numerical results as regards Example 1
Table 2 Numerical results as regards Example 1

As n → ∞, {xn} converges to the exact solution x* of Ax = b.
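The following minimal Python script sketches this computation. The concrete A and b of Example 1 appear only in the original PDF, so an illustrative invertible 4 × 4 matrix is used here instead, with b chosen so that the exact solution is known in advance.

```python
# Minimal sketch of the Example 1 computation -- the paper's concrete A and b
# are in the original PDF; this illustrative 4x4 system is an assumption, with
# b chosen so that the exact solution is x* = (1, 1, 1, 1)^T.
# With C = R^4 (P_C = I) and Q = {b}, the iteration of Theorem 4.1 reduces to
#   x_{n+1} = x_n - lam * (A^T A x_n - A^T b + beta_n x_n).
import numpy as np

A = np.array([[2., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 5.]])
b = A @ np.ones(4)                             # exact solution: (1, 1, 1, 1)^T

L = np.linalg.norm(A, 2) ** 2                  # grad g = A^T(I - P_Q)A is 1/||A||^2-ism
lam = 1.9 / (L + 2.0)                          # (i): 0 < lam < 2/(2 + ||A||^2)
x = np.zeros(4)
for n in range(20000):
    beta = 1.0 / (n + 1)                       # (ii)
    x = x - lam * (A.T @ (A @ x - b) + beta * x)
print(np.round(x, 6))                          # close to (1, 1, 1, 1)
```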
From Table , we can easily see that with iterative number increasing xn approaches to In Tian and Jiao [], they use another iterative algorithm to calculate the same example. 6 Conclusion In a real Hilbert space, there are many methods to solve the constrained convex minimization problem. However, most of them cannot find the minimum-norm solution. In this article, we use the regularized gradient-projection algorithm to find the minimumunder some suitable conditions, new strong convergence theorems are obtained. Finally, we apply this algorithm to the split feasibility problem and use a concrete example and numerical results to illustrate that our algorithm has fast convergence. Competing interests The authors declare that they have no competing interests. Authors’ contributions All the authors read and approved the final manuscript. Acknowledgements The authors thank the referees for their helping comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. First author was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. Hui-Fang Zhang was supported in part by Technology Innovation Funds of Civil Aviation University of China for Graduate in 2017. 1. Ceng , LC, Ansari, QH, Yao, JC : Some iterative methods for finding fixed points and for solving constrained convex minimization problems . Nonlinear Anal . 74 , 5286 - 5302 ( 2011 ) 2. Ceng , LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems . Numer. Algebra Control Optim . 1 ( 3 ), 341 - 359 ( 2011 ) 3. Ceng , LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems . J. Inequal. Appl . 2013 , 240 ( 2013 ) 4. Deutsch , F, Yamada, I: Minimizing certain convex functions over the intersection of the fixed point sets of the nonexpansive mappings . Numer. Funct. Anal. Optim . 19 , 33 - 56 ( 1998 ) 5. Xu , HK: Iterative algorithms for nonlinear operators . J. Lond. Math. Soc. 66 , 240 - 256 ( 2002 ) 6. Xu , HK: An iterative approach to quadratic optimization . J. Optim. Theory Appl . 116 , 659 - 678 ( 2003 ) 7. Yamada , I, Ogura , N, Yamashita , Y, Sakaniwa , K: Quadratic approximation of fixed points of nonexpansive mappings in Hilbert spaces . Numer. Funct. Anal. Optim . 19 , 165 - 190 ( 1998 ) 8. Moudafi , A: Viscosity approximation methods for fixed-points problem . J. Math. Anal. Appl . 241 , 46 - 55 ( 2000 ) 9. Yamada , I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings . In: Inherently Parallel Algorithms in Feasibility and Optimization and Their Application , Haifa ( 2001 ) 10. Marino , G, Xu, HK: A general method for nonexpansive mappings in Hilbert space . J. Math. Anal. Appl . 318 , 43 - 52 ( 2006 ) 11. Tian , M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces . Nonlinear Anal . 73 , 689 - 694 ( 2010 ) 12. Tian , M: A general iterative method based on the hybrid steepest descent scheme for nonexpansive mappings in Hilbert spaces . In: International Conference on Computational Intelligence and Software Engineering, CiSE 2010 , art. 5677064. IEEE, Piscataway, NJ ( 2010 ) 13. Tian , M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem . Optimization 63 , 1367 - 1385 ( 2014 ) 14. 
14. Tian, M, Liu, L: Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl. 2012, 201 (2012)
15. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
16. Yu, ZT, Lin, LJ, Chuang, CS: A unified study of the split feasible problems with applications. J. Nonlinear Convex Anal. 15(3), 605-622 (2014)
17. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
18. Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)
19. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
20. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
21. Tian, M, Jiao, SW: Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator. Fixed Point Theory Appl. 2015, 11 (2015)



Ming Tian, Hui-Fang Zhang: Regularized gradient-projection methods for finding the minimum-norm solution of the constrained convex minimization problem. Journal of Inequalities and Applications 2017, 13 (2017). DOI: 10.1186/s13660-016-1289-4