Non-optimality of the Greedy Algorithm for Subspace Orderings in the Method of Alternating Projections

Results in Mathematics, Jul 2017

The method of alternating projections involves projecting an element of a Hilbert space cyclically onto a collection of closed subspaces. It is known that the resulting sequence always converges in norm and that one can obtain estimates for the rate of convergence in terms of quantities describing the geometric relationship between the subspaces in question, namely their pairwise Friedrichs numbers. We consider the question of how best to order a given collection of subspaces so as to obtain the best estimate on the rate of convergence. We prove, by relating the ordering problem to a variant of the famous Travelling Salesman Problem, that correctness of a natural form of the Greedy Algorithm would imply that \(\mathrm {P}=\mathrm {NP}\), before presenting a simple example which shows that, contrary to a claim made in the influential paper (Kayalar and Weinert in Math Control Signals Syst 1(1):43–59, 1988), the result of the Greedy Algorithm is not in general optimal. We go on to establish sharp estimates on the degree to which the result of the Greedy Algorithm can differ from the optimal result. Underlying all of these results is a construction which shows that for any matrix whose entries satisfy certain natural assumptions it is possible to construct a Hilbert space and a collection of closed subspaces such that the pairwise Friedrichs numbers between the subspaces are given precisely by the entries of that matrix.


O. Darwin · A. Jha · S. Roy · D. Seifert · R. Steele · L. Stigant

Mathematics Subject Classification: 47J25, 65F10 (68Q25).

Keywords: Method of alternating projections; orderings; subspaces; rate of convergence; travelling salesman problem; complexity.

1. Introduction

Let X be a real or complex Hilbert space, N ≥ 2 an integer, and suppose that \(M_1, \dots, M_N\) are closed subspaces of X.
Furthermore let \(P_k\) denote the orthogonal projection onto \(M_k\), \(1 \le k \le N\), and let \(P_M\) denote the orthogonal projection onto the intersection \(M = M_1 \cap \dots \cap M_N\). If we let \(T = P_N \cdots P_1\) then it follows from a classical theorem due to Halperin [8] that

\(\|T^n x - P_M x\| \to 0, \quad n \to \infty,\)   (1.1)

for all \(x \in X\). It follows easily that, for any \(x \in X\), the sequence in X obtained by starting at x and then projecting cyclically onto the N subspaces \(M_1, \dots, M_N\) must converge to the point \(P_M x\), which is the point in M closest to the starting vector x. This procedure is known as the method of alternating projections and has many applications, for instance to the iterative solution of large linear systems but also in the theory of partial differential equations and in image restoration; see [3] for a survey. In view of these applications it is important to understand the rate at which the convergence in (1.1) takes place; see for instance [1, 2, 6, 7] for in-depth investigations. Recall that the Friedrichs number \(c(L_1, L_2)\) between two subspaces \(L_1, L_2\) of X is defined as

\(c(L_1, L_2) = \sup\{|(x_1, x_2)| : x_k \in L_k \cap L^\perp \text{ and } \|x_k\| \le 1 \text{ for } k = 1, 2\},\)

where \(L = L_1 \cap L_2\). The Friedrichs number lies in the interval [0, 1] and may be thought of as the cosine of the 'angle' between the subspaces \(L_1\) and \(L_2\). It is shown in [9, Theorem 2] that for N = 2 in the method of alternating projections we have

\(\|T^n - P_M\| = c(M_1, M_2)^{2n-1}, \quad n \ge 1.\)   (1.2)

When N ≥ 3 no sharp upper bound of this form is known, but it is shown in [5, Corollary 2.10] that

\(\|T^n - P_M\| \le c(M_1, M_N)^{n-1} \prod_{k=1}^{N-1} c(M_k, M_{k+1})^n, \quad n \ge 1,\)   (1.3)

provided the subspaces are pairwise quasi-disjoint in the sense that \(M_k \cap M_\ell \cap M^\perp = \{0\}\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\). Moreover, the assumption on the subspaces cannot be omitted. The same bound was obtained earlier in [9] in the special case where the subspaces \(M_1 \cap M^\perp, \dots, M_N \cap M^\perp\) are independent, which is to say that if vectors \(x_k \in M_k \cap M^\perp\), \(1 \le k \le N\), satisfy \(x_1 + \dots + x_N = 0\) then \(x_1 = \dots = x_N = 0\).
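For N = 2 the identity (1.2) is easy to verify numerically. The following self-contained sketch (our own illustration, not part of the paper; all helper names are ours) takes two lines in \(\mathbb{R}^2\) meeting at angle θ, so that \(c(M_1, M_2) = \cos\theta\) and \(P_M = 0\), and checks that \(\|T^n\|\) agrees with \(\cos^{2n-1}\theta\).

```python
import math

def mat_mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def proj_onto_line(u):
    # orthogonal projection onto span{u}, where u is a unit vector in R^2
    return [[u[i] * u[j] for j in range(2)] for i in range(2)]

def op_norm(A):
    # spectral norm of a 2x2 matrix: square root of the largest
    # eigenvalue of the symmetric matrix A^T A
    a = A[0][0] ** 2 + A[1][0] ** 2
    d = A[0][1] ** 2 + A[1][1] ** 2
    b = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    lam_max = ((a + d) + math.sqrt((a - d) ** 2 + 4 * b ** 2)) / 2
    return math.sqrt(lam_max)

theta = 0.7  # angle between the two lines, so c(M1, M2) = cos(theta)
P1 = proj_onto_line((1.0, 0.0))
P2 = proj_onto_line((math.cos(theta), math.sin(theta)))
T = mat_mul(P2, P1)

# Here M = M1 ∩ M2 = {0}, so P_M = 0 and ||T^n - P_M|| = ||T^n||.
Tn = [[1.0, 0.0], [0.0, 1.0]]
rates = []
for n in range(1, 6):
    Tn = mat_mul(Tn, T)
    rates.append((op_norm(Tn), math.cos(theta) ** (2 * n - 1)))
```

Each pair in `rates` agrees to machine precision, illustrating that the two-subspace rate is exactly \(c(M_1, M_2)^{2n-1}\).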
Examples in [5, Section 3] show both that the bound in (1.4) fails to be sharp in some special cases, thus disproving a conjecture made in [9], and more generally that it is not possible for N ≥ 3 to obtain a sharp upper bound for \(\|T^n - P_M\|\), n ≥ 1, which depends only on the pairwise Friedrichs numbers between the subspaces \(M_1, \dots, M_N\). Nevertheless, the estimate in (1.3) recovers the sharp bound in (1.2) when N = 2 and holds with equality in a number of other cases, for instance if all of the spaces \(M_1, \dots, M_N\) are one-dimensional. We also see from (1.3) that if the Friedrichs number between a pair of consecutive subspaces is zero then we have convergence in the method of alternating projections after at most two steps. Since our interest here is primarily in the asymptotic rate of convergence as n → ∞, there is no significant loss of generality in assuming that \(c(M_k, M_\ell) > 0\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\). In this case (1.3) may be recast as

\(\|T^n - P_M\| \le C r^n, \quad n \ge 1,\)   (1.4)

where \(C = c(M_1, M_N)^{-1}\) and \(r = \prod_{k=1}^{N} c(M_k, M_{k+1})\), indices henceforth being considered modulo N. Since the asymptotic rate of convergence is determined by the value of \(r \in (0, 1]\), it is natural to seek the reordering of the subspaces \(M_1, \dots, M_N\) which leads to the smallest possible value of r. More formally, given N ≥ 2 we let \(S_N\) denote the symmetric group on N letters and for each \(\sigma \in S_N\) we let \(r_\sigma = \prod_{k=1}^{N} c(M_{\sigma(k)}, M_{\sigma(k+1)})\), so that for the reordered product \(T_\sigma = P_{\sigma(N)} \cdots P_{\sigma(1)}\) we obtain \(\|T_\sigma^n - P_M\| \le C_\sigma r_\sigma^n\), n ≥ 1, where \(C_\sigma = c(M_{\sigma(1)}, M_{\sigma(N)})^{-1}\). The objective therefore is to find a permutation \(\sigma \in S_N\) such that \(r_\sigma = r_*\), where \(r_* = \min\{r_\sigma : \sigma \in S_N\}\), and to find such a permutation a version of the following 'greedy' algorithm was proposed in [9, Section 9].

Greedy Algorithm: Given N ≥ 2 independent closed subspaces \(M_1, \dots, M_N\) of a Hilbert space X whose mutual Friedrichs numbers are known, we obtain permutations \(\sigma_k \in S_N\), \(1 \le k \le N\), as follows. Let \(\sigma_k(1) = k\) and for \(j = 2, \dots, N\) consider as possible values for \(\sigma_k(j)\) any previously unused index \(\ell\) which minimises \(c(M_{\sigma_k(j-1)}, M_\ell)\). If at any stage there is more than one choice of such an index then proceed by considering all possible choices of this index and take \(\sigma_k\) to be that permutation which among those leading to the least value of \(r_{\sigma_k}\) comes first in the lexicographical ordering. Return the permutation \(\sigma_G = \sigma_\ell\) where \(\ell \in \{1, \dots, N\}\) is the smallest index such that \(r_{\sigma_\ell} = \min\{r_{\sigma_k} : 1 \le k \le N\}\).

If we let \(r_G = r_{\sigma_G}\), N ≥ 2, then the Greedy Algorithm is correct if and only if \(r_G = r_*\) for all constellations of subspaces. By definition of \(r_*\) it is clear that \(r_* \le r_G\), N ≥ 2. In Sect. 3 we show that if the Greedy Algorithm were correct then it would follow that P = NP. We then exhibit a simple example with N = 4 in which \(r_* < r_G\). Both results are obtained as a consequence of a construction, presented in Sect. 2, which shows that any suitable collection of numbers in [0, 1] arises as the set of pairwise Friedrichs numbers between subspaces of some Hilbert space. This result is of independent interest and in particular implies that the problem of finding an optimal ordering is at least as hard as solving a multiplicative form of the Travelling Salesman Problem (TSP). In Sect. 4 we give sharp estimates for the maximal discrepancies between \(r_*\) and \(r_G\). In particular, we show that generically \(r_G < r_*^{1/2}\), and that the estimate is optimal in the sense that for every \(\varepsilon \in (0, 1)\) there exists some N ≥ 2 and a suitable collection of N subspaces of some Hilbert space such that \(r_G > (1-\varepsilon) r_*^{1/2}\). The last step once again requires the construction from Sect. 2.

2. Friedrichs Matrices

Given N ≥ 2 closed subspaces \(M_1, \dots, M_N\) of a Hilbert space, we may consider the \(N \times N\) matrix \((c(M_k, M_\ell))_{1 \le k, \ell \le N}\) whose entries are the pairwise Friedrichs numbers between the various subspaces. We call the matrix arising in this way the Friedrichs matrix corresponding to the collection of subspaces.
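The Greedy Algorithm of Sect. 1 is easy to sketch in code when all pairwise Friedrichs numbers are distinct, so that the tie-breaking step never fires. The following Python sketch (our own illustration; the sample matrix and function names are hypothetical, not from the paper) computes \(r_G\) by multi-start nearest-neighbour search and compares it against the brute-force optimum \(r_*\).

```python
import itertools

def cycle_cost(w, order):
    # r_sigma: product of the weights along the closed tour,
    # with indices taken modulo N
    N = len(order)
    r = 1.0
    for i in range(N):
        r *= w[order[i]][order[(i + 1) % N]]
    return r

def greedy_order(w, start):
    # nearest-neighbour tour: repeatedly move to the unused index
    # with the smallest Friedrichs number to the current subspace
    N = len(w)
    order, unused = [start], set(range(N)) - {start}
    while unused:
        nxt = min(unused, key=lambda j: w[order[-1]][j])
        order.append(nxt)
        unused.remove(nxt)
    return order

def greedy_rate(w):
    # r_G: best value over all N starting vertices
    return min(cycle_cost(w, greedy_order(w, k)) for k in range(len(w)))

def optimal_rate(w):
    # r_*: brute force over all cyclic orders (first vertex fixed)
    N = len(w)
    return min(cycle_cost(w, (0,) + p)
               for p in itertools.permutations(range(1, N)))

# a symmetric matrix of 'Friedrichs numbers' with distinct
# off-diagonal entries (the diagonal is never used)
w = [[0.0, 0.31, 0.52, 0.74],
     [0.31, 0.0, 0.23, 0.65],
     [0.52, 0.23, 0.0, 0.46],
     [0.74, 0.65, 0.46, 0.0]]

r_star, r_G = optimal_rate(w), greedy_rate(w)
```

On any such instance one observes \(r_* \le r_G\), consistently with the bounds established in Sect. 4; on some instances the two values differ, which is the phenomenon studied in Sect. 3.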
It is clear that any Friedrichs matrix must be symmetric, have zeros along its main diagonal and elsewhere must have entries lying in the interval [0, 1]. Is every square matrix which has these three properties a Friedrichs matrix for some collection of closed subspaces? The following result answers this question in the affirmative. Here and in what follows we use the same notation as in Sect. 1.

Theorem 2.1. Let \(\mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}\) and N ≥ 2, and suppose that C is an \(N \times N\) matrix which is symmetric, has zeros along its main diagonal and elsewhere has entries lying in the interval [0, 1]. Then there exists a Hilbert space X over the field \(\mathbb{F}\) and closed subspaces \(M_1, \dots, M_N\) of X such that C is the corresponding Friedrichs matrix. Furthermore, the subspaces can be constructed in such a way that \(M_k \cap M_\ell = \{0\}\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\) and, if N ≥ 3, \(P_k P_\ell P_m = 0\) for \(1 \le k, \ell, m \le N\) mutually distinct.

Proof. Let \(C = (c_{k,\ell})\) and suppose first that \(0 \le c_{k,\ell} < 1\) for \(1 \le k, \ell \le N\). Let \(\{e_{k,\ell} : 1 \le k, \ell \le N,\ k \ne \ell\}\) be an orthonormal basis for the space \(X = \mathbb{F}^{N(N-1)}\) endowed with the Euclidean norm, and set

\(x_{k,\ell} = \begin{cases} e_{k,\ell}, & 1 \le k < \ell \le N, \\ c_{\ell,k}\, e_{\ell,k} + (1 - c_{\ell,k}^2)^{1/2} e_{k,\ell}, & 1 \le \ell < k \le N. \end{cases}\)

For \(1 \le k \le N\) let \(B_k = \{x_{k,\ell} : 1 \le \ell \le N,\ \ell \ne k\}\), noting that these sets are orthonormal, and consider the closed subspaces of X given by \(M_k = \operatorname{span} B_k\). By our assumption that the entries of C be strictly smaller than 1 we see that \(M_k \cap M_\ell = \{0\}\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\), and in particular \(M = \{0\}\). Furthermore, for \(1 \le k, k', \ell, \ell' \le N\) with \(k \ne \ell\) and \(k' \ne \ell'\) we have

\((x_{k,\ell}, x_{k',\ell'}) = \begin{cases} 1, & k = k',\ \ell = \ell', \\ c_{k,\ell}, & k = \ell',\ \ell = k', \\ 0, & \text{otherwise}, \end{cases}\)

from which it follows that \(c(M_k, M_\ell) = c_{k,\ell}\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\) and, if N ≥ 3, that \(P_k P_\ell P_m = 0\) for \(1 \le k, \ell, m \le N\) mutually distinct. Now consider the general case where \(0 \le c_{k,\ell} \le 1\) for \(1 \le k, \ell \le N\), and consider the matrix \(B = (b_{k,\ell})\) with entries

\(b_{k,\ell} = \begin{cases} c_{k,\ell}, & c_{k,\ell} < 1, \\ 0, & c_{k,\ell} = 1, \end{cases}\)

for \(1 \le k, \ell \le N\). By the first part we may find closed subspaces \(L_1, \dots, L_N\) of \(\mathbb{F}^{N(N-1)}\) whose Friedrichs matrix is B.
Let \(X = \mathbb{F}^{N(N-1)} \oplus Y\), where \(Y = \bigoplus_{1 \le \ell < m \le N} \ell_2\), and endow X with its natural Hilbert space norm. Moreover, let U, V be two closed subspaces of \(\ell_2\) such that U + V is not closed. For \(1 \le k, \ell, m \le N\) with \(\ell < m\) define the subspaces \(Y_k^{\ell,m}\) of \(\ell_2\) by

\(Y_k^{\ell,m} = \begin{cases} U, & c_{\ell,m} = 1 \text{ and } k = \ell, \\ V, & c_{\ell,m} = 1 \text{ and } k = m, \\ \{0\}, & \text{otherwise}, \end{cases}\)

and for \(1 \le k \le N\) define the closed subspace \(M_k\) of X by \(M_k = L_k \oplus Y_k\), where \(Y_k = \bigoplus_{1 \le \ell < m \le N} Y_k^{\ell,m}\). If \(1 \le k < \ell \le N\) are such that \(c_{k,\ell} < 1\), then for \(1 \le m < n \le N\) we have either \(Y_k^{m,n} = \{0\}\) or \(Y_\ell^{m,n} = \{0\}\), and therefore \(c(M_k, M_\ell) = c(L_k, L_\ell) = b_{k,\ell} = c_{k,\ell}\). Suppose that \(1 \le k < \ell \le N\) and that \(c_{k,\ell} = 1\). Then for \(1 \le m < n \le N\) we see that \(Y_k^{m,n} + Y_\ell^{m,n} = U + V\) if and only if \(k = m\) and \(\ell = n\), and that otherwise \(Y_k^{m,n} + Y_\ell^{m,n}\) equals either U, V or \(\{0\}\). It follows that \(Y_k + Y_\ell\) is not closed, and hence \(M_k + M_\ell\) is not closed. By [4, Theorem 9.35] this implies that \(c(M_k, M_\ell) = 1 = c_{k,\ell}\), and hence we have the required subspaces. Moreover, it is clear from the construction that \(M_k \cap M_\ell = \{0\}\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\) and, if N ≥ 3, that \(P_k P_\ell P_m = 0\) for \(1 \le k, \ell, m \le N\) mutually distinct.

Remark 2.2. Note that the result in particular provides a new proof of the fact that in general the optimal value of r in (1.4) cannot be expressed as a function of pairwise Friedrichs numbers between the subspaces \(M_1, \dots, M_N\) when N ≥ 3, as was first observed in a particular case in [5, Example 3.7]. Indeed, for any collection of closed subspaces \(M_1, \dots, M_N\), N ≥ 3, of some Hilbert space such that in the method of alternating projections we do not have convergence in one step, by Theorem 2.1 we may find an alternative collection of closed subspaces \(M_1', \dots, M_N'\) of some Hilbert space with the same pairwise Friedrichs numbers but for which \(T = P_M = 0\).

3. Incorrectness of the Greedy Algorithm

In this section we turn to the Greedy Algorithm presented in Sect. 1, and in particular we ask whether the algorithm is correct in the sense that the ordering it produces leads to the optimal value of \(r \in [0, 1]\) in (1.4).
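The vectors \(x_{k,\ell}\) used in the first part of the proof of Theorem 2.1 are concrete enough to check numerically. The sketch below (our own illustration; the sample matrix and helper names are not from the paper) builds them for a sample C with off-diagonal entries in [0, 1) and verifies the displayed case analysis for their inner products.

```python
import math

def build_vectors(C):
    # the vectors x_{k,l} from the proof of Theorem 2.1, stored sparsely
    # as {basis index: coefficient} over the basis e_{k,l}, k != l
    N = len(C)
    x = {}
    for k in range(N):
        for l in range(N):
            if k == l:
                continue
            if k < l:
                x[(k, l)] = {(k, l): 1.0}
            else:
                c = C[l][k]
                x[(k, l)] = {(l, k): c, (k, l): math.sqrt(1 - c * c)}
    return x

def inner(u, v):
    # Euclidean inner product of two sparse vectors
    return sum(u[b] * v.get(b, 0.0) for b in u)

# a sample symmetric matrix with zero diagonal and entries in [0, 1)
C = [[0.0, 0.3, 0.8],
     [0.3, 0.0, 0.5],
     [0.8, 0.5, 0.0]]

x = build_vectors(C)
checks = []
for (k, l), u in x.items():
    for (k2, l2), v in x.items():
        ip = inner(u, v)
        if (k, l) == (k2, l2):
            expected = 1.0          # same vector: unit norm
        elif (k, l) == (l2, k2):
            expected = C[k][l]      # paired vectors: inner product c_{k,l}
        else:
            expected = 0.0          # disjoint supports
        checks.append(abs(ip - expected))
```

All deviations in `checks` vanish, confirming that the only non-trivial inner product is \((x_{k,\ell}, x_{\ell,k}) = c_{k,\ell}\), exactly as the proof requires.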
We first consider the connection between our problem of finding an optimal ordering and the classical TSP, and we show in Corollary 3.3 below that correctness of the Greedy Algorithm for a sufficiently large class of cases would imply that P = NP. We then exhibit a simple example in which the Greedy Algorithm gives a suboptimal ordering. Recall that in the graph-theoretical formulation of the TSP we are given, for some N ≥ 2, a complete graph \(K_N\) with vertices \(V_N = \{1, 2, \dots, N\}\) and a weight function \(w : \{(k, \ell) \in V_N^2 : k \ne \ell\} \to \mathbb{R}\) such that \(w(k, \ell) = w(\ell, k)\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\), and the objective is to find a permutation \(\sigma_* \in S_N\) such that \(\Sigma_{\sigma_*} = \min\{\Sigma_\sigma : \sigma \in S_N\}\), where for a permutation \(\sigma \in S_N\) we let

\(\Sigma_\sigma = \sum_{k=1}^{N} w(\sigma(k), \sigma(k+1)),\)

with indices, as usual, considered modulo N. We will be interested primarily in the multiplicative form of the TSP, denoted by MTSP, in which the objective is not to minimise the additive cost but instead to find \(\sigma_* \in S_N\) such that \(\Pi_{\sigma_*} = \min\{\Pi_\sigma : \sigma \in S_N\}\), where for a permutation \(\sigma \in S_N\) we let

\(\Pi_\sigma = \prod_{k=1}^{N} w(\sigma(k), \sigma(k+1)).\)

It is clear that TSP and MTSP have the same solution, and indeed one may pass from one form of the problem to the other simply by replacing the weight function by its logarithm or its exponential, as appropriate. Furthermore, the solution of TSP is unaffected by shifting the values of the weight function by a constant amount, which implies in particular that there is no loss of generality in considering the MTSP only for weight functions taking values in the range [0, 1]. It is well known that the TSP, and hence also MTSP, is NP-complete. This means that it lies in the complexity class NP and is NP-hard, which is to say that any other problem in NP can be transformed into an instance of the TSP in polynomial time. Furthermore, by considering the corresponding decision problems it can be seen that TSP and hence MTSP remain NP-complete if the weight function is assumed to take distinct values on distinct pairs.
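The passage between the additive and multiplicative forms is simply monotonicity of the logarithm: over positive weights, minimising \(\Pi_\sigma\) is the same as minimising \(\sum_k \log w(\sigma(k), \sigma(k+1))\). The brief check below (our own illustration on a hypothetical instance) confirms this on a small brute-forced example.

```python
import itertools
import math

# a symmetric weight matrix with values in (0, 1); the diagonal is unused
w = [[1.0, 0.9, 0.4, 0.7],
     [0.9, 1.0, 0.6, 0.3],
     [0.4, 0.6, 1.0, 0.8],
     [0.7, 0.3, 0.8, 1.0]]
N = len(w)

def tour_edges(p):
    # edges of the closed tour 0 -> p[0] -> p[1] -> ... -> 0
    cycle = (0,) + p
    return [(cycle[i], cycle[(i + 1) % N]) for i in range(N)]

tours = list(itertools.permutations(range(1, N)))

# MTSP: minimise the product of the edge weights
best_prod = min(math.prod(w[a][b] for a, b in tour_edges(p)) for p in tours)

# TSP on the log-weights: minimise the sum of log w over the tour
best_logsum = min(sum(math.log(w[a][b]) for a, b in tour_edges(p)) for p in tours)
```

Since the logarithm is strictly increasing, \(\exp\) of the minimal log-sum coincides with the minimal product, so the two formulations select tours of the same cost.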
Our first result is an application of Theorem 2.1 showing that the subspace ordering problem is NP-hard.

Proposition 3.1. The problem of finding an optimal ordering for collections of independent closed subspaces with pairwise distinct Friedrichs numbers is NP-hard.

Proof. It suffices to show that every instance of TSP with distinct costs can be transformed in polynomial time into a subspace ordering problem with pairwise distinct Friedrichs numbers. However, this follows straightforwardly from Theorem 2.1. Indeed, given a TSP problem on N ≥ 2 vertices we may transform it to an instance of MTSP with weight function taking values in the range [0, 1] in \(O(N^2)\) steps. Let \(C = (c_{k,\ell})_{1 \le k, \ell \le N}\) be the symmetric matrix with zeros along its main diagonal and entries \(c_{k,\ell} = w(k, \ell)\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\). By Theorem 2.1 there exists a Hilbert space X and independent closed subspaces \(M_1, \dots, M_N\) of X such that C is the associated Friedrichs matrix. Moreover, it is clear from the proof of Theorem 2.1 that it is possible to obtain these subspaces in polynomial time. If we find a permutation \(\sigma_* \in S_N\) such that \(r_{\sigma_*} = r_*\), then since \(r_\sigma = \Pi_\sigma\) for all \(\sigma \in S_N\) the permutation \(\sigma_*\) also solves our instance of MTSP, and hence the original TSP problem. Since TSP is known to be NP-hard, our problem is too.

Remark 3.2. Note that the subspaces \(M_1, \dots, M_N\) are not merely independent but satisfy the much stronger conditions described in Theorem 2.1. In particular, the result remains true if the subspaces which we are trying to order are merely pairwise quasi-disjoint in the sense of Sect. 1. The result shows that the existence of any polynomial-time algorithm which solves the subspace ordering problem in a sufficiently large number of cases implies that P = NP. In particular, we obtain the following consequence for the Greedy Algorithm.

Corollary 3.3. Correctness of the Greedy Algorithm for independent subspaces with pairwise distinct Friedrichs numbers implies that P = NP.

Proof.
It is straightforward to see that if all the pairwise Friedrichs numbers are distinct then the Greedy Algorithm terminates after \(O(N^3)\) steps, where N ≥ 2 is the number of subspaces we are required to order optimally.

Remark 3.4. The version of the Greedy Algorithm formulated in [9, Section 9] differs from ours in that it does not consider all possible greedy paths and hence runs in polynomial time even if the pairwise Friedrichs numbers are not assumed to be distinct. Note also that, as in the case of Proposition 3.1, the assumption of independence on the subspaces can be relaxed to pairwise quasi-disjointness. Given that the question whether P = NP is a long-standing open problem, one may view Proposition 3.1 as evidence suggesting that the Greedy Algorithm does not in general lead to an optimal ordering of the subspaces in question. This is indeed the case, as the following example illustrates.

Example 3.5. Let \(\mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}\) and let \(X = \mathbb{F}^4\) with the Euclidean norm. Consider the one-dimensional subspaces \(M_k = \operatorname{span}\{x_k\}\), \(1 \le k \le 4\), where \(x_1, \dots, x_4 \in X\) are the unit vectors

\(x_1 = (1, 0, 0, 0), \quad x_2 = \left(\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}, 0, 0\right), \quad \dots\)

The Friedrichs numbers satisfy \(c(M_k, M_\ell) = |(x_k, x_\ell)|\) for \(1 \le k, \ell \le 4\) with \(k \ne \ell\), and these inner products determine the associated Friedrichs matrix. The permutation \(\sigma_G \in S_4\) produced by the Greedy Algorithm is \((\sigma_G(k))_{k=1}^4 = (1, 4, 3, 2)\), which leads to \(r_G \approx 7.5772 \times 10^{-4}\). The permutation \(\sigma \in S_4\) given by \((\sigma(k))_{k=1}^4 = (1, 4, 2, 3)\) leads to the optimal value \(r_* = r_\sigma \approx 5.1033 \times 10^{-4}\), and in particular \(r_G > r_*\). It follows that the Greedy Algorithm is not correct.

Remark 3.6. Example 3.5 disproves a claim made in [9, Section 9], namely that the Greedy Algorithm always leads to an optimal ordering in the case of independent subspaces. The examples considered in [9, Section 9] involve only N = 3 subspaces, a special case in which the Greedy Algorithm performs an exhaustive search of all possible orderings (up to the direction in which they are traversed) and in particular is correct. Thus Example 3.5 is minimal in terms of the number of subspaces involved.

4. Sharp Estimates for the Degree of Suboptimality

Having shown in Sect. 3 that the Greedy Algorithm does not in general lead to an optimal ordering of the subspaces in the method of alternating projections, we seek now to quantify how much the result reached by the Greedy Algorithm can disagree with the optimal result. Given a collection of closed subspaces of a Hilbert space such that at least one of the pairwise Friedrichs numbers is zero, we see that for suitable orderings of the subspaces we obtain convergence after at most two steps in the method of alternating projections. Another essentially uninteresting case for asymptotic analysis is when all of the pairwise Friedrichs numbers equal 1, so that no ordering leads to a useful estimate in (1.3). If either of these two cases holds we shall say that the collection of subspaces involved is non-generic, and otherwise we call it generic.

Theorem 4.1. Let N ≥ 2 and suppose that \(M_1, \dots, M_N\) are closed subspaces of a Hilbert space X. Then

\(r_* \le r_G \le r_*^{1/2}.\)   (4.1)

Moreover, the second inequality is strict unless the collection \(M_1, \dots, M_N\) of subspaces is non-generic.

Proof. For \(1 \le k \le N\) let \(\sigma_k \in S_N\) be the permutation produced by running the Greedy Algorithm with the starting vertex \(\sigma_k(1) = k\) and let \(r_k = r_{\sigma_k}\). Then certainly \(r_* \le r_k\) for \(1 \le k \le N\), and hence also \(r_* \le r_G\). For \(1 \le k, \ell \le N\) let \(s_k(\ell) = \sigma_k(\sigma_k^{-1}(\ell) + 1)\) denote the index of the successor to \(M_\ell\) in the ordering of the subspaces determined by \(\sigma_k\), noting that \(s_k(\ell) = \sigma_k(1)\) if \(\sigma_k^{-1}(\ell) = N\). Let \(\sigma \in S_N\) and for \(1 \le k, \ell \le N\) with \(k \ne \ell\) let \(w(k, \ell) = c(M_k, M_\ell)\). Let \(1 \le k, \ell \le N\).
If \(\sigma_k^{-1}(\sigma(\ell)) < \sigma_k^{-1}(\sigma(\ell+1))\), which is to say that in the ordering determined by \(\sigma_k\) the subspace \(M_{\sigma(\ell)}\) comes before \(M_{\sigma(\ell+1)}\), then by definition of the Greedy Algorithm we must have

\(w(\sigma(\ell), s_k(\sigma(\ell))) \le w(\sigma(\ell), \sigma(\ell+1)),\)

while if \(\sigma_k^{-1}(\sigma(\ell)) > \sigma_k^{-1}(\sigma(\ell+1))\) then

\(w(\sigma(\ell+1), s_k(\sigma(\ell+1))) \le w(\sigma(\ell), \sigma(\ell+1)).\)

Since w takes values in [0, 1] it follows that

\(w(\sigma(\ell), s_k(\sigma(\ell)))\, w(\sigma(\ell+1), s_k(\sigma(\ell+1))) \le w(\sigma(\ell), \sigma(\ell+1))\)   (4.2)

for \(1 \le k, \ell \le N\). Thus for \(1 \le k \le N\) we have

\(r_k^2 = \prod_{\ell=1}^{N} c(M_{\sigma_k(\ell)}, M_{\sigma_k(\ell+1)})^2 = \prod_{\ell=1}^{N} w(\sigma_k(\ell), \sigma_k(\ell+1))^2 = \prod_{\ell=1}^{N} w(\sigma(\ell), s_k(\sigma(\ell)))\, w(\sigma(\ell+1), s_k(\sigma(\ell+1))) \le \prod_{\ell=1}^{N} w(\sigma(\ell), \sigma(\ell+1)) = \prod_{\ell=1}^{N} c(M_{\sigma(\ell)}, M_{\sigma(\ell+1)}) = r_\sigma.\)   (4.3)

Since \(\sigma \in S_N\) was arbitrary we deduce that \(r_k^2 \le r_*\) for \(1 \le k \le N\), and in particular \(r_G^2 \le r_*\), as required. Now suppose that \(r_G^2 = r_*\), and let \(\sigma_* \in S_N\) be a permutation such that \(r_{\sigma_*} = r_*\). Since \(r_G^2 \le r_k^2 \le r_*\) for \(1 \le k \le N\), we see that in fact \(r_k^2 = r_*\) for \(1 \le k \le N\). Now either one of the pairwise Friedrichs numbers is zero or all of the pairwise Friedrichs numbers are non-zero. In the latter case it is clear from (4.3) that we must have equality in (4.2) for \(1 \le k, \ell \le N\) when \(\sigma = \sigma_*\). Taking \(k = \sigma_*(\ell)\) in (4.2) for \(1 \le \ell \le N\), it follows that

\(w(\sigma_*(\ell), \sigma_*(\ell+1)) = \min\{w(\sigma_*(\ell), k) : 1 \le k \le N,\ k \ne \sigma_*(\ell)\}\)

for \(1 \le \ell \le N\). It follows that \(\sigma_*\) is itself a permutation considered by the Greedy Algorithm, and therefore \(r_* = r_G\). Hence \(r_*^2 = r_*\), and since \(r_* \ne 0\) we have \(r_* = 1\), which implies that \(c(M_k, M_\ell) = 1\) for \(1 \le k, \ell \le N\) with \(k \ne \ell\). It follows that \(r_G^2 < r_*\) unless the collection \(M_1, \dots, M_N\) of subspaces is non-generic.

It remains to be investigated to what extent the second bound in (4.1) is sharp for generic constellations of subspaces. Our final example shows that it cannot be improved in the sense that given any \(\varepsilon \in (0, 1)\) there exists a generic constellation of subspaces of some Hilbert space such that \(r_G > (1-\varepsilon) r_*^{1/2}\). In fact, there exists a constellation of N such subspaces for every even N ≥ 4.

Example 4.2.
Given a positive integer n ≥ 2, let N = 2n and suppose that 0 < δ < c < 1. By Theorem 2.1 there exists a Hilbert space X and a generic constellation \(M_1, \dots, M_N\) of closed subspaces of X such that for \(1 \le k, \ell \le N\) with \(k \ne \ell\) we have

\(c(M_k, M_\ell) = \begin{cases} c, & k \equiv \ell \pm 1 \ (\mathrm{mod}\ N), \\ c\delta, & k \equiv \ell \pm 2 \ (\mathrm{mod}\ N) \text{ and } k \text{ is even}, \\ 1, & \text{otherwise}. \end{cases}\)

Let \(\sigma_0 \in S_N\) denote the identity permutation. Then \(r_{\sigma_0} = c^N\). If we think of the subspaces as the vertices of a complete graph of order N, and we let the edges have weights given by the pairwise Friedrichs numbers, then \(r_\sigma \ge r_{\sigma_0}\) for all permutations \(\sigma \in S_N\) involving no cδ-edges. Moreover, any cycle \(\sigma \in S_N\) which uses at least one of the cδ-edges cannot use more than n − 1 of them, and must involve at least two 1-edges, so for any such cycle

\(r_\sigma \ge c^{n-1} (c\delta)^{n-1} = c^{N-2} \delta^{n-1}.\)

In particular, if \(c^2 \le \delta^{n-1}\) then \(r_* = r_{\sigma_0}\). It is easy to see that

\(r_G \ge c^2 (c\delta)^{n-1} = c\delta^{n-1} r_*^{1/2}.\)

Given \(\varepsilon \in (0, 1)\) we deduce that \(r_G > (1-\varepsilon) r_*^{1/2}\) provided \(c, \delta \in (0, 1)\) are such that \(c^2 \le \delta^{n-1}\) and \(c\delta^{n-1} > 1 - \varepsilon\). These conditions are satisfied for instance when \((1-\varepsilon)^{1/3} < c < 1\) and \(\delta = c^{2/(n-1)}\). Furthermore, it is the case that for any \(r, \varepsilon \in (0, 1)\) there exist generic constellations of N subspaces of a Hilbert space for all sufficiently large even N ≥ 4 with the properties that \(r_G > (1-\varepsilon) r_*^{1/2}\) and \(r_* = r\).

Acknowledgements

For financial support O.D. thanks Magdalen College, Oxford, A.J. thanks the Mathematical Institute of the University of Oxford, S.R. and R.S. thank both St John's College, Oxford, and the Mathematical Institute, and L.S. thanks the EPSRC. All authors would further like to express their thanks to Alexis Chevalier, Stefan Kiefer, Dominik Peters and Zhixuan Wang for useful discussions.

Open Access.
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

[1] Badea, C., Grivaux, S., Müller, V.: The rate of convergence in the method of alternating projections. Algebra i Analiz (St. Petersburg Math. J.) 23(3), 1-30 (2011)
[2] Badea, C., Seifert, D.: Ritt operators and convergence in the method of alternating projections. J. Approx. Theory 205, 133-148 (2016)
[3] Deutsch, F.: The method of alternating orthogonal projections. In: Approximation Theory, Spline Functions and Applications (Maratea, 1991), volume 356 of NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., pp. 105-121. Kluwer Acad. Publ., Dordrecht (1992)
[4] Deutsch, F.: Best Approximation in Inner Product Spaces. CMS Books in Mathematics. Springer, New York (2001)
[5] Deutsch, F., Hundal, H.: The rate of convergence in the method of alternating projections, II. J. Math. Anal. Appl. 205, 381-405 (1997)
[6] Deutsch, F., Hundal, H.: Slow convergence of sequences of linear operators II: arbitrarily slow convergence. J. Approx. Theory 162(9), 1717-1738 (2010)
[7] Deutsch, F., Hundal, H.: Arbitrarily slow convergence of sequences of linear operators. In: Infinite Products of Operators and Their Applications, volume 636 of Contemp. Math., pp. 93-120. Amer. Math. Soc., Providence, RI (2015)
[8] Halperin, I.: The product of projection operators. Acta Sci. Math. (Szeged) 23, 96-99 (1962)
[9] Kayalar, S., Weinert, H.L.: Error bounds for the method of alternating projections. Math. Control Signals Syst. 1(1), 43-59 (1988)



O. Darwin, A. Jha, S. Roy, D. Seifert, R. Steele, L. Stigant. Non-optimality of the Greedy Algorithm for Subspace Orderings in the Method of Alternating Projections, Results in Mathematics, 2017, 1-12, DOI: 10.1007/s00025-017-0721-5