Towards a Unified Theory of Sparsification for Matching Problems
SOSA

Sepehr Assadi
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, US

Aaron Bernstein
Department of Computer Science, Rutgers University, Piscataway, NJ, US
In this paper, we present a construction of a "matching sparsifier", that is, a sparse subgraph of the given graph that preserves large matchings approximately and is robust to modifications of the graph. We use this matching sparsifier to obtain several new algorithmic results for the maximum matching problem: an almost (3/2)-approximation one-way communication protocol for the maximum matching problem, significantly simplifying the (3/2)-approximation protocol of Goel, Kapralov, and Khanna (SODA 2012) and extending it from bipartite graphs to general graphs; an almost (3/2)-approximation algorithm for the stochastic matching problem, improving upon and significantly simplifying the previous 1.999-approximation algorithm of Assadi, Khanna, and Li (EC 2017); and an almost (3/2)-approximation algorithm for the fault-tolerant matching problem, which, to our knowledge, is the first non-trivial algorithm for this problem. Our matching sparsifier is obtained by proving new properties of the edge-degree constrained subgraph (EDCS) of Bernstein and Stein (ICALP 2015; SODA 2016), designed in the context of maintaining matchings in dynamic graphs, that identify the EDCS as an excellent choice for a matching sparsifier. This leads to surprisingly simple and non-technical proofs of the above results in a unified way. Along the way, we also provide a much simpler proof of the fact that an EDCS is guaranteed to contain a large matching, which may be of independent interest.

2012 ACM Subject Classification Theory of computation → Sparsification and spanners; Theory of computation → Graph algorithms analysis

Acknowledgements Sepehr Assadi is grateful to his advisor Sanjeev Khanna for many helpful discussions, and to Soheil Behnezhad for sharing a write-up of [9].
Keywords and phrases Maximum matching; matching sparsifiers; one-way communication complexity; stochastic matching; fault-tolerant matching

1
Introduction
A common tool for dealing with massive graphs is sparsification. Roughly speaking, a
sparsifier of a graph G is a subgraph H that (approximately) preserves certain properties of
G while having a smaller number of edges. Such sparsifiers have been studied in great detail
for various properties: for example, a spanner [6, 29] or a distance preserver [18, 20] preserves
pairwise distances, a cut sparsifier [26, 11, 22] preserves cut information, and a spectral
sparsifier [32, 8] preserves spectral properties of the graph. An additional property that we
often require of a graph sparsifier is robustness: it should continue to be a good sparsifier
even as the graph changes. Some sparsifiers are robust by nature (e.g., cut sparsifiers), but others (e.g., spanners) are not, and for this reason there is an extensive literature on designing sparsifiers that provide additional robustness guarantees.
In this paper, we study the problem of designing robust sparsifiers for the prominent
problem of maximum matching. Multiple notions of sparsification for the matching problem
have already been identified in the literature. One example is a subgraph that approximately preserves the largest matching inside any given subset of vertices of G. This notion is also known as a matching cover or a matching skeleton [23, 27] in the literature and is closely
related to the communication and streaming complexity of the matching problem. Another
example of a sparsifier is a subgraph that can preserve the largest matching on random
subsets of edges of G, a notion closely related to the stochastic matching problem [15, 5]. An
example of a robust sparsifier for matching is a fault-tolerant subgraph, namely a subgraph H of G that continues to preserve large matchings in G even after a fraction of the edges is deleted by an adversary. As far as we know, the fault-tolerant matching problem has not previously
been studied, but it is a natural model to consider as it has received lots of attention in the
context of spanners and distance preservers (see e.g. [19, 28, 7, 17, 16]).
Our first contribution is a subgraph H that we show is a robust matching sparsifier in
all of the senses above. Our result is thus the first to unify these notions of sparsification for
the maximum matching problem. In addition to unifying, our construction yields improved
results for each individual notion of sparsification and the corresponding problems, namely,
the one-way communication complexity of matching, the stochastic matching problem, and the fault-tolerant matching problem. Interestingly, our unified approach also allows us to provide much simpler
proofs than all previously existing work for these problems. The subgraph we use as our
sparsifier comes from a pair of papers by Bernstein and Stein on dynamic matching [13, 14]; they refer to this subgraph as an edge-degree constrained subgraph (EDCS for short). The
EDCS was also very recently used in [2] to design sublinear algorithms for matching across
several different models for massive graphs. Our applications of the EDCS in the current
paper, as well as the new properties we prove for the EDCS, are quite different from those in
[13, 14, 2]. Our first contribution thus takes an existing subgraph, and then provides the
first proofs that it satisfies the three notions of sparsification described above.
Our second contribution is a much simpler (and even slightly improved) proof of the main
property of an EDCS in previous work proved in [13, 14], namely that an EDCS contains a
large matching of the original graph. Our new proof significantly simplifies the analysis of
[14] and allows for simple and selfcontained proofs of the results in this paper.
Definition of the EDCS. Before stating our results, we give a definition of the EDCS
from [13, 14], as this is the subgraph we use for all of our results (see Section 2 for more
details).
I Definition 1 ([13]). For any graph G(V, E) and integers β ≥ β⁻ ≥ 0, an edge-degree constrained subgraph EDCS(G, β, β⁻) is a subgraph H := (V, E_H) of G with the following two properties:
(P1) For any edge (u, v) ∈ E_H: deg_H(u) + deg_H(v) ≤ β.
(P2) For any edge (u, v) ∈ E \ E_H: deg_H(u) + deg_H(v) ≥ β⁻.
It is not hard to show that an EDCS of a graph G always exists for any parameters β > β⁻ and that it is sparse, i.e., it only has O(nβ) edges. A key property of the EDCS, proven previously [13, 14] (and simplified in our paper), is that for any reasonable setting of the parameters (e.g., β⁻ sufficiently close to β), any EDCS H of G contains an (almost) (3/2)-approximate matching of G.
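To make the two properties concrete, the following small Python sketch (our own illustration, not from the paper; the function name and edge encoding are ours) checks whether a candidate subgraph satisfies Definition 1:

```python
from collections import defaultdict

def is_edcs(edges, subgraph, beta, beta_minus):
    """Check Definition 1: `subgraph` must satisfy
    (P1) deg_H(u) + deg_H(v) <= beta       for every edge (u, v) in H, and
    (P2) deg_H(u) + deg_H(v) >= beta_minus for every edge of G missing from H,
    where all degrees are taken in the subgraph H."""
    deg = defaultdict(int)
    for u, v in subgraph:
        deg[u] += 1
        deg[v] += 1
    p1 = all(deg[u] + deg[v] <= beta for u, v in subgraph)
    p2 = all(deg[u] + deg[v] >= beta_minus
             for u, v in edges if (u, v) not in subgraph)
    return p1 and p2
```

For instance, on the path 1-2-3-4, the two end edges form an EDCS with β = β⁻ = 2: every subgraph edge has edge-degree 2 ≤ β, and the missing middle edge has edge-degree 2 ≥ β⁻.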
1.1
Our Results and Techniques
We now give detailed definitions of the notions of sparsification and the corresponding
problems addressed in this paper, as well as our results for each one. Our second contribution, a significantly simpler proof that an EDCS contains an almost (3/2)-approximate matching, is left for Section 3.
One-Way Communication Complexity of Matching. Consider the following two-player communication problem: Alice is given a graph G_A(V, E_A) and Bob holds a graph G_B(V, E_B). The goal is for Alice to send a single message to Bob such that Bob outputs an approximate maximum matching in E_A ∪ E_B. What is the minimum length of the message, i.e., the one-way communication complexity, for achieving a certain fixed approximation ratio on all graphs? One can show that the message communicated by Alice to Bob is indeed a matching skeleton, namely a data structure (but not necessarily a subgraph) that allows Bob to find a large matching in a given subset of vertices of Alice's input (see [23] for more details).
This problem was first studied by Goel, Kapralov, and Khanna [23] (see also the subsequent
paper of Kapralov [25]), owing to its close connection to one-pass streaming algorithms for matching. Goel et al. [23] designed an algorithm that achieves a (3/2)-approximation in
bipartite graphs using only O(n) communication, and proved that any better-than-(3/2)-approximation protocol requires n^{1+Ω(1/log log n)} communication even on bipartite graphs (see, e.g., [23, 4] for further details on this lower bound). A follow-up work by Lee and Singla [27] further generalized the algorithm of [23] to general graphs, albeit with a slightly worse approximation ratio of 5/3 (compared to the 3/2 of [23]).
We extend the results in [23] to general graphs with almost no loss in the approximation ratio.
I Result 1. For any constant ε > 0, the protocol where Alice computes an EDCS of her graph with β = O(1) and β⁻ = β − 1 and sends it to Bob is a (3/2 + ε)-approximation one-way communication protocol for the maximum matching problem which uses O(n) communication.
We remark that both the previous algorithm of [23] and its extension in [27] are quite involved and rely on a fairly complicated graph decomposition as well as an intricate primal-dual analysis. As such, we believe that the main contribution of Result 1 is in fact in providing a simple and self-contained proof of this result.
Stochastic Matching. In the stochastic matching problem, we are given a graph G(V, E)
and a probability parameter p ? (0, 1). A realization of G is a subgraph Gp(V, Ep) obtained
by picking each edge in G independently with probability p to include in Ep. The goal in
this problem is to find a subgraph H of G with maximum degree bounded by a function of p (independent of the number of vertices), such that the size of the maximum matching in realizations of H is close to the size of the maximum matching in realizations of G. It is immediate to see that
H in this problem is simply a sparsifier of G which preserves large matchings on random
subsets of edges.
This problem was first introduced by Blum et al. [15] primarily to model the kidney
exchange setting and has since been studied extensively in the literature [3, 5, 10, 34]. Early
algorithms for this problem in [15, 3] (and the later ones for the weighted variant of the
problem [10, 34]) all had approximation ratio at least 2, naturally raising the question of whether 2 is the best approximation ratio achievable for this problem. Assadi, Khanna, and
Li [5] ruled out this perplexing possibility by obtaining a slightly better than 2approximation
algorithm for this problem, namely an algorithm with approximation ratio close to 1.999
(which improves to 1.923 for small p).
We prove that an EDCS results in a significantly improved algorithm for this problem.
I Result 2. For any constant ε > 0, an EDCS of G with β = O(log(1/p)/p) and β⁻ = β − 1 achieves a (3/2 + ε)-approximation algorithm for the stochastic matching problem with a subgraph of maximum degree O(log(1/p)/p).
We remark that our bound on the maximum degree in Result 2 is optimal (up to an O(log(1/p)) factor) for any constant-factor approximation algorithm (see [5]). In addition to significantly improving upon the previous best algorithm of [5], our Result 2 is much simpler than that of [5], in terms of both the algorithm and (especially) the analysis.
Remark. Independently and concurrently, Behnezhad et al. [9] also presented an algorithm for stochastic matching with a subgraph of max-degree O(log(1/p)/p) that achieves an approximation of almost (4√2 − 5) (≈ 0.6568, compared to 0.6666 in Result 2). They also provided an algorithm with approximation ratio strictly better than half for weighted stochastic matching (our result does not work for weighted graphs). In terms of techniques, our paper and [9] are entirely disjoint.
Fault-Tolerant Matching. Let f ≥ 0 be an integer, G(V, E) be a graph, and H be any subgraph of G. We say that H is an α-approximation f-tolerant subgraph of G iff for any subset F ⊆ E of size ≤ f, the maximum matching in H \ F is an α-approximation to the maximum matching in G \ F; that is, H is a robust sparsifier of G. This definition is a natural analogue, for the maximum matching problem, of other fault-tolerant subgraphs such as fault-tolerant spanners and fault-tolerant distance preservers (see, e.g., [19, 28, 7, 17, 16]). Despite being such fundamental objects, quite surprisingly, fault-tolerant subgraphs have not previously been studied for the matching problem.
We complete our discussion of applications of the EDCS as a robust sparsifier by showing that it achieves an optimal-size fault-tolerant subgraph for the matching problem.
I Result 3. For any constant ε > 0 and any f ≥ 0, there exists a (3/2 + ε)-approximation f-tolerant subgraph H of any given graph G with O(f + n) edges in total.
The number of edges used in our faulttolerant subgraph in Result 3 is clearly optimal (up
to constant factors). In Appendix A.2, we show that by modifying the lower bound of [23] in the communication model, we can also prove that the approximation ratio of (3/2) is optimal for any f-tolerant subgraph with O(f) edges, hence proving that Result 3 is optimal in a strong sense. We also show that several natural strategies for this problem cannot achieve a better-than-2 approximation, motivating our more sophisticated approach to this problem (see Appendix A.3).
The qualitative message of our work is clear: An EDCS is a robust matching sparsifier
under all three notions of sparsification described earlier, which leads to simpler and improved
algorithms for a wide range of problems involving sparsification for matching problems in a
unified way.
Overall Proof Strategy
Recall that our algorithm in all of the results above is simply to compute an EDCS H of the
input graph G (or GA in the communication problem). The analysis then depends on the
specific notion of sparsification at hand, but the same high level idea applies to all three
cases. In each case, we have an original graph G, and then a modified graph G′ produced by changes to G: G′ is G_A ∪ G_B in the communication model, the realized subgraph G_p in the stochastic matching problem, and the graph G \ F after adversarially removing the edges F in the fault-tolerant matching problem. Let H be the EDCS that our algorithm computes in G, and let H′ be the graph that results from H due to the modifications made to G. If we could show that H′ is an EDCS of G′ then the proof would be complete, since we know that an EDCS is guaranteed to contain an almost (3/2)-approximate matching. Unfortunately, in all three problems that we study it might not be the case that H′ is an EDCS of G′.
Instead, in each case we are able to exhibit subgraphs H̃ ⊆ H′ and G̃ ⊆ G′ such that H̃ is an EDCS of G̃, and the sizes of the maximum matchings of G̃ and G′ differ by at most a (1 + ε) factor. This guarantees an approximation ratio of almost (3/2)(1 + ε) (precisely what we achieve in all three results above), since the EDCS H̃ preserves the maximum matching of G̃ to within an almost (3/2)-approximation and H̃ is a subgraph of H′.
Organization. The rest of the paper is organized as follows. Section 2 includes notation, simple preliminaries, and existing work on the EDCS. In Section 3, we present a significantly simpler proof of the fact that an EDCS contains an almost (3/2)-approximate matching (originally proved in [14]). Sections 4, 5, and 6 prove the sparsification properties of the EDCS in, respectively, the one-way communication complexity of matching (Result 1), the stochastic matching problem (Result 2), and the fault-tolerant matching problem (Result 3). These three sections are designed to be self-contained (besides assuming the background in Section 2) to allow the reader to directly consider the part of most interest. The appendix contains some secondary observations.
2
Preliminaries and Notation
Notation. For any integer t ≥ 1, [t] := {1, . . . , t}. For a graph G(V, E) and a set of vertices U ⊆ V, N_G(U) denotes the set of neighbors of vertices in U in G, and E_G(U) denotes the set of edges incident on U. Similarly, for a set of edges F ⊆ E, V(F) denotes the set of vertices incident on these edges. For any vertex v ∈ V, we use deg_G(v) to denote the degree of v in G (we may drop the subscript G in these definitions when it is clear from the context). We use μ(G) to denote the size of the maximum matching in the graph G.
Throughout the paper, we use the following two standard variants of the Chernoff bound.
I Proposition 2 (Chernoff Bound). Suppose X_1, . . . , X_t are t independent random variables that take values in [0, 1]. Let X := Σ_{i=1}^{t} X_i and assume E[X] ≤ μ. For any δ > 0 and integer k ≥ 1,

Pr(|X − E[X]| ≥ δ · μ) ≤ 2 · exp(−δ² · μ/3),   and   Pr(|X − E[X]| ≥ k) ≤ 2 · exp(−2k²/t).
We also need the following basic variant of the Lovász Local Lemma (LLL).

I Proposition 3 (Lovász Local Lemma; cf. [21, 1]). Let p ∈ (0, 1) and d ≥ 1. Suppose E_1, . . . , E_t are t events such that Pr(E_i) ≤ p for all i ∈ [t] and each E_i is mutually independent of all but (at most) d other events E_j. If p · (d + 1) < 1/e, then Pr(∩_{i=1}^{t} Ē_i) > 0.
Hall's Theorem. We use the following standard extension of Hall's marriage theorem for characterizing the maximum matching size in bipartite graphs.

I Proposition 4 (Extended Hall's marriage theorem; cf. [24]). Let G(L, R, E) be any bipartite graph with |L| = |R| = n. Then, max_A (|A| − |N(A)|) = n − μ(G), where A ranges over all subsets of L or all subsets of R. We refer to a set A attaining this maximum as a witness set.

Proposition 4 follows from the Tutte-Berge formula for matching size in general graphs [33, 12], or from a simple extension of the proof of Hall's marriage theorem itself.
Previously Known Properties of the EDCS
Recall the definition of an EDCS in Definition 1. It is not hard to show that an EDCS always exists as long as β > β⁻ (see, e.g., [2]). For completeness, we repeat the proof in Appendix A.1.

I Proposition 5 (cf. [13, 14, 2]). Any graph G contains an EDCS(G, β, β⁻) for any parameters β > β⁻, which can be found in polynomial time.
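The existence argument behind Proposition 5 can be sketched as a simple "fixing" procedure (our own illustration, not the paper's pseudocode): repeatedly remove an edge violating (P1) and add an edge violating (P2); a standard potential-function argument from [2, 13, 14] shows that each fix increases a bounded potential whenever β > β⁻, so the loop terminates.

```python
from collections import defaultdict

def greedy_edcs(edges, beta, beta_minus):
    """Build an EDCS(G, beta, beta_minus) by local fixing. Removing a
    (P1)-violating edge and adding a (P2)-violating edge each increase a
    bounded potential function when beta > beta_minus, so this terminates."""
    assert beta > beta_minus >= 0
    H, deg = set(), defaultdict(int)
    while True:
        # (P1) violation: an H-edge whose edge-degree exceeds beta.
        bad = next(((u, v) for u, v in H if deg[u] + deg[v] > beta), None)
        if bad is not None:
            H.remove(bad)
            deg[bad[0]] -= 1
            deg[bad[1]] -= 1
            continue
        # (P2) violation: a missing edge with edge-degree below beta_minus.
        bad = next(((u, v) for u, v in edges
                    if (u, v) not in H and deg[u] + deg[v] < beta_minus), None)
        if bad is None:
            return H  # both properties hold
        H.add(bad)
        deg[bad[0]] += 1
        deg[bad[1]] += 1
```

For example, on the clique K4 with β = 3 and β⁻ = 2, the procedure returns a subgraph satisfying both properties.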
The key property of an EDCS, originally proved in [13, 14], is that it contains an almost
(3/2)approximate matching.
I Lemma 6 ([13, 14]). Let G(V, E) be any graph and ε < 1/2 be a parameter. For parameters λ ≤ ε/100, β ≥ 32 · λ⁻³, and β⁻ ≥ (1 − λ) · β, in any subgraph H := EDCS(G, β, β⁻), μ(G) ≤ (3/2 + ε) · μ(H).
Another particularly useful (technical) property of an EDCS is that it "balances" the degrees of vertices and their neighbors in the EDCS; this property is implicit in [13], but we explicitly state and prove it here, as it highlights a main distinction between the EDCS and more standard (and less robust) subgraphs in this context, such as b-matchings.

I Proposition 7. Let H := EDCS(G, β, β⁻) and U be any subset of vertices. If the average degree of U in H is d̄, then the average degree of N_H(U) from edges incident on U is at most β − d̄.
Proof. Let H′ be the subgraph of H containing only the edges incident on U. Let W := N_{H′}(U) = N_H(U) and E′ := E_H(U, W) = E_{H′}(U, W). We are interested in upper bounding the quantity |E′|/|W|. First, by Property (P1) of the EDCS, we have Σ_{(u,v)∈E′} (deg_{H′}(u) + deg_{H′}(v)) ≤ β · |E′|. We can write the LHS of this inequality as:

Σ_{(u,v)∈E′} (deg_{H′}(u) + deg_{H′}(v)) = Σ_{u∈U} (deg_{H′}(u))² + Σ_{w∈W} (deg_{H′}(w))²
≥ Σ_{u∈U} (|E′|/|U|)² + Σ_{w∈W} (|E′|/|W|)² = |E′| · d̄ + |E′| · (|E′|/|W|),

(as Σ_u deg_{H′}(u) = Σ_w deg_{H′}(w) = |E′| and each sum of squares is minimized when the summands are equal). Plugging this bound into the LHS above, we obtain |E′|/|W| ≤ β − d̄, finalizing the proof. J
3
A Simpler Proof of the Key Property of an EDCS
In this section, we provide a much simpler proof of the key property that an EDCS contains an almost (3/2)-approximate matching. This lemma was previously used in [13, 14, 2]. Our proof is self-contained to this section, and for general graphs, our new proof even improves the dependence of β on the parameter λ from 1/λ³ to (roughly) 1/λ², thus allowing for an even sparser EDCS.
The proof contains two steps. We first give a simple and streamlined proof that an EDCS contains a (3/2)-approximate matching in bipartite graphs. Our proof in this part is similar to [13], but instead of modeling matchings as flows and using cut-flow duality, we directly work with matchings by using Hall's theorem. The main part of the proof, however, is to extend this result to general graphs. For this, we give a simple reduction that extends the result on bipartite graphs to general graphs by taking advantage of the "robust" nature of the EDCS. This allows us to bypass the complicated arguments in [14] specific to non-bipartite graphs and to obtain the result directly from the one for bipartite graphs (the paper [14] explicitly acknowledges the complexity of the proof and asks for a more "natural" approach).
A Slightly Simpler Proof for Bipartite Graphs
Our new proof should be compared to Lemma 2 in Section 4.1 of the arXiv version of [13].
I Lemma 8. Let G(L, R, E) be any bipartite graph and ε < 1/2 be a parameter. For λ ≤ ε/4, β ≥ 2λ⁻¹, and β⁻ ≥ (1 − λ) · β, in any subgraph H := EDCS(G, β, β⁻), μ(G) ≤ (3/2 + ε) · μ(H).
Proof. Fix any H := EDCS(G, β, β⁻), let A be any of its witness sets in the extended Hall's marriage theorem of Proposition 4, and let B := N_H(A). Without loss of generality, assume A is a subset of L. Define Ā := L \ A and B̄ := R \ B (see Figure 1). By Proposition 4,

|Ā| + |B| = n − (|A| − |B|) ≤ n − (n − μ(H)) = μ(H).   (1)

On the other hand, since G has a matching of size μ(G), there must be a matching M of size (μ(G) − μ(H)) between A and B̄, as otherwise, by Proposition 4, A would be a witness set in G implying that the maximum matching of G is smaller than μ(G) (to see why the set of edges between A and B̄ forms a matching, simply apply Proposition 4 to a subgraph of G containing only a maximum matching of G). Let S ⊆ A ∪ B̄ be the set of endpoints of this matching (see Figure 1). As the edges in M are all missing from H, by Property (P2) of the EDCS H, we have

Σ_{v∈S} deg_H(v) = Σ_{(u,v)∈M} (deg_H(u) + deg_H(v)) ≥ (μ(G) − μ(H)) · β⁻.   (2)
Figure 1 (a) A and B := N_H(A) form a Hall's theorem witness set in the EDCS H, and |Ā| + |B| ≤ μ(H). (b) There is a matching of size μ(G) − μ(H) between A and B̄ (i.e., the set S) in G \ H.
Consequently, as |S| = 2(μ(G) − μ(H)), the average degree of S is at least β⁻/2. As such, by Proposition 7, the average degree of N_H(S) (from the edges incident on S) is at most β − β⁻/2 ≤ (1 + λ) · β/2. Finally, note that N_H(S) ⊆ Ā ∪ B, as there are no edges between A and B̄ in H, and hence by Eq (1), |N_H(S)| ≤ μ(H). By double counting the number of edges between S and N_H(S), i.e., E_H(S), we have:

|E_H(S)| ≥ |S| · β⁻/2 = 2(μ(G) − μ(H)) · β⁻/2,
|E_H(S)| ≤ |N_H(S)| · (1 + λ) · β/2 ≤ μ(H) · (1 + λ) · β/2.

Reorganizing the terms above finalizes the proof.
J
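For completeness, the final reorganization can be sketched as follows (our expansion of the omitted algebra, using β⁻ ≥ (1 − λ) · β and λ ≤ ε/4):

```latex
2(\mu(G) - \mu(H)) \cdot \beta^{-}/2 \;\le\; |E_H(S)| \;\le\; \mu(H) \cdot (1+\lambda)\,\beta/2
\;\Longrightarrow\;
\mu(G) \;\le\; \mu(H) \cdot \Bigl(1 + \frac{(1+\lambda)\,\beta}{2\,\beta^{-}}\Bigr)
\;\le\; \mu(H) \cdot \Bigl(1 + \frac{1+\lambda}{2(1-\lambda)}\Bigr)
\;\le\; \Bigl(\frac{3}{2} + \varepsilon\Bigr) \cdot \mu(H),
```

where the last step uses (1 + λ)/(1 − λ) ≤ 1 + 4λ, which holds for λ ≤ 1/2.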
A Much Simpler Proof for Non-bipartite Graphs
Our new proof in this part should be compared to Lemma 5.1 on page 699 of [14]: see Appendix B of their paper for the full proof, as well as their Section 4 for an additional auxiliary claim that is needed.
I Lemma 9. Let G(V, E) be any graph and ε < 1/2 be a parameter. For λ ≤ ε/32, β ≥ 8λ⁻² · log(1/λ), and β⁻ ≥ (1 − λ) · β, in any subgraph H := EDCS(G, β, β⁻), μ(G) ≤ (3/2 + ε) · μ(H).
Proof. The proof is based on the probabilistic method and the Lovász Local Lemma. Let M* be a maximum matching of size μ(G) in G. Consider the following randomly chosen bipartite subgraph G̃(L, R, Ẽ) of G with respect to M*, where L ∪ R = V:
For any edge (u, v) ∈ M*, with probability 1/2, u belongs to L and v belongs to R, and with probability 1/2 the opposite (the choices between different edges of M* are independent).
For any vertex v ∈ V not matched by M*, we assign v to L or R uniformly at random (again, the choices are independent across vertices).
The set of edges Ẽ consists of all edges in E with one endpoint in L and the other one in R.
Define H̃ := H ∩ G̃. We argue that, as H is an EDCS of G, H̃ also remains an EDCS of G̃ with nonzero probability. Formally,

I Claim 10. H̃ is an EDCS(G̃, β̃, β̃⁻) for β̃ = (1 + 4λ) · β/2 and β̃⁻ = (1 − 5λ) · β/2, with probability strictly larger than zero (over the randomness of G̃).
Before we prove Claim 10, we argue why it implies Lemma 9. Let G̃ be chosen such that H̃ is an EDCS(G̃, β̃, β̃⁻) for the parameters in Claim 10 (by Claim 10, such a choice of G̃ always exists). By the construction of G̃, M* ⊆ Ẽ and hence μ(G̃) = μ(G). On the other hand, G̃ is now a bipartite graph and H̃ is its EDCS with appropriate parameters. We can hence apply Lemma 8 and obtain that μ(G̃) ≤ (3/2 + ε) · μ(H̃). As H̃ ⊆ H, μ(H̃) ≤ μ(H), and hence (μ(G̃) =) μ(G) ≤ (3/2 + ε) · μ(H), proving the assertion in the lemma statement. It thus only remains to prove Claim 10.
Proof of Claim 10. Fix any vertex v ∈ V, let d_v := deg_H(v), and let N_H(v) := {u_1, . . . , u_{d_v}} be the neighbors of v in H. Let us assume v is chosen in L in G̃ (the other case is symmetric). Hence, the degree of v in H̃ is exactly the number of vertices in N_H(v) that are chosen in R. As such, by the construction of G̃, E[deg_H̃(v)] = d_v/2 (+1 iff v is incident on M* ∩ H). Moreover, if two vertices u_i, u_j in N_H(v) are matched by M*, then exactly one of them appears as a neighbor of v in H̃; otherwise the choices are independent. Hence, by the Chernoff bound (Proposition 2),

Pr(|deg_H̃(v) − d_v/2| ≥ λ · β) ≤ 2 · exp(−2λ²β²/d_v) ≤ exp(−4 log β) = β⁻⁴,

(as d_v ≤ β and β ≥ 8λ⁻² · log(1/λ), and hence 2λ²β ≥ 4 log β + log 2).
Define E_v as the event that |deg_H̃(v) − d_v/2| ≥ λ · β. Note that E_v depends only on the choice of sides of the vertices in N_H(v), and hence can depend on at most β² other events E_u, namely those for vertices u that are neighbors of N_H(v) (recall that deg_H(u) ≤ β for all u ∈ V by Property (P1) of the EDCS). As such, we can apply the Lovász Local Lemma (Proposition 3) to argue that with probability strictly more than zero, none of the events E_v happens. In the following, we condition on this event and argue that, in this case, H̃ is an EDCS of G̃ with the appropriate parameters. To do this, we only need to prove that both Property (P1) and Property (P2) hold for the EDCS H̃ (with the choice of β̃ and β̃⁻).
We first prove Property (P1) of the EDCS H̃. Let (u, v) be any edge in H̃. Since the events E_u and E_v do not occur,

deg_H̃(u) + deg_H̃(v) ≤ (1/2) · (deg_H(u) + deg_H(v)) + 2λβ ≤ β/2 + 2λβ = (1 + 4λ) · β/2,

where the second inequality is by Property (P1) of the EDCS H, as (u, v) belongs to H as well.
We now prove Property (P2) of the EDCS H̃. Let (u, v) be any edge in G̃ \ H̃. Again, since E_u and E_v do not occur,

deg_H̃(u) + deg_H̃(v) ≥ (1/2) · (deg_H(u) + deg_H(v)) − 2λβ ≥ β⁻/2 − 2λβ ≥ (1 − λ) · β/2 − 2λβ = (1 − 5λ) · β/2,

where the second inequality is by Property (P2) of the EDCS H, as (u, v) ∈ G \ H. J Claim 10
Lemma 9 now follows immediately from Claim 10, as argued above. J Lemma 9
4
One-Way Communication Complexity of Matching
In the one-way communication model, Alice and Bob are given graphs G_A(V, E_A) and G_B(V, E_B), respectively, and the goal is for Alice to send a small message to Bob such that Bob can output a large approximate matching in E_A ∪ E_B. In this section, we show that if Alice communicates an appropriate EDCS of G_A, then Bob is able to output an almost (3/2)-approximate matching.
I Theorem 11 (Formalizing Result 1). There exists a deterministic poly-time one-way communication protocol that, given any ε > 0, computes a (3/2 + ε)-approximation to the maximum matching using O(n · log(1/ε)/ε²) communication from Alice to Bob.
Theorem 11 is based on the following protocol:
A one-way communication protocol for maximum matching.
1. Alice sends H := EDCS(G_A, β, β − 1) for β := 32 · ε⁻² · log(1/ε) to Bob.
2. Bob computes a maximum matching in H ∪ G_B and outputs it as the solution.
By Proposition 5, the EDCS H computed by Alice always exists and can be found in polynomial time. Moreover, by Property (P1) of the EDCS H, the total number of edges (and hence the message size) sent by Alice is O(nβ). We now prove the correctness of the protocol, which concludes the proof of Theorem 11.
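As an end-to-end illustration of the protocol, here is a toy sketch under our own assumptions (a small constant β in place of 32 · ε⁻² · log(1/ε), brute-force matching in place of a real matching algorithm, and helper names of our choosing):

```python
from collections import defaultdict
from itertools import combinations

def edcs(edges, beta, beta_minus):
    """Alice's side: build an EDCS by repeatedly fixing violated properties
    (this terminates for beta > beta_minus; see Proposition 5)."""
    H, deg = set(), defaultdict(int)
    while True:
        bad = next(((u, v) for u, v in H if deg[u] + deg[v] > beta), None)
        if bad is not None:
            H.remove(bad)
            deg[bad[0]] -= 1
            deg[bad[1]] -= 1
            continue
        bad = next(((u, v) for u, v in edges
                    if (u, v) not in H and deg[u] + deg[v] < beta_minus), None)
        if bad is None:
            return H
        H.add(bad)
        deg[bad[0]] += 1
        deg[bad[1]] += 1

def matching_size(edges):
    """Bob's side: maximum matching size by brute force (toy instances only)."""
    edges = list(edges)
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if len({x for e in sub for x in e}) == 2 * k:  # vertex-disjoint
                return k
    return 0

def one_way_protocol(alice_edges, bob_edges, beta):
    message = edcs(alice_edges, beta, beta - 1)  # O(n * beta) edges in total
    return matching_size(message | set(bob_edges))
```

For instance, with Alice holding the path 1-2-3-4 and Bob holding the single edge (4, 5), Bob recovers a matching of size 2 in the union.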
I Lemma 12. μ(G_A ∪ G_B) ≤ (3/2 + ε) · μ(H ∪ G_B).
Proof. Let M* be a maximum matching in G_A ∪ G_B, and let M*_A and M*_B be its edges in G_A and G_B, respectively. Let G̃ := G_A ∪ M*_B and note that μ(G̃) = μ(G), simply because M* belongs to G̃. Define the following subgraph H̃ ⊆ H ∪ M*_B (and hence ⊆ H ∪ G_B): H̃ contains all edges in H, plus any edge (u, v) ∈ M*_B such that deg_H(u) + deg_H(v) ≤ β. In the following, we prove that (μ(G) =) μ(G̃) ≤ (3/2 + ε) · μ(H̃), which finalizes the proof as μ(H̃) ≤ μ(H ∪ G_B).
We show that H̃ is an EDCS(G̃, β + 2, β − 1) and apply Lemma 9 to argue that H̃ contains an almost (3/2)-approximate matching of G̃. We prove the EDCS properties of H̃ using the fact that for every v ∈ V, deg_H̃(v) ∈ {deg_H(v), deg_H(v) + 1}, as H̃ is obtained by adding a matching (⊆ M*_B) to H.
Property (P1) of the EDCS H̃: For an edge (u, v) ∈ H̃,
if (u, v) ∈ H then:
deg_H̃(u) + deg_H̃(v) ≤ deg_H(u) + deg_H(v) + 2 ≤ β + 2,
(by Property (P1) of the EDCS H of G_A)
if (u, v) ∈ M*_B then:
deg_H̃(u) + deg_H̃(v) ≤ deg_H(u) + deg_H(v) + 2 ≤ β + 2.
(as (u, v) ∈ M*_B is inserted into H̃ iff deg_H(u) + deg_H(v) ≤ β)
Property (P2) of the EDCS H̃: For an edge (u, v) ∈ G̃ \ H̃,
if (u, v) ∈ G_A \ H then:
deg_H̃(u) + deg_H̃(v) ≥ deg_H(u) + deg_H(v) ≥ β − 1,
(by Property (P2) of the EDCS H of G_A)
if (u, v) ∈ M*_B \ H̃ then:
deg_H̃(u) + deg_H̃(v) ≥ deg_H(u) + deg_H(v) > β.
(as (u, v) ∈ M*_B is not inserted into H̃ iff deg_H(u) + deg_H(v) > β)
As such, H̃ is an EDCS(G̃, β + 2, β − 1). By Lemma 9 and the choice of the parameter β, we obtain that μ(G̃) ≤ (3/2 + ε) · μ(H̃), finalizing the proof. J
5
The Stochastic Matching Problem
Recall that in the stochastic matching problem, the goal is to compute a bounded-degree subgraph H of a given graph G, such that E[μ(H_p)] is a good approximation of E[μ(G_p)], where G_p is a realization of G (i.e., a subgraph in which every edge is sampled with probability p) and H_p = H ∩ G_p. In this section, we formalize Result 2 by proving the following theorem.
I Theorem 13 (Formalizing Result 2). There exists a deterministic poly-time algorithm that, given a graph G(V, E) and parameters ε, p > 0 with ε < 1/4, computes a subgraph H(V, E_H) of G with maximum degree O(log(1/(εp))/(ε² · p)) such that the ratio of the expected size of a maximum matching in realizations of G to that in realizations of H is at most (3/2 + ε), i.e., E[μ(G_p)] ≤ (3/2 + ε) · E[μ(H_p)].
We note that while in Theorem 13 we state the bound in expectation, the same result also holds with high probability as long as μ(G) = ω(1/p) (i.e., just barely more than a constant), by concentration of the maximum matching size in edge-sampled subgraphs (see, e.g., [2], Lemma 3.1). The algorithm in Theorem 13 simply computes an EDCS of the input graph as follows:
An algorithm for the stochastic matching problem.
Output the subgraph H := EDCS(G, β, β − 1) for β := C · log(1/(εp))/(ε² · p), for a large enough constant C.
By Proposition 5, the EDCS H in the above algorithm always exists and can be found in polynomial time. Moreover, by Property (P1) of the EDCS H, the total number of edges in this subgraph is O(nβ). We now prove the bound on the approximation ratio, which concludes the proof of Theorem 13 (by reparametrizing ε to be a constant factor smaller).
I Lemma 14. Let H_p := H ∩ G_p denote a realization of H; then E[μ(G_p)] ≤ (3/2 + O(ε)) · E[μ(H_p)], where the randomness is taken over the realization G_p of G.
Suppose first that H_p were an EDCS of G_p; we would be immediately done in this case, as we could apply Lemma 9 directly to prove Lemma 14. Unfortunately, however, this might not be the case. Instead, we exhibit subgraphs H̃_p ⊆ H_p and G̃_p ⊆ G_p with the following properties:
1. E[μ(G_p)] ≤ (1 + ε) · E[μ(G̃_p)], where the expectation is taken over realizations G_p.
2. H̃_p is an EDCS(G̃_p, (1 + ε) · pβ, (1 − 2ε) · pβ).
Figure 2 (a) Realized graph G_p. (b) Subgraph G̃_p ⊆ G_p. (c) Subgraph H̃_p ⊆ H_p. (In each subgraph, the vertices of V⁺ and V⁻ are marked.)
Showing these properties concludes the proof of Lemma 14: for the EDCS in item (2) above, we have ((1 + ε) · pβ)/((1 − 2ε) · pβ) = 1 + O(ε), so by Lemma 9 we get that μ(G̃_p) ≤ (3/2 + O(ε)) · μ(H̃_p). Combining this with item (1) then concludes E[μ(G_p)] ≤ (1 + ε) · (3/2 + O(ε)) · E[μ(H_p)].
It now remains to exhibit H̃_p and G̃_p satisfying the two properties stated above. Note that for any vertex v ∈ V, we have E[deg_{H_p}(v)] = p · deg_H(v) by the definition of a realization G_p (and hence H_p). We now want to separate out the vertices that deviate significantly from this expectation.
▶ Definition 15. Let V⁺ ⊆ V contain all vertices v for which deg_{H_p}(v) > p · deg_H(v) + εpβ/2. Similarly, let V⁻ contain all vertices v such that deg_{H_p}(v) < p · deg_H(v) − εpβ/2, or such that there exists an edge (v, w) ∈ H with w ∈ V⁺, i.e., v is a neighbor of V⁺.
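Definition 15 is purely mechanical, so it can be made explicit in code. The helper below is my own (names and edge representation are assumptions, not from the paper):

```python
from collections import defaultdict

def classify_vertices(H, Hp, p, eps, beta):
    """Compute V+ and V- of Definition 15. H and Hp are sets of edges (u, v)
    with u < v, and Hp is the realized subset of H."""
    degH, degHp, nbrs = defaultdict(int), defaultdict(int), defaultdict(set)
    for u, v in H:
        degH[u] += 1; degH[v] += 1
        nbrs[u].add(v); nbrs[v].add(u)
    for u, v in Hp:
        degHp[u] += 1; degHp[v] += 1
    # V+: realized degree overshoots its expectation p * deg_H(v) by eps*p*beta/2.
    Vplus = {v for v in degH if degHp[v] > p * degH[v] + eps * p * beta / 2}
    # V-: realized degree undershoots by the same margin, or an H-neighbor is in V+.
    Vminus = {v for v in degH
              if degHp[v] < p * degH[v] - eps * p * beta / 2 or nbrs[v] & Vplus}
    return Vplus, Vminus
```

For example, on a star whose center retains all of its edges in the realization, the center lands in V⁺ and all of its leaves land in V⁻ (as neighbors of V⁺).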
▶ Claim 16. E[|V⁺|] ≤ ε⁷p⁷ · μ(G) and E[|V⁻|] ≤ ε⁴p⁴ · μ(G), where the expectation is over the realization G_p of G. As a result, we also have E[|V⁺| + |V⁻|] ≤ ε³p³ · μ(G).
Before proving this claim, let us consider why it completes the larger proof.
Proof of Lemma 14 (assuming Claim 16). To prove Lemma 14 it is enough to show the existence of subgraphs G̃_p and H̃_p that satisfy the properties above. We define G̃_p as follows: its vertex set is V and its edge set is the same as that of G_p, except that we remove all edges incident to V⁺ and all edges (u, v) ∉ H that are incident to V⁻. We define H̃_p to be the subgraph of H_p induced by the vertex set V \ V⁺; that is, H̃_p contains all edges of H_p except those incident to V⁺; see Figure 2.
For item (1), note that G̃_p differs from G_p only on edges incident to V⁺ ∪ V⁻, so μ(G̃_p) ≥ μ(G_p) − |V⁺ ∪ V⁻|. It is also clear that E[μ(G_p)] ≥ p · μ(G) (as each edge of G is sampled with probability p in G_p). By Claim 16,
E[μ(G̃_p)] ≥ E[μ(G_p)] − E[|V⁺ ∪ V⁻|] ≥ E[μ(G_p)] − ε³p³ · μ(G) ≥ (1 − ε³) · E[μ(G_p)].
The above equation then implies the desired bound E[μ(G_p)] ≤ (1 + ε) · E[μ(G̃_p)].
For item (2), let us verify Property (P1) and Property (P2) for the EDCS H̃_p of G̃_p. Neither H̃_p nor G̃_p has any edge incident on V⁺, and hence we can ignore these vertices entirely. Thus, for all remaining vertices v we have deg_{H̃_p}(v) ≤ p · deg_H(v) + εpβ/2, and for all v ∉ V⁻ we have deg_{H̃_p}(v) ≥ p · deg_H(v) − εpβ/2. Moreover, recall that G̃_p \ H̃_p contains no edges incident to V⁻. As such:
Property (P1) of the EDCS H̃_p: for an edge (u, v) ∈ H̃_p,
deg_{H̃_p}(u) + deg_{H̃_p}(v) ≤ p · deg_H(u) + p · deg_H(v) + εpβ ≤ (1 + ε)pβ,
by Property (P1) of the EDCS H of G.
Property (P2) of the EDCS H̃_p: for any edge (u, v) ∈ G̃_p \ H̃_p, we have u, v ∉ V⁻, so
deg_{H̃_p}(u) + deg_{H̃_p}(v) ≥ p · deg_H(u) + p · deg_H(v) − εpβ ≥ (1 − 2ε)pβ,
by Property (P2) of the EDCS H of G.
This concludes the proof of Lemma 14 (assuming Claim 16). ◀
All that remains is to prove Claim 16.
Proof of Claim 16. Let us start by bounding the size of V⁺. Consider any vertex v ∈ V. We know that deg_H(v) ≤ β. Each edge then has probability p of appearing in H_p, so E[deg_{H_p}(v)] = p · deg_H(v) ≤ pβ. By the multiplicative Chernoff bound of Proposition 2, applied with mean at most pβ,
Pr[v ∈ V⁺] = Pr[deg_{H_p}(v) ≥ p · deg_H(v) + εpβ/2] ≤ e^{−O(ε²pβ)} ≤ e^{−O(log(ε⁻¹p⁻¹))} ≤ K⁻² · ε¹⁰p¹⁰,
where K is a large constant and the last two inequalities follow from the fact that we set β := (C · log(1/εp)) / (ε²p) for a large enough constant C. (Note that since the constant C appears in the exponent, we can easily set C large enough to achieve the final probability with a constant K > C.)
This probability bound shows that E[|V⁺|] ≤ n · K⁻²ε¹⁰p¹⁰, but that is not quite good enough, since we want a dependence on μ(G) instead of on n. To achieve this, we observe that the total number of edges in H is at most 2β · μ(G): the reason is that G has a vertex cover of size at most 2μ(G) (the endpoints of a maximum matching), every edge of H is incident to this cover, and all vertices in H have degree at most β (by Property (P1) of the EDCS H). There are thus at most 4β · μ(G) vertices that have nonzero degree in H, each of which is in V⁺ with probability at most K⁻²ε¹⁰p¹⁰; all vertices with zero degree in H are clearly not in V⁺ by definition. We thus have E[|V⁺|] ≤ 4β · μ(G) · K⁻²ε¹⁰p¹⁰ ≤ K⁻¹ · ε⁷p⁷ · μ(G), where in the last inequality we use that K > C.
Let us now consider V⁻. First, we bound the number of vertices v ∈ V⁻ for which deg_{H_p}(v) < p · deg_H(v) − εpβ/2. By an argument analogous to the one above, the expected number of such vertices is at most ε⁷p⁷ · μ(G). A vertex can also end up in V⁻ because it has a neighbor in V⁺ in H; but each vertex of H has degree at most β, so we have
E[|V⁻|] ≤ ε⁷p⁷ · μ(G) + β · E[|V⁺|] ≤ ε⁴p⁴ · μ(G),
where the last inequality again uses that K > C. ◀
▶ Remark. Interestingly, our result in Theorem 13 continues to hold as is even when the edges sampled in realizations of G_p are only Θ(1/p)-wise independent, by simply using a Chernoff bound for bounded-independence random variables (see, e.g., [31]) in the proof of Claim 16. Allowing correlation in the edge-sampling process is highly relevant to the main application of this problem, the kidney exchange setting (see [15]). To our knowledge, our algorithm is the first to work with so little independence between the edges.
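As an empirical sanity check on the setup of Lemma 14, the toy experiment below samples realizations G_p of a small graph and verifies that μ(H_p) ≤ μ(G_p) pointwise (which holds since H_p ⊆ G_p). Both helpers, a brute-force matching routine `mu` and a toy `edcs`, are my own and are only meant for illustration on tiny inputs.

```python
import random
from collections import defaultdict
from itertools import combinations

def mu(edges):
    """Maximum matching size by brute-force branching (toy graphs only)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    # Either skip the first edge, or take it and drop all edges touching u or v.
    take = 1 + mu([e for e in rest if u not in e and v not in e])
    return max(mu(rest), take)

def edcs(edges, beta, beta_minus):
    """Local-search EDCS construction, as in Proposition 5."""
    H, deg = set(), defaultdict(int)
    while True:
        changed = False
        for (u, v) in list(H):
            if deg[u] + deg[v] > beta:          # (P1) violation
                H.remove((u, v)); deg[u] -= 1; deg[v] -= 1; changed = True
        for (u, v) in edges:
            if (u, v) not in H and deg[u] + deg[v] < beta_minus:  # (P2) violation
                H.add((u, v)); deg[u] += 1; deg[v] += 1; changed = True
        if not changed:
            return H

random.seed(0)
p = 0.5
G = [tuple(sorted(e)) for e in combinations(range(6), 2)]  # K_6 as a toy input
H = edcs(G, 4, 3)
for _ in range(20):
    Gp = [e for e in G if random.random() < p]   # realization G_p
    Hp = [e for e in Gp if e in H]               # H_p = H ∩ G_p
    assert mu(Hp) <= mu(Gp)                      # H_p is a subgraph of G_p
```

Averaging μ(G_p) and μ(H_p) over many trials gives empirical estimates of the two expectations compared in Lemma 14.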
6 A Fault-Tolerant Subgraph for Matching
In the fault-tolerant matching problem, we are given a graph G(V, E) and an integer f ≥ 1, and our goal is to compute a subgraph H of G, called an f-tolerant subgraph, such that for any subset F ⊆ E of size f, H \ F contains an approximate maximum matching of G \ F. We show that:
▶ Theorem 17 (Formalizing Result 3). There exists a deterministic poly-time algorithm that, given any ε > 0 and integer f ≥ 1, computes a (3/2 + ε)-approximate f-tolerant subgraph H of any given graph G with O(ε⁻² · (n log(1/ε) + f)) edges.
The algorithm in Theorem 17 simply computes an EDCS of the input graph as follows:
An algorithm for the fault-tolerant matching problem.
1. Define μ_min := min_F μ(G \ F), where F ranges over all subsets of E of size f.
2. Output H := EDCS(G, β, β − 1) for β := (2C · f) / (ε² · μ_min) + (C · log(1/ε)) / ε², for a constant C > 0.
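To make step (1) concrete on toy inputs, μ_min can be brute-forced by enumerating all fault sets. The helpers below (`mu`, `mu_min`, `beta_fault_tolerant`) are hypothetical and for illustration only; the exponential search over F is exactly why the end of this section replaces μ_min by a search over β.

```python
import math
from itertools import combinations

def mu(edges):
    """Maximum matching size by brute-force branching (toy graphs only)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    take = 1 + mu([e for e in rest if u not in e and v not in e])
    return max(mu(rest), take)

def mu_min(edges, f):
    """mu_min = min over all fault sets F of size f of mu(G \\ F)."""
    return min(mu([e for e in edges if e not in set(F)])
               for F in combinations(edges, f))

def beta_fault_tolerant(edges, f, eps, C=2.0):
    """beta = 2*C*f / (eps^2 * mu_min) + C*log(1/eps) / eps^2, as in step (2)."""
    return 2 * C * f / (eps ** 2 * mu_min(edges, f)) + C * math.log(1 / eps) / eps ** 2
```

On a 4-edge path, for instance, deleting any single edge still leaves a matching of size 2, so μ_min = 2 for f = 1.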
By Proposition 5, the EDCS H in the above algorithm always exists and can be found in polynomial time. The algorithm as stated, however, is not a polynomial-time algorithm, because it is not clear how to compute the quantity μ_min. Nevertheless, for simplicity, we work with the above algorithm throughout this section, and at the end we show how to fix this problem and obtain a poly-time algorithm. We start by proving that the subgraph H has only O(f + n) edges.
▶ Lemma 18. The total number of edges in H is O(f/ε² + n · log(1/ε)/ε²).
Proof. Let F* be a subset of E of size f such that μ_min = μ(G \ F*), and let M* be a maximum matching of size μ_min in G \ F*. Note that V(M*) is a vertex cover of G \ F*, which means that all edges of G except for the f edges of F* are incident on V(M*). As no vertex of the EDCS H can have degree more than β by Property (P1) of the EDCS, the degree of the vertices of V(M*) in E \ F* is at most β. This implies that
|E_H| ≤ |V(M*)| · β + |F*| ≤ 2μ_min · ((2C · f)/(ε² · μ_min) + (C · log(1/ε))/ε²) + f = O(f/ε² + n · log(1/ε)/ε²),
finalizing the proof. ◀
We now prove the correctness of the algorithm in the following lemma.
▶ Lemma 19. Fix any subset F ⊆ E of size f and define G_F := G \ F and H_F := H \ F. Then, μ(G_F) ≤ (3/2 + O(ε)) · μ(H_F).
We first need some definitions. We say that a vertex v ∈ V is bad iff deg_{H_F}(v) < deg_H(v) − εβ, i.e., at least εβ edges incident on v (in H) are deleted by F. We use B_F to denote the set of bad vertices with respect to F, and bound |B_F| in the following claim.
▶ Claim 20. The number of bad vertices in H_F is at most |B_F| ≤ ε · μ(G_F).
Proof. Any deleted edge can decrease the degree of exactly two vertices, and a vertex becomes bad iff at least εβ edges incident on it in H are removed. As such,
|B_F| ≤ 2f/(εβ) ≤ (2f · ε² · μ_min)/(ε · 2C · f) = (ε · μ_min)/C ≤ ε · μ(G_F),
for a sufficiently large C > 0, and since μ(G_F) ≥ μ_min by definition. ◀
Proof of Lemma 19. Define a subgraph G̃_F ⊆ G_F as follows: V(G̃_F) = V(G_F) (= V(G)), and the edges of G̃_F are all edges of G_F except that we remove any edge (u, v) ∈ G_F such that (u, v) ∉ H_F and either u or v is a bad vertex. We prove that μ(G̃_F) is at least a (1 − ε) fraction of μ(G_F) and, moreover, that H_F is an EDCS of G̃_F with appropriate parameters. We can then apply Lemma 9 to obtain that μ(G_F) ≤ (1 + 2ε) · μ(G̃_F) ≤ (1 + 2ε) · (3/2 + O(ε)) · μ(H_F), finalizing the proof.
We first prove the bound on μ(G̃_F). Fix any maximum matching M in G_F. It can have at most |B_F| edges incident on vertices of B_F; hence, even if we remove all edges incident on B_F, the size of this matching is still at least μ(G_F) − ε · μ(G_F), by the bound |B_F| ≤ ε · μ(G_F) of Claim 20. The resulting matching belongs entirely to G̃_F by the definition of this subgraph, and hence we have μ(G_F) ≤ (1 + 2ε) · μ(G̃_F).
We now prove that H_F is an EDCS(G̃_F, β, (1 − 2ε)β − 1). It suffices to verify the two properties of an EDCS for H_F, using the facts that deg_{H_F}(v) ∈ [deg_H(v) − εβ, deg_H(v)] for vertices v ∈ V \ B_F, and that all edges incident on B_F in G̃_F also belong to H_F.
Property (P1) of the EDCS H_F of G̃_F: for any edge (u, v) ∈ H_F,
deg_{H_F}(u) + deg_{H_F}(v) ≤ deg_H(u) + deg_H(v) ≤ β,
by Property (P1) of the EDCS H of G.
Property (P2) of the EDCS H_F of G̃_F: for any edge (u, v) ∈ G̃_F \ H_F, both u, v ∈ V \ B_F, and so
deg_{H_F}(u) + deg_{H_F}(v) ≥ deg_H(u) + deg_H(v) − 2εβ ≥ (1 − 2ε)β − 1,
by Property (P2) of the EDCS H of G (as (u, v) is missing from H).
As such, H_F is an EDCS(G̃_F, β, (1 − 2ε)β − 1), and by the lower bound on the value of β in the algorithm (the second term in the definition of β), we can apply Lemma 9 and obtain that μ(G̃_F) ≤ (3/2 + O(ε)) · μ(H_F), finalizing the proof. ◀
Theorem 17 now follows from Lemmas 18 and 19 by reparametrizing ε to a sufficiently smaller constant factor (by picking the constant C large enough), modulo the fact that the algorithm designed in this section is not a polynomial-time algorithm. To make the algorithm polynomial time, we only need a simple modification: instead of finding μ_min explicitly, we find the smallest value of β (by searching over all n possible choices of β, or by binary search) such that the EDCS H has at least (2C · f)/ε² + (n · C · log(1/ε))/ε² edges. By the proof of Lemma 18, any EDCS of G can have at most 2μ_min · β + f edges. This implies that the chosen β ≥ (2C · f)/(ε² · μ_min) + (C · log(1/ε))/ε², as needed in the algorithm. This concludes the proof, as by the definition of β, H has O((C · f)/ε² + (n · C · log(1/ε))/ε²) edges, and hence satisfies the sparsity requirements of Theorem 17.
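The search above can be sketched as follows. Here `edcs` is a toy re-implementation of the local-search construction of Proposition 5 (not the paper's exact routine), and `smallest_beta` scans candidate values of β for the first one whose EDCS reaches the edge threshold; a binary search would work as well.

```python
from collections import defaultdict

def edcs(edges, beta, beta_minus):
    """Local-search EDCS construction, as in Proposition 5."""
    H, deg = set(), defaultdict(int)
    while True:
        changed = False
        for (u, v) in list(H):
            if deg[u] + deg[v] > beta:          # (P1) violation
                H.remove((u, v)); deg[u] -= 1; deg[v] -= 1; changed = True
        for (u, v) in edges:
            if (u, v) not in H and deg[u] + deg[v] < beta_minus:  # (P2) violation
                H.add((u, v)); deg[u] += 1; deg[v] += 1; changed = True
        if not changed:
            return H

def smallest_beta(edges, threshold, beta_max):
    """Smallest beta in {2, ..., beta_max} whose EDCS(G, beta, beta - 1) has at
    least `threshold` edges (linear scan for clarity)."""
    for beta in range(2, beta_max + 1):
        H = edcs(edges, beta, beta - 1)
        if len(H) >= threshold:
            return beta, H
    return beta_max, edcs(edges, beta_max, beta_max - 1)
```

In the algorithm of this section, `threshold` would be set to (2C · f)/ε² + (n · C · log(1/ε))/ε².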
A Missing Details and Proofs
A.1 Proof of Proposition 5
We give the proof of this proposition following the argument of [2], which itself was based
on [14].
Proof. We give a polynomial-time local search algorithm for constructing an EDCS H of the graph G, which also implies the existence of H. The algorithm is as follows. Start with an empty graph H. While there exists an edge of H violating Property (P1), or an edge of G \ H violating Property (P2) of the EDCS, fix this edge: remove it from H in the former case, or insert it into H in the latter.
We prove that this algorithm terminates after a polynomial number of steps, which implies both the existence of the EDCS and a polynomial-time algorithm for computing it. We define the following potential function Φ for this task:
Φ₁(H) := (β − 1/2) · Σ_{u ∈ V(H)} deg_H(u),    Φ₂(H) := Σ_{(u,v) ∈ E(H)} (deg_H(u) + deg_H(v)),    Φ(H) := Φ₁(H) − Φ₂(H).
We claim that each time the algorithm fixes an edge, Φ increases by at least 1. Since the maximum value of Φ is O(n · β²), this implies that the procedure terminates in O(n · β²) steps.
Let (u, v) be the edge fixed at this step, H₁ the subgraph before fixing the edge (u, v), and H₂ the resulting subgraph. Suppose first that the edge (u, v) was violating Property (P1) of the EDCS. As the only change is in the degrees of the vertices u and v, Φ₁ decreases by (2β − 1). On the other hand, deg_{H₁}(u) + deg_{H₁}(v) ≥ β + 1 originally (as (u, v) was violating Property (P1)), and hence after removing (u, v), Φ₂ decreases by at least β + 1 from the removed edge's own term. Additionally, for each neighbor w of u and v in H₂, after removing the edge (u, v), deg_{H₂}(w) decreases by one. As there are at least deg_{H₂}(u) + deg_{H₂}(v) = deg_{H₁}(u) + deg_{H₁}(v) − 2 ≥ β − 1 choices for w, this means that in total, Φ₂ decreases by at least (β + 1) + (β − 1) = 2β. As a result, in this case Φ = Φ₁ − Φ₂ increases by at least 1 after fixing the edge (u, v).
Now suppose instead that the edge (u, v) was violating Property (P2) of the EDCS. In this case, the degrees of both u and v increase by one, so Φ₁ increases by 2β − 1. Additionally, since the edge (u, v) was violating Property (P2), we have deg_{H₁}(u) + deg_{H₁}(v) ≤ β⁻ − 1, so the addition of the edge (u, v) increases Φ₂ by at most deg_{H₂}(u) + deg_{H₂}(v) = deg_{H₁}(u) + deg_{H₁}(v) + 2 ≤ β⁻ + 1 from its own term. Moreover, for each neighbor w of u and v, after adding the edge (u, v), deg_{H₂}(w) increases by one, and since there are at most deg_{H₁}(u) + deg_{H₁}(v) ≤ β⁻ − 1 choices for w, Φ₂ increases in total by at most (β⁻ + 1) + (β⁻ − 1) = 2β⁻. Since β⁻ ≤ β − 1, we have that Φ increases by at least (2β − 1) − 2β⁻ ≥ 1 after fixing the edge (u, v), finalizing the proof. ◀
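The accounting in this proof can be checked mechanically. The sketch below is my own toy instrumentation: it runs the local search one fix at a time and asserts that Φ (as defined above) grows by at least 1 per fix.

```python
from collections import defaultdict
from itertools import combinations

def phi(H, deg, beta):
    """Phi(H) = Phi1(H) - Phi2(H) from the proof of Proposition 5."""
    phi1 = (beta - 0.5) * sum(deg.values())
    phi2 = sum(deg[u] + deg[v] for (u, v) in H)
    return phi1 - phi2

def fix_one(H, deg, edges, beta, beta_minus):
    """Apply a single (P1) or (P2) fix; return False when H is already an EDCS."""
    for (u, v) in list(H):
        if deg[u] + deg[v] > beta:               # (P1) violation: remove the edge
            H.remove((u, v)); deg[u] -= 1; deg[v] -= 1
            return True
    for (u, v) in edges:
        if (u, v) not in H and deg[u] + deg[v] < beta_minus:
            H.add((u, v)); deg[u] += 1; deg[v] += 1  # (P2) violation: insert it
            return True
    return False

edges = [tuple(sorted(e)) for e in combinations(range(5), 2)]  # K_5 as a toy input
beta, beta_minus = 4, 3
H, deg, steps = set(), defaultdict(int), 0
while True:
    before = phi(H, deg, beta)
    if not fix_one(H, deg, edges, beta, beta_minus):
        break
    steps += 1
    assert phi(H, deg, beta) >= before + 1  # each fix raises Phi by at least 1
```

Since Φ starts at 0 on the empty graph and is O(nβ²), the loop terminates after O(nβ²) fixes, exactly as the proof argues.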
A.2 Optimality of the (3/2)-Approximation Ratio in Result 3
Our argument is a simple modification of the one in [23] for proving a lower bound on the one-way communication complexity of approximating matchings, and is provided for the sake of completeness.
Let G₁(V₁, E₁) be a graph on N vertices such that its edges can be partitioned into t := N^{Ω(1/log log N)} induced matchings M₁, …, M_t, each of size (1 − ε)N/4, for an arbitrarily small constant ε > 0. Such graphs are referred to as (r, t)-Ruzsa–Szemerédi graphs [30] ((r, t)-RS graphs for short) and have been studied extensively in the literature (see [4, 23] for more details). In particular, the existence of such graphs with the parameters mentioned above is proven in [23].
Let G(V, E) be a graph on n = 2N vertices consisting of G₁(V₁, E₁) plus N additional vertices U that are connected via a perfect matching M_U to V₁. In the following, we prove that any f-fault-tolerant subgraph H of G that achieves a (3/2 − δ)-approximation for some constant δ > 0 when f = Θ(n) requires n^{1+Ω(1/log log n)} = ω(f) edges.
Suppose towards a contradiction that H contains o(m) edges, where m is the number of edges of G. As the edges of G₁ are partitioned into the induced matchings M₁, …, M_t, there exists some induced matching M_i such that only an o(1) fraction of its edges belong to H. Let the set of deleted edges F be exactly the edges of the perfect matching M_U between U and V₁ that are incident to V(M_i). The number of deleted edges is O(n), and after the deletion, M_U has size N − (1 − ε)N/2 = (1 + ε)N/2. As such, μ(G \ F) ≥ (1 + ε)N/2 + (1 − ε)N/4 ≥ 3N/4, by picking the remainder of the matching M_U together with the induced matching M_i (which is not incident on the remainder of M_U by construction). However, we argue that μ(H \ F) ≤ (1 + ε)N/2 + o(N), simply because only o(N) edges of M_i belong to H, and all other matchings are incident to the remaining edges of M_U (we can assume the remaining edges of M_U belong to any maximum matching of H \ F, because they are incident on degree-one vertices). As such, μ(H \ F) < (2/3 + 2ε) · μ(G \ F). By picking ε < δ/4, we obtain that H is not a (3/2 − δ)-approximate f-fault-tolerant subgraph of G.
A.3 Other Standard Algorithms for Fault-Tolerant Matching
Since the goal in fault-tolerant matching is to prepare for adversarial deletions, the most natural approach seems to be to add many different matchings: repeatedly find a maximum matching in G, add it to the subgraph H, delete it from G, and repeat until H has O(f + n) edges. A similar approach is to let H be a maximum b-matching, with b set appropriately so that H ends up with O(f + n) edges. We show a lower bound of 2 on the approximation ratio of both approaches.
Consider the first approach: find a maximum matching M in G, add all edges of M to the fault-tolerant subgraph H, remove all edges of M from G, and repeat until H contains C(f + n) edges for some large constant C. For f = n/5, we present a graph G where this approach may yield a subgraph H with μ(H \ F) = μ(G \ F)/2 for some fault set F. The graph is bipartite, and its vertex set is partitioned into 5 sets X, Y, Y′, Z, Z′, each of size n/5. There is an edge in G from every vertex in X to every vertex in Y or Z, and there are also exactly n/5 vertex-disjoint edges from Y to Y′, and similarly from Z to Z′; these are all the edges of G. The fault-tolerant algorithm might choose the following subgraph H: H contains a perfect matching from Y to Y′ and from Z to Z′, as well as many edges from X to Y, but no edges from X to Z. (The algorithm can end up with such an H by first choosing the maximum matching in G that consists of the edges from Y to Y′ and from Z to Z′; in all subsequent iterations the maximum matching size is only |X| = n/5, so the algorithm might always pick a maximum matching that only contains edges between X and Y.) Now consider the set of failures F consisting of the n/5 edges from Z to Z′. It is clear that μ(G \ F) = 2n/5, while μ(H \ F) = n/5. Note also that allowing H to contain more than O(n + f) edges would still not allow this approach to break the 2-approximation: in this lower-bound instance, even if H were allowed up to n²/100 edges, H might still not contain any edges from X to Z, and so we would still have μ(H \ F) = n/5 = μ(G \ F)/2.
The other natural approach is to let H contain the edges of a maximum b-matching in G, where b is set to a value for which the resulting b-matching still contains Θ(f + n) edges. The lower-bound graph G is exactly the same as above, though in this case we use f = 2n/5. The maximum b-matching H might then contain the edges from Y to Y′ and from Z to Z′, a single matching of size n/5 from X to Z, and then many edges from X to Y; it is easy to verify that this is a maximum b-matching. Now consider the following set F of deletions: F contains all edges from Z to Z′, as well as the n/5 edges of H from X to Z. Once again we have μ(H \ F) = n/5 and μ(G \ F) = 2n/5. Also as above, setting b to be very large and allowing H to have n²/100 edges would still not break the 2-approximation.
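The two counterexamples above are small enough to verify directly. The script below (with `mu`, a brute-force matching helper of my own) builds the five-block instance for n = 10 (blocks of size 2) together with the adversarial H of the first approach, and confirms μ(G \ F) = 2 · μ(H \ F).

```python
def mu(edges):
    """Maximum matching size by brute-force branching (toy graphs only)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    take = 1 + mu([e for e in rest if u not in e and v not in e])
    return max(mu(rest), take)

k = 2  # block size n/5, so n = 10
X = [('x', i) for i in range(k)]
Y = [('y', i) for i in range(k)]
Yp = [('yp', i) for i in range(k)]
Z = [('z', i) for i in range(k)]
Zp = [('zp', i) for i in range(k)]

# G: complete bipartite X-Y and X-Z, plus perfect matchings Y-Y' and Z-Z'.
G = ([(x, y) for x in X for y in Y] + [(x, z) for x in X for z in Z]
     + list(zip(Y, Yp)) + list(zip(Z, Zp)))
# Adversarial subgraph: both perfect matchings plus all X-Y edges, no X-Z edges.
H = [(x, y) for x in X for y in Y] + list(zip(Y, Yp)) + list(zip(Z, Zp))
F = set(zip(Z, Zp))  # faults: delete the Z-Z' matching

G_F = [e for e in G if e not in F]
H_F = [e for e in H if e not in F]
```

Here μ(G \ F) = 4 = 2n/5 (match X into Z and Y into Y′), while μ(H \ F) = 2 = n/5, since every surviving edge of H touches the n/5 vertices of Y.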
Noga Alon and Joel H. Spencer. The probabilistic method. John Wiley & Sons, 2004.
CoRR, abs/1711.03076. To appear in SODA 2019, 2017.
Sepehr Assadi, Sanjeev Khanna, and Yang Li. The Stochastic Matching Problem with (Very) Few Queries. In Proceedings of the 2016 ACM Conference on Economics and Computation, EC '16, Maastricht, The Netherlands, July 24–28, 2016, pages 43–60, 2016.
Sepehr Assadi, Sanjeev Khanna, and Yang Li. On Estimating Maximum Matching Size in Graph Streams. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16–19, pages 1723–1742, 2017.
Sepehr Assadi, Sanjeev Khanna, and Yang Li. The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC '17, Cambridge, MA, USA, June 26–30, 2017, pages 99–116, 2017.
Baruch Awerbuch. Complexity of Network Synchronization. J. ACM, 32(4):804–823, 1985.
Surender Baswana, Keerti Choudhary, and Liam Roditty. Fault tolerant subgraph for single source reachability: generic and optimal. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18–21, 2016, pages 509–518, 2016.
In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31–June 2, 2009, pages 255–262, 2009.
Stochastic Matching with Few Queries: New Algorithms and Tools. In Manuscript. To appear in SODA 2019, 2018.
Soheil Behnezhad and Nima Reyhani. Almost Optimal Stochastic Weighted Matching with Few Queries. In Proceedings of the 2018 ACM Conference on Economics and Computation, Ithaca, NY, USA, June 18–22, 2018, pages 235–249, 2018.
András A. Benczúr and David R. Karger. Approximating s-t Minimum Cuts in Õ(n²) Time. In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22–24, 1996, pages 47–55, 1996.
Claude Berge. The theory of graphs. Courier Corporation, 1962.
Aaron Bernstein and Cliff Stein. Fully Dynamic Matching in Bipartite Graphs. In Automata, Languages, and Programming – 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6–10, 2015, Proceedings, Part I, pages 167–179, 2015.
Aaron Bernstein and Cliff Stein. Faster Fully Dynamic Matchings with Small Approximation Ratios. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016, pages 692–711, 2016.
Avrim Blum, John P. Dickerson, Nika Haghtalab, Ariel D. Procaccia, Tuomas Sandholm, and Ankit Sharma. Ignorance is Almost Bliss: Near-Optimal Stochastic Matching With Few Queries. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC '15, Portland, OR, USA, June 15–19, 2015, pages 325–342, 2015.
Greg Bodwin, Michael Dinitz, Merav Parter, and Virginia Vassilevska Williams. Optimal Vertex Fault Tolerant Spanners (for fixed stretch). In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7–10, 2018, pages 1884–1900, 2018.
Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams. Preserving Distances in Very Faulty Graphs. In 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10–14, 2017, Warsaw, Poland, pages 73:1–73:14, 2017.
Béla Bollobás, Don Coppersmith, and Michael Elkin. Sparse distance preservers and additive spanners. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 12–14, 2003, Baltimore, Maryland, USA, pages 414–423, 2003.
Shiri Chechik, Michael Langberg, David Peleg, and Liam Roditty. Fault-tolerant spanners for general graphs. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31–June 2, 2009, pages 435–444, 2009.
SIAM J. Discrete Math., 20(2):463–501, 2006.
Paul Erdős and László Lovász. Problems and results on 3-chromatic hypergraphs and some related questions. In Colloquia Mathematica Societatis János Bolyai 10, Infinite and Finite Sets, Keszthely (Hungary). Citeseer, 1973.
Wai Shing Fung, Ramesh Hariharan, Nicholas J. A. Harvey, and Debmalya Panigrahi. A general framework for graph sparsification. In Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6–8 June 2011, pages 71–80, 2011.
Ashish Goel, Michael Kapralov, and Sanjeev Khanna. On the Communication and Streaming Complexity of Maximum Bipartite Matching. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pages 468–485. SIAM, 2012. URL: http://dl.acm.org/citation.cfm?id=2095116.2095157.
Philip Hall. On representatives of subsets. Journal of the London Mathematical Society, 1(1):26–30, 1935.
Michael Kapralov. Better bounds for matchings in the streaming model. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6–8, 2013, pages 1679–1697, 2013. doi:10.1137/1.9781611973105.121.
David R. Karger. Random sampling in cut, flow, and network design problems. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, 23–25 May 1994, Montréal, Québec, Canada, pages 648–657, 1994.
In Integer Programming and Combinatorial Optimization – 19th International Conference, IPCO 2017, Waterloo, ON, Canada, June 26–28, 2017, Proceedings, pages 355–367, 2017.
David Peleg. As Good as It Gets: Competitive Fault Tolerance in Network Structures. In Stabilization, Safety, and Security of Distributed Systems, 11th International Symposium, SSS 2009, Lyon, France, November 3–6, 2009, Proceedings, pages 35–46, 2009.
David Peleg and Alejandro A. Schäffer. Graph spanners. Journal of Graph Theory, 13(1):99–116, 1989.
Imre Z. Ruzsa and Endre Szemerédi. Triple systems with no six points carrying three triangles. Combinatorics (Keszthely, 1976), Coll. Math. Soc. J. Bolyai, 18:939–945, 1978.
Jeanette P. Schmidt, Alan Siegel, and Aravind Srinivasan. Chernoff–Hoeffding Bounds for Applications with Limited Independence. SIAM J. Discrete Math., 8(2):223–250, 1995.
Comput. , 40 ( 4 ): 981  1025 , 2011 .
William T Tutte . The factorization of linear graphs . Journal of the London Mathematical Society , 1 ( 2 ): 107  111 , 1947 .
Yutaro Yamaguchi and Takanori Maehara . Stochastic Packing Integer Programs with Few Queries . In Proceedings of the TwentyNinth Annual ACMSIAM Symposium on Discrete Algorithms, SODA 2018 , New Orleans , LA, USA, January 7 10 , 2018 , pages 293  310 , 2018 .