An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity
LIPIcs – APPROX/RANDOM

Hao-Ting Wei, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
Wing-Kai Hon, Department of Computer Science, National Tsing Hua University, Hsinchu 30013, Taiwan
Paul Horn, Department of Mathematics, University of Denver, Denver, USA
Chung-Shou Liao, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
Kunihiko Sadakane, Department of Mathematical Informatics, The University of Tokyo, Tokyo, Japan
This study considers the soft capacitated vertex cover problem in a dynamic setting. This problem generalizes the dynamic model of the vertex cover problem, which has been intensively studied in recent years. Given a dynamically changing vertex-weighted graph G = (V, E), which allows edge insertions and edge deletions, the goal is to design a data structure that maintains an approximate minimum vertex cover while satisfying the capacity constraint of each vertex. That is, when picking a copy of a vertex v for the cover, the number of v's incident edges covered by that copy is at most a given capacity of v. We extend Bhattacharya et al.'s work [SODA'15 and ICALP'15] to obtain a deterministic primal-dual algorithm that maintains a constant-factor approximate minimum capacitated vertex cover with O(log n/ε) amortized update time, where n is the number of vertices in the graph. The algorithm can be extended to (1) a more general model in which each edge is associated with a non-uniform and unsplittable demand, and (2) the more general capacitated set cover problem.

2012 ACM Subject Classification: Theory of computation → Dynamic graph algorithms

¹ Supported by NSA Young Investigator Grant H98230-15-1-0258, and Simons Collaboration Grant #525039.
² Supported by MOST Taiwan under Grants MOST105-2628-E-007-010-MY3 and MOST105-2221-E-007-085-MY3.
Keywords and phrases: approximation algorithm; dynamic algorithm; primal-dual; vertex cover

1 Introduction
Dynamic algorithms have received fast-growing attention in the past decades, especially for some classical combinatorial optimization problems such as connectivity [1, 8, 11], vertex cover, and maximum matching [2, 3, 4, 5, 13, 14, 15, 16]. This paper focuses on the fully dynamic model of the vertex cover problem, which has been intensively studied in recent years. Given a vertex-weighted graph G = (V, E) which is constantly updated due to a sequence of edge insertions and edge deletions, the objective is to maintain a subset of vertices S ⊆ V at any given time, such that every edge is incident to at least one vertex in S and the weighted sum of S is minimized. We consider a generalization of the problem, where each vertex is associated with a given capacity. When picking a copy of a vertex v in S, the number of its incident edges that can be covered by such a copy is bounded by v's given capacity. The objective is to find a soft capacitated weighted vertex cover S with minimum weight, i.e., Σ_{v∈S} c_v x_v is minimized, together with an assignment of edges such that the number of edges assigned to a vertex v in S is at most k_v x_v, where c_v is the cost of v, k_v is the capacity of v, and x_v is the number of selected copies of v in S. Assume there is no bound on x_v. The static model of this generalization is the so-called soft capacitated vertex cover problem, introduced by Guha et al. [9].³
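To make the objective concrete, here is a minimal sketch of how the cost Σ_{v∈S} c_v x_v and the capacity constraint interact once an assignment of edges to endpoints is fixed; the function name and the toy instance are our own, not from the paper.

```python
from math import ceil

def cover_cost_and_copies(edges, cost, cap, assign):
    """Given an assignment of each edge to one of its endpoints, return
    the copy counts x_v = ceil(#assigned edges / k_v) and the total cost.

    edges  : list of (u, v) pairs
    cost   : dict v -> c_v
    cap    : dict v -> k_v
    assign : dict edge -> endpoint that covers it
    """
    load = {}
    for e in edges:
        v = assign[e]
        assert v in e, "each edge must be assigned to one of its endpoints"
        load[v] = load.get(v, 0) + 1
    copies = {v: ceil(load[v] / cap[v]) for v in load}
    total = sum(cost[v] * x for v, x in copies.items())
    return copies, total

# Toy instance: a star centred at vertex 0 with capacity 2.
edges = [(0, 1), (0, 2), (0, 3)]
cost = {0: 5, 1: 1, 2: 1, 3: 1}
cap = {0: 2, 1: 1, 2: 1, 3: 1}
assign = {e: 0 for e in edges}            # all three edges assigned to 0
copies, total = cover_cost_and_copies(edges, cost, cap, assign)
print(copies, total)                      # x_0 = ceil(3/2) = 2, cost 10
```

With soft capacities there is no bound on x_v, so a single vertex can always absorb all of its incident edges by opening enough copies.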
Prior work. For the vertex cover problem in a dynamic setting, Ivković and Lloyd [12] presented the pioneering work: their fully dynamic algorithm maintains a 2-approximate vertex cover with O((n + m)^0.7072) update time, where n is the number of vertices and m is the number of edges. Onak and Rubinfeld [14] designed a randomized data structure that maintains a large constant approximation ratio with O(log² n) amortized update time in expectation; this is the first result that achieves a constant approximation factor with polylogarithmic update time. Baswana, Gupta, and Sen [2] designed another randomized data structure which improves the approximation ratio to two, and simultaneously improved the amortized update time to O(log n). Recently, Solomon [16] gave the currently best randomized algorithm, which maintains a 2-approximate vertex cover with O(1) amortized update time.

For deterministic data structures, Onak and Rubinfeld [14] presented a data structure that maintains an O(log n)-approximate solution with O(log² n) amortized update time. Bhattacharya et al. [5] proposed the first deterministic data structure that maintains a constant ratio, precisely, a (2 + ε)-approximation to vertex cover with polylogarithmic O(log n/ε²) amortized update time. Existing work also considered the worst-case update time. Neiman and Solomon [13] provided a 2-approximation dynamic algorithm with O(√m) worst-case update time. Later, Peleg and Solomon [15] improved the worst-case update time to O(γ/ε²), where γ is the arboricity of the input graph. Very recently, Bhattacharya et al. [3] extended their hierarchical data structure to achieve the currently best worst-case update time of O(log³ n). Note that the above studies only discussed the unweighted vertex cover problem, the objective of which is to find a vertex cover with minimum cardinality.
Consider the dynamic (weighted) set cover problem. Bhattacharya et al. [6] used a hierarchical data structure similar to that reported in [5], and achieved a scheme with O(f²) approximation ratio and O(f log(n + m)/ε²) amortized update time, where f is the maximum frequency of an element. Very recently, Gupta et al. [10] improved the amortized update time to O(f²), albeit the dynamic algorithm achieves a higher approximation ratio of O(f³). They also offered another O(log n)-approximation dynamic algorithm with O(f log n) amortized update time. Bhattacharya et al. [4] simultaneously derived the same outcome with O(f³) approximation ratio and O(f²) amortized update time for the unweighted set cover problem. Table 1 presents a summary of the above results.

³ If each x_v is associated with a bound, it is called the hard capacitated vertex cover problem, introduced by Chuzhoy and Naor [7].
Our contribution. In this study we investigate the soft capacitated vertex cover problem in the dynamic setting, where there is no bound on the number of copies of each vertex that can be selected. We build on the primal-dual technique reported in [9], and present the first deterministic algorithm for this problem, which maintains an O(1)-approximate minimum capacitated (weighted) vertex cover with O(log n/ε) amortized update time. The algorithm can be extended to a more general model in which each edge is associated with a given demand, and the demand has to be assigned to an incident vertex; that is, the demand of each edge is non-uniform and unsplittable. Also, it can be extended to solve the more general capacitated set cover problem, where the input graph is a hypergraph, and each edge may connect multiple vertices.

The proposed dynamic mechanism builds on Bhattacharya et al.'s (α, β)-partition structure [5, 6], but a careful adaptation has to be made to cope with the newly introduced capacity constraint. Briefly, applying the fractional matching technique in Bhattacharya et al.'s algorithm cannot directly lead to a constant approximation ratio for the capacitated vertex cover problem. The crux of our result is the redesign of a key parameter, the weight of a vertex, in the dual model. Details of this modification are shown in the next section.

In addition, if we go back to the original vertex cover problem without the capacity constraint, the proposed algorithm is able to resolve the weighted vertex cover problem by maintaining a (2 + ε)-approximate weighted vertex cover with O(log n/ε²) amortized update time. This result achieves the same approximation ratio as the algorithm in [5], which, however, considered the unweighted model. Details of this discussion are presented at the end of Section 3.
1.1 Overview of our technique
First, we recall the mathematical model of the capacitated vertex cover problem, which was first introduced by Guha et al. [9]. In this model, y_{ev} serves as a binary variable that indicates whether an edge e is covered by a vertex v. Let N_v be the set of incident edges of v, and let k_v and c_v be the capacity and the cost of a vertex v, respectively. Let x_v be the number of selected copies of a vertex v. An integer program (IP) model of the problem can be formulated as follows:
Primal (IP):

  Min   Σ_v c_v x_v
  s.t.  y_{ev} + y_{eu} ≥ 1,                ∀e = {u, v} ∈ E
        k_v x_v − Σ_{e∈N_v} y_{ev} ≥ 0,    ∀v ∈ V
        x_v ≥ y_{ev},                      ∀v ∈ e, ∀e ∈ E
        y_{ev} ∈ {0, 1},                   ∀v ∈ e, ∀e ∈ E
        x_v ∈ ℕ,                           ∀v ∈ V

Dual (LP):

  Max   Σ_{e∈E} π_e
  s.t.  k_v q_v + Σ_{e∈N_v} l_{ev} ≤ c_v,  ∀v ∈ V
        q_v + l_{ev} ≥ π_e,                ∀v ∈ e, ∀e ∈ E
        q_v ≥ 0,                           ∀v ∈ V
        l_{ev} ≥ 0,                        ∀v ∈ e, ∀e ∈ E
        π_e ≥ 0,                           ∀e ∈ E
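As a sanity check on the two programs, the following sketch brute-forces the primal IP on a tiny path graph and compares it against a hand-picked feasible dual solution; by weak duality, the dual value can never exceed the primal optimum. The instance and variable names are ours, not from the paper.

```python
from itertools import product

# Tiny instance: a path 0-1-2 with unit costs and unit capacities.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
c = {v: 1 for v in V}
k = {v: 1 for v in V}
pairs = [(e, v) for e in E for v in e]

def feasible(x, y):
    """Check the IP constraints: coverage, capacity, and copy-opening."""
    for (u, v) in E:
        if y[((u, v), u)] + y[((u, v), v)] < 1:     # edge covered
            return False
    for v in V:
        inc = [e for e in E if v in e]
        if k[v] * x[v] < sum(y[(e, v)] for e in inc):  # capacity
            return False
        if any(x[v] < y[(e, v)] for e in inc):         # x_v >= y_ev
            return False
    return True

# Brute-force the primal IP over small ranges.
best = None
for xs in product(range(3), repeat=len(V)):
    x = dict(zip(V, xs))
    for ys in product((0, 1), repeat=len(pairs)):
        y = dict(zip(pairs, ys))
        if feasible(x, y):
            val = sum(c[v] * x[v] for v in V)
            best = val if best is None else min(best, val)

# A feasible dual solution: q_v = 1, l_ev = 0, pi_e = 1.
q, l, pi = {v: 1 for v in V}, {p: 0 for p in pairs}, {e: 1 for e in E}
for v in V:                          # k_v q_v + sum l_ev <= c_v
    assert k[v] * q[v] + sum(l[(e, v)] for e in E if v in e) <= c[v]
for (e, v) in pairs:                 # q_v + l_ev >= pi_e
    assert q[v] + l[(e, v)] >= pi[e]

dual_val = sum(pi.values())
assert dual_val <= best              # weak duality
print(best, dual_val)                # both equal 2 on this instance
```

Here the dual value actually matches the primal optimum (2), which is the best one can hope for from a dual lower bound.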
If we relax the above primal form, i.e., drop the integrality constraints, its dual yields a maximization problem. The linear program for the dual is formulated as shown above (also see [9]). One may view this as a variant of the packing problem, where we want to pack a value π_e for each edge e, so that the sum of the packed values is maximized. The packing of e is limited by the sum of q_v and l_{ev}, where q_v is the global ability of a vertex v emitted to v's incident edges, and l_{ev} is the local ability of v distributed to its incident edge e.
In this study, we incorporate the above IP model and its LP relaxation for capacitated vertex cover into the dynamic mechanism proposed by Bhattacharya et al. [5, 6]. They devised the weight of a vertex v (in the dual model), denoted by W_v, to obtain a feasible solution to the dual problem. They also allowed a flexible range for W_v to quickly adjust the solution for dynamic updates while preserving its approximation quality. Due to the additional capacity constraint in our problem, a new weight function is obviously required.

Technical challenges. There are two major differences between our algorithm and Bhattacharya et al.'s [5, 6]. First, the capacity constraint in the primal problem leads to the two variables q_v and l_{ev} in the dual problem, whose values we have to balance when approaching c_v to maximize the dual objective. By contrast, the previous work considered one dual variable l_{ev} without the restriction on the coverage of a vertex. We thus redesign W_v, the weight of a vertex v, to specifically account for the capacitated scenario. Yet, even with the new definition of W_v, there is still a second challenge: how to approximate the solution within a constant factor in the dynamic environment. In order to achieve O(log n) amortized update time, Bhattacharya et al.'s fractional matching approach assigns the value of all v's incident edges to v, which, however, may result in a non-constant factor h hidden in the approximation ratio, where h is the largest number of copies selected in the cover. We observe that we cannot remove h from the approximation guarantee based on the (α, β)-partition structure if we just select the minimum value of α, as is done in [5, 6]. The key insight is that we show a bound on the value of α, which restricts the updates of the dynamic mechanism. With the help of this insight, we are able to revise the setting of α to derive a constant approximation ratio, while maintaining the O(log n) update time.
2 Level Scheme and its Key Property
The core of Bhattacharya et al.’s (α, β)partition structure [5, 6] is a level scheme [14] that is
used to maintain a feasible solution in their dual problem. In this section, we demonstrate
(in a different way from the original papers) how this scheme can be applied to our dual
problem, and describe the key property that the scheme guarantees.
A level scheme is an assignment ℓ : V → {0, 1, . . . , L} such that every vertex v ∈ V has a level ℓ(v). Let c_min and c_max denote the minimum and maximum costs of a vertex, respectively. For our case, we set L = ⌈log_β(nμα/c_min)⌉ for some α, β > 1 and μ > c_max. Based on ℓ, each edge (u, v) is also associated with a level ℓ(u, v), where ℓ(u, v) = max{ℓ(u), ℓ(v)}. An edge is assigned to the higher-level endpoint, and ties are broken arbitrarily if both endpoints have the same level.
Each edge (u, v) has a weight w(u, v) according to its level: w(u, v) = μβ^{−ℓ(u,v)}. Each vertex v also has a weight W_v, which is defined based on the incident edges of v and their corresponding levels. Before giving details on W_v, we first define some notation. Let N_v = {u | (u, v) ∈ E} be the set of vertices adjacent to v (i.e., the neighbors of v). Let N_v(i) denote the set of level-i neighbors of v, and N_v(i, j) denote the set of v's neighbors whose levels are in the range [i, j]. That is, N_v(i) = {u | (u, v) ∈ E ∧ ℓ(u) = i} and N_v(i, j) = {u | (u, v) ∈ E ∧ ℓ(u) ∈ [i, j]}. The degree of a vertex v is denoted by D_v = |N_v|. Similarly, we define D_v(i) = |N_v(i)| and D_v(i, j) = |N_v(i, j)|. Finally, we use δ(v) to denote the set of edges assigned to a vertex v. Now, the weight W_v of a vertex v is defined as follows:

Case 1 (D_v(0, ℓ(v)) > k_v):   W_v = k_v μβ^{−ℓ(v)} + Σ_{i>ℓ(v)} min{k_v, D_v(i)} μβ^{−i}

Case 2 (D_v(0, ℓ(v)) ≤ k_v):   W_v = D_v(0, ℓ(v)) μβ^{−ℓ(v)} + Σ_{i>ℓ(v)} min{k_v, D_v(i)} μβ^{−i}
Due to the capacity constraint, the definition of W_v distinguishes whether the number of neighbors of v at levels 0 through ℓ(v) exceeds the capacity of v. Note that, at each level, the edges assigned to v or incident to v can contribute at most k_v w(u, v) to W_v. Briefly, the weight of a vertex has two components: one that depends on the incident edges with level ℓ(v), and the other that depends on the remaining incident edges. For convenience, we call the former component Internal_v and the latter component External_v. Moreover, we have:

  External_v ≤ k_v Σ_{i>ℓ(v)} μβ^{−i} ≤ (1/(β − 1)) k_v μβ^{−ℓ(v)}.
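The two cases of the definition can be written down directly; a small sketch follows, with function and variable names of our own choosing, using min(D_v(0, ℓ(v)), k_v) to unify the two cases.

```python
def vertex_weight(v, level, adj, k, mu, beta):
    """Weight W_v from the two-case definition in the text.

    level : dict u -> level of u;  adj : dict v -> list of neighbours.
    Neighbours at levels up to level(v) are charged at rate mu*beta^(-level(v)),
    higher levels at mu*beta^(-i); each level's count is capped by k_v.
    """
    lv = level[v]
    D = {}                                          # D[i] = #level-i neighbours
    for u in adj[v]:
        D[level[u]] = D.get(level[u], 0) + 1
    low = sum(D.get(i, 0) for i in range(lv + 1))   # D_v(0, level(v))
    internal = min(low, k[v]) * mu * beta ** (-lv)  # Internal_v (both cases)
    external = sum(min(k[v], d) * mu * beta ** (-i) # External_v
                   for i, d in D.items() if i > lv)
    return internal + external

# Example: v has 3 neighbours at levels <= its own and 2 one level above.
adj = {0: [1, 2, 3, 4, 5]}
level = {0: 1, 1: 1, 2: 1, 3: 0, 4: 2, 5: 2}
k = {0: 2}
mu, beta = 16.0, 2.0
print(vertex_weight(0, level, adj, k, mu, beta))   # 16 (internal) + 8 (external)
```

In the example D_v(0, ℓ(v)) = 3 > k_v = 2, so Case 1 applies and the internal part is capped at k_v μβ^{−ℓ(v)} = 16; the external part (8) indeed respects the bound (1/(β − 1)) k_v μβ^{−ℓ(v)} = 16.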
In general, an arbitrary level scheme cannot be used to solve our problem. What we need is a valid level scheme, which is defined as follows.

I Definition 1. A level scheme is valid if W_v ≤ c_v for every vertex v.
I Lemma 2. Let V0 denote the set of level-0 vertices in a valid level scheme. Then, V \ V0 forms a vertex cover of G.
Proof. Consider any edge (u, v) ∈ E. We claim that at least one of its endpoints must be in V \ V0. Suppose that the claim is false; then ℓ(u) = ℓ(v) = 0 and w(u, v) = μ > c_max. Since w(u, v) appears in Internal_v, we have W_v ≥ w(u, v). As a result, c_v ≥ W_v ≥ μ > c_max, which leads to a contradiction. The claim thus follows, and so does the lemma. J
The above lemma implies that no edge is assigned to any level-0 vertex. In our mechanism, we will maintain a valid level scheme, based on which each vertex in V \ V0 picks enough copies to cover all the edges assigned to it; this forms a valid capacitated vertex cover.
Next, we define the notion of tightness, which measures how well a valid level scheme performs.

I Definition 3. A valid level scheme with an associated edge assignment is ε-tight if for every vertex v with |δ(v)| > 0, W_v ∈ (c_v/ε, c_v].
I Lemma 4. Given an ε-tight valid level scheme, we can obtain an ε(2(β/(β − 1)) + 1)-approximate solution to the weighted minimum capacitated vertex cover (WMCVC) problem.
Proof. First, we fix an arbitrary edge assignment that is consistent with the given valid level scheme. For each vertex v with |δ(v)| > 0, we pick ⌈|δ(v)|/k_v⌉ copies to cover all the |δ(v)| edges assigned to it. To analyze the total cost of this capacitated vertex cover, we relate it to the value Σ_e π_e of a certain feasible solution of the dual problem, whose corresponding values of q_v and l_{ev} are as follows:

For every vertex v:
  if ⌈|δ(v)|/k_v⌉ > 1: q_v = μβ^{−ℓ(v)}, and l_{ev} = 0;
  if ⌈|δ(v)|/k_v⌉ ≤ 1: q_v = μ Σ_{i : D_v(i) > k_v} β^{−i}; l_{ev} = 0 if D_v(ℓ(e)) > k_v, and l_{ev} = μβ^{−ℓ(e)} otherwise.
For every edge e: π_e = μβ^{−ℓ(e)}.

It is easy to verify that the above choices of q_v, l_{ev}, and π_e give a feasible solution to the dual problem.
For the total cost of our solution, we separate the analysis into two parts, based on the multiplicity of the vertex:

Case 1 (⌈|δ(v)|/k_v⌉ > 1): In this case, the external component of W_v is at most 1/(β − 1) of the internal component, so W_v ≤ (β/(β − 1)) k_v q_v. Then, the cost of all copies of v is:

  ⌈|δ(v)|/k_v⌉ · c_v ≤ ⌈|δ(v)|/k_v⌉ · ε · W_v ≤ 2 · (|δ(v)|/k_v) · ε · (β/(β − 1)) k_v q_v = 2ε(β/(β − 1)) · Σ_{e∈δ(v)} π_e.

Case 2 (⌈|δ(v)|/k_v⌉ = 1): In this case, we pick one copy of vertex v, whose cost is:

  c_v ≤ ε · W_v ≤ ε · Σ_{e∼v} π_e = ε · ( Σ_{e∈δ(v)} π_e + Σ_{e∉δ(v), e∼v} π_e ),

where e ∼ v denotes that e is an edge incident to v.
In summary, the total cost is bounded by

  Σ_v ( max{ε, 2ε(β/(β − 1))} Σ_{e∈δ(v)} π_e + ε Σ_{e∉δ(v), e∼v} π_e )
    = Σ_v ( 2ε(β/(β − 1)) Σ_{e∈δ(v)} π_e + ε Σ_{e∉δ(v), e∼v} π_e )
    = ε(2(β/(β − 1)) + 1) Σ_e π_e
    ≤ ε(2(β/(β − 1)) + 1) · OPT,
where OPT denotes the value of an optimal solution of the dual problem, which is also a lower bound on the cost of any weighted capacitated vertex cover. J
The next section discusses how to dynamically maintain an ε-tight level scheme, for some constant factor ε and with amortized O(log n/ε) update time. Before that, as a warm-up, we show a greedy approach that obtains a (β + 1)-tight level scheme for the static problem. First, we have the following definition.
I Definition 5. A valid level scheme λ is improvable if some vertex can drop its level so that the resulting level scheme λ′ is still valid; otherwise, we say λ is non-improvable.
I Lemma 6. If a valid level scheme λ is non-improvable, then λ is (β + 1)-tight.
If we set the level of every vertex to L initially, it is easy to check that, by our choice of L = ⌈log_β(nμα/c_min)⌉, such a level scheme is valid. Next, we examine each vertex one by one, and drop its level as much as possible while the scheme remains valid. In the end, we obtain a non-improvable scheme, so that by the above lemma, the scheme is (β + 1)-tight. This implies a (β + 1)(2(β/(β − 1)) + 1)-approximate solution for the WMCVC problem.
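The warm-up greedy can be sketched as follows. This is a naive implementation that recomputes W_v from scratch (so it makes no claim about running time); the function name is ours, and after each tentative drop it rechecks validity of both the dropped vertex and its neighbors, since only their weights can change.

```python
import math

def greedy_levels(V, E, cost, cap, mu, beta, alpha):
    """Start every vertex at the top level L and greedily drop levels
    while the scheme stays valid (W_v <= c_v for all v)."""
    L = math.ceil(math.log(len(V) * mu * alpha / min(cost.values()), beta))
    level = {v: L for v in V}
    adj = {v: [] for v in V}
    for u, v in E:
        adj[u].append(v)
        adj[v].append(u)

    def W(v):
        lv, D = level[v], {}
        for u in adj[v]:
            D[level[u]] = D.get(level[u], 0) + 1
        low = sum(d for i, d in D.items() if i <= lv)
        return (min(low, cap[v]) * mu * beta ** (-lv)
                + sum(min(cap[v], d) * mu * beta ** (-i)
                      for i, d in D.items() if i > lv))

    changed = True
    while changed:                       # repeat until non-improvable
        changed = False
        for v in V:
            while level[v] > 0:
                level[v] -= 1            # tentatively drop one level
                if all(W(u) <= cost[u] for u in [v] + adj[v]):
                    changed = True
                else:
                    level[v] += 1        # undo: dropping breaks validity
                    break
    return level

# Example: a path 0-1-2 with unit costs and unit capacities.
levels = greedy_levels([0, 1, 2], [(0, 1), (1, 2)],
                       cost={0: 1, 1: 1, 2: 1}, cap={0: 1, 1: 1, 2: 1},
                       mu=2.0, beta=2.0, alpha=2.0)
print(levels)   # vertex 1 ends above level 0 and covers both edges
```

On this instance the endpoints sink to level 0 while the middle vertex stops at a positive level, so V \ V0 = {1}, which indeed covers both edges.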
3 Maintaining an α(β + 1)-tight Level Scheme Dynamically
In this section, we present our O(1)-approximation algorithm for the WMCVC problem, with amortized O(log n) update time for each edge insertion and edge deletion. We first state an invariant that is maintained throughout by our algorithm, and show how this is done. Next, we analyze the time required to maintain the invariant with the potential method, and show that our proposed method can be updated efficiently as desired. To obtain an O(log n) amortized update time, we relax the flexible range of the weight W_v of a vertex by multiplying by a constant α. Let c_v* be c_v/(α(β + 1)). The invariant that we maintain is as follows.

I Invariant 7. (1) For every vertex v ∈ V \ V0, it holds that c_v* ≤ W_v ≤ c_v, and (2) for every vertex v ∈ V0, it holds that W_v ≤ c_v.

By maintaining the above invariant, we automatically obtain an α(β + 1)-tight valid scheme. As mentioned, we will choose a value for α in order to remove h from the approximation ratio. In particular, we will set α = (2β + 1)/β + 2ε, where 0 < ε < 1 balances the update time, and β = 2.43 to minimize the approximation ratio, so that we achieve the following theorem.
I Theorem 8. There exists a dynamic level scheme λ which achieves a constant approximation ratio (≈ 36) for the WMCVC problem with O(log n/ε) amortized update time.
The remainder of this section is devoted to proving Theorem 8.
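The constant in Theorem 8 can be checked numerically: an α(β + 1)-tight scheme combined with the factor from Lemma 4 gives a ratio of α(β + 1)(2β/(β − 1) + 1), and with ε → 0 we have α = (2β + 1)/β. A short sketch (function name ours):

```python
# Approximation ratio as a function of beta: alpha(beta+1)-tightness
# times the Lemma 4 factor, with alpha = (2*beta + 1)/beta (epsilon -> 0).
def ratio(beta):
    alpha = (2 * beta + 1) / beta
    tight = alpha * (beta + 1)                  # scheme is alpha(beta+1)-tight
    return tight * (2 * beta / (beta - 1) + 1)  # factor from Lemma 4

print(round(ratio(2.43), 2))   # ~36.38, the constant (~36) of Theorem 8

# beta = 2.43 is (approximately) the minimiser of this expression:
assert all(ratio(2.43) <= ratio(b) + 1e-6 for b in (2.0, 2.2, 2.8, 3.0))
```

This also explains the choice β = 2.43: nearby values of β give strictly larger ratios (e.g. 37.5 at β = 2).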
3.1 The algorithm: Handling insertion or deletion of an edge
We now show how to maintain the invariant under edge insertions and deletions. A vertex is called dirty if it violates Invariant 7, and clean otherwise. Initially, the graph is empty, so every vertex is clean and is at level zero. Assume that at the time instant just prior to the t-th update, all vertices are clean. When the t-th update takes place, which either inserts or deletes an edge e = (u, v), we need to adjust the weights of u and v accordingly. Due to this adjustment, the vertices u, or v, or both may become dirty. To recover from this, we call the procedure Fix. The pseudocode of the update algorithm (Algorithm 1) and the procedure Fix is shown below.
Algorithm 1
1: if an edge e = (u, v) has been inserted then
2:   Set ℓ(e) = max{ℓ(u), ℓ(v)} and set w(u, v) = μβ^{−ℓ(e)}
3:   Update W_u and W_v
4: else if an edge e = (u, v) has been deleted then
5:   Update W_u and W_v
6: end if
7: Run procedure Fix

procedure Fix:
1: while there exists a dirty vertex v do
2:   if W_v > c_v then
3:     Increment the level of v by setting ℓ(v) ← ℓ(v) + 1
4:     Update W_v, and W_u for all affected neighbors u of v
5:   else if W_v < c_v* and ℓ(v) > 0 then
6:     Decrement the level of v by setting ℓ(v) ← ℓ(v) − 1
7:     Update W_v, and W_u for all affected neighbors u of v
8:   end if
9: end while
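A compact, unoptimized sketch of Algorithm 1 and procedure Fix follows; the class and method names are ours, and weights are recomputed from scratch, so this illustrates the logic of the invariant rather than the O(log n) bookkeeping.

```python
class DynamicCapVC:
    """Sketch of Algorithm 1 / procedure Fix. Maintains Invariant 7:
    c_v* <= W_v <= c_v for every vertex above level 0, where
    c_v* = c_v / (alpha * (beta + 1))."""

    def __init__(self, cost, cap, mu, beta, alpha):
        self.c, self.k = dict(cost), dict(cap)
        self.mu, self.beta, self.alpha = mu, beta, alpha
        self.level = {v: 0 for v in cost}
        self.adj = {v: set() for v in cost}

    def W(self, v):
        """Weight of v per the two-case definition in Section 2."""
        lv, D = self.level[v], {}
        for u in self.adj[v]:
            D[self.level[u]] = D.get(self.level[u], 0) + 1
        low = sum(d for i, d in D.items() if i <= lv)
        return (min(low, self.k[v]) * self.mu * self.beta ** (-lv)
                + sum(min(self.k[v], d) * self.mu * self.beta ** (-i)
                      for i, d in D.items() if i > lv))

    def c_star(self, v):
        return self.c[v] / (self.alpha * (self.beta + 1))

    def dirty(self, v):
        w = self.W(v)
        return w > self.c[v] or (self.level[v] > 0 and w < self.c_star(v))

    def fix(self):
        while True:
            bad = [v for v in self.adj if self.dirty(v)]
            if not bad:
                return
            v = bad[0]   # level up if too heavy, level down if too light
            self.level[v] += 1 if self.W(v) > self.c[v] else -1

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        self.fix()

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        self.fix()

    def cover(self):
        return {v for v in self.adj if self.level[v] > 0}

# Usage on a path 0-1-2; mu must exceed the largest cost.
d = DynamicCapVC(cost={0: 1, 1: 1, 2: 1}, cap={0: 1, 1: 1, 2: 1},
                 mu=2.0, beta=2.0, alpha=3.0)
d.insert(0, 1)
d.insert(1, 2)
assert all(d.level[u] > 0 or d.level[v] > 0 for (u, v) in [(0, 1), (1, 2)])
d.delete(0, 1)
print(d.cover())   # after the deletion, vertex 1 alone suffices
```

Note how the deletion makes vertex 0 too light (W_0 = 0 < c_0*), so Fix drops it back to level 0 and the cover shrinks accordingly.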
Algorithm 1 ensures that Invariant 7 is maintained after each update, so that the dynamic scheme is α(β + 1)-tight as desired. To complete the discussion, as well as the proof of Theorem 8, it remains to show that each update can be performed efficiently, in amortized O(log n) time.
3.2 Time complexity
Each update involves two steps, namely the adjustment of weights of the endpoints, and the
running of procedure Fix. We now give the time complexity analysis, where the main idea
is to prove the following two facts: (Fact 1) the amortized cost of the adjustment step is
O(log n), and (Fact 2) the amortized cost of the procedure Fix is zero, irrespective of the
number of vertices or edges that are affected during this step. Once the above two facts are
proven, the time complexity analysis follows.
We use the standard potential method in our amortized analysis. Imagine that we have a
bank account B. Initially, the graph is empty, and the bank account B has no money. For
each adjustment step during an edge insertion or deletion, we deposit some money into the
bank account B; after that, we use the money in B to pay for the cost of the procedure Fix.
Some proofs are omitted in the following due to the space limit.
Following the definition of [6], we say a vertex v ∈ V is active if its degree in G is nonzero, and passive otherwise. Now, the value of B is set by the following formula:

  B = (1/ε) · ( Σ_{e∈E} φ(e) + Σ_{v∈V} ψ(v) ),

where φ(e) is a potential associated with each edge e that decreases as the level ℓ(e) increases, and

  ψ(v) = (β^{ℓ(v)+1}/(μ(β − 1))) · max{0, αc_v* − W_v} if v is active, and ψ(v) = 0 otherwise.
We now switch our attention to Fact 2. Observe that the procedure Fix performs a series of level-up and level-down events. For each such event, the level of a specific vertex v changes, which then incurs a change in its weight, and changes in the weights of some of the incident edges and their endpoints. Let t0 denote the moment before a level-up or a level-down event, and t1 denote the moment after the weights of the edges and vertices are updated due to this event. Let Count denote the number of times an edge in the graph G is updated (for simplicity, we assume that in one edge update, the weight and the assignment of the edge may be updated, and so may the weights of its endpoints, where all of these can be done in O(1) time).

For ease of notation, in the following, a superscript t on a variable denotes the variable at moment t. For instance, W_v^{t0} stands for the weight W_v of v at moment t0. Also, we use Δx to denote the quantity x^{t0} − x^{t1}, so that

  ΔCount = |Count^{t0} − Count^{t1}| = Count^{t1} − Count^{t0}

represents the number of incident edges whose weights are changed between t0 and t1.
Briefly speaking, based on the level scheme and the potential function B, we can show:

For each level-up event, each of the affected edges e has its φ(e) value dropped, so that an ε fraction can pay for the weight updates of itself and its endpoints, while the remaining fraction can be converted into an increase in the ψ(v) value.

For each level-down event, the reverse happens: the vertex v has its ψ(v) value dropped, so that an ε fraction can pay for the weight updates of the affected edges and their endpoints, while the remaining fraction can be converted into an increase in the φ(e) values of the affected edges. The value of α controls the frequency of the level-down events, while trading this off with the approximation guarantee.

Sections 3.2.1 and 3.2.2 present the details of the amortized analysis of these two types of events, respectively. Finally, note that no money (potential) is deposited into the bank B after the adjustment step, so the analysis implies that the procedure Fix must stop (as the money in the bank is finite).
3.2.1 Amortized cost of level up
Let v be the vertex that undergoes the level-up event, and let i = ℓ(v) denote its level at moment t0. By our notation, ΔB = B^{t0} − B^{t1} denotes the potential drop in the bank B from moment t0 to moment t1. To show that the amortized cost of a level-up event is at most zero, it is equivalent to show that ΔB ≥ ΔCount.

Recall that after a level-up event, only the value of ψ(v), and the values of φ(e) and ψ(u) for edges e = (u, v), may be affected. In the following, we examine carefully the changes in these values, and derive the desired bound for ΔB. First, we have the following simple lemma.

I Lemma 10. ΔCount ≤ D_v^{t0}(0, i).

Proof. When v changes from level i to i + 1, only those incident edges at level i (i.e., the edges to neighbors at levels at most i) are affected. J
The next three lemmas examine, respectively, the changes Δψ(v), Δφ(e), and Δψ(u).

I Lemma 11. Δψ(v) = 0.

I Lemma 12. For every edge e incident to v, Δφ(e) ≥ 0; in particular, Δφ(e) = 0 for every edge e with ℓ(e) > i.

I Lemma 13. For every vertex u ∈ N_v^{t0}, Δψ(u) ≥ −β/(β − 1).
Based on the above lemmas, we derive the following and finish the proof for the case of level up:

  ΔB = (1/ε) · ( Δψ(v) + Σ_{e∈E} Δφ(e) + Σ_{u∈N_v^{t0}} Δψ(u) ) ≥ D_v^{t0}(0, i) ≥ ΔCount.
3.2.2 Amortized cost of level down
We now show that the amortized cost of level down for a vertex v is at most zero. Similar to the case of level up, we examine Δψ(v), Δφ(e), and Δψ(u), and show that ΔB ≥ ΔCount. Before starting the proof of the level-down case, recall that we mentioned a parameter h at the end of the Introduction, where h is the largest number of selected copies over all the vertices. That is, h = max_v {⌈|δ^{t0}(v)|/k_v⌉}. Also, we let h′ = max_v {⌈D_v^{t0}(0, ℓ(v))/k_v⌉}, where h′ ≥ h, and set ξ ≥ 0 such that h′ = h + ξ.
I Lemma 14. ΔCount ≤ D_v^{t0}(0, i) < h′ · β^i c_v*/μ.
Now, we are ready to examine Δψ(v), Δφ(e), and Δψ(u), through the following lemmas.

I Lemma 15. For every vertex u ∈ N_v^{t0}, Δψ(u) ≥ −1/(β − 1).

Next, we partition N_v^{t0} into three subsets X, Y1, and Y2, i.e., N_v^{t0} = X ∪ Y1 ∪ Y2, where

  X = N_v^{t0}(0, i − 1),   Y1 = N_v^{t0}(i),   Y2 = N_v^{t0}(i + 1, L).
I Lemma 16. For every edge (u, v) incident to v, Δφ(u, v) < 0 if u ∈ X, and Δφ(u, v) = 0 if u ∈ Y1 ∪ Y2.
Next, write W_v^{t0} = x + y1 + y2, where x, y1, and y2 on the right-hand side correspond to the weights generated by the subsets X, Y1, and Y2, respectively. We then get the following lemmas:

I Lemma 17. Σ_{u∈N_v^{t0}} Δφ(u, v) ≤ −(β/(β − 1)) · (β^i x/μ).

I Lemma 18. Δψ(v) = (αc_v* − x − y1 − y2) · β^{i+1}/(μ(β − 1)) − max{0, αc_v* − βx − y1 − y2} · β^i/(μ(β − 1)).
Finally, depending upon the value of αc_v* − βx − y1 − y2, we consider two possible scenarios, and show that in each case, ΔB ≥ h′ · β^i c_v*/μ. This in turn implies ΔB ≥ ΔCount as desired. Thus, the level scheme remains α(β + 1)-tight after a level-down event. However, the value of h is bounded only by n, and h appears inside α, so the approximation ratio of the scheme may become n in the worst case. Fortunately, with the help of the following lemma, we can choose α carefully, which in turn improves the approximation ratio from n to O(1).

I Lemma 19. Suppose that we set α ≥ β/(β − 1). At the time a level-down event occurs at v at moment t0, exactly one copy of v is selected. That is, ⌈|δ^{t0}(v)|/k_v⌉ = 1.
Proof. Assume to the contrary that v could decrease its level even though more than one copy of v is selected. Since v undergoes a level-down event, its weight W_v must have decreased; this can happen only in one of the following cases:

Case 1: An incident edge whose level is in the range [0, ℓ(v)] is deleted. In this case, since more than one copy of v is selected, W_v is unchanged. Thus, this case cannot happen.

Case 2: An incident edge whose level is in the range [ℓ(v) + 1, L] is deleted. In this case, the weight W_v^{t0} at moment t0 is less than c_v*. On the other hand, at the moment t0′ when v attained its current level ℓ(v) (from level ℓ(v) − 1), its weight W_v^{t0′} was at least c_v before the level up, and became at least c_v/(β + 1) after the level up. (The reason is from the proof of Lemma 6: the weight change between consecutive levels is at most a factor of β + 1.) This implies that:

  c_v* > W_v^{t0} ≥ k_v μβ^{−ℓ(v)}                         (since more than one copy of v is selected)
  (β/(β − 1)) k_v μβ^{−ℓ(v)} ≥ W_v^{t0′} ≥ c_v/(β + 1)     (the left bound is the maximum possible W_v value)

Combining, we would have

  c_v/(α(β + 1)) = c_v* > k_v μβ^{−ℓ(v)} ≥ c_v(β − 1)/(β(β + 1)),

so that α < β/(β − 1), a contradiction. Thus, the lemma follows. J
The above lemma states that if we choose α ≥ β/(β − 1), then a level-down event at v occurs only when ⌈|δ^{t0}(v)|/k_v⌉ is one. Then, Case 2 inside the proof of Lemma 14 does not occur, so we can strengthen Lemma 14 to get ΔCount ≤ D_v^{t0}(0, i) < β^i c_v*/μ. Similarly, the proof of Lemma 17 can be revised, so that we can strengthen Lemma 17 by replacing h with one. On the other hand, we need α ≥ (2β + 1)/β + 2ε to satisfy the amortized cost analysis. Consequently, we set α = (2β + 1)/β + 2ε, and we can achieve the desired bound ΔB ≥ β^i c_v*/μ ≥ ΔCount. The proof for the level-down case is complete.
3.3 Summary and extensions
With the appropriate setting of α = (2β + 1)/β + 2ε, where 0 < ε < 1, we get an α(β + 1)-tight level scheme. Then, by setting β = 2.43, Theorem 8 is proven, and we get an approximate solution of ratio close to 36 with O((log n)/ε) amortized update time. Note that if we focus on the non-capacitated case, that is, each vertex is weighted and has unlimited capacity, the problem becomes the weighted vertex cover problem. Our dynamic scheme can easily be adapted to maintain an approximate solution, based on the following changes. First, we define the weight of a vertex as W_v = Σ_{e∼v} μβ^{−ℓ(e)}. Next, we let α = 1 + 3ε and β = 1 + ε, and revise φ(e) as φ(e) = (1 + ε)(L − ℓ(e)). After these changes, we can go through a similar analysis, and obtain a (2 + ε)-approximate weighted vertex cover with O(log n/ε²) amortized update time.
Finally, we consider two natural extensions of the capacitated vertex cover problem, and show how to adapt the proposed level scheme to handle them.

Capacitated set cover. First, we consider the capacitated set cover problem, which is equivalent to the capacitated vertex cover problem in hypergraphs. A hypergraph G = (V, E) has |V| = n vertices and |E| = m hyperedges, where each hyperedge is incident to a set of vertices. Suppose each hyperedge is incident to at most f vertices. In this case, we can obtain a level scheme that maintains an O(f²)-approximate solution to the dynamic set cover problem with O(f log(m + n)/ε) amortized update time.
Capacitated vertex cover with non-uniform unsplittable demand. Next, we consider a more general case of the capacitated vertex cover problem in which each edge has an unsplittable demand. That is, the demand of each edge must be covered by exactly one of its endpoints. In this case, we found it difficult to adapt the proposed level scheme to derive similar results as before. Briefly speaking, one may redesign the weight W_v of a vertex to keep the approximation ratio, but then it becomes hard to cope with edge insertions and deletions while maintaining the O(log n) amortized update time. Nevertheless, we present two simple algorithms, one with O(log² d_max) approximation ratio and O(log k_max/ε) amortized update time, where d_max = max_e{d_e} and k_max = max_v{k_v}, and another with O(1) approximation ratio and O(d_max log k_max/ε) amortized update time, by reusing our proposed scheme for capacitated vertex cover.
4 Concluding Remarks
We have extended dynamic vertex cover to the more general WMCVC problem, and developed a constant-factor dynamic approximation algorithm with O(log n/ε) amortized update time, where n is the number of vertices. Note that, with minor adaptations, the greedy algorithm reported in Gupta et al.'s very recent paper [10] is also able to work for the dynamic capacitated vertex cover problem, but only to obtain a logarithmic-factor approximation with O(log n) amortized update time. Moreover, our proposed algorithm can also be extended to solve the soft capacitated set cover problem, and the capacitated vertex cover problem with non-uniform unsplittable edge demand.
We conclude this paper with some open problems. First, recall that in the static model,
the soft capacitated vertex cover problem [9] can be approximated within a factor of two
and three for the uniform and non-uniform edge demand cases, respectively. Here, we
have shown that it is possible to design a dynamic scheme with an O(1) approximation ratio
and polylogarithmic update time for the uniform edge demand case. Thus, designing an
O(1)-approximation algorithm with O(log k_max), or polylogarithmic, update time for
the non-uniform edge demand case seems promising.
Recall that in the uncapacitated case, both of [4, 10] achieved a large constant approximation
ratio (≈ 1000) with O(1) amortized update time. However, when applying their approaches
directly, it seems hard to remove the coefficient h, so the approximation ratio may be
up to O(n). On the other hand, very recently, Bhattacharya et al. [3] derived a scheme with
polylogarithmic worst-case update time and a (2 + ε) approximation ratio. They created six
states for dynamic updates. Nevertheless, we cannot extend their approach directly, since
some of these states do not satisfy the capacity constraint. It would be of significant interest
to adapt the above approaches to vertex cover with soft capacities.
Another open problem is to consider vertex cover with hard capacities, for which most of
the previous studies in the literature used techniques, such as rounding and patching, that
differ from the primal-dual approach in this paper. It would be worthwhile to explore the
dynamic model with hard capacity constraints.
References
[1] A. Andersson and M. Thorup. Dynamic ordered sets with exponential search trees. Journal of the ACM (JACM), Vol. 54, Issue 3, No. 13, 2007.
[2] S. Baswana, M. Gupta, and S. Sen. Fully dynamic maximal matching in O(log n) update time. SIAM J. Comput., 44 (2015), no. 1, pp. 88-113.
[3] S. Bhattacharya, D. Chakrabarty, and M. Henzinger. Fully dynamic approximate maximum matching and minimum vertex cover in O(log^3 n) worst case update time. In Proc. the 28th ACM-SIAM Symposium on Discrete Algorithms (SODA), Barcelona, Spain, 2017, pp. 470-489.
[4] S. Bhattacharya, D. Chakrabarty, and M. Henzinger. Deterministic fully dynamic approximate vertex cover and fractional matching in O(1) amortized update time. In Proc. the 19th Conference on Integer Programming and Combinatorial Optimization (IPCO), Waterloo, Canada, 2017, pp. 86-98.
[5] S. Bhattacharya, M. Henzinger, and G. F. Italiano. Deterministic fully dynamic data structures for vertex cover and matching. In Proc. the 26th ACM-SIAM Symposium on Discrete Algorithms (SODA), Philadelphia, USA, 2015, pp. 785-804.
[6] S. Bhattacharya, M. Henzinger, and G. F. Italiano. Design of dynamic algorithms via primal-dual method. In Proc. the 42nd International Colloquium on Automata, Languages, and Programming (ICALP), Heidelberg, Germany, 2015, pp. 206-218.
[7] J. Chuzhoy and J. Naor. Covering problems with hard capacities. In Proc. the 43rd IEEE Symposium on Foundations of Computer Science (FOCS), 2002, pp. 481-489.
[8] C. Demetrescu and G. F. Italiano. A new approach to dynamic all pairs shortest paths. Journal of the ACM (JACM), Vol. 51, Issue 6, 2004, pp. 968-992.
[9] S. Guha, R. Hassin, S. Khuller, and E. Or. Capacitated vertex covering. Journal of Algorithms, Vol. 48, Issue 1, August 2003, pp. 257-270.
[10] A. Gupta, R. Krishnaswamy, A. Kumar, and D. Panigrahi. Online and dynamic algorithms for set cover. In Proc. the 49th ACM Symposium on Theory of Computing (STOC), Montreal, Canada, 2017, pp. 537-550.
[11] J. Holm, K. de Lichtenberg, and M. Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. Journal of the ACM (JACM), Vol. 48, Issue 4, 2001, pp. 723-760.
[12] Z. Ivkovic and E. L. Lloyd. Fully dynamic maintenance of vertex cover. In Proc. the 19th International Workshop on Graph-theoretic Concepts in Computer Science (WG), London, UK, 1994, pp. 99-111.
[13] O. Neiman and S. Solomon. Simple deterministic algorithms for fully dynamic maximal matching. In Proc. the 45th ACM Symposium on Theory of Computing (STOC), Palo Alto, USA, 2013, pp. 745-754.
[14] K. Onak and R. Rubinfeld. Maintaining a large matching and a small vertex cover. In Proc. the 42nd ACM Symposium on Theory of Computing (STOC), Cambridge, USA, 2010, pp. 457-464.
[15] D. Peleg and S. Solomon. Dynamic (1 + ε)-approximate matchings: a density-sensitive approach. In Proc. the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), Virginia, USA, 2016, pp. 712-729.
[16] S. Solomon. Fully dynamic maximal matching in constant update time. In Proc. the 57th IEEE Symposium on Foundations of Computer Science (FOCS), New Jersey, USA, 2016, pp. 325-334.