#### Approximability of the Discrete Fréchet Distance

SoCG

Karl Bringmann, Institute of Theoretical Computer Science, ETH Zurich, Switzerland
Wolfgang Mulzer, Institut für Informatik, Freie Universität Berlin, Germany

Abstract. The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds. In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399. This raises the question of how well we can approximate the Fréchet distance (of two given d-dimensional point sequences of length n) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by analysing the approximation ratio of a simple, linear-time greedy algorithm to be 2^{Θ(n)}. Moreover, we design an α-approximation algorithm that runs in time O(n log n + n^2/α), for any α ∈ [1, n]. Hence, an n^ε-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any ε > 0.

1998 ACM Subject Classification: F.2.2 Nonnumerical Algorithms and Problems — Geometrical problems and computations

Keywords and phrases: Fréchet distance; approximation; lower bounds; Strong Exponential Time Hypothesis

1 Introduction

Let P and Q be two polygonal curves with n vertices each. The Fréchet distance provides a meaningful way to define a distance between P and Q that overcomes some of the shortcomings of the classic Hausdorff distance [6]. Since its introduction to the computational geometry community by Alt and Godau [6], the concept of Fréchet distance has proven extremely useful and has found numerous applications (see [4, 6, 7, 8, 9, 10] and the references therein). The Fréchet distance has two classic variants: continuous and discrete [6, 12]. In this paper, we focus on the discrete variant. In this case, the Fréchet distance between two sequences P, Q of n points in d dimensions is defined as follows: imagine two frogs traversing the sequences P and Q, respectively. In each time step, a frog can jump to the next vertex along its sequence, or it can stay where it is. The discrete Fréchet distance is the minimal length of a leash required to connect the two frogs while they traverse the two sequences from start to finish.
The original algorithm for the continuous Fréchet distance by Alt and Godau has running
time O(n^2 log n) [6], while the algorithm for the discrete Fréchet distance by Eiter and
Mannila needs time O(n^2) [12]. These algorithms remained the state of the art until
very recently: in 2013, Agarwal et al. [4] presented a slightly subquadratic algorithm for the
discrete Fréchet distance. Building on their work, Buchin et al. [9] managed to find a slightly
improved algorithm for the continuous Fréchet distance a year later. At the time, Buchin et
al. thought that their result provides evidence that computing the Fréchet distance may not
be 3SUM-hard [13], as had previously been conjectured by Alt [5]. Even though a recent
result by Grønlund and Pettie [15], showing that 3SUM has subquadratic decision trees,
casts new doubt on the connection between 3SUM and the Fréchet distance, the conclusions
of Buchin et al. motivated Bringmann [7] to look for other explanations for the apparent
difficulty of the Fréchet distance.
He found a possible explanation in the Strong Exponential Time Hypothesis (SETH) [16,
17], which roughly speaking asserts that satisfiability cannot be decided in time¹ O^*((2 − ε)^n)
for any ε > 0 (see Section 2 for details). Since exhaustive search takes time O^*(2^n) and
since the fastest known algorithms are only slightly faster than that, SETH is a reasonable
assumption that formalizes an algorithmic barrier. It has been shown that SETH can be
used to prove conditional lower bounds even for polynomial time problems [1, 2, 18, 20].
In this line of research, Bringmann [7] showed, among other things, that there are no
strongly subquadratic algorithms for the Fréchet distance unless SETH fails. Here, strongly
subquadratic means any running time of the form O(n^{2−ε}), for constant ε > 0. Bringmann's
lower bound works for two-dimensional curves and both classic variants of the Fréchet
distance. Thus, it is unlikely that the algorithms by Agarwal et al. and Buchin et al. can be
improved significantly, unless a major algorithmic breakthrough occurs.
1.1 Our Contributions
In this extended abstract we focus on the discrete Fréchet distance. In Section 6, we will
discuss how far our results carry over to the continuous version. Our main results are as
follows.
Conditional Lower Bound. We strengthen the result of Bringmann [7] by showing that even
in the one-dimensional case computing the Fréchet distance remains hard. More precisely,
we show that any 1.399-approximation algorithm in strongly subquadratic time for the
one-dimensional discrete Fréchet distance violates the Strong Exponential Time Hypothesis.
Previously, Bringmann [7] had shown that no strongly subquadratic algorithm approximates
the two-dimensional Fréchet distance by a factor of 1.001, unless SETH fails.
One can embed any one-dimensional sequence into the two-dimensional plane by fixing
some ε > 0 and by setting the y-coordinate of the i-th point of the sequence to i · ε. For
sufficiently small ε, this embedding roughly preserves the Fréchet distance. Thus, unless
SETH fails, there is also no strongly subquadratic 1.399-approximation for the discrete Fréchet
distance on (1) two-dimensional curves without self-intersections, and (2) two-dimensional
x-monotone curves (also called time-series). These interesting special cases had been open.
Approximation: Greedy Algorithm. A simple greedy algorithm for the discrete Fréchet
distance goes as follows: in every step, make the move that minimizes the current distance,
where a "move" is a step in either one sequence or in both of them. This algorithm has
a straightforward linear-time implementation. We analyze the approximation ratio of the
greedy algorithm, and we show that, given two sequences of n points in d dimensions, the
maximal distance attained by the greedy algorithm is a 2^{Θ(n)}-approximation for their discrete
Fréchet distance. We emphasize that this approximation ratio is bounded, depending only
on n, but not on the coordinates of the vertices. This is surprising, since so far no approximation
algorithm with bounded ratio that runs in strongly subquadratic time was known at all. Moreover,
although an approximation ratio of 2^{Θ(n)} is huge, the greedy algorithm is the best linear-time
approximation algorithm that we could come up with.

¹ The notation O^*(·) hides polynomial factors in the number of variables n and the number of clauses m.
Approximation: Improved Algorithm. For the case that slightly more than linear time is
acceptable, we provide a much better approximation algorithm: given two sequences P and Q
of n points in d dimensions, we show how to find an α-approximation of the discrete Fréchet
distance between P and Q in time O(n log n + n^2/α), for any 1 ≤ α ≤ n. In particular, this
yields an (n/log n)-approximation in time O(n log n), and an n^ε-approximation in strongly
subquadratic time for any ε > 0. We leave it open whether these approximation ratios can
be improved.
2 Preliminaries and Definitions
We call an algorithm an α-approximation for the Fréchet distance if, given curves P, Q, it
returns a number that is at least the Fréchet distance of P, Q and at most α times the Fréchet
distance of P, Q.
2.1 Discrete Fréchet Distance
Since we focus on the discrete Fréchet distance throughout, we will sometimes omit the
term "discrete". Let P = ⟨p_1, . . . , p_n⟩ and Q = ⟨q_1, . . . , q_n⟩ be two sequences of n points in d
dimensions. A traversal β of P and Q is a sequence of pairs (p, q) ∈ P × Q such that (i)
the traversal β begins with the pair (p_1, q_1) and ends with the pair (p_n, q_n); and (ii) the pair
(p_i, q_j) ∈ β can be followed only by one of (p_{i+1}, q_j), (p_i, q_{j+1}), or (p_{i+1}, q_{j+1}). We call β
simultaneous if it only makes steps of the third kind, i.e., if in each step β advances in both
P and Q. We define the distance of the traversal β as δ(β) := max_{(p,q)∈β} d(p, q), where d(·, ·)
denotes the Euclidean distance. The discrete Fréchet distance of P and Q is now defined as
δ_dF(P, Q) := min_β δ(β), where β ranges over all traversals of P and Q.
We review a simple O(n^2 log n) time algorithm to compute δ_dF(P, Q) that is the starting
point of our second approximation algorithm. First, we describe a decision procedure that,
given a value γ, decides whether δ_dF(P, Q) ≤ γ. For this, we define the free-space matrix F.
This is a Boolean n × n matrix such that for i, j = 1, . . . , n, we set F_{ij} = 1 if d(p_i, q_j) ≤ γ, and
F_{ij} = 0 otherwise. Then δ_dF(P, Q) ≤ γ if and only if F allows a monotone traversal from
(1, 1) to (n, n), i.e., if we can go from entry F_{11} to F_{nn} while only going down, to the right, or
diagonally, and while only using 1-entries. This is captured by the reach matrix R, which is
again an n × n Boolean matrix. We set R_{11} = F_{11}, and for i, j = 1, . . . , n with (i, j) ≠ (1, 1), we
set R_{ij} = 1 if F_{ij} = 1 and at least one of R_{(i−1)j}, R_{i(j−1)}, or R_{(i−1)(j−1)} equals 1 (we define
any entry of the form R_{0j} or R_{i0} to be 0). Otherwise, we set R_{ij} = 0. From these
definitions, it is straightforward to compute F and R in total time O(n^2). Furthermore, by
construction we have δ_dF(P, Q) ≤ γ if and only if R_{nn} = 1.

With this decision procedure at hand, we can use binary search to compute δ_dF(P, Q) in
total time O(n^2 log n) by observing that the optimum must be achieved for one of the n^2
distances d(p_i, q_j), for i, j = 1, . . . , n. Through a more direct use of dynamic programming,
the running time can be reduced to O(n^2) [12].
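In code, the decision procedure and the binary search above can be sketched as follows (a plain Python sketch with O(n^2) per decision call, not the mildly subquadratic algorithm of Agarwal et al.; point sequences are lists of coordinate tuples):

```python
from math import dist

def decide(P, Q, gamma):
    """Decide whether the discrete Frechet distance of P and Q is <= gamma.

    Computes the reach matrix R row by row; R[i][j] is True iff the
    free-space entry is 1 (d(p_i, q_j) <= gamma) and (i, j) is reachable
    from (0, 0) by down/right/diagonal steps through 1-entries.
    """
    n, m = len(P), len(Q)
    R = [[False] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if dist(P[i], Q[j]) <= gamma:  # free-space entry is 1
                R[i][j] = ((i == 0 and j == 0)
                           or (i > 0 and R[i - 1][j])
                           or (j > 0 and R[i][j - 1])
                           or (i > 0 and j > 0 and R[i - 1][j - 1]))
    return R[n - 1][m - 1]

def discrete_frechet(P, Q):
    """Exact discrete Frechet distance: binary search over the n^2
    candidate distances d(p_i, q_j), O(n^2 log n) overall."""
    cands = sorted({dist(p, q) for p in P for q in Q})
    lo, hi = 0, len(cands) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(P, Q, cands[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cands[lo]
```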
2.2 Hardness Assumptions
Strong Exponential Time Hypothesis (SETH). As is well known, the k-SAT problem is
as follows: given a CNF formula Φ over Boolean variables x_1, . . . , x_n with clause width k,
decide whether there is an assignment of x_1, . . . , x_n that satisfies Φ. Of course, k-SAT is
NP-hard, and it is conjectured that no subexponential algorithm for the problem exists [14].
The Strong Exponential Time Hypothesis (SETH) goes a step further and basically states
that the exhaustive search running time of O^*(2^n) cannot be improved to O^*(1.99^n) [16, 17].

▶ Conjecture 2.1 (SETH). For no ε > 0, k-SAT has an O(2^{(1−ε)n}) algorithm for all k ≥ 3.

The fastest known algorithms for k-SAT take time O(2^{(1−c/k)n}) for some constant
c > 0 [19]. Thus, SETH is reasonable and, due to the lack of progress in the last decades, can be
considered unlikely to fail. It is by now a standard assumption for conditional lower bounds.
Orthogonal Vectors (OV) is the following problem: given u_1, . . . , u_N, v_1, . . . , v_N ∈ {0, 1}^D,
decide whether there are i, j ∈ [N] with (u_i)_k · (v_j)_k = 0 for all 1 ≤ k ≤ D. Here we denote
by u_i the i-th vector and by (u_i)_k its k-th entry. We write u_i ⊥ v_j if (u_i)_k · (v_j)_k = 0 for all
1 ≤ k ≤ D. This problem has a trivial O(DN^2) algorithm. The fastest known algorithm
runs in time N^{2−1/O(log(D/log N))} [3], which is only slightly subquadratic for D ≫ log N. It
is known that OV has no strongly subquadratic time algorithms unless SETH fails [21]; we
present a proof for completeness.
▶ Lemma 2.2. For no ε > 0, OV has a D^{O(1)} · N^{2−ε} algorithm, unless SETH fails.
Proof. Given a k-SAT formula Φ on variables x_1, . . . , x_n and clauses C_1, . . . , C_m, we build
an equivalent OV instance with N = 2^{n/2} and D = m. Denote all possible assignments of
true and false to the variables x_1, . . . , x_{n/2} (the first half of the variables) by φ_1, . . . , φ_N,
N = 2^{n/2}. For every such assignment φ_i we construct a vector u_i where (u_i)_k is 0 if φ_i
causes C_k to evaluate to true, and 1 otherwise. Similarly, we enumerate the assignments
ψ_1, . . . , ψ_N of the variables x_{n/2+1}, . . . , x_n (the second half of the variables), and for every ψ_j
we construct a vector v_j where (v_j)_k is 0 if ψ_j causes C_k to evaluate to true, and 1 otherwise.
Then, (u_i)_k · (v_j)_k = 0 if and only if at least one of φ_i and ψ_j satisfies clause C_k. Thus, we have
(u_i)_k · (v_j)_k = 0 for all 1 ≤ k ≤ D if and only if (φ_i, ψ_j) forms a satisfying assignment of
the formula Φ. Hence, we have constructed an equivalent OV instance of the required size. The
constructed OV instance can be computed in time O(DN).

It follows that any algorithm for OV with running time D^{O(1)} · N^{2−ε} gives an algorithm for
k-SAT with running time m^{O(1)} · 2^{(1−ε/2)n}. Since m = O(n^k) = 2^{o(n)}, this contradicts SETH. ◀
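The reduction in this proof is easy to implement. The sketch below assumes a hypothetical DIMACS-style clause encoding (a positive integer v stands for x_v, a negative one for its negation) and pairs the reduction with the trivial O(DN^2) orthogonality check mentioned above:

```python
from itertools import product

def sat_to_ov(clauses, n):
    """Build the OV instance from the proof of Lemma 2.2.

    `clauses` is a list of clauses, each a list of non-zero ints
    (literal v means x_v, -v means NOT x_v); variables are 1..n, n even.
    Returns vector lists U, V in {0,1}^D with D = len(clauses) and
    N = 2^(n/2): entry k of a vector is 0 iff the corresponding
    half-assignment already satisfies clause C_k.
    """
    half = n // 2

    def vectors(var_ids):
        vecs = []
        for bits in product([False, True], repeat=half):
            assign = dict(zip(var_ids, bits))
            vec = [0 if any((lit > 0) == assign[abs(lit)]
                            for lit in clause if abs(lit) in assign) else 1
                   for clause in clauses]
            vecs.append(vec)
        return vecs

    U = vectors(list(range(1, half + 1)))          # first half of the variables
    V = vectors(list(range(half + 1, n + 1)))      # second half
    return U, V

def has_orthogonal_pair(U, V):
    """Trivial O(D N^2) check for an orthogonal pair."""
    return any(all(a * b == 0 for a, b in zip(u, v)) for u in U for v in V)
```

By the lemma, `has_orthogonal_pair(*sat_to_ov(clauses, n))` is True exactly when the formula is satisfiable.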
A problem P is OV-hard if there is a reduction that transforms an instance I of OV with
parameters N, D to an equivalent instance I′ of P of size n ≤ D^{O(1)}N, in time D^{O(1)}N^{2−ε}
for some ε > 0. A strongly subquadratic algorithm (i.e., O(n^{2−ε′}) for some ε′ > 0) for an
OV-hard problem P would yield an algorithm for OV with running time D^{O(1)}N^{2−min{ε,ε′}}. Thus,
by Lemma 2.2 an OV-hard problem does not have strongly subquadratic time algorithms
unless SETH fails. Most known SETH-based lower bounds for polynomial time problems are
actually OV-hardness results; our lower bound in the next section is no exception. Note that
OV-hardness is potentially stronger than a SETH-based lower bound, since it is conceivable
that SETH fails, but OV still has no strongly subquadratic algorithms.
3 Lower Bound

We first construct vector gadgets. For each u_i, i ∈ {1, . . . , N}, we define a sequence A_i of
D points from P as follows: for 1 ≤ k ≤ D, let p ∈ {o, e} be the parity of k (odd or even).
Then the k-th point of A_i is a^p_{(u_i)_k}. Similarly, for each v_j, we define a sequence B_j of D points
from P. For B_j, we use the points b^p_* instead of a^p_*. The next claim shows how the vector
gadgets encode orthogonality.
▶ Claim 3.1. Fix i, j ∈ {1, . . . , N} and let β be a traversal of (A_i, B_j). (i) If β is not a
simultaneous traversal, then δ(β) ≥ 1.8; (ii) if β is a simultaneous traversal and u_i ⊥ v_j,
then δ(β) ≤ 1; and (iii) if β is a simultaneous traversal and u_i is not orthogonal to v_j, then δ(β) ≥ 1.4.

Proof. First, suppose that β is not a simultaneous traversal. Consider the first time when β
makes a move on one sequence but not the other. Then, the current points on A_i and B_j lie
on different sides of s, which forces δ(β) ≥ min{d(a^o_1, b^e_0), d(a^e_1, b^o_0)} = 1.8.

Next, suppose that u_i ⊥ v_j. Then, the simultaneous traversal β of A_i and B_j has δ(β) ≤ 1.
Indeed, for each dimension 1 ≤ k ≤ D, at least one of (u_i)_k and (v_j)_k is 0. Thus, if we
consider the k-th point of A_i and the k-th point of B_j, both of them lie on the same side of s,
and at least one of them is in {a^o_0, a^e_0, b^o_0, b^e_0}. It follows that the distance between the k-th
points in β is at most 1, for all k.

Finally, suppose that (u_i)_k = (v_j)_k = 1 for some k. Let β be the simultaneous traversal
of A_i and B_j, and consider the time when β reaches the k-th points of A_i and B_j. These are
either {a^o_1, b^o_1} or {a^e_1, b^e_1}, so δ(β) ≥ min{d(a^o_1, b^o_1), d(a^e_1, b^e_1)} = 1.4. ◀
Let W be the sequence of D(N − 1) points that alternates between a^o_0 and a^e_0, starting
with a^o_0. We set

P = W · x_1 · s · A_1 · s · A_2 · · · · · s · A_N · s · x_2 · W

and

Q = w_1 · B_1 · w_2 · w_1 · B_2 · w_2 · · · · · w_1 · B_N · w_2,
where · denotes the concatenation of sequences. The idea is to implement an or-gadget. If
there is a pair of orthogonal vectors, then P and Q should be able to reach the corresponding
vector gadgets and traverse them simultaneously. If there is no such pair, it should not be
possible to “cheat”. The purpose of the sequences W and the points w1 and w2 is to provide
a buffer so that one sequence can wait while the other sequence catches up. The purpose of
the points x1, x2, and s is to synchronize the traversal so that no cheating can occur. The
next two claims make this precise. First, we show completeness.
▶ Claim 3.2. If there are i, j ∈ {1, . . . , N} with u_i ⊥ v_j, then δ_dF(P, Q) ≤ 1.
Proof. Let u_i, v_j be orthogonal. We traverse P and Q as follows:
1. P goes through D(N − j) points of W; Q stays at w_1.
2. For k = 1, . . . , j − 1: we perform a simultaneous traversal of B_k and the next portion of
W, starting with a^o_0 and the first point of B_k. When the traversal reaches a^e_0 and the
last point of B_k, P stays at a^e_0 while Q goes to w_2 and w_1. If k < j − 1, the traversal
continues with a^o_0 on P and the first point of B_{k+1} on Q. If k = j − 1, we go to Step 3.
3. P proceeds to x_1 and walks until the point s before A_i; Q stays at w_1 before B_j.
4. P and Q go simultaneously through A_i and B_j, until the pair (s, w_2) after A_i and B_j.
5. P continues to x_2 while Q stays at w_2.
6. For k = j + 1, . . . , N: P goes to the next a^o_0 on W while Q goes to w_1. We then perform
a simultaneous traversal of B_k and the next portion of W. When the traversal reaches a^e_0
and the last point of B_k, P stays at a^e_0 while Q continues to w_2. If k < N, the traversal
continues with the next iteration; otherwise we go to Step 7.
7. P finishes the traversal of W while Q stays at w_2.

We use the notation max-d(S, T) := max_{s∈S, t∈T} d(s, t), as well as max-d(s, T) := max-d({s}, T)
and max-d(S, t) := max-d(S, {t}). The traversal maintains a maximum distance of 1: for Step 1,
this is implied by max-d({a^o_0, a^e_0}, w_1) = 1. For Step 2, it follows from D being even and from

max-d(a^o_0, {b^o_1, b^o_0}) = max-d(a^e_0, {b^e_1, b^e_0, w_1, w_2}) = 1.

For Step 3, it is because max-d({x_1, a^o_0, a^o_1, s, a^e_1, a^e_0}, w_1) = 1. For Step 4, we use Claim 3.1
and d(s, w_2) = 0.2. In Step 5, it follows from max-d({a^o_0, a^o_1, s, a^e_1, a^e_0, x_2}, w_2) = 1. In Step 6,
we again use that D is even and that

max-d(a^o_0, {b^o_1, b^o_0, w_1}) = max-d(a^e_0, {b^e_1, b^e_0, w_2}) = 1.

Step 7 uses max-d({a^o_0, a^e_0}, w_2) = 1. ◀
The second claim establishes the soundness of the construction.
▶ Claim 3.3. If there are no i, j ∈ {1, . . . , N} with u_i ⊥ v_j, then δ_dF(P, Q) ≥ 1.4.

Proof. Let β be a traversal of (P, Q). Consider the time when β reaches x_1 on P. If Q is
not at w_1 or at a point from B^o = {b^o_0, b^o_1}, then δ(β) ≥ 1.4 and we are done. Next,
suppose that the current position is in {x_1} × B^o. In the next step, β must advance P to s
or Q to {b^e_0, b^e_1} (or both).² In each case, we get δ(β) ≥ 1.4. From now on, suppose we reach
x_1 in position (x_1, w_1). After that, P must advance to s, because advancing Q to B^o would
take us to a position in {x_1} × B^o, implying δ(β) ≥ 1.4 as we saw above.

Now consider the next step when Q leaves w_1. Then Q must go to a point from B^o. At
this time, P must be at a point from A^o = {a^o_0, a^o_1}, or we would get δ(β) ≥ 1.4 (note that P
has already passed the point x_1). This point on P belongs to a vector gadget A_i or to the
final gadget W (again because P is already past x_1). In the latter case, we have δ(β) ≥ 1.4,
because in order to reach the final W, P must have gone through x_2, and d(x_2, w_1) = 1.4.
Thus, P is at a point in A^o in a vector gadget A_i, and Q is at the starting point (from B^o)
of a vector gadget B_j.

Now β must alternate simultaneously in P and Q among both sides of s, or again
δ(β) ≥ 1.4, see Claim 3.1. Furthermore, if P does not start in the first point of A_i, then
eventually P has to go to s while Q has to go to a point in B^o or stay in {b^e_0, b^e_1}, giving
δ(β) ≥ 1.4. Thus, we may assume that β simultaneously reaches the starting points of A_i
and B_j and traverses A_i and B_j simultaneously. By assumption, the vectors u_i, v_j are not
orthogonal, so Claim 3.1 gives δ(β) ≥ 1.4. ◀

² Recall that we assumed D to be even.
▶ Theorem 3.4. Fix α ∈ [1, 1.4). Computing an α-approximation of the discrete Fréchet
distance in one dimension is OV-hard. In particular, the discrete Fréchet distance in one
dimension has no strongly subquadratic α-approximation unless SETH fails.

Proof. We use Claims 3.2 and 3.3 and the fact that P and Q can be computed in time
O(DN) from u_1, . . . , u_N, v_1, . . . , v_N: any O(n^{2−ε}) time α-approximation for the discrete
Fréchet distance would yield an OV algorithm with running time D^{O(1)}N^{2−ε}, which by Lemma 2.2
contradicts SETH. ◀
▶ Remark. The proofs of Claims 3.2 and 3.3 yield a system of linear inequalities that constrain
the points in P. Using this system, one can see that the inapproximability factor 1.4 in
Theorem 3.4 is best possible for our current proof.
4 Approximation Quality of the Greedy Algorithm
In this section we study the following greedy algorithm. Let P = ⟨p_1, . . . , p_n⟩ and Q =
⟨q_1, . . . , q_n⟩ be two sequences of n points in R^d. We construct a traversal β_greedy =
β_greedy(P, Q). We begin at (p_1, q_1). If the current position is (p_i, q_j), there are at most
three possible successor configurations: (p_{i+1}, q_j), (p_i, q_{j+1}), and (p_{i+1}, q_{j+1}) (or fewer, if we
have already reached the last point of P or Q). Among these, we pick the pair (p_{i′}, q_{j′})
that minimizes the distance d(p_{i′}, q_{j′}). We stop when we reach (p_n, q_n). We denote the
largest distance attained by the greedy traversal by δ_greedy(P, Q) := δ(β_greedy(P, Q)).
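A minimal sketch of this greedy traversal (points as coordinate tuples; ties between moves are broken arbitrarily):

```python
from math import dist

def greedy_frechet(P, Q):
    """Linear-time greedy traversal: from (p_i, q_j), move to whichever of
    (p_{i+1}, q_j), (p_i, q_{j+1}), (p_{i+1}, q_{j+1}) minimizes the current
    distance.  Returns delta_greedy(P, Q), the largest distance attained."""
    i = j = 0
    worst = dist(P[0], Q[0])
    while i < len(P) - 1 or j < len(Q) - 1:
        steps = []                       # feasible successor positions
        if i < len(P) - 1:
            steps.append((i + 1, j))
        if j < len(Q) - 1:
            steps.append((i, j + 1))
        if i < len(P) - 1 and j < len(Q) - 1:
            steps.append((i + 1, j + 1))
        i, j = min(steps, key=lambda s: dist(P[s[0]], Q[s[1]]))
        worst = max(worst, dist(P[i], Q[j]))
    return worst
```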
▶ Theorem 4.1. Let P and Q be two sequences of n points in R^d. Then, δ_dF(P, Q) ≤
δ_greedy(P, Q) ≤ 2^{O(n)} δ_dF(P, Q). Both inequalities are tight, i.e., there are polygonal curves
P, Q with δ_greedy(P, Q) = δ_dF(P, Q) > 0 and δ_greedy(P, Q) = 2^{Ω(n)} δ_dF(P, Q) > 0, respectively.

The inequality δ_dF(P, Q) ≤ δ_greedy(P, Q) follows directly from the definition, since the
traversal β_greedy(P, Q) is a candidate for an optimal traversal. Furthermore, one can check
that if P and Q are increasing one-dimensional sequences, then the greedy traversal is
optimal (this is similar to the merge step in mergesort). Thus, there are examples where
δ_greedy(P, Q) = δ_dF(P, Q). It remains to show the upper bound δ_greedy(P, Q) ≤ 2^{O(n)} δ_dF(P, Q)
and to provide an example where this inequality is tight.
4.1 Upper Bound
We call a pair p_i p_{i+1} of consecutive points on P an edge of P, for i = 1, . . . , n − 1, and
similarly for Q. Let m be the total number of edges of P and Q, and let ℓ_1 ≤ ℓ_2 ≤ · · · ≤ ℓ_m
be the sorted sequence of the edge lengths. We pick k^* ∈ {0, . . . , m} minimum such that

4 δ_dF(P, Q) + 2 Σ_{i=1}^{k^*} ℓ_i < ℓ_{k^*+1},

where we set ℓ_{m+1} = ∞. We define δ^* as the left-hand side, δ^* := 4 δ_dF(P, Q) + 2 Σ_{i=1}^{k^*} ℓ_i.
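The choice of k^* and δ^* can be illustrated in code (a small sketch; the edge lengths and the value δ_dF(P, Q) are assumed to be given):

```python
def split_threshold(edge_lengths, delta_df):
    """Return (k_star, delta_star): the minimum k* in {0, ..., m} such that
    4*delta_df + 2*(l_1 + ... + l_k*) < l_{k*+1}, where l_1 <= ... <= l_m are
    the sorted edge lengths and l_{m+1} = infinity; delta_star is the
    left-hand side for that k*."""
    ls = sorted(edge_lengths)
    prefix = 0.0                      # sum of the k smallest edge lengths
    for k in range(len(ls) + 1):
        delta = 4 * delta_df + 2 * prefix
        nxt = ls[k] if k < len(ls) else float("inf")
        if delta < nxt:
            return k, delta
        prefix += ls[k]
    # unreachable: the condition always holds for k = m (l_{m+1} = infinity)
```

For instance, with edge lengths {1, 1, 10} and δ_dF = 0.5, the values δ_0 = 2 and δ_1 = 4 both fail the test against the next edge length, so k^* = 2 and δ^* = 6, consistent with Lemma 4.2(iv) below (6 ≤ 3^2 · 4 · 0.5 = 18).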
▶ Lemma 4.2. We have (i) δ^* ≥ 4 δ_dF(P, Q); (ii) Σ_{i=1}^{k^*} ℓ_i ≤ δ^*/2 − 2 δ_dF(P, Q); (iii) there
is no edge with length in (δ^*/2 − 2 δ_dF(P, Q), δ^*); and (iv) δ^* ≤ 3^{k^*} · 4 δ_dF(P, Q).

Proof. Properties (i) and (ii) follow by definition. Property (iii) holds since for i = 1, . . . , k^*,
we have ℓ_i ≤ δ^*/2 − 2 δ_dF(P, Q), by (ii), and for i = k^* + 1, . . . , m, we have ℓ_i ≥ δ^*, by
definition. It remains to prove (iv): for k = 0, . . . , k^*, we set δ_k = 4 δ_dF(P, Q) + 2 Σ_{i=1}^{k} ℓ_i,
and we prove by induction that δ_k ≤ 3^k · 4 δ_dF(P, Q). For k = 0, this is immediate. Now
suppose we know that δ_{k−1} ≤ 3^{k−1} · 4 δ_dF(P, Q), for some k ∈ {1, . . . , k^*}. Then, k ≤ k^*
implies ℓ_k ≤ δ_{k−1}, so δ_k = δ_{k−1} + 2 ℓ_k ≤ 3 δ_{k−1} ≤ 3^k · 4 δ_dF(P, Q), as desired. Now (iv) follows
from δ^* = δ_{k^*}. ◀
We call an edge long if it has length at least δ^*, and short otherwise. In other words, the
short edges have lengths ℓ_1, . . . , ℓ_{k^*}, and the long edges have lengths ℓ_{k^*+1}, . . . , ℓ_m. Let β be
an optimal traversal of P and Q, i.e., δ(β) = δ_dF(P, Q).
▶ Lemma 4.3. The sequences P and Q have the same number of long edges. Furthermore,
if p_{i_1} p_{i_1+1}, . . . , p_{i_k} p_{i_k+1} and q_{j_1} q_{j_1+1}, . . . , q_{j_k} q_{j_k+1} are the long edges of P and of Q, for
1 ≤ i_1 < · · · < i_k < n and 1 ≤ j_1 < · · · < j_k < n, then both β and β_greedy contain the steps
(p_{i_1}, q_{j_1}) → (p_{i_1+1}, q_{j_1+1}), . . . , (p_{i_k}, q_{j_k}) → (p_{i_k+1}, q_{j_k+1}).
Proof. First, we show that for every long edge p_i p_{i+1} of P, the optimal traversal β contains
the step (p_i, q_j) → (p_{i+1}, q_{j+1}), where q_j q_{j+1} is a long edge of Q. Consider the step of β
from p_i to p_{i+1}. This step has to be of the form (p_i, q_j) → (p_{i+1}, q_{j+1}) for some q_j ∈ Q:
since max{d(p_i, q_j), d(p_{i+1}, q_j)} ≥ d(p_i, p_{i+1})/2 ≥ δ^*/2 ≥ 2 δ_dF(P, Q), by Lemma 4.2(i),
staying in q_j would result in δ(β) ≥ 2 δ_dF(P, Q). Now, since max{d(p_i, q_j), d(p_{i+1}, q_{j+1})} ≤
δ(β) = δ_dF(P, Q), the triangle inequality gives d(q_j, q_{j+1}) ≥ d(p_i, p_{i+1}) − 2 δ_dF(P, Q) ≥
δ^* − 2 δ_dF(P, Q). Lemma 4.2(iii) now implies d(q_j, q_{j+1}) ≥ δ^*, so the edge q_j q_{j+1} is long.
Thus, β traverses every long edge of P simultaneously with a long edge of Q. A symmetric
argument shows that β traverses every long edge of Q simultaneously with a long edge of P.
Since β is monotone, it follows that P and Q have the same number of long edges, and that
β traverses them simultaneously in their order of appearance along P and Q.

It remains to show that the greedy traversal β_greedy traverses the long edges of P
and Q simultaneously. Set i_0 = j_0 = 0. We will prove for a ∈ {0, . . . , k − 1} that if
β_greedy contains the position (p_{i_a+1}, q_{j_a+1}), then it also contains the step (p_{i_{a+1}}, q_{j_{a+1}}) →
(p_{i_{a+1}+1}, q_{j_{a+1}+1}) and hence the position (p_{i_{a+1}+1}, q_{j_{a+1}+1}). The claim on β_greedy then follows
by induction on a, since β_greedy contains the position (p_1, q_1) by definition. Thus, fix
a ∈ {0, . . . , k − 1} and suppose that β_greedy contains (p_{i_a+1}, q_{j_a+1}). We need to show that
β_greedy also contains the step (p_{i_{a+1}}, q_{j_{a+1}}) → (p_{i_{a+1}+1}, q_{j_{a+1}+1}). For better readability, we
write i for i_a, j for j_a, i′ for i_{a+1}, and j′ for j_{a+1}. Consider the first position of β_greedy
when β_greedy reaches either p_{i′} or q_{j′}. Without loss of generality, this position is of the
form (p_{i′}, q_l), for some l ∈ {j + 1, . . . , j′}. Then, d(p_{i′}, q_l) ≤ δ^*/2 − δ_dF(P, Q), since we
saw that d(p_{i′}, q_{j′}) ≤ δ(β) = δ_dF(P, Q) and since the remaining edges between q_l and q_{j′}
are short and thus have total length at most δ^*/2 − 2 δ_dF(P, Q), by Lemma 4.2(ii). The
triangle inequality now gives d(p_{i′+1}, q_l) ≥ d(p_{i′}, p_{i′+1}) − d(p_{i′}, q_l) ≥ δ^*/2 + δ_dF(P, Q). If
l < j′, the same argument applied to q_{l+1} shows that d(p_{i′}, q_{l+1}) ≤ δ^*/2 − δ_dF(P, Q) and
thus d(p_{i′+1}, q_{l+1}) ≥ δ^*/2 + δ_dF(P, Q). Thus, β_greedy moves to (p_{i′}, q_{l+1}). If l = j′, then
β_greedy takes the step (p_{i′}, q_{j′}) → (p_{i′+1}, q_{j′+1}), as d(p_{i′+1}, q_{j′+1}) ≤ δ(β) = δ_dF(P, Q), but
d(p_{i′}, q_{j′+1}), d(p_{i′+1}, q_{j′}) ≥ δ^* − δ_dF(P, Q) ≥ 3 δ_dF(P, Q), by Lemma 4.2(i). ◀
Finally, we can show the desired upper bound on the quality of the greedy algorithm.
▶ Lemma 4.4. We have δ_greedy(P, Q) ≤ δ^*/2.

Proof. By Lemma 4.3, P and Q have the same number of long edges. Let p_{i_1} p_{i_1+1}, . . . ,
p_{i_k} p_{i_k+1} and q_{j_1} q_{j_1+1}, . . . , q_{j_k} q_{j_k+1} be the long edges of P and of Q, where 1 ≤ i_1 <
· · · < i_k < n and 1 ≤ j_1 < · · · < j_k < n. By Lemma 4.3, β_greedy contains the positions
(p_{i_a}, q_{j_a}) and (p_{i_a+1}, q_{j_a+1}) for a = 1, . . . , k, and d(p_{i_a}, q_{j_a}), d(p_{i_a+1}, q_{j_a+1}) ≤ δ_dF(P, Q)
for a = 1, . . . , k. Thus, setting i_0 = j_0 = 0 and i_{k+1} = j_{k+1} = n, we can focus on
the subtraversals β_a = (p_{i_a+1}, q_{j_a+1}), . . . , (p_{i_{a+1}}, q_{j_{a+1}}) of β_greedy, for a = 0, . . . , k. Now,
since all edges traversed in β_a are short, and since d(p_{i_a+1}, q_{j_a+1}) ≤ δ_dF(P, Q), we have
δ(β_a) ≤ δ_dF(P, Q) + δ^*/2 − 2 δ_dF(P, Q) ≤ δ^*/2 by Lemma 4.2(ii) and the triangle inequality.
Thus, δ(β_greedy) ≤ max{δ_dF(P, Q), δ(β_1), . . . , δ(β_k)} ≤ δ^*/2, as desired. ◀

Lemmas 4.2(iv) and 4.4 prove the desired inequality δ_greedy(P, Q) ≤ 2^{O(n)} δ_dF(P, Q), since
k^* ≤ m = 2n − 2.
4.2 Tight Example for the Upper Bound
Fix 1 < α < 2. Consider the sequence P = ⟨p_1, . . . , p_n⟩ with p_i := (−α)^i and the sequence
Q = ⟨q_1, . . . , q_{n−2}⟩ with q_i := (−α)^{i+2}. We show the following:
1. The greedy traversal β_greedy(P, Q) makes n − 2 simultaneous steps in P and Q followed
by 2 single steps in P. This results in a maximal distance of δ_greedy(P, Q) = α^n + α^{n−1}.
2. The traversal which makes 2 single steps in P followed by n − 2 simultaneous steps in
both P and Q has distance α^3 + α^2.
Together, this shows that δ_greedy(P, Q)/δ_dF(P, Q) = Ω(α^n) = 2^{Ω(n)}, proving that the
inequality δ_greedy(P, Q) ≤ 2^{O(n)} δ_dF(P, Q) is tight.

To see (1), assume that we are at position (p_i, q_i). Moving to (p_i, q_{i+1}) would result
in a distance of d(p_i, q_{i+1}) = α^{i+3} + α^i. Similarly, the other possible moves to (p_{i+1}, q_i)
and to (p_{i+1}, q_{i+1}) would result in distances α^{i+2} + α^{i+1} and α^{i+3} − α^{i+1}, respectively.
It can be checked that for all α > 1 we have α^{i+3} + α^i > α^{i+2} + α^{i+1}. Moreover, for all
α < 2 we have α^{i+2} + α^{i+1} > α^{i+3} − α^{i+1}. Thus, the greedy algorithm makes the move
to (p_{i+1}, q_{i+1}). Using induction, this shows that the greedy traversal starts with n − 2
simultaneous moves in P and Q. In the end, the greedy algorithm has to take two single
moves in P. Thus, the greedy traversal contains the pair (p_{n−1}, q_{n−2}), which is at distance
d(p_{n−1}, q_{n−2}) = α^n + α^{n−1} = 2^{Ω(n)}.

To see (2), note that the traversal which makes 2 single steps in P followed by n − 2
simultaneous moves in P and Q starts with (p_1, q_1) and (p_2, q_1), followed by (p_i, q_{i−2}) for
i = 3, . . . , n. Note that d(p_1, q_1) = α^3 − α, d(p_2, q_1) = α^3 + α^2, and p_i = q_{i−2}, so that the
remaining distances are 0. Thus, we have δ_dF(P, Q) ≤ α^3 + α^2 = O(1).
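The tight example is easy to check numerically; the sketch below rebuilds the construction and the greedy traversal for a hypothetical choice α = 1.5:

```python
def tight_example(n, alpha=1.5):
    """Construct the tight example: one-dimensional points p_i = (-alpha)^i
    for i = 1..n and q_i = (-alpha)^(i+2) for i = 1..n-2."""
    P = [(-alpha) ** i for i in range(1, n + 1)]
    Q = [(-alpha) ** (i + 2) for i in range(1, n - 1)]
    return P, Q

def greedy_max_distance(P, Q):
    """Greedy traversal for one-dimensional sequences (distance |p - q|);
    returns the largest distance attained."""
    i = j = 0
    worst = abs(P[0] - Q[0])
    while i < len(P) - 1 or j < len(Q) - 1:
        cand = []
        if i < len(P) - 1:
            cand.append((i + 1, j))
        if j < len(Q) - 1:
            cand.append((i, j + 1))
        if i < len(P) - 1 and j < len(Q) - 1:
            cand.append((i + 1, j + 1))
        i, j = min(cand, key=lambda s: abs(P[s[0]] - Q[s[1]]))
        worst = max(worst, abs(P[i] - Q[j]))
    return worst
```

For n = 10 and α = 1.5, the greedy traversal attains α^10 + α^9 ≈ 96.1, while the alternative traversal described above certifies δ_dF(P, Q) ≤ α^3 + α^2 = 5.625.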
5 Improved Approximation Algorithm
Let P = ⟨p_1, . . . , p_n⟩ and Q = ⟨q_1, . . . , q_n⟩ be two sequences of n points in R^d, where d is
constant. Let 1 ≤ α ≤ n. We show how to find a value δ^* with δ_dF(P, Q) ≤ δ^* ≤ α δ_dF(P, Q)
in time O(n log n + n^2/α). For simplicity, we will assume that all points of P and Q are
pairwise distinct. This can be achieved by an infinitesimal perturbation of the point set.
5.1 Decision Algorithm
We begin by describing an approximate decision procedure. For this, we prove the following
theorem.
▶ Theorem 5.1. Let P and Q be two sequences of n points in R^d, and let 1 ≤ α ≤ n. Suppose
that the points of P and Q have been sorted along each coordinate axis. There exists a decision
algorithm with running time O(n^2/α) and the following properties: if δ_dF(P, Q) ≤ 1, the
algorithm returns YES; if δ_dF(P, Q) ≥ α, the algorithm returns NO; if δ_dF(P, Q) ∈ (1, α),
the algorithm may return either YES or NO. The running time depends exponentially on d.
Consider the regular d-dimensional grid with diameter 1 (all cells are axis-parallel cubes
with side length 1/√d). The distance between two grid cells C and D, d(C, D), is defined
as the smallest distance between a point in C and a point in D. The distance between a
point x and a grid cell C, d(x, C), is the distance between x and the closest point in C. For
a point x ∈ R^d, we write B_x for the closed unit ball with center x and C_x for the grid cell
that contains x (since we are interested in approximation algorithms, we may assume that
all points of P ∪ Q lie strictly inside the cells). We compute for each point r ∈ P ∪ Q the
grid cell C_r that contains it. We also record for each nonempty grid cell C the number of
points from Q contained in C. This can be done in total linear time as follows: we scan the
points from P ∪ Q in x_1-order, and we group the points according to the grid intervals that
contain them. Then we split the lists that represent the x_2-, . . . , x_d-order correspondingly,
and we recurse on each group to determine the grouping for the remaining coordinate axes.
Each iteration takes linear time, and there are d iterations, resulting in a total time of O(n).

In the following, we will also need to know for each non-empty cell the neighborhood of all
cells that have a certain constant distance from it. These neighborhoods can be found in
linear time by modifying the above procedure as follows: before performing the grouping, we
make O(1) copies of each point r ∈ P ∪ Q that we translate suitably to hit all neighboring
cells of r. By using appropriate cross-pointers, we can then identify the neighbors of each
non-empty cell in total linear time. Afterwards, we perform a clean-up step, so that only the
original points remain.
A grid cell C is full if |C ∩ Q| ≥ 5n/α. Let F be the set of full grid cells. Clearly,
|F | ≤ α/5. We say that two full cells C, D ∈ F are adjacent if d(C, D) ≤ 4. This defines a
graph H on F of constant degree. Using the neighborhood finding procedure from above, we
can determine H and its connected components L1, . . . , Lk in time O(n + α). For C ∈ F ,
the label LC of C is the connected component of H containing C.
For each q ∈ Q, we search for a full cell C ∈ F with d(q, C) ≤ 2. If such a cell exists,
we label q with Lq = LC; otherwise, we set Lq = ⊥. Similarly, for each p ∈ P, we search
for a full cell C ∈ F with d(p, C) ≤ 1. In case of success, we set Lp = LC; otherwise, we set
Lp = ⊥. Using the neighborhood finding procedure from above, this takes linear time. Let
P′ = {p ∈ P | Lp ≠ ⊥} and Q′ = {q ∈ Q | Lq ≠ ⊥}. The labeling has the following properties.
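The labeling step admits a compact sketch in one dimension, where the grid cells are unit intervals. The quadratic brute-force neighbor searches below stand in for the paper's linear-time constant-radius neighborhood lookups, and all names are ours:

```python
from collections import defaultdict, deque

def label_points_1d(P, Q, alpha):
    """1D sketch of the labeling: full cells hold >= 5n/alpha points of Q,
    full cells within distance 4 are adjacent, and each point inherits the
    connected-component label of a nearby full cell (None plays the role
    of the bottom label)."""
    cell = lambda x: int(x // 1)
    counts = defaultdict(int)
    for q in Q:
        counts[cell(q)] += 1
    full = sorted(c for c, k in counts.items() if k >= 5 * len(Q) / alpha)

    def cell_dist(c1, c2):          # min distance between unit intervals
        return max(0, abs(c1 - c2) - 1)

    def point_cell_dist(x, c):      # distance from x to interval [c, c+1]
        return max(0.0, c - x, x - (c + 1))

    # connected components of the adjacency graph H on full cells (BFS)
    comp, next_label = {}, 0
    for c in full:
        if c in comp:
            continue
        comp[c] = next_label
        queue = deque([c])
        while queue:
            u = queue.popleft()
            for v in full:
                if v not in comp and cell_dist(u, v) <= 4:
                    comp[v] = next_label
                    queue.append(v)
        next_label += 1

    def label(x, radius):           # radius 1 for P, radius 2 for Q
        for c in full:
            if point_cell_dist(x, c) <= radius:
                return comp[c]
        return None

    return [label(p, 1) for p in P], [label(q, 2) for q in Q]
```

Two full cells at cell distance at most 4 end up in the same component, so points near either cell receive the same label, matching the uniqueness claim of the lemma below.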
▸ Lemma 5.2. We have
1. for every r ∈ P ∪ Q, the label Lr is uniquely determined;
2. for every x, y ∈ P′ ∪ Q′ with Lx = Ly, we have d(x, y) ≤ α;
3. if p ∈ P′ and q ∈ Bp ∩ Q, then Lp = Lq; and
4. if p ∈ P \ P′, there are O(n/α) points q ∈ Q with d(p, Cq) ≤ 1. Hence, |Bp ∩ Q| = O(n/α).
Proof. Let r ∈ P ∪ Q and suppose there are C, D ∈ F with d(r, C) ≤ 2 and d(r, D) ≤ 2.
Then d(C, D) ≤ d(C, r) + d(r, D) ≤ 4, so C and D are adjacent in H. It follows that
LC = LD and that Lr is determined uniquely.
Fix x, y ∈ P′ ∪ Q′ with Lx = Ly. By construction, there are C, D ∈ F with d(x, C) ≤ 2,
d(y, D) ≤ 2 and LC = LD. This means that C and D are in the same component of H.
Therefore, C and D are connected by a sequence of adjacent cells in F. We have |F| ≤ α/5,
any two adjacent cells have distance at most 4, and each cell has diameter 1. Thus, the
triangle inequality gives d(x, y) ≤ 2 + 4(|F| − 1) + |F| + 2 ≤ α.
Let p ∈ P′ and q ∈ Bp ∩ Q. Take C ∈ F with d(p, C) ≤ 1. By the triangle inequality,
d(q, C) ≤ d(q, p) + d(p, C) ≤ 2, so Lq = Lp = LC.
Take p ∈ P and suppose there is a grid cell C with |C ∩ Q| > 5n/α and d(p, C) ≤ 1.
Then C ∈ F, so Lp ≠ ⊥, which means that p ∈ P′. The contrapositive gives (4). ◀
Lemma 5.2 enables us to design an efficient approximation algorithm. For this, we define
the approximate free-space matrix F. This is an n × n matrix with entries from {0, 1}. For
i, j ∈ {1, . . . , n}, we set Fij = 1 if either (i) pi ∈ P′ and Lpi = Lqj; or (ii) pi ∈ P \ P′ and
d(pi, qj) ≤ 1. Otherwise, we set Fij = 0. The matrix F is approximate in the following sense:
▸ Lemma 5.3. If δdF(P, Q) ≤ 1, then F allows a monotone traversal from (1, 1) to (n, n).
Conversely, if F has a monotone traversal from (1, 1) to (n, n), then δdF(P, Q) ≤ α.
Proof. Suppose that δdF(P, Q) ≤ 1. Then there is a monotone traversal β of (P, Q) with
δ(β) ≤ 1. By Lemma 5.2(3), β is also a traversal of F .
Now let β be a monotone traversal of F. By Lemma 5.2(2), we have δ(β) ≤ α, as
desired. ◀
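The monotone-traversal condition on a 0/1 matrix can be checked naively in quadratic time by dynamic programming; the paper's sparse representation below improves this to O(n2/α). A 0-indexed sketch (the function name is ours):

```python
def has_monotone_traversal(F):
    """Decide whether the 0/1 matrix F admits a monotone traversal from
    (0, 0) to (n-1, n-1), where each step increases one or both indices
    by 1. R is a dense version of the reach matrix."""
    n = len(F)
    if not F[0][0] or not F[n - 1][n - 1]:
        return False
    R = [[False] * n for _ in range(n)]
    R[0][0] = True
    for i in range(n):
        for j in range(n):
            if (i, j) == (0, 0) or not F[i][j]:
                continue
            # reachable from the left, from above, or diagonally
            R[i][j] = (i > 0 and R[i - 1][j]) or (j > 0 and R[i][j - 1]) \
                or (i > 0 and j > 0 and R[i - 1][j - 1])
    return R[n - 1][n - 1]
```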
Additionally, we define the approximate reach matrix R, which is an n × n matrix with
entries from {0, 1}. We set Rij = 1 if F allows a monotone traversal from (1, 1) to (i, j),
and Rij = 0, otherwise. By Lemma 5.3, Rnn is an α-approximate indicator for δdF ≤ 1. We
describe how to compute the rows of R successively in total time O(n2/α).
First, we perform the following preprocessing steps: we break Q into intervals, where
an interval is a maximal consecutive subsequence of points q ∈ Q with the same label
Lq 6=⊥. For each point in an interval, we store pointers to the first and the last point of the
interval. This takes linear time. Furthermore, for each pi ∈ P \ P′, we compute a sparse
representation Ti of the corresponding row of F, i.e., a sorted list of all the column indices j
for which Fij = 1. Using hashing and bucketing, this can be done in total time O(n2/α), by
Lemma 5.2(4).
Now we successively compute a sparse representation for each row i of R, i.e., a sorted
list Ii of disjoint intervals [a, b] ∈ Ii such that for j = 1, . . . , n, we have Rij = 1 if and only if
there is an interval [a, b] ∈ Ii with j ∈ [a, b]. We initialize I1 as follows: if F11 = 0, we set
I1 = ∅ and abort. Otherwise, if p1 ∈ P′, then I1 is initialized with the interval of q1 (since
F11 = 1, we have Lp1 = Lq1 by Lemma 5.2(3)). If p1 ∈ P \ P′, we determine the maximum b
such that F1j = 1 for all j = 1, . . . , b, and we initialize I1 with the singleton intervals [j, j] for
j = 1, . . . , b. This can be done in time O(n/α), irrespective of whether p1 lies in P′ or not.
Now suppose we already have the interval list Ii for some row i, and we want to compute
the interval list Ii+1 for the next row. We consider two cases.
Case 1: pi+1 ∈ P′. If Lpi+1 = Lpi, we simply set Ii+1 = Ii. Otherwise, we go through the
intervals [a, b] ∈ Ii in order. For each interval [a, b], we check whether the label of qb or the
label of qb+1 equals the label of pi+1. If so, we add the maximal interval [b′, c] to Ii+1 with
b′ = b or b′ = b + 1 and Lpi+1 = Lqj for all j = b′, . . . , c. With the information from the
preprocessing phase, this takes O(1) time per interval. The resulting set of intervals may not
be disjoint (if pi ∈ P \ P′), but any two overlapping intervals have the same endpoint. Also,
intervals with the same endpoint appear consecutively in Ii+1. We next perform a clean-up
pass through Ii+1: we partition the intervals into consecutive groups with the same endpoint,
and in each group, we only keep the largest interval. All this takes time O(|Ii| + |Ii+1|).
Case 2: pi+1 ∈ P \ P′. In this case, we have a sparse representation Ti+1 of the corresponding
row in F at our disposal. We simultaneously traverse Ii and Ti+1 to compute Ii+1 as
follows: for each j ∈ {1, . . . , n} with F(i+1)j = 1, if Ii has an interval containing j − 1
or j or if [j − 1, j − 1] ∈ Ii+1, we add the singleton [j, j] to Ii+1. This takes total time
O(|Ii| + |Ii+1| + n/α).
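Case 2 admits a compact sketch. The previous row is given as a list of intervals and the new row is given sparsely; every reachable column becomes a singleton. The brute-force coverage test below stands in for the paper's simultaneous traversal of two sorted lists, and the function name is ours:

```python
def next_row_sparse(I_prev, T_next):
    """Row update when p_{i+1} is unlabeled: T_next is the sorted list of
    columns j with F_{(i+1)j} = 1, I_prev the interval list of the previous
    row. Column j is reachable if j-1 or j was reachable in the previous
    row, or j-1 was just marked reachable in the current row."""
    def covered(I, j):
        return any(a <= j <= b for a, b in I)
    I_next, reached = [], set()
    for j in T_next:
        if covered(I_prev, j - 1) or covered(I_prev, j) or (j - 1) in reached:
            I_next.append([j, j])
            reached.add(j)
    return I_next
```

Note how a run of consecutive 1-entries in T_next propagates to the right once its first column is reached, mirroring the "walk to the right" moves in the correctness proof below.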
The next lemma shows that the interval representation remains sparse throughout the
execution of the algorithm, and that the intervals Ii indeed represent the approximate reach
matrix R.
▸ Lemma 5.4. We have |Ii| = O(n/α) for i = 1, . . . , n. Furthermore, the intervals in Ii
correspond exactly to the 1-entries in the approximate reach matrix R.
Proof. First, we prove that |Ii| = O(n/α) for i = 1, . . . , n. This is done by induction on i.
We begin with i = 1. If p1 ∈ P′, then |I1| = 1. If p1 ∈ P \ P′, then Lemma 5.2(4) shows
that the first row of F contains at most O(n/α) 1-entries, so |I1| = O(n/α). Next, suppose
that we know by induction that |Ii| = O(n/α). We must argue that |Ii+1| = O(n/α). If
pi+1 ∈ P \ P′, then the (i + 1)-th row of F contains O(n/α) 1-entries by Lemma 5.2(4),
and |Ii+1| = O(n/α) follows directly by construction. If pi+1 ∈ P′ and Lpi+1 = Lpi, then
Ii+1 = Ii, and the claim follows by induction. Finally, if pi+1 ∈ P′ and Lpi+1 ≠ Lpi, then by
construction, every interval in Ii gives rise to at most one new interval in Ii+1. Thus, by
induction, |Ii+1| ≤ |Ii| = O(n/α).
Second, we prove that Ii represents the i-th row of R, for i = 1, . . . , n. Again, the proof is
by induction. For i = 1, the claim holds by construction, because the first row of R consists
of the initial segment of 1s in F. Next, suppose we know that Ii represents the i-th row of
R. We must argue that Ii+1 represents the (i + 1)-th row of R. If pi+1 ∈ P \ P′, this follows
directly by construction, because the algorithm explicitly checks the conditions for each
possible 1-entry of R (R(i+1)j can only be 1 if F(i+1)j = 1). If pi+1 ∈ P′ and Lpi+1 = Lpi,
then the (i + 1)-th row of F is identical to the i-th row of F, and the same holds for R:
there can be no new monotone paths, and all old monotone paths can be extended by one
step along Q. Finally, consider the case pi+1 ∈ P′ and Lpi+1 ≠ Lpi. If pi ∈ P \ P′, then
every interval in Ii is a singleton [b, b], from which a monotone path could potentially reach
(i + 1, b) and (i + 1, b + 1), and from there walk to the right. We explicitly check both of
these possibilities. If pi ∈ P′, then for every interval [a, b] ∈ Ii and for all j ∈ [a, b] we have
Lqj = Lpi ≠ Lpi+1. Thus, the only possible move is to (i + 1, b + 1), and from there walk to
the right, which is what we check. ◀
The first part of Lemma 5.4 implies that the total running time is O(n2/α), since each row
is processed in time O(n/α). By Lemma 5.3 and the second part of Lemma 5.4, if In has
an interval containing n then δdF(P, Q) ≤ α, and if δdF(P, Q) ≤ 1 then n appears in In.
Since the intervals in In are sorted, this condition can be checked in O(1) time. Theorem 5.1
follows.
5.2 Optimization Procedure
We now leverage Theorem 5.1 to obtain an optimization procedure.
▸ Theorem 5.5. Let P and Q be two sequences of n points in Rd, and let 1 ≤ α ≤ n.
There is an algorithm with running time O(n2 log n/α) that computes a number δ∗ with
δdF(P, Q) ≤ δ∗ ≤ αδdF(P, Q). The running time depends exponentially on d.
Proof. If α ≤ 5, we compute δdF(P, Q) directly in O(n2) time. Otherwise, we set α′ = α/5.
We sort the points of P ∪ Q according to the coordinate axes, and we compute a
(1/3)-well-separated pair decomposition P = {(S1, T1), . . . , (Sk, Tk)} for P ∪ Q in time O(n log n) [11].
Recall the properties of a well-separated pair decomposition: (i) for all pairs (S, T) ∈ P,
we have S, T ⊆ P ∪ Q, S ∩ T = ∅, and max{diam(S), diam(T)} ≤ d(S, T)/3 (here, diam(S)
denotes the maximum distance between any two points in S); (ii) the number of pairs is
k = O(n); and (iii) for every pair of distinct points q, r ∈ P ∪ Q, there is exactly one pair
(S, T) ∈ P with q ∈ S and r ∈ T, or vice versa.
For each pair (Si, Ti) ∈ P, we pick arbitrary s ∈ Si and t ∈ Ti, and set δi = 3d(s, t).
After sorting, we can assume that δ1 ≤ . . . ≤ δk. We call δi a YES-entry if the algorithm
from Theorem 5.1 on input α′ and the point sets P and Q scaled by a factor of δi returns
YES; otherwise, we call δi a NO-entry. First, we test whether δ1 is a YES-entry. If so, we
return δ∗ = α′δ1. If δ1 is a NO-entry, we perform a binary search on δ1, . . . , δk: we set l = 1
and r = k. Below, we will prove that δk must be a YES-entry. We set m = ⌈(l + r)/2⌉. If δm
is a NO-entry, we set l = m; otherwise, we set r = m. We repeat this until r = l + 1. In the
end, we return δ∗ = α′δr. The total running time is O(n log n + n2 log n/α). Our procedure
works exactly like binary search, but we presented it in detail in order to emphasize that
δ1, . . . , δk is not necessarily monotone: NO-entries and YES-entries may alternate.
We now argue correctness. The algorithm finds a YES-entry δr such that either r = 1 or
δr−1 is a NO-entry. By Theorem 5.1, any δi is a NO-entry if δi ≤ δdF(P, Q)/α′. Thus, we
certainly have δ∗ = α′δr > δdF(P, Q). Now take a traversal β with δ(β) = δdF(P, Q), and let
(p, q) ∈ P × Q be a position in β that has d(p, q) = δ(β). There is a pair (Sr∗, Tr∗) ∈ P with
p ∈ Sr∗ and q ∈ Tr∗, or vice versa. Let s ∈ Sr∗ and t ∈ Tr∗ be the points we used to define
δr∗. Then
d(s, t) ≥ d(p, q) − diam(Sr∗) − diam(Tr∗) ≥ d(p, q) − 2d(Sr∗, Tr∗)/3 ≥ d(p, q)/3,
and
d(s, t) ≤ d(p, q) + diam(Sr∗) + diam(Tr∗) ≤ d(p, q) + 2d(Sr∗, Tr∗)/3 ≤ 5d(p, q)/3,
so δr∗ = 3d(s, t) ∈ [δ(β), 5δ(β)]. Since by Theorem 5.1 any δi is a YES-entry if δi ≥ δdF(P, Q),
all δi with i ≥ r∗ are YES-entries (in particular, δk is a YES-entry). Thus, δ∗ ≤ α′δr∗ ≤
5α′δdF(P, Q) ≤ αδdF(P, Q). ◀
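The binary search in the proof can be isolated as follows. Here `is_yes` plays the role of the Theorem 5.1 oracle, and the only guarantees used are that the last candidate is a YES-entry and that all candidates beyond some unknown index are YES-entries; the answers before that index may alternate. Names are ours:

```python
def first_yes(candidates, is_yes):
    """Binary search over a sorted candidate list whose YES/NO answers need
    not be monotone. Precondition: the last candidate is a YES-entry.
    Returns a YES-entry whose predecessor (if any) is a NO-entry."""
    if is_yes(candidates[0]):
        return candidates[0]
    l, r = 0, len(candidates) - 1       # invariant: l is NO, r is YES
    while r > l + 1:
        m = (l + r + 1) // 2            # m = ceil((l + r) / 2)
        if is_yes(candidates[m]):
            r = m
        else:
            l = m
    return candidates[r]
```

The invariant "δl is a NO-entry, δr is a YES-entry" is exactly what the correctness argument needs: the returned entry has a NO-entry predecessor, even though YES- and NO-entries may alternate below it.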
The running time of Theorem 5.5 can be improved as follows.
▸ Theorem 5.6. Let P and Q be two sequences of n points in Rd, and let 1 ≤ α ≤ n.
There is an algorithm with running time O(n log n + n2/α) that computes a number δ∗ with
δdF(P, Q) ≤ δ∗ ≤ αδdF(P, Q). The running time depends exponentially on d.
Proof. If α ≤ 4, we can compute δdF(P, Q) exactly. Otherwise, we use Theorem 5.5 to
compute a number δ′ with δdF(P, Q) ≤ δ′ ≤ n · δdF(P, Q), or, equivalently, δdF(P, Q) ∈
[δ′/n, δ′]. This takes time O(n log n). Set i∗ = ⌈log(n/α)⌉ + 1 and for i = 1, . . . , i∗ let
αi = n/2^(i+1). Also, set a1 = δ′/n and b1 = δ′.
We iteratively obtain better estimates for δdF(P, Q) by repeating the following for i =
1, . . . , i∗ − 1. As an invariant, at the beginning of iteration i, we have δdF(P, Q) ∈ [ai, bi] with
bi/ai = 4αi. We use the algorithm from Theorem 5.1 with inputs αi and P and Q scaled by
a factor 2ai (since αi ≥ αi∗−1 = n/2^(⌈log(n/α)⌉+1) ≥ α/4, the algorithm can be applied). If the
answer is YES, it follows that δdF(P, Q) ≤ αi · 2ai = bi/2, so we set ai+1 = ai and bi+1 = bi/2.
If the answer is NO, then δdF(P, Q) ≥ 2ai, so we set ai+1 = 2ai and bi+1 = bi. This needs
time O(n2/αi) and maintains the invariant.
In the end, we return δ∗ = bi∗. The invariant guarantees δdF(P, Q) ∈ [ai∗, bi∗] and bi∗/ai∗ =
4αi∗ ≤ α, so δdF(P, Q) ≤ δ∗ ≤ αδdF(P, Q), as desired. The total running time is proportional to
n log n + Σ_{i=1}^{i∗−1} n2/αi = n log n + Σ_{i=1}^{i∗−1} n · 2^(i+1) ≤ n log n + n · 2^(i∗+1) = O(n log n + n2/α). ◀
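The refinement loop of the proof can be abstracted over the decision procedure. Below, `decide(alpha_i, scale)` is a stand-in for Theorem 5.1 run on the input scaled by `scale`; this interface and all names are our assumptions, not the paper's:

```python
def refine(decide, a, b, alpha):
    """Refinement loop sketch: maintains the unknown distance d in [a, b]
    and halves the ratio b/a in every round until b/a <= alpha.

    decide(alpha_i, scale) models Theorem 5.1 on the scaled input: a True
    answer guarantees d < alpha_i * scale, a False answer guarantees
    d > scale; answers in between may go either way."""
    while b / a > alpha:
        alpha_i = (b / a) / 4          # invariant: b/a = 4 * alpha_i
        if decide(alpha_i, 2 * a):     # YES: d < alpha_i * 2a = b/2
            b = b / 2
        else:                          # NO: d > 2a
            a = 2 * a
    return a, b
```

Each round shrinks b/a by a factor of 2 while the cost of a call grows geometrically, so the total work is dominated by the last round, matching the geometric sum in the proof.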
6 Conclusions
We have obtained several new results on the approximability of the discrete Fréchet distance.
As our main results,
1. we showed a conditional lower bound for the one-dimensional case: there is no
1.399-approximation in strongly subquadratic time unless the Strong Exponential Time
Hypothesis fails. This sheds further light on what makes the Fréchet distance a difficult
problem;
2. we determined the approximation ratio of the greedy algorithm to be 2^Θ(n) in any dimension
d ≥ 1. This gives the first general linear-time approximation algorithm for the problem;
and
3. we designed an α-approximation algorithm running in time O(n log n + n2/α) for any
1 ≤ α ≤ n in any constant dimension d ≥ 1. This significantly improves on the greedy
algorithm, at the expense of a (slightly) worse running time.
Our lower bounds rule out only (sufficiently good) constant-factor approximations with strongly
subquadratic running time, while our best strongly subquadratic approximation algorithm
has an approximation ratio of n^ε. It remains a challenging open problem to close this gap.
Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In Proc. 55th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS), pages 434-443, 2014.
Amir Abboud, Virginia Vassilevska Williams, and Oren Weimann. Consequences of faster alignment of sequences. In Proc. 41st Internat. Colloq. Automata Lang. Program. (ICALP), volume 8572 of LNCS, pages 39-51, 2014.
Amir Abboud, Ryan Williams, and Huacheng Yu. More applications of the polynomial method to algorithm design. In Proc. 26th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 218-230, 2015.
Pankaj K. Agarwal, Rinat Ben Avraham, Haim Kaplan, and Micha Sharir. Computing the discrete Fréchet distance in subquadratic time. SIAM J. Comput., 43(2):429-449, 2014.
Helmut Alt. Personal communication, 2012.
Helmut Alt and Michael Godau. Computing the Fréchet distance between two polygonal curves. Internat. J. Comput. Geom. Appl., 5(1-2):78-99, 1995.
Karl Bringmann. Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails. In Proc. 55th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS), pages 661-670, 2014.
Karl Bringmann and Marvin Künnemann. Improved approximation for Fréchet distance on c-packed curves matching conditional lower bounds. arXiv:1408.1340, 2014.
Kevin Buchin, Maike Buchin, Wouter Meulemans, and Wolfgang Mulzer. Four Soviets walk the dog - with an application to Alt's conjecture. In Proc. 25th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 1399-1413, 2014.
Computing the Fréchet distance with a retractable leash. In Proc. 21st Annu. European Sympos. Algorithms (ESA), pages 241-252, 2013.
Paul B. Callahan and S. Rao Kosaraju. A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields. J. ACM, 42(1):67-90, 1995.
Thomas Eiter and Heikki Mannila. Computing discrete Fréchet distance. Technical Report CD-TR 94/64, Christian Doppler Laboratory, 1994.
Anka Gajentaan and Mark H. Overmars. On a class of O(n2) problems in computational geometry. Comput. Geom. Theory Appl., 5(3):165-185, 1995.
Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
In Proc. 55th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS), pages 621-630, 2014.
Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. J. Comput. System Sci., 62(2):367-375, 2001.
Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. System Sci., 63(4):512-530, 2001.
Mihai Pătraşcu and Ryan Williams. On the possibility of faster SAT algorithms. In Proc. 21st Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 1065-1075, 2010.
Ramamohan Paturi, Pavel Pudlák, Michael E. Saks, and Francis Zane. An improved exponential-time algorithm for k-SAT. J. ACM, 52(3):337-364, 2005.
Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proc. 45th Annu. ACM Sympos. Theory Comput. (STOC), pages 515-524, 2013.
Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theoret. Comput. Sci., 348(2):357-365, 2005.