Finding a Guard that Sees Most and a Shop that Sells Most
Discrete & Computational Geometry
Otfried Cheong 2
Alon Efrat 1
Sariel Har-Peled 0
0 Department of Computer Science, University of Illinois, 201 N. Goodwin Avenue, Urbana, IL 61801, USA
1 Department of Computer Science, University of Arizona, Tucson, AZ 85721, USA
2 Division of Computer Science, KAIST, Gwahangno 335, Yuseong-gu, Daejeon 305-701, South Korea
We present a near-quadratic time algorithm that computes a point inside a simple polygon P in the plane having approximately the largest visibility polygon inside P, and a near-linear time algorithm for finding the point that will have approximately the largest Voronoi region when added to an n-point set in the plane. We apply the same technique to find the translation that approximately maximizes the area of intersection of two polygonal regions in near-quadratic time, and the rigid motion doing so in near-cubic time. ∗ Work on this paper by Otfried Cheong was supported by the Brain Korea 21 Project, The School of Information Technology, KAIST, and that by Sariel Har-Peled was partially supported by NSF CAREER award CCR-0132901.

We consider two problems where our goal is to find a point x such that the area of the region V(x) “controlled” by x is as large as possible. In the first problem we are given a simple polygon P, and V(x) is the visibility polygon of x, that is, the region of points y inside P such that the segment xy does not intersect the boundary of P. In the second problem we are given a set of points T, and V(x) is the Voronoi cell of x in the Voronoi diagram of the set T ∪ {x}, that is, the set of points that are closer to x than to any point in T.
In both problems it is straightforward (but tedious) to write a closed formula describing
the area of the region controlled by a point x . This area function (inside a region where
V (x ) has the same combinatorial structure) is the sum of the areas of triangles that
depend on the location of x . The function domain consists of a polynomial number
of regions, and the function has a different closed form in each region: it is the sum
of Θ(n) low-degree rational functions in two variables, which do not have a common
denominator. See the Appendix for a more detailed description of the area function. It
seems difficult to solve the problem of finding the maximum of this function analytically
and efficiently, and we resort to approximation.
In this paper we address the question of efficiently finding a point x that approximately
maximizes the area of V(x). More precisely, let µ(x) be the area of V(x), and let µ_opt = max_x µ(x) be the area for the optimal solution. Given δ > 0, we show how to find x_app such that µ(x_app) ≥ (1 − δ)µ_opt.
The main motivation for our first problem arises from art-gallery or sensor-placement problems. In a typical problem of this type, we are given a simple polygon P, and wish to find a set of points (guards) so that each point of P is seen by at least one guard. The problem of determining whether a given polygon can be guarded by no more than a given number of guards is NP-hard. Art-gallery problems have attracted a lot of research in the last 30 years [22], [23]. A natural heuristic for solving art-gallery problems is to use a greedy approach based on area: we first find a guard that maximizes the area seen, next find a guard that sees the maximal area not seen by the first guard, and so on until each point of P is seen by some guard.
Ghosh [14] used a similar greedy heuristic to obtain an O(log n)-approximation on the number of guards needed to see an n-edge polygon, if guard locations are constrained to be on the vertices of P. (An improved algorithm obtains an O(log k_opt)-approximation [11].)
No approximation bounds for the greedy approach are known if guards can be located in the interior of P. However, for the related problem of maximizing the area seen by k guards, for a given number k, Hochbaum and Pathria [16] showed that k iterations of the greedy algorithm mentioned above construct a (1 − 1/e)-approximation to the more general set-cover problem. In Section 2.4 we show how to apply our result to the problem of finding k guards that see as much as possible of the polygon P.
Ntafos and Tsoukalas [20] show how to find, for any δ > 0, a guard that sees an area of size (1 − δ)µ_opt. Their algorithm requires O(n^5/δ^2) time in the worst case. In Section 2.3 we give a probabilistic algorithm that finds a (1 − δ)-approximation in time O((n^2/δ^4) log^3(n/δ)) with high probability. We also show that approximating the largest visible polygon up to a constant factor is 3SUM-hard [12], implying that it is likely that our algorithm is close to optimal as far as the dependency on n is concerned.
Our second problem is motivated by the task of placing a new supermarket such that
it takes over as many customers as possible from the existing competition. If we assume
that customers are uniformly distributed and shop at the nearest supermarket, then our
task is indeed to find a point x such that the Voronoi region of x is as large as possible.
The area of Voronoi regions has been considered before in the context of games, such as the Voronoi game [1], [7] or the Hotelling game [21]. As far as we know, the only previous paper discussing maximizing the Voronoi region of a new point is by Dehne et al. [10], who show that the area function has only a single local maximum inside a region where the set of Voronoi neighbors does not change and is in convex position. They give an algorithm for finding (approximately) the optimal new point numerically based on Newton approximation.
In Section 3 we show that given a set T of n points and a number δ > 0, we can find a point x_app such that µ(x_app) ≥ (1 − δ)µ_opt, where µ(x) is the area of the Voronoi region of x in the Voronoi diagram of T ∪ {x}, and µ_opt = max_x µ(x). The (deterministic) running time of the algorithm is O(n/δ^4 + n log n).
Our framework captures a variety of other problems, where the goal is to maximize
the area of some region which depends on a multidimensional parameter. As an
example of such a further application, we consider the problem of matching two planar
shapes P and Q under translations or rigid motions. The area of overlap (or the area of
the symmetric difference) of two planar regions is a natural measure of their similarity
that is insensitive to noise [3], [6]. Mount et al. [19] first studied the function mapping a translation vector to the area of overlap of a translated simple polygon P with another simple polygon Q, showing that it is continuous and piecewise polynomial of degree at most two. If m and n are the number of vertices of P and Q, respectively, then the (graph of this) function has O((nm)^2) pieces, each with constant algebraic complexity, and it
can be computed within the same time bound. No algorithm is known that computes the
translation maximizing the area of overlap that does not essentially construct the whole
function graph. De Berg et al. [6] gave an O((n + m) log(n + m)) time algorithm to solve the problem exactly in the case of convex polygons, and gave a constant-factor approximation. Alt et al. [3] gave a constant-factor approximation for the minimum area of the symmetric difference of two convex polygons. Finally, de Berg et al. [5] consider the case where P and Q are disjoint unions of m and n unit disks, with m ≤ n. They compute a (1 − ε)-approximation for the maximal area of overlap of P and Q under translations in time O((nm/ε^2) log(n/ε)), and under rigid motions in time O((n^2m^2/ε^3) log n). We are not aware of any previous result on maximizing the overlap of more general shapes under rigid motions.
Our framework applies immediately to this problem: for a translation vector x, let P(x) denote the translation of P by x, and let µ(x) be the ratio of the areas of P(x) ∩ Q and P. Clearly, 0 ≤ µ(x) ≤ 1, where µ(x) = 1 for a perfect match (this model allows us to search for P appearing as a subpattern in Q). Let µ_opt := max_x µ(x). We show how to find a translation x_app such that µ(x_app) ≥ µ_opt − ε. Note that the error is absolute here. This makes sense in shape matching: if µ_opt is small (say less than 1/10), then P and Q cannot be matched well. In many applications it will be sufficient to know that no decent match is possible, rather than a precise estimate on how poor the match is.
If P and Q are polygonal regions of complexity m and n, the running time of our procedure is O(m + (n^2/ε^4) log^2 n). For the case of disjoint unit disks studied by de Berg et al., this reduces to O((n/ε^4) log^3 n), saving one order of magnitude over their result (which is, however, deterministic, approximates with a relative error, and has a better dependence on ε).
Finally, we consider the case of arbitrary rigid motions, and give an O(m + (n^3/ε^4) log^5 n) time algorithm for polygonal regions.
Our algorithms are based on the theory of ε-approximations. Instead of measuring area directly, we estimate it by counting the number of points of an ε-approximation S inside the region. The estimate is sufficiently tight such that the point that maximizes it is a good approximation for the optimal solution. The beauty of this approach is that it turns a continuous problem into a discrete one: we only need to find the point x such that V(x) contains the largest number of points of S. If, for s ∈ S, we define W(s) = {x | s ∈ V(x)}, then this problem can be solved by computing the arrangement of the sets W(s), for s ∈ S, and inspecting each face of this arrangement.
In Section 2.1 we show how to apply this approach to our first problem, maximizing the visibility region. Unfortunately, it turns out that the size of the ε-approximation required is prohibitively large. This is because the area of the optimal solution might only be of the order of 1/n, so our ε-approximation needs to guarantee an error of less than δ/n. In Sections 2.2 and 2.3 we show how to work around these problems, and improve the running time to near-quadratic.
In Section 3 we consider the Voronoi region problem. Here we can exploit the geometry of the optimal solution to decompose the problem into subproblems such that in each subproblem a small ε-approximation is sufficient.
We generate ε-approximations by random sampling. Since random sampling appears to work, the reader may wonder why we cannot simply generate a random sample X, compute µ(x) for each x ∈ X, and pick the sample point maximizing µ(x). Such an approach may be possible for maximizing the Voronoi region, but the required sample size would be prohibitively large. The approach does not work at all for maximizing the visibility region, as we will see in Section 2.1. Our indirect use of random sampling indeed makes all the difference.
2. Maximizing the Visibility Region

2.1. Using an ε-Approximation
In the following let P denote a simple polygon, let µ(·) denote the area measure, and assume that the area of P is 1; that is, µ(P) = 1. Given a point x ∈ P, let V_P(x) denote the visibility polygon of x inside P; that is, the region in P visible from x. Formally,
V_P(x) = {y | xy ⊆ int(P)},
where xy is the open segment connecting x and y. Note that under this definition, V_P(x) ∩ int(P) is an open set. Let µ(x) denote the area of V_P(x), and let µ_opt = max_{x∈P} µ(x) denote the maximum area.
Definition 2.1. For a set S of points in P, and a point x ∈ P, let
e_S(x) = |V_P(x) ∩ S| / |S|
be the estimate of the area visible from x.
Consider the range space (P, V), where V = {V_P(x) | x ∈ P}. The set S is an ε-approximation for this range space if for any x ∈ P we have (recall that µ(P) = 1)
|e_S(x) − µ(x)| ≤ ε.
Valtr [24] showed that the VC-dimension of the range space (P, V) is bounded by 23. By the ε-approximation theorem [2], a uniform random sample S of O((d/ε^2) log(d/εδ)) points from a range space of VC-dimension d is an ε-approximation for this range space with probability at least 1 − δ.
A random point from P can be obtained by triangulating P, choosing a triangle with
probability proportional to its area, and then choosing a point from inside the triangle. A
uniform random sample from P can therefore be computed in O(log n) time per sample
point, after O(n log n) preprocessing (in fact, linear preprocessing is also achievable).
We note now that µ_opt ≥ 1/(n − 2), since this quantity is bounded from below by the area of the largest triangle in a triangulation of P. We assume that S is an ε-approximation for ε = δ/2n, let x_app ∈ P be a point x maximizing e_S(x), and let x_opt ∈ P be a point maximizing µ(x). Then we have
µ(x_app) ≥ e_S(x_app) − δ/(2n) ≥ e_S(x_opt) − δ/(2n) ≥ µ(x_opt) − δ/n ≥ (1 − δ)µ_opt.
In other words, a point x_app ∈ P seeing the maximum number of points of S is a (1 − δ)-approximation to a point in P having the largest visibility polygon.
Now note that s ∈ V_P(x) if and only if x ∈ V_P(s). Let W_S = {V_P(s) | s ∈ S} be the set of visibility polygons defined by the points of S, and let A_P(S) denote the arrangement A(W_S). Our problem has been reduced to finding a point in P that is contained in the largest number of polygons in W_S.
Lemma 2.2. Given a simple polygon P, and a parameter δ > 0, one can compute, in O((n^5/δ^4) log^3(n/δ)) time, a point x ∈ P, such that µ(x) ≥ (1 − δ)µ_opt.
Proof. Let ε = δ/2n. A uniform random sample S of
M = O((1/ε^2) log(1/ε)) = O((n^2/δ^2) log(n/δ))
points from P is an ε-approximation with high probability [2].
We compute, for each point s ∈ S, its visibility polygon V_P(s) using sweeping. Let W_S be the resulting set of polygons. The complexity of the arrangement A(W_S) is O(nM^2) = O((n^5/δ^4) log^2(n/δ)); it can be computed in O((n^5/δ^4) log^3(n/δ)) time.
To see the bound on the complexity, observe that a segment inside P might intersect the boundary of a visibility polygon at most twice. The total number of edges of the visibility polygons in W_S is O(nM), and each such segment contains at most O(M) vertices of the arrangement, implying the bound stated. See [13] for details.
Finally, we perform a simple traversal of the arrangement, where we compute for each face the number of polygons of W_S that contain it. We pick a point in the face where this number is largest.
The size of the sample used is too large to make the above algorithm attractive. There are two reasons for this: the value of ε has to be chosen sufficiently small to guarantee correctness for the extreme case where µ_opt ≈ 1/n. Furthermore, an ε-approximation is stronger than what is really required: it guarantees a good approximation for any range. In the next section we will see that testing the area seen from a “small” (that is, polynomial in n) number of candidate points is sufficient, and in Section 2.3 we will then deal with the problem of possibly small µ_opt.
At this point, the reader may wonder why we do not take the more direct approach
of just sampling enough points, for each point computing its visibility polygon, and
returning the largest visibility polygon computed. Somewhat surprisingly, this does not
work, as demonstrated by the example depicted in Fig. 1. Imagine that we stretch the horizontal and vertical corridors until each of them has area 1/n − 1/n^10, while the central room has area 1/n^9. With high probability, a random sample of size, say, O(n) would have sample points only inside those corridors. Furthermore, the sample points would be “deep” in the corridors. Therefore, every random sample point would see an area at most 1/n, while one can place a point in the central room that sees area at least 2/n. In fact, the visibility arrangement corresponding to such a sample has quadratic complexity and no point is contained in more than two polygons.
2.2. Estimating the Area Directly
Given a point x ∈ P, we can estimate µ( x ) by sampling a set S of points in P, and
computing the fraction that is visible from x (we assume that µ( P) = 1).
Lemma 2.3. Let ν, δ be parameters, and let x be any point in P such that µ(x) ≥ ν and 0 < δ ≤ 1. Let S be a uniform sample from P of size M ≥ c_1((log n)/(δ^2 ν)), where c_1 is an appropriate absolute constant. Then
Pr[|e_S(x) − µ(x)| > δ · µ(x)] ≤ 1/n^10.
Proof. This is immediate from the Chernoff inequality. Indeed, let X_1, . . . , X_M be indicator variables, such that X_i = 1 if and only if the i-th sample point s_i ∈ S is inside V_P(x). Let X = Σ_i X_i, and set c_1 = 44. Observe that e_S(x) = X/M and E[X] = Mµ(x) ≥ Mν ≥ c_1(log n)/δ^2. By the (simplified form of the) Chernoff inequality [18], we have
Pr[|e_S(x) − µ(x)| > δ · µ(x)] = Pr[|X − Mµ(x)| > δ · Mµ(x)] ≤ 2 exp(−δ^2 Mµ(x)/4) ≤ 2n^{−c_1/4} ≤ 1/n^10.
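The estimator of Lemma 2.3 is straightforward to implement. The following sketch (our own naive O(n)-per-pair visibility test, with hypothetical helper names) computes e_S(x) for a simple polygon given as a vertex list, assuming x and all sample points lie in the interior of P:

```python
def _orient(u, v, w):
    # Sign of the cross product (v - u) x (w - u).
    d = (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    return (d > 1e-12) - (d < -1e-12)

def _crosses(p, q, a, b):
    # Proper crossing of segment pq with edge ab.
    return (_orient(p, q, a) != _orient(p, q, b)
            and _orient(a, b, p) != _orient(a, b, q))

def visible(x, y, poly):
    # For interior points of a simple polygon, y is visible from x
    # iff the open segment xy crosses no polygon edge.
    n = len(poly)
    return not any(_crosses(x, y, poly[i], poly[(i + 1) % n]) for i in range(n))

def estimate_mu(x, sample, poly):
    # e_S(x): fraction of the sample visible from x (area of P normalized to 1).
    return sum(visible(x, s, poly) for s in sample) / len(sample)
```

On an L-shaped polygon, a point in one arm sees a sample point in the same arm but not one around the corner, so the estimate is the visible fraction.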
The key observation in the above lemma is that because we are estimating µ( x ) for
a single fixed point x only, the sample we need is considerably smaller than the sample
required by an εapproximation, which guarantees the approximation bound for every
point x at the same time. Naturally, we would like to use such a small sample in the algorithm of Lemma 2.2 and end up with a faster algorithm. However, one now has to be careful to argue that the random sample does not overestimate the area of the visibility polygon of some other point in the polygon. We do so by arguing that the random
sample S correctly estimates the visibility for all vertices of the visibility arrangement
induced by S.
Lemma 2.4. Let ν, δ be parameters, let p, q be two points in P, let x ∈ int(P) be on the boundary of V_P(p) ∪ V_P(q), let T be a uniform sample from P of size M − 2, where M ≥ c_2((log n)/(δ^2 ν)) and c_2 is an appropriate absolute constant, and let S = T ∪ {p, q}.
1. If µ(x) ≤ ν/4, then Pr[e_S(x) ≥ ν/2] ≤ 1/n^10.
2. If µ(x) ≥ ν/4 then Pr[|e_S(x) − µ(x)| > δ · µ(x)] ≤ 1/n^10.
Proof. We have |V_P(x) ∩ T| = |V_P(x) ∩ S|, since, by definition, V_P(p) ∪ V_P(q) does not contain x. This gives
e_T(x) = |V_P(x) ∩ T| / |T| = (|V_P(x) ∩ S| / |S|) · (|S| / |T|) = e_S(x) · M/(M − 2) ≤ e_S(x) + 2/(M − 2) ≤ e_S(x) + δ^2ν/10,
for c_2 large enough. In particular, we have e_S(x) ≤ e_T(x) ≤ e_S(x) + νδ^2/10.
Thus, if µ(x) ≤ ν/4 then e_S(x) ≤ e_T(x) ≤ ν/2 with high probability, by the Chernoff inequality, as can be easily verified. Alternatively, for µ(x) ≥ ν/4, we have by Lemma 2.3 that
Pr[|e_T(x) − µ(x)| > (δ/2) · µ(x)] ≤ 1/n^10.
Furthermore,
|e_S(x) − µ(x)| ≤ |e_T(x) − µ(x)| + |e_S(x) − e_T(x)| ≤ |e_T(x) − µ(x)| + δ^2ν/10,
and since µ(x) ≥ ν/4, we have
Pr[|e_S(x) − µ(x)| > δµ(x)] ≤ Pr[|e_T(x) − µ(x)| + δ^2ν/10 > δµ(x)] ≤ Pr[|e_T(x) − µ(x)| > (δ/2)µ(x)] ≤ 1/n^10.
It is now natural to pick the vertex x* in the visibility arrangement contained in the largest number of visibility polygons as the best placement for a guard. We cannot directly apply Lemma 2.4 to x*, since the vertex x* depends itself on the random sample S. We therefore first show a bound that holds for all vertices of the visibility polygon arrangement.
Lemma 2.5. Let ν, δ be parameters, and let S be a uniform sample from P of size M, where M ≥ c_2((log n)/(δ^2 ν)) and c_2 is an appropriate absolute constant. Then with probability at least 1 − 1/n^6 the following hold:
1. For every vertex x of A_P(S) with µ(x) ≤ ν/4, we have e_S(x) ≤ ν/2.
2. For every vertex x of A_P(S) with µ(x) ≥ ν/4, we have |e_S(x) − µ(x)| < δ · µ(x).
Proof. We can consider S as the result of making M uniform random draws p_1, p_2, . . . , p_M from P. Consider a pair i, j of indices, and consider p_i and p_j to be fixed. The remaining points of S are now a random sample of size M − 2, and Lemma 2.4 tells us that a vertex x of A_P({p_i, p_j}) violates the conditions of our lemma with probability at most 1/n^10. Since the arrangement A_P({p_i, p_j}) has O(n) vertices, the probability that some vertex violates the conditions is at most 1/n^9. Repeating the argument for all the O(M^2) = o(n^3) possible pairs of indices implies that the conditions are violated for any vertex of the arrangement A_P(S) with probability at most 1/n^6.
We can now prove that the best vertex of AP (S) is indeed a good approximation.
Lemma 2.6. Let ν, δ be parameters such that δ > 1/n^0.1 and ν > 1/n. Let S be a uniform sample from P of size M ≥ c_3((log n)/(δ^2 ν)), where c_3 is an appropriate absolute constant, and let x* be a vertex of the arrangement A_P(S) that maximizes e_S(x*).
1. If µ_opt ≤ ν/4, then Pr[e_S(x*) ≥ ν/2] ≤ 1/n^6.
2. If µ_opt ≥ ν/4 then Pr[|e_S(x*) − µ_opt| > δ · µ_opt] ≤ 1/n^6.
Proof. Consider a point x_opt with µ(x_opt) = µ_opt. Let f be the face of A_P(S) that contains x_opt, and let x* be a vertex of f. Since x_opt lies in the same visibility polygons as x*, excluding maybe the two that determine the vertex x*, we have e_S(x*) ≤ e_S(x_opt) ≤ e_S(x*) + 2/|S|. Lemma 2.5 now implies the lemma.
Lemma 2.6 yields an immediate algorithm for estimating the maximum area visibility polygon in P. Indeed, set ν = 1/n, compute a sample S inside P of size M = O((n log n)/δ^2), compute the arrangement A_P(S), and find the vertex that is contained in the largest number of visibility polygons induced by the points of S. Clearly, the overall running time of this algorithm is O(nM^2 log n) = O((n^3/δ^4) log^3(n/δ)). This algorithm succeeds with high probability. We conclude:
Theorem 2.7. Given a simple polygon P, and a parameter δ > 0, one can compute, in O((n^3/δ^4) log^3(n/δ)) time, a point x ∈ P, such that µ(x) ≥ (1 − δ)µ_opt. This algorithm succeeds with high probability.
2.3. Estimating the Area of the Optimal Solution
The running time of Theorem 2.7 is dominated by the worst-case value of ν (which is a lower bound on the area of the largest visibility polygon). It is natural to perform an exponential search for the right value of ν. Indeed, set ν = 1/2, and use Lemma 2.6. In near-linear time, we either find a (1 − δ)-approximation or we know (with high probability) that µ_opt ≤ 1/2.
In the i-th iteration, we let ν_i = 1/2^i. We either find a (1 − δ)-approximation, or we know that µ_opt ≤ ν_i and proceed to the next iteration.
What is the benefit of this approach? In the i-th iteration we know that µ_opt ≤ ν_{i−1} (with high probability). If S_i is the sample used in the i-th iteration, and M_i = |S_i|, then this implies that, with high probability, no point of A_P(S_i) is contained in more than L_i = 2ν_{i−1}M_i visibility polygons (see Lemma 2.12 below for a formal proof of this intuitive claim).
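The ν-halving search can be sketched as the following driver loop. Here estimate_best is a stand-in for the procedure of Lemma 2.6: it is assumed to return a candidate vertex together with its estimate e_S, computed from a fresh sample of size Θ((log n)/(δ²ν)):

```python
def exponential_search(estimate_best, n, delta):
    # Halve nu until the candidate's estimate certifies mu_opt >= nu/4 (w.h.p.).
    nu = 0.5
    while True:
        x, e = estimate_best(nu, delta)
        if e >= nu / 2:
            return x          # (1 - delta)-approximation, w.h.p.
        if nu <= 1.0 / n:
            return x          # mu_opt >= 1/(n - 2), so w.h.p. we stop earlier
        nu /= 2               # otherwise mu_opt <= nu w.h.p.; refine
```

With µ_opt around 1/8, for instance, the loop makes only O(log(1/µ_opt)) calls before the estimate clears the ν/2 threshold.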
Lemma 2.8. If no point of P is contained in more than t visibility polygons of W_S, then the complexity of the arrangement A_P(S) is O(nkt), where n is the complexity of the polygon P, and k = |S|.
Proof. The complexity of the union of k such visibility polygons is O(nk) [13]. By Clarkson and Shor [8] this implies that the complexity of the at-most-t level (that is, the region of all points contained in no more than t − 1 polygons) is O(t^2 · n(k/t)) = O(nkt). Since in our case the at-most-t level is the entire arrangement, the lemma follows.
This implies that in the i-th iteration, our algorithm computes an arrangement of complexity
O(nM_iL_i) = O(n · ((log n)/(δ^2 ν_i)) · ((log n)/δ^2)) = O((n log^2 n)/(δ^4 ν_i)).
The running time of the algorithm is dominated by the running time of the final iteration, which takes O(nM_I L_I log(nM_I L_I)) time, where I ≤ log_2 n. We conclude:
Theorem 2.9. Given a simple polygon P, and a parameter δ > 0, one can compute, in O((n^2/δ^4) log^3(n/δ)) time, a point x ∈ P, such that µ(x) ≥ (1 − δ)µ_opt. This algorithm succeeds with high probability.
Interestingly, as pointed out to us by Jeff Erickson, our problem is 3SUM-hard [12]. As this indicates that a subquadratic algorithm is unlikely, the result of Theorem 2.9 is probably close to optimal as far as the dependency on n is concerned.
Lemma 2.10. Given a simple polygon P, there is a constant c > 0, such that (1 − c)-approximating the largest visible polygon in P is 3SUM-hard.
Proof. The details of the proof are tedious but straightforward, and we only outline it.
The basic idea is to carefully extend the example of Fig. 1 for the case of n arbitrary
lines.
Let L be a set of n lines with integer coefficients. Deciding whether three lines of L pass through a common point is 3SUM-hard. Furthermore, either L has three lines passing through a single point, or the distance between any vertex of the arrangement A(L) and a third line of L is at least δ > 0, where δ can be computed in linear time. Indeed, the intersection of two lines y = ax + b and y = a'x + b' is the point (x_0, y_0) where x_0 = −(b − b')/(a − a') and y_0 = ax_0 + b. The distance of this point from a third line y = a''x + b'' is η = |a''x_0 + b'' − y_0|/√(a''^2 + 1). If all coefficients are integers bounded by U, then the denominator of x_0 and y_0 is bounded by 2U. The numerator of η is therefore a rational number with denominator bounded by 4U^2. This implies that either η is zero, or δ = |η| ≥ 1/(4U^2(2U^2 + 1)), establishing the claimed lower bound.
One can resize and translate L, in O(n log n) time, such that all the vertices of the arrangement of L lie in the square [0.25, 0.75]^2. Next, consider the axis-parallel square S of side length M centered at the origin, and thicken every line ℓ ∈ L into a narrow and long “string” r_ℓ (that is, take the Minkowski sum of ℓ with an appropriately small axis-parallel square) such that the intersection of r_ℓ with S is of area 2. Let R denote the resulting set of objects. By picking M large enough, we can guarantee that the topology of the union of the objects of R is identical to the topology of the union of lines (that is, the objects are disjoint outside the unit square, no faces outside the union disappear, and so on). The appropriate value of M is a function of the input; it can be derived from the value of δ.
Next, consider the polygon P = (∪_{r∈R} r) ∪ [0, 1]^2. We can compute P in O(n log n) time. Now, if there are three lines in L that pass through a common point, then there is a point that stabs three objects of R, and sees an area of at least 3 · 2 − o(1) inside P. Similarly, if there is no point that is contained in three lines of L, then every point inside P sees at most an area of 2 · 2 + 1 + o(1) (the area of two objects of R corresponding to two lines, and the area of the unit square, plus some minor additional portions of objects that might be locally visible).
This implies that there is a constant gap between the largest visible polygon in P depending on whether L has three lines that share a point. Thus, the problem of approximating the largest visible polygon up to a constant factor is 3SUM-hard.
In many cases we do not expect to encounter inputs where the visibility polygon is truly small (that is, roughly 1/n of the total area of P). The following corollary might be useful in such a situation.
Corollary 2.11. Given a simple polygon P, and a parameter δ > 0, one can compute, in O((n/(µ_opt δ^4)) log^3(n/δ)) time, a point x ∈ P, such that µ(x) ≥ (1 − δ)µ_opt. This algorithm succeeds with high probability.
We also have the following combinatorial lemma.
Lemma 2.12. Let P be a simple polygon of area 1, such that the largest visibility polygon in P has area at most ν, and let S be a uniform sample of size M ≥ (c_1 log n)/ν, for some constant c_1. Then, with high probability, no point in P sees more than 2Mν points of S.
Proof. It is sufficient to prove this for all the vertices of the arrangement A = A_P(S). Consider a vertex v of A, defined by the visibility polygons of two points p, q ∈ S, and observe that the number of visibility polygons of W_S that cover v is determined by the set S\{p, q}. This is a random variable independent of p and q, with expectation at most ρ = ν(M − 2) = Ω(log n). Arguing as in Lemma 2.3, it follows from the Chernoff inequality that the probability that v is contained in more than 2ν(M − 2) polygons of W_S is smaller than 2 exp(−ρ/4) ≤ n^{−c_1/8}. This implies the lemma, as the number of vertices of A_P(S) is bounded by O(nM^2).
2.4. Finding a Good Set of Guards
As discussed in the Introduction, we want to use the greedy algorithm to find k “good” guards for P. Namely, at every step we pick a guard that sees as much as possible of the region of the polygon P not covered yet. We find the first guard using Theorem 2.9. To find the following guards, we need to modify the algorithm slightly, as the uncovered region is no longer a simple polygon. Indeed, assume that {g_1, . . . , g_i} are guards that were already assigned, and let Q_i ⊆ P be the region not seen by these guards, namely Q_i = P\∪_{j=1}^{i} V_P(g_j). The complexity of Q_i is O(ni), as Q_i is the complement of the union of i visibility polygons inside P [13]. We modify the algorithm to pick random sample points only from Q_i, and normalize the area of Q_i to be 1. It is straightforward to modify the algorithm of Theorem 2.7 to handle this more complicated case, and verify that the algorithm still works; the only major difference being that we set, in Theorem 2.7, ν = 1/(ni), since Q_i can be decomposed into O(ni) triangles. Hence the running time increases to O((i^3n^3/δ^4) log^3(n/δ)). Similarly, the runtime of Theorem 2.9 increases to O((i^2n^2/δ^4) log^3(n/δ)). To analyze the performance of this algorithm, we use the result of [16] that shows that a β-approximation algorithm to the heaviest set in a set-cover instance, when used repeatedly k times, results in a (1 − exp(−β))-approximation to the heaviest cover with k sets. Combining our approximation algorithm with the analysis of [16] yields the following result.
Theorem 2.13. Given a simple polygon P, a parameter δ > 0, and a positive integer k, one can compute, in O((k^3n^2/δ^4) log^3(n/δ)) time, a set of k guards {g_1*, . . . , g_k*} ⊂ P such that
µ(∪_{i=1}^{k} V_P(g_i*)) ≥ (1 − e^{δ−1}) · max_{{g_1,...,g_k}⊂P} µ(∪_{i=1}^{k} V_P(g_i)).
This algorithm succeeds with high probability.
Of course, one can continue running the algorithm past the first k iterations. In general,
this would result in m = O(k log(1/ε)) guards guarding an area at least 1 − ε times the
area guarded by the optimal k guards.
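The greedy loop itself, once the polygon is discretized (each candidate guard mapped to the set of sample points it sees), is the standard max-coverage greedy analyzed in [16]. A minimal sketch with hypothetical names:

```python
def greedy_guards(seen_by, k):
    # seen_by: dict mapping each candidate guard to the set of sample
    # points it sees. Repeatedly pick the guard covering the most
    # not-yet-covered points.
    covered, guards = set(), []
    for _ in range(k):
        best = max(seen_by, key=lambda g: len(seen_by[g] - covered))
        guards.append(best)
        covered |= seen_by[best]
    return guards, covered
```

For example, with candidates seeing {1, 2, 3}, {3, 4}, and {5, 6}, two greedy rounds pick the first and third candidates, covering five of the six points.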
3. Maximizing the Voronoi Region
Let T be a given fixed set of n points in the plane. For a point x not necessarily in T, let V_T(x) denote the Voronoi region of x in the Voronoi diagram of T ∪ {x}, and let µ(x) denote the area of V_T(x). We are looking for a point x maximizing µ(x). For points x outside the convex hull of T, µ(x) would be infinite. There are quite a few ways of avoiding these boundary situations: using torus topology, restricting the point (that is, supermarket) to lie within a polygon (that is, city limits), or by adding a boundary that acts as an additional site. In the following we choose the first option, and assume the input is a set of points in the unit square [0, 1]^2 with torus topology. The reader can easily modify the arguments to handle the boundary in a different way.
Formally, we are given a set T of n points in [0, 1]^2, and for a point x not in T we let µ(x) denote the area of the Voronoi region of x in the Voronoi diagram of T ∪ {x} (on the unit square with torus topology). We note that the function µ(x) is defined on the domain [0, 1]^2\T; it is continuous on this domain, but the points of T are potential singularities—the limit of µ(x) as x approaches a point p ∈ T is not the same for different directions of approach.
We define µ_opt = sup_{x∈[0,1]^2\T} µ(x), and our goal is to find a point x_app such that µ(x_app) ≥ (1 − δ)µ_opt, for some parameter δ > 0. Note that we do not know whether a point x with µ(x) = µ_opt exists.
The reach of a Voronoi region V_T(x) is the distance between the site x and the
furthest point inside V_T(x), or, in other words, the radius of the smallest disc centered
at x containing V_T(x). We can estimate µ_opt as follows.

Lemma 3.1. Let ϱ be the largest reach of any Voronoi region V_T(t), for t ∈ T. Then
πϱ²/4 ≤ µ_opt ≤ πϱ².

Proof. Let p be a point realizing the reach ϱ, that is, its distance to the nearest site is ϱ. It
follows that V_T(p) contains the disc with center p and radius ϱ/2, and so µ(p) ≥ πϱ²/4.
The lower bound follows.

Now let ε > 0, and consider a point x with µ(x) > µ_opt − ε. Let y ∈ V_T(x) be the
point farthest from x. The distance of y to the nearest site in T is at most ϱ, and so its
distance to x is at most ϱ. It follows that V_T(x) is contained in the disc with radius ϱ and
center x, implying µ_opt − ε < µ(x) ≤ πϱ². Since this holds for any ε > 0, the upper
bound follows.
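The largest reach can be sanity-checked by brute force: evaluate the distance to the nearest site over a fine grid under the torus metric and take the maximum. A minimal sketch (the grid resolution and the single-site example are illustrative assumptions; the algorithm described below computes the reach exactly from the Voronoi diagram):

```python
import math

def torus_dist(a, b):
    """Distance between points a, b in [0,1]^2 under torus topology."""
    dx = abs(a[0] - b[0]); dx = min(dx, 1 - dx)
    dy = abs(a[1] - b[1]); dy = min(dy, 1 - dy)
    return math.hypot(dx, dy)

def largest_reach(T, res=200):
    """Approximate the largest reach (the largest-empty-circle radius)
    by evaluating the nearest-site distance on a res x res grid."""
    best = 0.0
    for i in range(res):
        for j in range(res):
            x = (i / res, j / res)
            best = max(best, min(torus_dist(x, t) for t in T))
    return best

# One site at the center: the farthest torus point is the corner,
# at distance sqrt(0.5^2 + 0.5^2).
rho = largest_reach([(0.5, 0.5)])
```

Since the corner (0, 0) lies on the search grid, the brute-force value here coincides with the exact reach.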
Note that the largest reach ϱ is also the radius of the largest empty circle. Our algorithm
first computes ϱ in O(n log n) time by computing the Voronoi diagram V of T and
inspecting every vertex of V. It then partitions the unit square into a grid of squares
with side length ϱ. For each grid cell Q, we apply the ε-approximation technique of
Section 2.1. We define an estimate function e_S, such that for any x ∈ Q we have

    |e_S(x) − µ(x)| ≤ δπϱ²/12,
and we pick a point x_Q ∈ Q maximizing e_S(x_Q) over all points of Q. We first argue
that this solves the problem: Let Q be a grid cell where sup_{x ∈ Q\T} µ(x) = µ_opt, and
let x∗ ∈ Q be a point with µ(x∗) ≥ µ_opt − δπϱ²/12. Let x_app be the point x_Q that
maximizes e_S(x_Q). Then

    µ(x_app) ≥ e_S(x_app) − δπϱ²/12 ≥ e_S(x∗) − δπϱ²/12
             ≥ µ(x∗) − δπϱ²/6 ≥ µ_opt − δπϱ²/4
             ≥ µ_opt − δµ_opt = (1 − δ)µ_opt,

and so x_app is the desired approximate solution.
It remains to show how to define e_S and how to find the point x_Q, for each grid cell Q.
We fix a grid cell Q, and let x be a point in Q. The reach of V_T(x) is at most ϱ, and
so V_T(x) can intersect only Q itself and its eight neighboring grid cells. Consequently,
all points of T participating in the definition of V_T(x) lie in Q and the 24 grid cells
at distance at most 2ϱ from it. Let Q′ denote the union of these 25 grid cells, and let
T_Q = T ∩ Q′.
We make use of the following simple lemma.

Lemma 3.2. Let S be a square grid of density ε in the plane, that is, the distance
between neighboring grid points is ε, and let C be a convex body of diameter at most D.
Then

    |µ(C) − ε²|C ∩ S|| ≤ 4Dε.

Proof. Consider the tessellation of the plane into little squares of side length ε, where
each point of S is the center of one little square. The boundary of C intersects at most
4D/ε little squares, which implies the bound.
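Lemma 3.2 is easy to verify numerically for a concrete convex body, for instance a disc. A minimal sketch (the disc and the grid spacing are arbitrary choices):

```python
import math

eps = 0.01                     # grid density
cx, cy, R = 0.5, 0.5, 0.3      # a disc: convex body of diameter D = 2R
D = 2 * R

# count the points of the eps-grid that lie inside the disc
count = 0
for i in range(101):           # grid covering [0, 1]^2
    for j in range(101):
        if (i * eps - cx) ** 2 + (j * eps - cy) ** 2 <= R * R:
            count += 1

estimate = eps * eps * count   # eps^2 * |C ∩ S|
exact = math.pi * R * R        # mu(C)
```

The observed error is far below the worst-case bound 4Dε = 0.024, since for a disc the miscounted boundary squares largely cancel.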
We set ε = δπϱ/96 and let S be a square grid of density ε, covering Q′. For a point
x ∈ Q, let

    e_S(x) = ε²|V_T(x) ∩ S|

be the estimate of the Voronoi region of x. Making use of the fact that the diameter of
V_T(x) is at most 2ϱ, we then have by Lemma 3.2

    |e_S(x) − µ(x)| ≤ 8ϱε ≤ δπϱ²/12,
and by what we observed above, it remains to find the point x_Q ∈ Q maximizing e_S(x_Q).
To this end, we define

    W(s) = {x ∈ Q | s ∈ V_T(x)}.

Note that W(s) is simply the largest disc with center s that contains no point of T in its
interior, clipped to Q. Let W_S = {W(s) | s ∈ S} and consider the arrangement A(W_S).
As in Section 2.1, our problem has been reduced to finding a point in Q that is contained
in the largest number of clipped discs in W_S.
Theorem 3.3. Given a set T of n points in the plane and a parameter δ > 0, one can
deterministically compute, in time O(n/δ⁴ + n log n), a point x_app such that µ(x_app) ≥
(1 − δ)µ_opt.
Proof. We start by computing the Voronoi diagram of T and inspecting its vertices to
determine the largest reach ϱ. We then define the square grid, and determine the set of
points T_Q relevant in each grid cell. Since a point of T is relevant in at most 25 grid
cells, the total size of the sets T_Q is O(n).

For each grid cell Q we take a square grid S of density ε = δπϱ/96. It consists of
M = 25ϱ²/ε² = O(1/δ²) points. For s ∈ S, the clipped disc W(s) can be determined
by finding the nearest neighbor to s in T_Q. We do this by simply comparing the distance
from s to each point in T_Q. The arrangement A(W_S) is computed by a sweep-line
algorithm in time O(M²). The number of discs containing each face of the arrangement
can again be determined by a simple traversal. We pick a point x_Q from the face
maximizing the estimate e_S(x_Q).

By the choice of ϱ, every grid cell is within distance at most 2ϱ from a point of T.
The number of grid cells handled is therefore at most O(n). Each point of T appears at
most 25M times in a nearest-neighbor computation, and so the overall running time is
O(n log n + nM + nM²) = O(n/δ⁴ + n log n).
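The estimate e_S can also be evaluated directly from its definition: a grid point s lies in the Voronoi region of x exactly when s is at least as close to x as to its nearest site in T. A minimal planar (non-torus) sketch with hypothetical sites, checked against a cell whose area is known exactly:

```python
import math

# Illustrative sites: four symmetric points in the unit square.
T = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
eps = 0.005
n = 200   # eps-grid covering the unit square

def nearest_site_dist(s):
    return min(math.hypot(s[0] - t[0], s[1] - t[1]) for t in T)

def e_S(x):
    """Grid-count estimate of the area of the Voronoi cell of x in the
    diagram of T + {x}, clipped to the unit square."""
    count = 0
    for i in range(n):
        for j in range(n):
            s = (i * eps, j * eps)
            if math.hypot(s[0] - x[0], s[1] - x[1]) <= nearest_site_dist(s):
                count += 1
    return eps * eps * count

# The cell of the center point is a diamond with vertices (0.5, 0.25),
# (0.75, 0.5), (0.5, 0.75), (0.25, 0.5), of area exactly 1/8.
est = e_S((0.5, 0.5))
```

The grid-count estimate lands within the Lemma 3.2 error bound of the exact area 0.125.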
4. Shape Matching
To round off our presentation, we now briefly discuss an application of our framework
to the shape matching problem. Let P and Q be polygonal regions of complexity m
and n, respectively. For a translation vector x, let V_PQ(x) denote P(x) ∩ Q, where
P(x) = {p + x | p ∈ P} is the region obtained by translating P by x. Let µ(x) =
µ(V_PQ(x))/µ(P) denote the ratio of the areas of V_PQ(x) and P. Our goal is to find
the translation x maximizing µ(x). As before, let µ_opt be max_x µ(x), and let x_opt be a
translation such that µ_opt = µ(x_opt).

We normalize so that µ(P) = 1, and so µ(x) becomes simply µ(V_PQ(x)). We sample
a set S of M points in P, and for a translation x (identified with a point x in the plane),
we count the fraction of sample points that are translated into Q to obtain the estimate
e_S(x).
Lemma 4.1. Let X_i, i = 1, ..., r, be independent random variables with values 0 and
1, let X = X_1 + ··· + X_r, and let ε be a parameter with 0 < ε < 1. Then

    Pr[|X − E[X]| > εr] < 2 exp(−ε²r/2).

Proof. Let ρ = E[X]. By the simplified Chernoff bound [18, Theorem 4.2], we have

    Pr[X < (1 − δ)ρ] < exp(−ρδ²/2)

for 0 < δ ≤ 1. We use this to prove that

    Pr[X < ρ − εr] < exp(−ε²r/2).

This is clearly true if ρ ≤ εr. Otherwise, set δ = εr/ρ < 1, and the result follows from

    ρδ² = ρε²r²/ρ² = ε²r²/ρ ≥ ε²r.

Finally, by considering the random variables X_i′ = 1 − X_i, we can conclude that

    Pr[X > ρ + εr] < exp(−ε²r/2)

as well, and the result follows.
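Lemma 4.1 can be sanity-checked by simulation: draw Bernoulli sums and compare the empirical frequency of large deviations with the stated bound. A sketch (sample size, bias, deviation parameter, and seed are arbitrary choices):

```python
import math
import random

random.seed(1)
r, p, eps = 400, 0.5, 0.15
bound = 2 * math.exp(-eps * eps * r / 2)   # right-hand side of Lemma 4.1

trials = 2000
exceed = 0
for _ in range(trials):
    # X is a sum of r independent 0/1 variables with mean p
    X = sum(random.random() < p for _ in range(r))
    if abs(X - p * r) > eps * r:
        exceed += 1
freq = exceed / trials                      # empirical deviation frequency
```

With these parameters the deviation εr = 60 is about six standard deviations, so the empirical frequency is essentially zero, comfortably below the bound of about 0.022.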
For s ∈ S, let W(s) = {x | s + x ∈ Q} be a translated copy of Q. Let A_Q(S) be the
arrangement of all regions W(s). As before, we choose the vertex x_app of A_Q(S) that
maximizes e_S(x_app). We have the following lemma.

Lemma 4.2. Let 0 < ε < 1 be a parameter with ε > 1/n, and let S be a uniform
sample from P of size M ≥ (c log n)/ε² + 2, for a suitable constant c. Then with
probability at least 1 − 1/n⁶, we have |e_S(x) − µ(x)| ≤ ε for every vertex x of A_Q(S).
Proof. As in Lemmas 2.4 and 2.5, we argue in two steps. First, consider two points
s_i, s_j ∈ S to be fixed. The remaining points T = S\{s_i, s_j} form a random sample of
size M − 2. Let x be a vertex of A_Q({s_i, s_j}). As in the proof of Lemma 2.4, we find that

    |e_T(x) − e_S(x)| ≤ 2/(M − 2) ≤ ε/10,

for c large enough. Second, applying Lemma 4.1 with r = M − 2, we have

    Pr[|e_T(x) − µ(x)| > ε/2] = Pr[|X − E[X]| > εr/2] < 2 exp(−ε²r/8) ≤ 2/n^(c/8).

The arrangement A_Q({s_i, s_j}) has O(n²) vertices, so the probability that |e_T(x) − µ(x)| >
ε/2 for any of them is at most 2/n^(c/8−2). Repeating the argument for the O(M²) pairs
of indices (i, j), the probability that the bound is violated for any vertex of A_Q(S) is at
most 2M²/n^(c/8−2), which is less than 1/n⁶ for large enough c.
Lemma 4.2 implies that with high probability the vertex x_app of A_Q(S) maximizing
e_S(x_app) fulfills µ(x_app) ≥ µ(x_opt) − 2ε. We therefore have the following main
theorem.

Theorem 4.3. Let P and Q be polygonal shapes of complexity m and n in the plane.
In time O(m + (n²/ε⁴) log² n) we can compute a translation x_app such that µ(x_app) ≥
µ_opt − ε with high probability. If P and Q are disjoint unions of m and n unit disks, then
the running time reduces to O((n/ε⁴) log³ n).
Proof. We start by triangulating P, so that we can take the sample S uniformly
at random from P. The arrangement A_Q(S) can be computed in time O(n²M²) =
O((n²/ε⁴) log² n). The vertex x_app maximizing e_S(x_app) can then be found by a simple
traversal of the arrangement.

If P and Q are disjoint unions of unit disks (the case studied by de Berg et al. [5]), the
arrangement A_Q(S) consists of M translated copies of a set of n disjoint unit disks. Each
disk can intersect only a constant number of disks in each W(s), and so the total number
of vertices of A_Q(S) is at most O(nM²). It can be computed in time O(nM² log n) =
O((n/ε⁴) log³ n), for instance by a plane sweep, and again x_app can be found using a
traversal.
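For translations, the estimator e_S amounts to counting which sample points land in Q after translation. A minimal Monte Carlo sketch on two unit squares, where the true overlap area is known exactly (the shapes, sample size, and seed are illustrative assumptions):

```python
import random

random.seed(7)
M = 20000

# P = Q = unit square; translating P by x = (0.3, 0) gives overlap area 0.7.
x = (0.3, 0.0)

def in_Q(p):
    """Membership test for Q, here the unit square."""
    return 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0

S = [(random.random(), random.random()) for _ in range(M)]   # uniform in P
e_S = sum(in_Q((s[0] + x[0], s[1] + x[1])) for s in S) / M   # fraction hitting Q
```

With M = 20000 samples the standard deviation of the estimate is about 0.003, so e_S is close to the true overlap 0.7.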
Finally, we consider the problem of maximizing µ(V_PQ(x)) under rigid motions x.
The probabilistic analysis above goes through nearly unchanged, using a three-
dimensional configuration space for x. It is, however, no longer attractive to compute the
arrangement A_Q(S) explicitly, as the regions W(s) are now bounded by curved
surfaces. Fortunately, we do not need the arrangement A_Q(S) as long as we can somehow
enumerate its vertices. A vertex of A_Q(S) is defined by a rigid motion that moves up to
three points of S onto the boundary of Q. For each triple of sample points from S and
each triple of edges of Q, this can happen only a constant number of times, and so we
can enumerate all possible vertices of A_Q(S) in time O(M³n³) = O((n³/ε⁶) log³ n).
For each candidate motion x, we test whether it maps each s ∈ S into Q. Using a point
location data structure for Q, this can be done in time O(M log n), and so the total
running time is O(m + M⁴n³ log n) = O(m + (n³/ε⁸) log⁵ n).
5. Conclusions
We have given a near-quadratic time algorithm for approximating the largest visibility
polygon inside a simple polygon. Our algorithm runs in near-linear time if the visibility
polygon is reasonably large, a case that appears relevant in many applications. We also
showed that approximating the area of the largest visibility polygon is a 3SUM-hard
problem [12], and as such it is unlikely to have a subquadratic algorithm.
In the second part of the paper we applied a similar technique to the problem of finding
the largest Voronoi cell one might occupy by a single point. Unlike the first problem,
where direct random sampling does not yield any guaranteed approximation in the worst
case, here direct random sampling seems to be possible. To do so, one has to prove that
the area of the region

    A = {x | µ(x) ≥ (1 − δ)µ_opt}

is sufficiently large to be "hit" by a sample point. The bounds we were able to prove
on the area of A result in an algorithm far slower than the one presented here, and require
considerably more involved arguments. It seems that this problem is far from being well
understood, and we leave it as an open problem for further research. (See [10] for related
results.)
Recent follow-up work has shed further light on the problems discussed in this paper.
Aronov and Har-Peled [4] showed how to find an approximation to the deepest point
in a set of ranges of bounded VC-dimension using random sampling. We could use
this approach to pick our approximate solution instead of computing the arrangement
completely, possibly resulting in a minor speed-up. Cohen et al. [9] observe that relative
approximations are the concept underlying this idea. Given a finite range space (X, R), a
subset S ⊆ X is an (ε, p)-relative approximation if it approximates all ranges of weight
at least p within a multiplicative error of 1 + ε. Li et al. [17] showed that (ε, p)-relative
approximations of size (cd/(ε²p)) log(1/p) exist, where c is an absolute constant and d
is the VC-dimension of the range space. Relative approximations can be constructed
deterministically using discrepancy [15].
Acknowledgments
We thank Jeff Erickson for sketching out the proof of Lemma 2.10. We also thank
Joseph S. B. Mitchell, Christian Knauer, René van Oostrum, and Helmut Alt for fruitful
discussions regarding the problems addressed in this paper.
Appendix. On the Area Function
A.1. The Visibility Polygon Case
Let P be a simple polygon in the plane, and let p = (x, y) be a point inside it. Let V(p) be
the visibility polygon of p, and let µ(p) be the area of V(p). We are interested in giving a
closed-form formula for µ(p). To this end, we triangulate V(p) around p, and consider a
triangle in this triangulation. In the most general case, such a triangle is formed by two
rays emanating from p that pass through two vertices q = (a, b) and q′ = (c, d), and end
on an edge e. For the sake of simplicity of exposition, assume that b, d ≥ 0 and e lies on
the x-axis. The line through p and q intersects e at the point (x − y(x − a)/(y − b), 0),
while the line through p and q′ intersects e at the point (x − y(x − c)/(y − d), 0). The
area of the triangle is then (y²/2)((x − a)/(y − b) − (x − c)/(y − d)), assuming
(x − a)/(y − b) ≥ (x − c)/(y − d).

In general, of course, e is not necessarily on the x-axis. After applying an affine
transformation, we obtain a somewhat more complicated formula, which is still a rational
function of low degree. The function µ(p) can be written as the sum of at most n such
terms. Careful inspection of the expression above shows that these terms do not have a
common denominator.
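The closed-form triangle area above can be checked against the shoelace formula on a concrete instance (the coordinates are an arbitrary example satisfying b, d ≥ 0 and the stated slope ordering):

```python
# p = (x, y) is the viewpoint; q = (a, b) and q' = (c, d) are the two
# vertices; the triangle's base lies on the x-axis (the edge e).
x, y = 0.0, 2.0
a, b = -1.0, 1.0   # q
c, d = 1.0, 1.0    # q', chosen so that (x-a)/(y-b) >= (x-c)/(y-d)

# intersections of the rays p->q and p->q' with the x-axis
u = x - y * (x - a) / (y - b)
v = x - y * (x - c) / (y - d)

# closed-form area from the text
area_formula = (y * y / 2) * ((x - a) / (y - b) - (x - c) / (y - d))

# shoelace area of the triangle p, (u, 0), (v, 0), for comparison
pts = [(x, y), (u, 0.0), (v, 0.0)]
area_shoelace = abs(sum(px * qy - qx * py
                        for (px, py), (qx, qy) in zip(pts, pts[1:] + pts[:1]))) / 2
```

Here the triangle has apex (0, 2) and base from (−2, 0) to (2, 0), so both computations give area 4.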
A.2. The Voronoi Diagram Case
Let T be a set of n points in the plane, and let µ(p) denote the area of the cell C of p
in the Voronoi diagram of {p} ∪ T. We triangulate C by connecting p to each vertex
of C. The coordinates of the vertices of C are intersections of bisectors of p and a point
of T, and are therefore rational functions of low degree in p. The area of each triangle can
be written as a determinant in the coordinates of its vertices, and is therefore again a
low-degree rational function in p. So again, the function µ(p) can be written as the sum
of low-degree rational functions.
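Such a cell area can also be computed directly: clip a bounding square by the bisector half-plane of p and each t ∈ T, then apply the shoelace formula. A minimal sketch (the sites, the bounding square, and the symmetric test configuration are illustrative assumptions):

```python
def clip_halfplane(poly, n, off):
    """Keep the part of the convex polygon poly satisfying n . z <= off
    (Sutherland-Hodgman clipping against a single half-plane)."""
    out = []
    for i in range(len(poly)):
        cur, nxt = poly[i], poly[(i + 1) % len(poly)]
        fc = n[0] * cur[0] + n[1] * cur[1] - off
        fn = n[0] * nxt[0] + n[1] * nxt[1] - off
        if fc <= 0:
            out.append(cur)
        if (fc < 0 < fn) or (fn < 0 < fc):      # edge crosses the bisector
            t = fc / (fc - fn)
            out.append((cur[0] + t * (nxt[0] - cur[0]),
                        cur[1] + t * (nxt[1] - cur[1])))
    return out

def voronoi_cell_area(p, T):
    """Area of the Voronoi cell of p in the diagram of T + {p},
    clipped to the unit square."""
    cell = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    for t in T:
        # closer to p than to t:  (t - p) . z <= (t - p) . (t + p) / 2
        n = (t[0] - p[0], t[1] - p[1])
        off = (n[0] * (t[0] + p[0]) + n[1] * (t[1] + p[1])) / 2
        cell = clip_halfplane(cell, n, off)
    return abs(sum(a[0] * b[1] - b[0] * a[1]
                   for a, b in zip(cell, cell[1:] + cell[:1]))) / 2

# Four symmetric sites: the cell of the center is a diamond of area 1/8.
mu = voronoi_cell_area((0.5, 0.5),
                       [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)])
```

Each clip is a single half-plane, so the cell stays convex throughout and the shoelace formula applies directly.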
1. H.-K. Ahn, S.-W. Cheng, O. Cheong, M. Golin, and R. van Oostrum. Competitive facility location: The Voronoi game. Theoret. Comput. Sci., 310:457–467, 2004.
2. N. Alon and J. H. Spencer. The Probabilistic Method, 2nd edition. Wiley Interscience, New York, 2000.
3. H. Alt, U. Fuchs, G. Rote, and G. Weber. Matching convex shapes with respect to the symmetric difference. Algorithmica, 21:89–103, 1998.
4. B. Aronov and S. Har-Peled. On approximating the depth and related problems. In Proc. ACM-SIAM Symp. Discrete Algorithms, pages 886–894, 2005.
5. M. de Berg, S. Cabello, P. Giannopoulos, C. Knauer, R. van Oostrum, and R. C. Veltkamp. Maximizing the area of overlap of two unions of disks under rigid motion. In Proc. Scandinavian Workshop on Algorithms, pages 138–149. LNCS, vol. 3111. Springer, Berlin, 2004.
6. M. de Berg, O. Cheong, O. Devillers, M. van Kreveld, and M. Teillaud. Computing the maximum overlap of two convex polygons under translations. Theory Comput. Systems, 31:613–628, 1998.
7. O. Cheong, S. Har-Peled, N. Linial, and J. Matoušek. The one-round Voronoi game. Discrete Comput. Geom., 31:125–138, 2004.
8. K. L. Clarkson and P. W. Shor. Applications of random sampling in computational geometry, II. Discrete Comput. Geom., 4:387–421, 1989.
9. E. Cohen, Y. Mansour, and M. Sharir. Approximation with relative errors in range spaces of finite VC dimension. Manuscript, 2006.
10. F. Dehne, R. Klein, and R. Seidel. Maximizing a Voronoi region: the convex case. Internat. J. Comput. Geom. Appl., 15:463–475, 2005.
11. A. Efrat and S. Har-Peled. Guarding galleries and terrains. Inform. Process. Lett., 100:238–245, 2006.
12. A. Gajentaan and M. H. Overmars. On a class of O(n²) problems in computational geometry. Comput. Geom. Theory Appl., 5:165–185, 1995.
13. L. Gewali, A. Meng, J. S. B. Mitchell, and S. Ntafos. Path planning in 0/1/∞ weighted regions with applications. ORSA J. Comput., 2:253–272, 1990.
14. S. K. Ghosh. Approximation algorithms for art gallery problems. In Proc. Canadian Inform. Process. Soc. Congress, pages 429–434, 1987.
15. S. Har-Peled and M. Sharir. Relative ε-approximations in geometry. Manuscript. Available from http://www.uiuc.edu/~sariel/papers/06/integrate, 2006.
16. D. Hochbaum and A. Pathria. Analysis of the greedy approach in covering problems. Naval Res. Quart., 45:615–627, 1998.
17. Y. Li, P. M. Long, and A. Srinivasan. Improved bounds on the sample complexity of learning. J. Comput. System Sci., 62:516–527, 2001.
18. R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York, 1995.
19. D. M. Mount, R. Silverman, and A. Y. Wu. On the area of overlap of translated polygons. Comput. Vision Image Understanding, 64:53–61, 1996.
20. S. C. Ntafos and M. Z. Tsoukalas. Optimum placement of guards. Inform. Sci., 76:141–150, 1994.
21. A. Okabe, B. Boots, K. Sugihara, and S. N. Chiu. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd edition. Probability and Statistics. Wiley, New York, 2000.
22. J. O'Rourke. Art Gallery Theorems and Algorithms. The International Series of Monographs on Computer Science. Oxford University Press, New York, 1987.
23. J. Urrutia. Art gallery and illumination problems. In J. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 973–1027. North-Holland, Amsterdam, 2000.
24. P. Valtr. Guarding galleries where no point sees a small area. Israel J. Math., 104:1–16, 1998.