Robust Shape Fitting via Peeling and Grating Coresets
Discrete Comput Geom
Pankaj K. Agarwal · Sariel Har-Peled · Hai Yu
S. Har-Peled: Department of Computer Science, University of Illinois, Urbana, IL 61801, USA
Let P be a set of n points in R^d. A subset S of P is called a (k, ε)-kernel if for every direction, the directional width of S ε-approximates that of P when k "outliers" can be ignored in that direction. We show that a (k, ε)-kernel of P of size O(k/ε^{(d−1)/2}) can be computed in time O(n + k^2/ε^{d−1}). The new algorithm works by repeatedly "peeling" away (0, ε)-kernels from the point set. We also present a simple ε-approximation algorithm for fitting various shapes through a set of points with at most k outliers. The algorithm is incremental and works by repeatedly "grating" critical points into a working set, until the working set provides the required approximation. We prove that the size of the working set is independent of n, which yields a simple and practical, near-linear ε-approximation algorithm for shape fitting with outliers in low dimensions. We demonstrate the practicality of our algorithms by showing their empirical performance on various inputs and problems.
Shape fitting; Coresets; Geometric approximation algorithms

1 Introduction
In many areas such as computational geometry, computer graphics, machine learning,
and data mining, considerable work has been done on computing descriptors of the
extent of a set P of n points in Rd . These descriptors, called extent measures, either
compute certain statistics of P itself such as diameter and width, or compute some
geometric shape enclosing P with respect to a certain optimization criterion, such as
computing the smallest radius of a sphere or cylinder, the minimum volume or surface
area of a box, and the smallest spherical or cylindrical shell that contain P . Motivated
by more recent applications, there has also been work on maintaining extent measures
of a set of moving points, e.g., using the kinetic data structure framework [6, 11].
The existing exact algorithms for computing extent measures are generally
expensive. For example, the best known algorithm for computing the smallest enclosing
cylindrical shell in R^3 requires O(n^5) time [5]. Consequently, attention has shifted
to developing faster approximation algorithms; see, e.g., [4, 5, 10, 12]. Agarwal et al.
[7] proposed a unified framework for computing numerous extent measures
approximately in low dimensions. Their approach is to first extract a small subset from the
input, known as a coreset, and then return the extent measure of this subset as an
approximation to that of the original input. The running time of their algorithm,
substantially improving upon many previous results, is typically of the form O(n + 1/ε^c),
where n is the input size, c is a constant that may depend on the dimension d, and ε
is the approximation error.
Most of the existing work assumes that the input does not contain noisy data.
In the real world, however, noise may arise from many sources during data acquisition,
transmission, storage, and processing, and is unavoidable in general. Meanwhile, most
extent measures are very sensitive to noise: a small number of inaccurate data points
(the so-called outliers) may substantially affect extent measures of the entire
input. To compute more reliable extent measures on the input, it is thus
natural to require that the outliers be excluded from consideration. For example,
the smallest enclosing cylinder problem with k outliers asks for the
smallest cylinder that covers all but at most k of the input points.
Following up on the work in [7, 19], we consider the problem of finding robust
coresets for various extent measures that are able to handle outliers. Assuming there are at
most k outliers in the input, our goal is to compute a coreset of small size, so that the
best solution on the coreset with at most k outliers provides an ε-approximation
to the best solution on the original input with at most k outliers. We are mainly concerned with the case
in which the number k of outliers is small compared to the input size n; otherwise,
random-sampling techniques are effective in handling outliers [9].
Problem Statement Let P be a set of n points in R^d. For a direction u ∈ S^{d−1} and
an integer 0 ≤ k < n, the level of a point a ∈ R^d in direction u is the size of the set
{p ∈ P : ⟨u, p⟩ > ⟨u, a⟩}, i.e., the number of points in P that lie in the open halfspace
⟨u, x − a⟩ > 0. This notion of level is the dual of the level of a point in an arrangement
of hyperplanes [23]. In this paper, when we refer to a direction u ∈ S^{d−1}, we always
assume for the sake of simplicity that no two points in P lie on the same level in that
direction; more careful but similar arguments would work for directions that do not
satisfy this assumption.
Let P^k[u] (resp. P_k[u]) denote the point of P whose level is k (resp. n − k − 1)
in a direction u ∈ S^{d−1}. Let U_k(u, P) = ⟨u, P^k[u]⟩ denote the k-level of P in
direction u, and let L_k(u, P) = ⟨u, P_k[u]⟩ = −U_k(−u, P). For parameters k and ℓ, the
(k, ℓ)-directional width of P in direction u, denoted by E_{k,ℓ}(u, P), is defined as

E_{k,ℓ}(u, P) = U_k(u, P) − L_ℓ(u, P).

For brevity, we write E_{k,k}(u, P) as E_k(u, P) and E_0(u, P) as E(u, P); similarly,
we write U_0(u, P) as U(u, P) and L_0(u, P) as L(u, P).
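In code, these quantities reduce to order statistics of the projections ⟨u, p⟩. The following sketch (ours, not from the paper; it assumes NumPy and relies on the general-position assumption above, so ties are broken arbitrarily by the sort) computes U_k(u, P), L_ℓ(u, P), and E_{k,ℓ}(u, P) for one direction:

```python
import numpy as np

def directional_width(P, u, k, l):
    """Return (U_k(u, P), L_l(u, P), E_{k,l}(u, P)).

    P: (n, d) array of points; u: direction vector of shape (d,).
    A point at level k in direction u has exactly k points strictly above it,
    so U_k(u, P) is the (k+1)-st largest projection and L_l(u, P) is the
    (l+1)-st smallest projection.
    """
    proj = np.sort(P @ u)          # projections onto u, increasing order
    Uk = proj[-(k + 1)]            # (k+1)-st largest projection
    Ll = proj[l]                   # (l+1)-st smallest projection
    return Uk, Ll, Uk - Ll
```

For example, for four collinear points at x = 0, 1, 2, 3 and u = (1, 0), the (1, 1)-directional width ignores one point on each side and equals 1.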
Given a set P of n points in R^d, a parameter ε > 0, and an integer 0 ≤ k < n/2,
a subset S ⊆ P is called a (k, ε)-kernel of P if for every u ∈ S^{d−1} and every
0 ≤ a, b ≤ k,

(1 − ε) · E_{a,b}(u, P) ≤ E_{a,b}(u, S) ≤ E_{a,b}(u, P).

This implies that

U_a(u, S) ≥ U_a(u, P) − ε · E_{a,b}(u, P),
L_b(u, S) ≤ L_b(u, P) + ε · E_{a,b}(u, P).

Note that a (0, ε)-kernel is the same as the ε-kernel defined by Agarwal et al. [7].
We are interested in computing a (k, ε)-kernel of small size for any given point
set P ⊂ R^d and parameters k and ε. Once we can compute small (k, ε)-kernels
efficiently, we immediately obtain robust coresets for various extent
measures, using the standard linearization and duality transforms; see [7] for details.
Related Results The notion of ε-kernels was introduced by Agarwal et al. [7], and
efficient algorithms for computing an ε-kernel of a set of n points in R^d were given
in [7, 14, 24]. Yu et al. [24] also gave a simple and fast incremental algorithm for
fitting various shapes through a given set of points. See [8] for a review of known
results on coresets.
Although there has been much work on approximating a level in an arrangement
of hyperplanes using random-sampling and ε-approximation techniques [15, 21],
this line of work has focused on computing a piecewise-linear surface of small
complexity that lies within levels (1 ± ε)k for a given integer k ≥ 0. These algorithms do
not extend to approximating a level in the sense defined in this paper.
Perhaps the simplest case in which one can easily show the existence of a small
(k, ε)-kernel is when all points of P are collinear in R^d: one simply returns the first
and last k + 1 points along this line as the desired (k, ε)-kernel. In fact, this kernel has
exactly the same k-level directional width as P, for all directions. Note that the size of
this kernel is 2k + 2, which is independent of the input size. Generalizing this simple
example, Har-Peled and Wang [19] showed that for any point set P ⊂ R^d, one can
compute a (k, ε)-kernel of size O(k/ε^{d−1}). Their algorithm is based on a recursive
construction and runs in O(n + k/ε^{d−1}) time. Their result led to approximation
algorithms for computing various extent measures with k outliers, whose running
times are of the form O(n + (k/ε)^c).
Our Results In Sect. 2 we prove that there exists a (k, ε)-kernel of size O(k/ε^{(d−1)/2})
for any set P of n points in R^d. This result matches the lower bound Ω(k/ε^{(d−1)/2}).
Our construction is relatively simple and intuitive: it works by repeatedly peeling
away (ε/4)-kernels from the input point set P. The running time is bounded by
O(n + k^2/ε^{d−1}). The algorithm also leads to a one-pass algorithm for computing
(k, ε)-kernels. We tested our algorithm on a variety of inputs for d ≤ 8; the empirical
results show that it works well in low dimensions in terms of both the size of the
kernel and the running time.
Our result immediately implies improved approximation algorithms for a wide
range of problems discussed in [19]. To name a few, we can compute an
ε-approximation of the diameter with k outliers in O(n + k^2/ε^{d−1}) time, an
ε-approximation of the minimum-width spherical shell with k outliers in
O(n + k^{2d+1}/ε^{2d(d+1)}) time, and a subset of size O(k/ε^d) for a set of linearly
moving points in R^d so that at any time the diameter (width, smallest enclosing box, etc.)
with k outliers of this subset is an ε-approximation of that of the original moving
point set.
In Sect. 3 we present an incremental algorithm for shape fitting with k outliers,
which extends the incremental algorithm of Yu et al. [24]. The
algorithm works by repeatedly grating points from the original point set into a working
set; the points that violate the current solution for the working set the most are
selected by the algorithm. We prove that the number of iterations of the algorithm is
O((k^2/ε^{d−1})^{d−1}), which is independent of n. Our empirical results show that the
algorithm converges fairly quickly in practice. Interestingly, while the algorithm itself
does not make explicit use of (k, ε)-kernels at all, its analysis crucially relies on the
properties of our new algorithm for constructing (k, ε)-kernels.
2 Construction of (k, ε)-Kernels
In this section we describe an iterative algorithm for constructing a (k, ε)-kernel for
a set P of n points in R^d. Without loss of generality, we assume that ε ≤ 1/2.
2.1 Algorithm
Set δ = ε/4. Our algorithm consists of 2k + 1 iterations. At the beginning of the
ith iteration, for 0 ≤ i ≤ 2k, we have a set P_i ⊆ P; initially P_0 = P. We compute
a δ-kernel T_i of P_i, using an existing algorithm for computing δ-kernels [7, 14, 24].
We set P_{i+1} = P_i \ T_i. After 2k + 1 iterations, the algorithm returns S = ∪_{i=0}^{2k} T_i as
the desired (k, ε)-kernel.
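The peeling loop itself is only a few lines once a δ-kernel subroutine is available. In the sketch below (our illustration, not the paper's implementation), the guaranteed δ-kernel algorithms of [7, 14, 24] are replaced by a placeholder heuristic that keeps the extreme points along a few sampled directions; only the peeling structure P_{i+1} = P_i \ T_i is faithful to the text:

```python
import numpy as np

def delta_kernel(P, idx, n_dirs=64, seed=0):
    """Placeholder delta-kernel: extreme points of P[idx] along sampled
    directions. (The paper uses the algorithms of [7, 14, 24]; this
    heuristic carries no formal approximation guarantee.)"""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, P.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = P[idx] @ dirs.T                      # (|idx|, n_dirs)
    chosen = set(idx[np.argmax(proj, axis=0)]) | set(idx[np.argmin(proj, axis=0)])
    return np.array(sorted(chosen))

def k_eps_kernel(P, k):
    """Peel 2k+1 (delta-)kernels T_0, ..., T_2k and return the union of
    their index sets."""
    remaining = np.arange(len(P))
    S = []
    for i in range(2 * k + 1):
        if len(remaining) == 0:
            break
        Ti = delta_kernel(P, remaining, seed=i)
        S.extend(Ti.tolist())
        remaining = np.setdiff1d(remaining, Ti)  # P_{i+1} = P_i \ T_i
    return np.array(sorted(S))
```

Because the sets T_i are peeled from disjoint remainders, the returned index list contains no duplicates.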
Intuitively, T_i approximates the extent measure of P_i. By peeling T_i away from P_i,
important points (in the sense of approximating the extent measures) on the next level
of P get "exposed" and can be captured in the next iteration of the
algorithm. By repeating this peeling process enough times, the union of these point
sets approximates the extents of all the first k levels. Similar peeling ideas have been
used for halfspace range searching [3, 16, 17] and for computing k-hulls [18]. However,
unlike our approach, in which we peel away only a small number of points, these
algorithms peel away all points of level 0 in each step.
2.2 Proof of Correctness
Let u ∈ S^{d−1} be an arbitrary direction. For 0 ≤ j < n/2, let

V_j(u, P) = (P^0[u], …, P^j[u], P_j[u], …, P_0[u])

denote the ordered sequence of points realizing the top/bottom j levels of P in
direction u. We call the ith iteration of the algorithm successful with respect to direction
u if V_{k−1}(u, P) ∩ T_i ≠ ∅, and unsuccessful otherwise. Since |V_{k−1}(u, P)| = 2k,
the sets T_i are pairwise disjoint, and the algorithm consists of 2k + 1 iterations, at
least one of them is unsuccessful with respect to u.
Lemma 2.1 If the ith iteration is unsuccessful with respect to direction u, then
E(u, P_i) ≤ (1 + ε/2) E_k(u, P). In fact,

U(u, P_i) ≤ U_k(u, P) + (ε/2) E_k(u, P),
L(u, P_i) ≥ L_k(u, P) − (ε/2) E_k(u, P).

Proof Since T_i ∩ V_{k−1}(u, P) = ∅, we have E(u, T_i) ≤ E_k(u, P); see Fig. 1. By
construction, T_i is a δ-kernel of P_i. Therefore,

E(u, P_i) ≤ E(u, T_i)/(1 − δ) ≤ (1 + ε/2) E_k(u, P),

proving the first inequality of the lemma. Note that U(u, T_i) ≤ U_k(u, P). We have

U(u, P_i) ≤ U(u, T_i) + (E(u, P_i) − E(u, T_i))
≤ U_k(u, P) + δ E(u, P_i)
≤ U_k(u, P) + (ε/2) E_k(u, P).

The third claim of the lemma can be proved in a similar manner.
Lemma 2.2 S is a (k, ε)-kernel of P.

Proof To prove the claim, we fix an arbitrary direction u ∈ S^{d−1} and argue that

E_{a,b}(u, S) ≥ (1 − ε) E_{a,b}(u, P)

for all 0 ≤ a, b ≤ k. We only discuss the case a = b = k; the other cases can be handled
by slightly modifying the argument given below.
We first show that

U_k(u, S) ≥ U_k(u, P) − (ε/2) E_k(u, P).    (1)

If P^ℓ[u] ∈ S for all 0 ≤ ℓ ≤ k, then U_k(u, S) ≥ U_k(u, P) and hence (1) is clearly
true. So let us assume that there exists ℓ ≤ k such that P^ℓ[u] ∉ S. Observe that for
any iteration i, we must have P^ℓ[u] ∈ P_i.
We define

Q = {p ∈ P : ⟨u, p⟩ ≥ U_k(u, P) − (ε/2) E_k(u, P)}.

Then (1) is equivalent to |S ∩ Q| ≥ k + 1.
Consider an iteration i of the algorithm that is unsuccessful with respect to
direction u. Since P^ℓ[u] ∈ P_i and T_i is a δ-kernel of P_i,

U(u, T_i) ≥ U(u, P_i) − δ E(u, P_i)
≥ U_ℓ(u, P) − (ε/4)(1 + ε/2) E_k(u, P)   (by Lemma 2.1)
≥ U_k(u, P) − (ε/2) E_k(u, P).

Hence T_i^0[u] ∈ Q. Furthermore, since this iteration is unsuccessful, T_i^0[u] ∉
{P^0[u], …, P^{k−1}[u]}, implying that |T_i ∩ (Q \ {P^0[u], …, P^{k−1}[u]})| ≥ 1.
Let m be the total number of successful iterations of the algorithm. Then
|S ∩ V_{k−1}(u, P)| ≥ m and therefore |S ∩ {P^0[u], …, P^{k−1}[u]}| ≥ m − k.
Furthermore, as there are 2k + 1 − m unsuccessful iterations, the preceding argument
implies that |S ∩ (Q \ {P^0[u], …, P^{k−1}[u]})| ≥ 2k + 1 − m. Hence, |S ∩ Q| ≥
(m − k) + (2k + 1 − m) = k + 1, which in turn implies (1).
Using a similar argument, we can prove that

L_k(u, S) ≤ L_k(u, P) + (ε/2) E_k(u, P).    (2)

Putting (1) and (2) together, we get

E_k(u, S) = U_k(u, S) − L_k(u, S) ≥ (1 − ε) E_k(u, P).

Since u is an arbitrary direction, S is indeed a (k, ε)-kernel of P.
2.3 Time Complexity
Chan [14] has shown that a δ-kernel of size O(1/δ^{(d−1)/2}) can be computed in
O(n + 1/δ^{d−1}) time. Using this result, we obtain an algorithm for computing a
(k, ε)-kernel of size O(k/ε^{(d−1)/2}) in O(nk + k/ε^{d−1}) time. We can improve
the running time to O(n + k^2/ε^{d−1}) using the following observation: for any point
set P ⊂ R^d, if R ⊂ P is a (k, ε_1)-kernel of P, and S ⊂ R is a (k, ε_2)-kernel of R,
then S is a (k, ε_1 + ε_2)-kernel of P.
We first invoke the O(n + k/ε^{d−1})-time algorithm of Har-Peled and Wang [19]
to compute a (k, ε/2)-kernel R of P of size O(k/ε^{d−1}), and then apply the above
O(nk + k/ε^{d−1})-time algorithm to R to compute a (k, ε/2)-kernel S of R of size
O(k/ε^{(d−1)/2}). The resulting set S is the desired (k, ε)-kernel of P, and the total
running time is bounded by O(n + k^2/ε^{d−1}). We conclude with the following theorem.

Theorem 2.3 Given a set P of n points in R^d and parameters k, ε > 0, one can
compute, in O(n + k^2/ε^{d−1}) time, a (k, ε)-kernel of P of size O(k/ε^{(d−1)/2}).
It is easy to verify that for a point set P that is a Θ(√ε)-net of the unit
hypersphere (i.e., the minimum distance between points of P is Θ(√ε)), all points of P must be in
every (0, ε)-kernel of P. By replicating every point of P k + 1 times and perturbing
slightly, the resulting point set P′ has the property that any (k, ε)-kernel of P′ must
contain Ω(k/ε^{(d−1)/2}) points. Thus, in the worst case, a (k, ε)-kernel is of size
Ω(k/ε^{(d−1)/2}), matching the upper bound given in Theorem 2.3.
We also note that performing 2k + 1 iterations in the above algorithm is
not only sufficient but also necessary to compute a (k, ε)-kernel. For example,
in R^2, consider Θ(n) (slightly perturbed) copies of the two points (−1, 0) and
(1, 0) on the x-axis, together with the following 2k + 2 points on the y-axis:
(0, 1/ε^{k−1}), (0, 1/ε^{k−2}), …, (0, 1); (0, −ε), (0, −ε^2), …, (0, −ε^k); (0, −ε^{k+1}),
(0, ε^{k+1}). If the number of iterations is only 2k, the algorithm may output only the first 2k
points listed above along the y-axis, together with a set of other points on the x-axis,
which is clearly not a (k, ε)-kernel in the y-direction.
2.4 Extensions
One-pass Algorithms In many applications it is desirable to compute a certain
function in a single pass over the input data, using a small working memory and
processing each point quickly. Agarwal et al. [7], Agarwal and Yu [2], and Chan [14]
described such one-pass algorithms for computing ε-kernels. Our (k, ε)-kernel
algorithm suggests how to develop a one-pass algorithm for computing a (k, ε)-kernel
by using such an ε-kernel algorithm as a subroutine. Suppose there is a one-pass
algorithm A that computes ε-kernels using N(ε) space and T(ε) time per point. To
compute a (k, ε)-kernel of a point set P in one pass, we proceed as follows. We
simultaneously run 2k + 1 instances of A, namely A_0, A_1, …, A_{2k}, each of which
maintains an (ε/4)-kernel T_i (0 ≤ i ≤ 2k) of its own input seen so far. The input of
A_0 is P_0 = P, and the input P_i of A_i, for i ≥ 1, is initially empty. The algorithm
A_0 processes each point of P_0 in turn. For i ≥ 1, we insert a point p ∈ P_{i−1} into P_i
whenever one of the following two events happens:

1. p is not added into T_{i−1} after being processed by A_{i−1};
2. p is deleted from T_{i−1} by A_{i−1}.

It is easy to see that in the end P_i = P_{i−1} \ T_{i−1} and T_i is an (ε/4)-kernel of P_i, for
each 0 ≤ i ≤ 2k. Therefore, S = ∪_{i=0}^{2k} T_i is a (k, ε)-kernel of P as desired. The total
space needed is O(k · N(ε/4)) and the amortized time to process each point of P is
O(k · T(ε/4)). Thus we obtain the following result.
Theorem 2.4 Given a one-pass algorithm for computing ε-kernels in N(ε) space
and T(ε) time per point, there is a one-pass algorithm for computing (k, ε)-kernels
in O(k · N(ε/4)) space and O(k · T(ε/4)) amortized time per point.
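The cascading structure can be made concrete with a toy streaming kernel. In this sketch (ours; the `StreamingKernel` class is only a placeholder that tracks one extreme point per sampled direction, not a true one-pass ε-kernel of [2, 7, 14]), each instance forwards to the next instance exactly the points covered by events 1 and 2:

```python
import numpy as np

class StreamingKernel:
    """Placeholder one-pass kernel: keeps the extreme point per sampled
    direction, reporting rejected or evicted points for cascading."""
    def __init__(self, dirs):
        self.dirs = dirs
        self.hi = [None] * len(dirs)   # current argmax point per direction
        self.lo = [None] * len(dirs)   # current argmin point per direction

    def kernel_points(self):
        return {tuple(p) for p in self.hi + self.lo if p is not None}

    def process(self, p):
        """Insert p; return the points to pass to the next instance."""
        before = self.kernel_points()
        for j, u in enumerate(self.dirs):
            s = float(p @ u)
            if self.hi[j] is None or s > float(self.hi[j] @ u):
                self.hi[j] = p
            if self.lo[j] is None or s < float(self.lo[j] @ u):
                self.lo[j] = p
        after = self.kernel_points()
        out = [np.array(q) for q in before - after]  # event 2: evicted points
        if tuple(p) not in after:                    # event 1: p not added
            out.append(p)
        return out

def one_pass_k_eps_kernel(stream, k, d, n_dirs=32, seed=0):
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    instances = [StreamingKernel(dirs) for _ in range(2 * k + 1)]
    for p in stream:
        carry = [p]
        for inst in instances:          # cascade through A_0, ..., A_2k
            nxt = []
            for q in carry:
                nxt.extend(inst.process(q))
            carry = nxt
    S = set()
    for inst in instances:
        S |= inst.kernel_points()
    return S
```

On a stream of points on a line, the union of the 2k + 1 maintained kernels contains the k + 1 extreme points on each side, as the analysis requires.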
Polynomials Let F be a family of d-variate polynomials. The (k, ℓ)-extent of F at
x ∈ R^d, denoted by E_{k,ℓ}(x, F), is defined by

E_{k,ℓ}(x, F) = f_i(x) − f_j(x),

where f_i (resp. f_j) is the function of F that attains the kth largest (resp. ℓth smallest)
value in the set F(x) = {f(x) : f ∈ F}. A subset G ⊆ F is a (k, ε)-kernel of F if for
any 0 ≤ a, b ≤ k and any x ∈ R^d,

(1 − ε) E_{a,b}(x, F) ≤ E_{a,b}(x, G) ≤ E_{a,b}(x, F).
We say that the dimension of linearization of F is m if there exists a map φ :
R^d → R^m so that each function f ∈ F maps to a linear function h_f : R^m → R in
the sense that f(x) = h_f(φ(x)) for all x ∈ R^d. Using Theorem 2.3 together with
the standard linearization and duality transforms described in [7], we immediately
obtain the following.

Theorem 2.5 Let F be a family of n polynomials, and let m be the dimension of
linearization of F. Given parameters k, ε > 0, one can compute a (k, ε)-kernel of F
of size O(k/ε^{m/2}) in O(n + k^2/ε^m) time.
Roots of Polynomials To compute (k, ε)-kernels of fractional powers of
polynomials, we need the following observation from [24] (see also [7]):

Lemma 2.6 Let 0 < ε < 1, 0 ≤ a ≤ A ≤ B ≤ b, and let r be a positive integer. If
B^r − A^r ≥ (1 − ε^r)(b^r − a^r), then B − A ≥ (1 − ε)(b − a).
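Lemma 2.6 is easy to sanity-check numerically; the helper below (our code, not the paper's, with a small floating-point tolerance) tests the implication on one instance:

```python
def satisfies_lemma_2_6(a, A, B, b, eps, r, tol=1e-9):
    """Check one instance of Lemma 2.6: whenever the hypothesis
    B^r - A^r >= (1 - eps^r)(b^r - a^r) holds for 0 <= a <= A <= B <= b,
    the conclusion B - A >= (1 - eps)(b - a) must hold as well."""
    if B**r - A**r >= (1 - eps**r) * (b**r - a**r):
        return B - A >= (1 - eps) * (b - a) - tol
    return True  # hypothesis fails, so the implication is vacuously true
```

Running it over many random instances with 0 ≤ a ≤ A ≤ B ≤ b and random ε, r never produces a counterexample.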
Hence, in order to compute a (k, ε)-kernel of {f_1^{1/r}, …, f_n^{1/r}}, where each f_i is
a polynomial and r is a positive integer, it suffices to compute a (k, ε^r)-kernel of
{f_1, …, f_n}. Applying Theorem 2.5, we then have the following.
Theorem 2.7 Let F = {f_1^{1/r}, …, f_n^{1/r}} be a family of n functions, where each f_i is
a polynomial and r is a positive integer. Let m be the dimension of linearization of
{f_1, …, f_n}. Given parameters k, ε > 0, one can compute a (k, ε)-kernel of F of size
O(k/ε^{rm/2}) in O(n + k^2/ε^{rm}) time.
Theorems 2.5 and 2.7 immediately imply improved results for various
shape-fitting problems mentioned in [19], some of which were listed in the Introduction.
The details can be found in [19].
3 Incremental ShapeFitting Algorithm
In this section we present a simple incremental algorithm for shape fitting with k
outliers. Compared to the shape-fitting algorithms derived directly from Theorems 2.5
and 2.7, the incremental algorithm does not enjoy a better bound on its running time,
but it usually runs faster in practice. The algorithm does not make explicit use of
(k, ε)-kernels. However, by exploiting the construction of (k, ε)-kernels from the
previous section, we show that the number of iterations performed by the algorithm is
independent of the input size n. We first describe and analyze the algorithm for the
special case in which we wish to find a minimum-width slab that contains all but at
most k points of a point set. We then show that the same approach extends to
a number of other shapes, including cylinders, spherical shells, and cylindrical shells.
3.1 Algorithm
A slab σ ⊆ R^d is the region bounded by two parallel hyperplanes. The width of σ
is the distance between the two hyperplanes. The hyperplane passing through the
middle of σ is called the center hyperplane of σ. For a given parameter c > 0, we
use c · σ to denote the slab obtained by scaling σ by a factor of c with respect
to its center hyperplane. Let u_σ ∈ S^{d−1} denote the direction in the upper hemisphere
normal to the hyperplanes bounding σ.
Let A_opt(R, k) be an algorithm that returns a slab of minimum width that
contains all but at most k points of R. The incremental algorithm proceeds as follows.
We start with an arbitrary subset R ⊆ P of constant size and compute σ = A_opt(R, k).
If (1/(1 − ε)) · σ can cover all but at most k points of P, then we stop because we have
found an ε-approximation of A_opt(P, k). Otherwise, we add the points of V_k(u_σ, P)
(as defined in Sect. 2.2) to R and repeat the above step.
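A runnable caricature of this loop is given below (our sketch: A_opt is replaced by a brute-force search over randomly sampled directions and over the splits a + b = k of the outliers, which carries no optimality guarantee; only the grating step, which adds the k + 1 extreme points on each side of the current direction, follows the text):

```python
import numpy as np

def min_slab(R, k, dirs):
    """Stand-in for A_opt(R, k): over sampled directions and all splits
    a + b = k, return the narrowest slab as (direction, lo, hi)."""
    best = None
    for u in dirs:
        proj = np.sort(R @ u)
        for a in range(k + 1):
            b = k - a
            lo, hi = proj[b], proj[len(proj) - 1 - a]
            if best is None or hi - lo < best[2] - best[1]:
                best = (u, lo, hi)
    return best

def fit_slab_with_outliers(P, k, eps, n_dirs=64, seed=0):
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, P.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    R_idx = list(range(min(len(P), 2 * k + 2)))   # arbitrary initial subset
    while True:
        u, lo, hi = min_slab(P[R_idx], k, dirs)
        # expand the slab by a factor 1/(1 - eps) about its center hyperplane
        c, w = (lo + hi) / 2.0, (hi - lo) / (1.0 - eps)
        proj = P @ u
        outside = np.sum((proj < c - w / 2) | (proj > c + w / 2))
        if outside <= k:
            return u, c - w / 2, c + w / 2
        # grate V_k(u, P): the k+1 highest and k+1 lowest points along u
        order = np.argsort(proj)
        viol = list(order[: k + 1]) + list(order[-(k + 1):])
        new = [i for i in viol if i not in R_idx]
        if not new:                                # safety guard
            return u, c - w / 2, c + w / 2
        R_idx.extend(new)
```

Each pass either stops or adds at least one new point to the working set, so the loop terminates; the analysis below bounds the number of passes far more tightly.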
Note that the algorithm always terminates. If the number of iterations of the
algorithm is small, its running time would also be small. We next prove a bound on the
number of iterations that is independent of n.
3.2 Analysis
We will show that there exists a family H of O(k^2/ε^{d−1}) great hyperspheres in
S^{d−1} with the following property: the algorithm stops as soon as it computes a slab
σ_1 such that, for some slab σ_2 computed in an earlier iteration, u_{σ_1} and u_{σ_2} lie in
the same cell of the arrangement A(H) of H. This immediately implies an
O((k^2/ε^{d−1})^{d−1}) bound on the number of iterations. We first prove a useful property
of affine transforms. We then describe how to choose the great hyperspheres. Finally,
we prove the desired property of the chosen hyperspheres and the convergence of the
algorithm.
We write an affine transform τ : R^d → R^d as τ(x) = A^T · x + q_0 for x ∈ R^d,
where A is a d × d nonsingular matrix and q_0 ∈ R^d is a fixed vector. Given τ, let
τ̄ : S^{d−1} → S^{d−1} be the map defined by τ̄(u) = A^{−1}u/‖A^{−1}u‖ for u ∈ S^{d−1}. If the
transform τ is clear from the context, we simply use ū to denote τ̄(u).

Lemma 3.1 Let τ : R^d → R^d be an affine transform. For any direction u ∈ S^{d−1},
any four points p, q, r, s ∈ R^d, and any parameter c ∈ R,

⟨u, p − q⟩ ≤ c · ⟨u, r − s⟩  ⟺  ⟨ū, τ(p) − τ(q)⟩ ≤ c · ⟨ū, τ(r) − τ(s)⟩.

In particular, for any point set P and P′ = τ(P), we have P′^i[ū] = τ(P^i[u]) for
0 ≤ i < |P|.

Proof Suppose τ(x) = A^T · x + q_0 for x ∈ R^d. Then

⟨ū, τ(p) − τ(q)⟩ = ⟨A^{−1}u/‖A^{−1}u‖, A^T(p − q)⟩
= u^T(A^T)^{−1}A^T(p − q)/‖A^{−1}u‖
= u^T(p − q)/‖A^{−1}u‖ = ⟨u, p − q⟩/‖A^{−1}u‖.

Similarly, ⟨ū, τ(r) − τ(s)⟩ = ⟨u, r − s⟩/‖A^{−1}u‖. Hence the first claim of
the lemma follows. Setting c = 0 in the first claim, we see that ⟨u, p⟩ ≤ ⟨u, q⟩ if
and only if ⟨ū, τ(p)⟩ ≤ ⟨ū, τ(q)⟩. The second claim then follows.
We need the following two lemmas to describe how to choose the desired family
of great hyperspheres.
Lemma 3.2 Let S be a set of n points in R^d. There exists a set H of O(n^2) great
hyperspheres in S^{d−1} so that for any u, v ∈ S^{d−1} lying in the same cell of A(H), we
have S^i[u] = S^i[v] for i = 0, …, n − 1.

Proof For any pair of points p, q ∈ S, let h_{pq} be the great hypersphere in S^{d−1}
defined by the equation

⟨u, p⟩ = ⟨u, q⟩,  u ∈ S^{d−1}.

We let H = {h_{pq} : p, q ∈ S}. Clearly |H| = O(n^2). Consider any cell Δ ∈ A(H).
By construction, the relative ordering of the elements of
{⟨u, p⟩ : p ∈ S} is the same for all u ∈ Δ. Hence, S^i[u] = S^i[v] for any u, v ∈ Δ,
as desired.
Lemma 3.3 Let S be a set of n points in R^d whose affine hull spans R^d, and let
δ > 0 be a parameter. There exist an affine transform τ and a set H of O(1/δ) great
hyperspheres in S^{d−1} so that for any u, v ∈ S^{d−1} lying in the same cell of A(H) and
for any two points p, q ∈ S,

|⟨u − v, τ(p) − τ(q)⟩| ≤ δ · E(v, τ(S)).

Proof Let B^d ⊆ R^d be the unit ball centered at the origin. By John's Ellipsoid
Theorem [23], there exists an affine transform τ so that (1/d) · B^d ⊆ conv(τ(S)) ⊆ B^d.
Agarwal et al. [7] proved that there exists a set H of O(1/δ) great hyperspheres in
S^{d−1} such that for any u, v ∈ S^{d−1} lying in the same cell of A(H), ‖u − v‖ ≤ δ/d.
Note that E(v, τ(S)) ≥ 2/d, and for any p, q ∈ S, ‖τ(p) − τ(q)‖ ≤ 2. Thus,

|⟨u − v, τ(p) − τ(q)⟩| ≤ ‖u − v‖ · ‖τ(p) − τ(q)‖ ≤ 2δ/d ≤ δ · E(v, τ(S)).
We assume ε ≤ 1/2, and fix δ = ε/6 ≤ 1/12. Let S be a (k, δ)-kernel of P computed
by the algorithm of Sect. 2.1, and let X = P \ S. Using Lemmas 3.2 and 3.3, we
construct a decomposition of S^{d−1} as follows. For each point p ∈ S, let H_p be a
family of O(1/δ) great hyperspheres that satisfy Lemma 3.3 for X ∪ {p}, and let
τ_p be the corresponding affine transform. Let Φ = {τ_p : p ∈ S}. Let G be the set of
O(|S|^2) great hyperspheres that satisfy Lemma 3.2 for the set S. Set

H = G ∪ (∪_{p∈S} H_p).

Note that |H| = O(|S|^2 + |S|/δ) = O(k^2/ε^{d−1}), and the number of cells in A(H) is
O(|H|^{d−1}) = O((k^2/ε^{d−1})^{d−1}). Next we prove crucial properties of the
decomposition A(H).
Lemma 3.4 Let u ∈ S^{d−1} be a direction, and set Q = X ∪ {S^k[u]}. For any affine
transform τ, we have

E(ū, τ(Q)) ≤ (1 + δ) · E_k(ū, τ(P)).    (3)

Proof Since the algorithm described in Sect. 2.1 performs 2k + 1 iterations, at least
one of them, say iteration i, was unsuccessful with respect to direction u. By
Lemma 2.1 (applied with δ in place of ε) and the fact that X ⊆ P_i, we know that

U(u, X) ≤ U(u, P_i) ≤ U_k(u, P) + (δ/2) · E_k(u, P),    (4)
L(u, X) ≥ L(u, P_i) ≥ L_k(u, P) − (δ/2) · E_k(u, P).    (5)

Therefore,

E(u, X ∪ {S^k[u]})
≤ max{⟨u, S^k[u]⟩, U(u, X)} − min{⟨u, S^k[u]⟩, L(u, X)}
≤ max{U_k(u, S), U(u, X)} − min{L_k(u, S), L(u, X)}
≤ (U_k(u, P) + (δ/2) · E_k(u, P)) − (L_k(u, P) − (δ/2) · E_k(u, P))   (by (4), (5), and S ⊆ P)
= (1 + δ) · E_k(u, P).

Hence, by Lemma 3.1, E(ū, τ(Q)) ≤ (1 + δ) · E_k(ū, τ(P)).
Lemma 3.5 Let u, v ∈ S^{d−1} be two directions lying in the same cell of A(H).
Then, for any 0 ≤ a, b ≤ k, we have

E_{a,b}(v, V_k(u, P)) ≥ (1 − ε) · E_{a,b}(v, P).

Proof We prove the claim for the case a = b = k; the argument easily adapts to the
other cases. To this end, we show that for ℓ ≤ k,

⟨v, P^ℓ[u]⟩ ≥ U_k(v, P) − (ε/2) · E_k(v, P).

We start with the case P^ℓ[u] ∈ S. Observe that V_k(u, S) = V_k(v, S)
(we remind the reader that V is an ordered set, so equality here also means
identical ordering by level). In particular, since P^ℓ[u] is clearly at level at most ℓ of S ⊆ P
in direction u, P^ℓ[u] is also at level at most ℓ of S in direction v. Hence,

⟨v, P^ℓ[u]⟩ ≥ U_ℓ(v, S) ≥ U_k(v, S) ≥ U_k(v, P) − δ · E_k(v, P),

where the last inequality follows from the fact that S is a (k, δ)-kernel of P.
Now consider the case P^ℓ[u] ∈ X. Set Q = X ∪ {S^k[u]}, and let τ ∈ Φ be the
affine transform that satisfies Lemma 3.3 for the set Q. Since ℓ ≤ k, we have

⟨u, S^k[u]⟩ ≤ ⟨u, P^k[u]⟩ ≤ ⟨u, P^ℓ[u]⟩,

implying that ⟨u, S^k[u] − P^ℓ[u]⟩ ≤ 0, or equivalently, by applying Lemma 3.1 with
c = 0,

⟨ū, τ(S^k[u]) − τ(P^ℓ[u])⟩ ≤ 0.

Therefore,

⟨v̄, τ(S^k[u]) − τ(P^ℓ[u])⟩
= ⟨ū, τ(S^k[u]) − τ(P^ℓ[u])⟩ + ⟨v̄ − ū, τ(S^k[u]) − τ(P^ℓ[u])⟩
≤ ⟨v̄ − ū, τ(S^k[u]) − τ(P^ℓ[u])⟩.    (6)

Note that u, v lie in the same cell of A(H), and P^ℓ[u], S^k[u] ∈ Q = X ∪ {S^k[u]}.
By applying Lemma 3.3 to the right-hand side of (6), we obtain

⟨v̄, τ(S^k[u]) − τ(P^ℓ[u])⟩ ≤ δ · E(v̄, τ(Q)) ≤ δ(1 + δ) · E_k(v̄, τ(P)) ≤ 2δ · E_k(v̄, τ(P)),    (7)

where the second inequality follows from Lemma 3.4. By Lemma 3.1, (7) implies

⟨v, S^k[u] − P^ℓ[u]⟩ ≤ 2δ · E_k(v, P).    (8)

Observing that S^k[u] = S^k[v] and using (8), we obtain

⟨v, P^ℓ[u]⟩ ≥ ⟨v, S^k[v]⟩ − 2δ · E_k(v, P) = U_k(v, S) − 2δ · E_k(v, P)
≥ U_k(v, P) − 3δ · E_k(v, P) ≥ U_k(v, P) − (ε/2) · E_k(v, P).

Similarly, we can prove that for any 0 ≤ ℓ ≤ k, ⟨v, P_ℓ[u]⟩ ≤ L_k(v, P) + (ε/2) ·
E_k(v, P). Hence, E_k(v, V_k(u, P)) ≥ (1 − ε) · E_k(v, P), as claimed.
We are now ready to bound the number of iterations of the incremental algorithm.

Theorem 3.6 The number of iterations of the incremental algorithm for fitting the
minimum-width slab with k outliers is bounded by O((k^2/ε^{d−1})^{d−1}), which is
independent of n.

Proof Let u_i ∈ S^{d−1} be the direction orthogonal to the slab computed in the ith
iteration. We say that a cell Δ ∈ A(H) is visited if u_i ∈ Δ for some iteration i. Suppose
a cell is visited by two iterations i and j, with i < j, during the execution of the
algorithm. Then in iteration j, we have V_k(u_i, P) ⊆ R. Let σ be the slab returned by
A_opt(R, k) in iteration j. Then ‖σ‖, the width of σ, is equal to E_{a,b}(u_j, R) for some
appropriate a, b ≤ k with a + b = k. By Lemma 3.5, we have

‖σ‖ = E_{a,b}(u_j, R) ≥ E_{a,b}(u_j, V_k(u_i, P)) ≥ (1 − ε) E_{a,b}(u_j, P),

or equivalently (1/(1 − ε))‖σ‖ ≥ E_{a,b}(u_j, P). This implies that the algorithm satisfies
the stopping criterion in iteration j. Thus the number of iterations is bounded by
|A(H)| + 1 = O((k^2/ε^{d−1})^{d−1}).
3.3 Other Shapes
The incremental algorithm of Sect. 3.1 for computing an ε-approximation of the
minimum-width slab with k outliers can be extended to fitting other shapes as well,
such as minimum-width spherical shells or cylindrical shells, minimum-radius
cylinders, etc. In this section we describe these extensions.

Spherical Shells and Cylindrical Shells A spherical shell is a closed region lying
between two concentric spheres in R^d. A cylindrical shell is a closed region lying
between two coaxial cylinders in R^d. Because fitting spherical shells or cylindrical
shells can be formulated as computing the minimum extent of a family F of m-variate
functions for some parameter m [7], we describe a general incremental algorithm for
the latter problem. For x ∈ R^m and 0 ≤ k < n, we denote

E_k(x, F) = min_{a+b=k} E_{a,b}(x, F),

where E_{a,b}(x, F) is as defined in Sect. 2.4. Let A_opt(F, k) be an algorithm that returns

x* = arg min_{x∈R^m} E_k(x, F).
The incremental algorithm starts by picking an arbitrary subset R ⊆ F of constant
size and computing x* = A_opt(R, k). If E_k(x*, R) ≥ (1 − ε)E_k(x*, F), then E_k(x*, R)
is an ε-approximation of min_{x∈R^m} E_k(x, F) and we can stop. Otherwise, we add
V_k(x*, F), the collection of the 2(k + 1) functions of F that attain the k + 1 largest
and the k + 1 smallest values in F(x*) = {f(x*) : f ∈ F}, to R and repeat the
above step.
To analyze the above algorithm, we need the following lemma, which is the dual
version of Lemma 3.5.

Lemma 3.7 Let F be a finite family of m-variate linear functions, and let 0 < δ ≤ 1/2
be a parameter. Then there exists a set H of O(k^2/δ^m) hyperplanes in R^m such that
for any u, v ∈ R^m lying in the same cell of A(H) and any 0 ≤ a, b ≤ k, we have

E_{a,b}(v, V_k(u, F)) ≥ (1 − δ) · E_{a,b}(v, F).
Lemma 3.8 Let F be a finite family of m-variate polynomials that admits a
linearization of dimension ℓ, and let 0 < δ ≤ 1/2 be a parameter. Then there exists a
decomposition of R^m into O(k^{2m}/δ^{mℓ}) cells such that for any u, v ∈ R^m lying in the
same cell of the decomposition and any 0 ≤ a, b ≤ k, we have

E_{a,b}(v, V_k(u, F)) ≥ (1 − δ) · E_{a,b}(v, F).

Proof Let φ : R^m → R^ℓ be the map under which each function f ∈ F maps to a
linear function h_f, in the sense that f(x) = h_f(φ(x)) for all x ∈ R^m. Note that
Σ = {φ(x) : x ∈ R^m}, the image of φ, is an m-dimensional surface in R^ℓ. Let
F′ = {h_f : f ∈ F}. Applying Lemma 3.7 to F′, we obtain a set H of O(k^2/δ^ℓ)
hyperplanes in R^ℓ. Set H^{−1} = {h^{−1} = φ^{−1}(h ∩ Σ) : h ∈ H}, where each h^{−1} ∈ H^{−1}
is an (m − 1)-dimensional algebraic surface in R^m. If u, v ∈ R^m lie in the same cell of
A(H^{−1}), then φ(u), φ(v) lie in the same cell of A(H). Since f(x) = h_f(φ(x)) for all
f ∈ F and x ∈ R^m, Lemma 3.7 implies that E_{a,b}(v, V_k(u, F)) ≥ (1 − δ) · E_{a,b}(v, F).
The lemma now follows because A(H^{−1}) induces a decomposition of R^m into
O((k^2/δ^ℓ)^m) cells [1].
Theorem 3.9 Let F = {f_1, …, f_n} be a family of m-variate nonnegative functions,
and let 0 < ε ≤ 1/2 be a parameter. Suppose there exist an m-variate positive function
ψ(x) and an integer r ≥ 1 such that each g_i(x) = ψ(x)f_i^r(x) is a polynomial.
Furthermore, suppose G = {g_1, …, g_n} admits a linearization of dimension ℓ. Then there
exists a decomposition D of R^m into O(k^{2m}/ε^{rmℓ}) cells such that for any u, v ∈ R^m
lying in the same cell of D and any 0 ≤ a, b ≤ k, we have

E_{a,b}(v, V_k(u, F)) ≥ (1 − ε) · E_{a,b}(v, F).

In addition, the incremental algorithm computes an ε-approximation of
min_{x∈R^m} E_k(x, F) in O(k^{2m}/ε^{rmℓ}) iterations.
Proof We first make the following observation, which holds because ψ(x) is positive, so dividing by ψ(x) preserves both inequalities: for any δ ≤ 1, any 1 ≤ i, j, h, l ≤ n, and any x ∈ R^m,
|gi(x) − gj(x)| ≥ (1 − δ) |gh(x) − gl(x)|
⇔
|fi^r(x) − fj^r(x)| ≥ (1 − δ) |fh^r(x) − fl^r(x)|.
(9)
An immediate consequence of (9) is that gi(x) ≥ gj(x) if and only if fi(x) ≥ fj(x). Consider the decomposition D of R^m obtained by applying Lemma 3.8 to the family G with parameter δ = ε^r. Note that |D| = O(k^{2m}/ε^{rmℓ}). For any u, v ∈ R^m lying in the same cell of D, by Lemma 3.8 we have
Ea,b(v, Vk(u, G)) ≥ (1 − ε^r) · Ea,b(v, G).
Using (9) and Lemma 2.6, we obtain that Ea,b(v, Vk(u, F)) ≥ (1 − ε) · Ea,b(v, F), as desired.
Using this and the same argument as in Theorem 3.6, we immediately obtain the second half of the theorem.
The problem of computing the minimum-width spherical shell containing all but at most k points of a point set P in R^d satisfies Theorem 3.9 with m = d, r = 2, and ℓ = d + 1 [7]. Hence the incremental algorithm for this problem terminates in k^{O(d)}/ε^{O(d^2)} iterations. Similarly, the incremental algorithm for computing the minimum-width cylindrical shell terminates in k^{O(d)}/ε^{O(d^3)} iterations, as it satisfies Theorem 3.9 with m = 2d − 2, r = 2, and ℓ = O(d^2) [7]. We thus obtain the following.
Corollary 3.10 Let P be a set of n points in R^d, and let 0 < ε ≤ 1/2 be a parameter. The incremental algorithm computes an ε-approximation of the smallest spherical shell containing all but k points of P in k^{O(d)}/ε^{O(d^2)} iterations, and the smallest cylindrical shell containing all but k points of P in k^{O(d)}/ε^{O(d^3)} iterations.
Cylinders Unlike cylindrical shells and spherical shells, the problem of fitting cylinders cannot be directly formulated as computing the minimum extent of a family of functions. Instead, it can be reduced to computing min_{x∈R^m} Uk(x, F) for a family F of nonnegative functions, where Uk(x, F) is defined as the (k + 1)-th largest value in the set F(x). The incremental algorithm for this type of problem is as follows. Let Aopt(F, k) be an algorithm that returns
x∗ = arg min_{x∈R^m} Uk(x, F).
The algorithm starts by picking an arbitrary subset R ⊆ F of constant size and computing x∗ = Aopt(R, k). If Uk(x∗, R) ≥ (1 − ε) · Uk(x∗, F), then we can stop because Uk(x∗, R) is an ε-approximation of min_{x∈R^m} Uk(x, F). Otherwise, we add Uk(x∗, F) — the k + 1 functions of F that attain the k + 1 largest values in F(x∗) = {f(x∗) | f ∈ F} — to R, and repeat the above step.
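A compact sketch of this incremental loop follows. The exact solver playing the role of Aopt and the function evaluation are left as assumed plug-ins; the paper does not specify them at this level of generality:

```python
def incremental_min_uk(F, solve_exact, eval_f, eps, k):
    """Incremental eps-approximation of min_x U_k(x, F).

    F           -- list of function descriptors
    solve_exact -- plays the role of A_opt: solve_exact(R, k) returns a
                   minimizer x* of U_k(x, R) over the subfamily R (assumed)
    eval_f      -- eval_f(x, f) returns f(x) (assumed)
    """
    def uk(x, S):
        # U_k(x, S): the (k+1)-th largest value in {f(x) : f in S}
        return sorted(eval_f(x, f) for f in S)[-(k + 1)]

    R = list(F[:2 * k + 2])              # arbitrary small starting subset
    while True:
        x_star = solve_exact(R, k)
        if uk(x_star, R) >= (1 - eps) * uk(x_star, F):
            return x_star                # U_k(x*, R) certifies the answer
        # otherwise grate in the k+1 functions attaining the largest
        # values at x*
        by_value = sorted(range(len(F)), key=lambda i: eval_f(x_star, F[i]))
        for i in by_value[-(k + 1):]:
            if F[i] not in R:
                R.append(F[i])
```

As a toy usage, taking F to be distance functions to points on a line and a brute-force grid search as the exact solver yields the smallest interval covering all but k points.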
Theorem 3.11 Let F = {f1, . . . , fn} be a family of m-variate nonnegative functions, and let 0 < ε ≤ 1/2 be a parameter. Suppose there exist an m-variate positive function ψ(x) and an integer r ≥ 1 such that each gi(x) = ψ(x)fi^r(x) is a polynomial. Furthermore, suppose G = {g1, . . . , gn} admits a linearization of dimension ℓ. Then there exists a decomposition D of R^m into O(k^{2m}/ε^{mℓ}) cells such that for any u, v ∈ R^m lying in the same cell of D, and any 0 ≤ a ≤ k, we have
Ua(v, Uk(u, F)) ≥ (1 − ε) · Ua(v, F).
In addition, the incremental algorithm computes an ε-approximation of min_{x∈R^m} Uk(x, F) in O(k^{2m}/ε^{mℓ}) iterations.
Proof Let G′ = {g1, . . . , gn} ∪ {−g1, . . . , −gn}. Since G admits a linearization of dimension ℓ, G′ also admits a linearization of dimension ℓ. Let D be the decomposition of R^m obtained by applying Lemma 3.8 to G′ with parameter δ = ε. Then |D| = O(k^{2m}/ε^{mℓ}). For any u, v ∈ R^m lying in the same cell of D, we have
Ea,a(v, Vk(u, G′)) ≥ (1 − ε) · Ea,a(v, G′).
By the symmetry of G′, and since each gi is nonnegative (ψ is positive and each fi is nonnegative), the a + 1 largest values of G′ at v are the negations of its a + 1 smallest values; hence Ea,a(v, Vk(u, G′)) = 2ψ(v)(Ua(v, Uk(u, F)))^r and Ea,a(v, G′) = 2ψ(v)(Ua(v, F))^r. The above inequality thus implies
(Ua(v, Uk(u, F)))^r ≥ (1 − ε) · (Ua(v, F))^r.
Since (1 − ε)^{1/r} ≥ 1 − ε, it follows that Ua(v, Uk(u, F)) ≥ (1 − ε) · Ua(v, F), as desired.
The second half of the theorem follows as a direct consequence.
Note that the cylinder problem has the same linearization as the cylindrical shell problem mentioned above. We thus obtain the following.
Corollary 3.12 Let P be a set of n points in R^d, and let 0 < ε ≤ 1/2 be a parameter. The incremental algorithm computes an ε-approximation of the smallest cylinder containing all but k points of P in k^{O(d)}/ε^{O(d^3)} iterations.
We have not tried to optimize our bounds on the number of iterations, but we
believe that they can be improved. As shown in Sect. 4, the number of iterations in
practice is usually much smaller than the proved bounds here.
4 Experiments
In this section, we demonstrate the effectiveness of our algorithms by evaluating their performance on various synthetic and real data. All our experiments were conducted on a Dell PowerEdge 650 server with a 3 GHz Pentium IV processor and 3 GB of memory, running Linux 2.4.20.
Computing (k, ε)-Kernels We implemented a simpler version of our (k, ε)-kernel algorithm, which does not invoke Har-Peled and Wang's algorithm [19] first. We used an implementation of Yu et al. [24] for computing δ-kernels in each iteration. Although the worst-case running time of the algorithm is larger than that mentioned in Theorem 2.3, it is simple and works well in practice.
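The simplified peeling loop we implemented has roughly the following shape. The sketch below substitutes a crude directional-sampling heuristic for the δ-kernel implementation of Yu et al. [24], so it only illustrates the control flow of the peeling construction, not its guarantees:

```python
import numpy as np

def directional_kernel(P, n_dirs, rng):
    # Crude stand-in for a delta-kernel: for each of n_dirs random unit
    # directions, keep the extreme point of P in that direction and in
    # the opposite direction. Returns positions into P.
    dirs = rng.normal(size=(n_dirs, P.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = P @ dirs.T
    return np.unique(np.r_[proj.argmax(axis=0), proj.argmin(axis=0)])

def peeled_kernel(P, k, n_dirs=100, seed=0):
    # (k, eps)-kernel by peeling: for 2k+1 rounds, compute a kernel of
    # the remaining points, add it to the output, and peel it away.
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(P))
    kernel = []
    for _ in range(2 * k + 1):
        if remaining.size == 0:
            break
        idx = directional_kernel(P[remaining], n_dirs, rng)
        kernel.extend(remaining[idx].tolist())
        remaining = np.delete(remaining, idx)
    return sorted(kernel)
```

With k = 5 this performs the 11 iterations reported below; each round contributes the extreme points of the current layer.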
We used three types of synthetic inputs as well as a few large 3D geometric models [20]:
[Table 2: Performance of the (k, ε)-kernel algorithm on various 3D geometric models with k = 5. Running time is measured in seconds.]
1. Points uniformly distributed on a sphere (sphere);
2. Points uniformly distributed on a cylindrical surface (cylinder);
3. Clustered point sets (clustered), consisting of 20 equal-sized clusters whose centers are uniformly distributed in the unit square and radii uniformly distributed in [0, 0.2];
4. 3D geometric models: bunny (∼36 K points), dragon (∼438 K points), buddha
(∼544 K points).
For each input data, we ran our (k, ε)kernel algorithm with k = 5. The algorithm
performs 11 iterations and chooses roughly 100 points for the kernel in each
iteration. The output size of the algorithm varies between 800 and 1100. To compute the
approximation error between the k-level extents of the kernel S and of the input P, we choose a set of 1000 random directions from S^{d−1} and compute
err(P, S) = max_{u} [ Ek(u, P) − Ek(u, S) ] / Ek(u, P).
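For concreteness, this error measure can be computed as follows. This is a small sketch assuming P and S are given as coordinate arrays; Ek(u, ·) is the directional width after discarding the k extreme points on each side:

```python
import numpy as np

def k_level_extent(u, P, k):
    # E_k(u, P): (k+1)-th largest minus (k+1)-th smallest projection of P
    # onto direction u, i.e. the directional width ignoring k outliers
    # on each side.
    proj = np.sort(P @ u)
    return proj[-(k + 1)] - proj[k]

def kernel_error(P, S, k, n_dirs=1000, seed=0):
    # err(P, S): worst relative loss of k-level extent over random
    # directions sampled from the unit sphere S^{d-1}.
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, P.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return max((k_level_extent(u, P, k) - k_level_extent(u, S, k))
               / k_level_extent(u, P, k) for u in dirs)
```

Since S ⊆ P, every term in the maximum is nonnegative, and err(P, S) = 0 exactly when S preserves all sampled k-level extents.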
Tables 1 and 2 summarize the approximation error and the running time of the
algorithm, for each input data. As can be seen, our algorithm works well in low
dimensions both in terms of the approximation error and the running time. Our algorithm
also performed quite well on several 3D geometric models. In high dimensions, the
performance of our algorithm deteriorates because of the curse of dimensionality.
We also recorded how the approximation error decreases for each of the first 40
levels, after each iteration of the algorithm. The results are shown in Fig. 3.
Observe that the approximation error for every level monotonically decreases during the
execution of the algorithm. Moreover, the error decreases rapidly in the first few
iterations and then it stabilizes. For example, in our experiments for d = 3, the error
reduces to less than 0.1 within 7 iterations, even for the level k = 40, and then it decreases very slowly with each iteration. This phenomenon suggests that in practice it is unnecessary to run the algorithm for the full 2k + 1 iterations in order to compute (k, ε)-kernels. Running more iterations increases the kernel size, but does not decrease the approximation error much further.
Incremental Algorithm We applied the incremental shape-fitting algorithm to computing an ε-approximate minimum-width annulus of a point set with k outliers in R^2. We first implemented a brute-force O(n^5) exact algorithm for this problem. Clearly, this algorithm is slow even on medium-sized input. Here our focus is to study the number of iterations of the incremental algorithm; a faster implementation of the exact algorithm would naturally result in a faster implementation of the incremental algorithm. We used the slow exact algorithm as a subroutine to solve the small subproblems in each iteration of the incremental algorithm. We tested this algorithm on a set of synthetic data, generated by uniformly sampling from annuli with fixed inner radius r = 1.00 and widths w varying from 0.05 to 5.00, and then artificially introducing k = 10 extra outlier points. The experimental results are summarized in Table 3a; see also Fig. 4 for a few snapshots of the running algorithm. As can be seen, the number of iterations of the incremental algorithm is never more than 5. In other words, the algorithm converges to an approximate solution very quickly.
We also applied the incremental algorithm to computing an ε-approximate smallest enclosing circle of a point set with k outliers in R^2. Again, we implemented a brute-force O(n^4) exact algorithm for this problem to solve the small subproblems in each iteration; implementing a faster algorithm (such as an algorithm by Matoušek [22] or by Chan [13]) would result in a faster incremental algorithm. We tested our algorithm on a set of synthetic data, generated by uniformly sampling from a circle of radius r = 1.00, and then artificially introducing k = 10 extra outlier points. The experimental results are shown in Table 3b. As in the annulus case, the number of iterations of the incremental algorithm is small.
5 Conclusions
We have presented an iterative algorithm, with O(n + k^2/ε^{d−1}) running time, for computing a (k, ε)-kernel of size O(k/ε^{(d−1)/2}) for a set P of n points in R^d. We also presented an incremental algorithm for fitting various shapes through a set of points with outliers, and exploited the (k, ε)-kernel algorithm to prove that the number of iterations of the incremental algorithm is independent of n. Both our algorithms are simple and work well in practice.
We conclude by mentioning two open problems: Can a (k, ε)-kernel of size O(k/ε^{(d−1)/2}) be computed in time O(n + k/ε^{d−1})? Can the number of iterations in the incremental algorithm for computing the minimum-width slab be improved to O(1/ε^{(d−1)/2})? For the first question, an anonymous referee pointed out that one can use the dynamic algorithm for ε-kernels [7] to obtain an algorithm with running time O(n + (k/ε^{3(d−1)/2}) · polylog(k, 1/ε)). This bound improves on the current running time for sufficiently large k.
Acknowledgements The authors thank Yusu Wang for helpful discussions and two anonymous referees
for constructive comments that greatly improved the presentation of the paper.
References
1. Agarwal, P.K., Sharir, M.: Arrangements and their applications. In: Sack, J.R., Urrutia, J. (eds.) Handbook of Computational Geometry, pp. 49–119. Elsevier, Amsterdam (2000)
2. Agarwal, P.K., Yu, H.: A space-optimal data-stream algorithm for coresets in the plane. In: Proc. 23rd Annu. Sympos. Comput. Geom., pp. 1–10, 2007
3. Agarwal, P.K., Arge, L., Erickson, J., Franciosa, P., Vitter, J.S.: Efficient searching with linear constraints. J. Comput. Syst. Sci. 61, 192–216 (2000)
4. Agarwal, P.K., Aronov, B., Har-Peled, S., Sharir, M.: Approximation and exact algorithms for minimum-width annuli and shells. Discrete Comput. Geom. 24, 687–705 (2000)
5. Agarwal, P.K., Aronov, B., Sharir, M.: Exact and approximation algorithms for minimum-width cylindrical shells. Discrete Comput. Geom. 26, 307–320 (2001)
6. Agarwal, P.K., Guibas, L.J., Hershberger, J., Veach, E.: Maintaining the extent of a moving point set. Discrete Comput. Geom. 26, 353–374 (2001)
7. Agarwal, P.K., Har-Peled, S., Varadarajan, K.R.: Approximating extent measures of points. J. Assoc. Comput. Mach. 51, 606–635 (2004)
8. Agarwal, P.K., Har-Peled, S., Varadarajan, K.: Geometric approximation via coresets. In: Goodman, J.E., Pach, J., Welzl, E. (eds.) Combinatorial and Computational Geometry. Math. Sci. Research Inst. Pub., Cambridge (2005)
9. Aronov, B., Har-Peled, S.: On approximating the depth and related problems. In: Proc. 16th ACM-SIAM Sympos. Discrete Algorithms, pp. 886–894, 2005
10. Barequet, G., Har-Peled, S.: Efficiently approximating the minimum-volume bounding box of a point set in three dimensions. J. Algorithms 38, 91–109 (2001)
11. Basch, J., Guibas, L.J., Hershberger, J.: Data structures for mobile data. J. Algorithms 31, 1–28 (1999)
12. Chan, T.M.: Approximating the diameter, width, smallest enclosing cylinder and minimum-width annulus. Int. J. Comput. Geom. Appl. 12, 67–85 (2002)
13. Chan, T.M.: Low-dimensional linear programming with violations. SIAM J. Comput. 879–893 (2005)
14. Chan, T.M.: Faster coreset constructions and data-stream algorithms in fixed dimensions. Comput. Geom. Theory Appl. 35, 20–35 (2006)
15. Chazelle, B.: Cutting hyperplanes for divide-and-conquer. Discrete Comput. Geom. 9, 145–158 (1993)
16. Chazelle, B., Preparata, F.P.: Halfspace range search: an algorithmic application of k-sets. Discrete Comput. Geom. 1, 83–93 (1986)
17. Chazelle, B., Guibas, L.J., Lee, D.T.: The power of geometric duality. BIT 25, 76–90 (1985)
18. Cole, R., Sharir, M., Yap, C.K.: On k-hulls and related problems. SIAM J. Comput. 16, 61–77 (1987)
19. Har-Peled, S., Wang, Y.: Shape fitting with outliers. SIAM J. Comput. 33, 269–285 (2004)
20. Large geometric models archive. http://www.cc.gatech.edu/projects/large_models/
21. Matoušek, J.: Approximate levels in line arrangements. SIAM J. Comput. 20, 222–227 (1991)
22. Matoušek, J.: On geometric optimization with few violated constraints. Discrete Comput. Geom. 14, 365–384 (1995)
23. Matoušek, J.: Lectures on Discrete Geometry. Springer, Heidelberg (2002)
24. Yu, H., Agarwal, P.K., Poreddy, R., Varadarajan, K.R.: Practical methods for shape fitting and kinetic data structures using coresets. Algorithmica (to appear)