Locally Adaptive Frames in the RotoTranslation Group and Their Applications in Medical Imaging
J Math Imaging Vis
R. Duits 0 1
M. H. J. Janssen 0 1
J. Hannink 0 1
G. R. Sanguinetti 0 1
0 Digital Sports Group, University of Erlangen-Nürnberg (FAU), Erlangen, Germany
1 CASA, Eindhoven University of Technology, Eindhoven, The Netherlands
Locally adaptive differential frames (gauge frames) are a well-known effective tool in image analysis, used in differential invariants and PDE-flows. However, at complex structures such as crossings or junctions, these frames are not well defined. Therefore, we generalize the notion of gauge frames on images to gauge frames on data representations U : Rd ⋊ Sd−1 → R defined on the extended space of positions and orientations, which we relate to data on the roto-translation group SE(d), d = 2, 3. This allows us to define multiple frames per position, one per orientation. We compute these frames via exponential curve fits in the extended data representations in SE(d). These curve fits minimize first- or second-order variational problems which are solved by spectral decomposition of, respectively, a structure tensor or Hessian of data on SE(d). We include these gauge frames in differential invariants and crossing-preserving PDE-flows acting on the extended data representation U and we show their advantage compared to the standard left-invariant frame on SE(d). Applications include crossing-preserving filtering and improved segmentations of the vascular tree in retinal images, and new 3D extensions of coherence-enhancing diffusion via invertible orientation scores.
Roto-translation group; Gauge frames; Exponential curves; Nonlinear diffusion; Left-invariant image processing; Orientation scores

R. Duits and M. H. J. Janssen are joint main authors.
1 Introduction
Many existing image analysis techniques rely on differential
frames that are locally adapted to image data. This includes
methods based on differential invariants [33,39,50,60],
partial differential equations [38,60,71], and nonlinear and
morphological scale spaces [12,13,72], used in various
image processing tasks such as tracking and line detection
[6], corner detection and edge focussing [8,39], segmentation
[66], active contours [15,16], DTI data processing [45,46],
featurebased clustering, etc. These local coordinate frames
(also known as ‘gauge frames’ according to [10,33,39])
provide differential frames directly adapted to the local image
structure via a structure tensor or a Hessian of the image.
Typically the structure tensor (based on first-order Gaussian
derivatives) is used for adapting to edge-like structures, while
the Hessian (based on second-order Gaussian derivatives) is
used for adapting to line-like structures. The primary benefit
of the gauge frames is that they allow us to include
adaptation for anisotropy and curvature in a rotation- and translation-invariant
way. See Fig. 1, where we have depicted locally
adaptive frames based on the eigenvector decomposition of the image
Hessian at some given scale, of the MR image in the
background.
It is sometimes problematic that such locally adapted
differential frames are directly placed in the image domain Rd
(d = 2, 3), as in the vicinity of complex structures, e.g.,
crossings, textures, bifurcations, one typically requires
multiple local spatial coordinate frames. To this end, one effective
alternative is to extend the image domain to the joint space
of positions and orientations Rd ⋊ Sd−1. The advantage is
that it allows us to disentangle oriented structures involved in
crossings, and to include curvature, cf. Fig. 2. Such extended
domain techniques rely on various kinds of lifting, such as
coherent state transforms (also known as invertible
orientation scores) [2, 6, 27, 34], continuous wavelet transforms
[6, 23, 27, 63], orientation lifts [11, 73], or orientation
channel representations [32]. In case one has to deal with more
complex diffusionweighted MRI techniques, the data in
extended position orientation domain can be obtained after
a modeling procedure as in [1,65,67,68]. In this article we
will not discuss in detail how such a new image
representation or lift U : Rd ⋊ Sd−1 → R is to be constructed
from a grayscale image f : Rd → R, and we assume it to
be a sufficiently smooth given input. Here U(x, n) is to be
considered as a probability density of finding a local oriented
structure (i.e., an elongated structure) at position x ∈ Rd with
orientation n ∈ Sd−1.
When processing data in the extended position
orientation domain, it is often necessary to equip the domain
with a structure that links the data across different
orientation channels, in such a way that a notion of alignment
Fig. 1 Left: locally adaptive frames (gauge frames) in the image domain, computed as the eigenvectors of the Hessian of the image at each location. Right: such gauge frames can be used for adaptive anisotropic diffusion and geometric reasoning. However, at complex structures such as blob-structures/crossings, the gauge frames are ill defined, causing fluctuations
Fig. 2 We aim for adaptive anisotropic diffusion of images which takes into account curvature. At areas with low orientation confidence (in blue) isotropic diffusion is required, whereas at areas with high orientation confidence (in red) anisotropic diffusion with curvature adaptation is required. Application of locally adaptive frames in the image domain suffers from interference (3rd column), whereas application of locally adaptive frames in the domain Rd ⋊ Sd−1 allows for adaptation along all the elongated structures (4th column) (Color figure online)
between local orientations is taken into account. This is
achieved by relating data on positions and orientations to
data on the roto-translation group SE(d) = Rd ⋊ SO(d).
This idea resulted in contextual image analysis methods
[4,17,20,26,27,30,53,63,64,70,73] and appears in models
of lowlevel visual perception and their relation with the
functional architecture of the visual cortex [5,11,19,49,57–
59]. Following the conventions in [30], we denote functions
on the coupled space of positions and orientations by U :
Rd ⋊ Sd−1 → R. Then, its extension U˜ : SE(d) → R is given by

U˜(x, R) := U(x, Ra),   (1)

for all x ∈ Rd and all rotations R ∈ SO(d), and a given reference axis a ∈ Sd−1. Throughout this article a is chosen as follows:

d = 2 ⇒ a = (1, 0)T,   d = 3 ⇒ a = (0, 0, 1)T.   (2)

Then, we can identify the joint space of positions and orientations Rd ⋊ Sd−1 by

Rd ⋊ Sd−1 := SE(d)/({0} × SO(d − 1)),   (3)

where this quotient structure is due to (1), and where SO(d − 1) is identified with all rotations on Rd that map reference axis a onto itself. Note that in Eq. (1) the tilde indicates that we consider data on the group instead of data on the quotient. If d = 2, the tildes can be ignored as R2 ⋊ S1 = SE(2). However, for d ≥ 3 this distinction is crucial, and necessary details on (3) will follow at the beginning of Sect. 6.
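As a quick sanity check of Eqs. (1)-(3), the following numpy sketch (with a hypothetical toy function U standing in for real lifted data) verifies that the extension U˜ is invariant under the stabilizer subgroup SO(2) ⊂ SO(3) of the reference axis a = (0, 0, 1)T, which is exactly the quotient structure (3):

```python
import numpy as np

a = np.array([0.0, 0.0, 1.0])  # reference axis for d = 3, Eq. (2)

def Rz(alpha):
    """Rotation about the reference axis a; the stabilizer subgroup SO(2)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(beta):
    """Rotation about the x-axis (used to build a generic R in SO(3))."""
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def U(x, n):
    """Toy data on R^3 x S^2 (a hypothetical stand-in for a lifted image)."""
    return float(np.dot(x, n) + n[2] ** 2)

def U_tilde(x, R):
    """Extension of U to SE(3), Eq. (1): U~(x, R) := U(x, R a)."""
    return U(x, R @ a)

x = np.array([0.4, -1.2, 2.0])
R = Rx(0.7) @ Rz(0.2)  # some rotation in SO(3)
# U~ does not depend on the SO(2)-part that stabilizes a, cf. Eq. (3):
invariant = np.isclose(U_tilde(x, R @ Rz(1.3)), U_tilde(x, R))
```

This illustrates why, for d = 3, functions on the group SE(3) that come from data on R3 ⋊ S2 carry a redundant SO(2) degree of freedom, while for d = 2 the stabilizer is trivial.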
In this article, our quest is to find locally optimal
differential frames in SE(d), relying on Hessian and/or
structure-tensor types of techniques similar to gauge frames on
images, recall Fig. 1. Then, the frames can be used to
construct crossingpreserving differential invariants and adaptive
diffusions of data in S E (d). In order to find these optimal
frames, our main tool is the theory of curve fits. Early works
on curve fits were presented in [37,54], where the notion
of curvature consistency is applied to inferring local curve
orientations, based on neighborhood co-circularity
continuation criteria. This approach was extended to 2D texture flow
inference in [7], by lifting images into the position and orientation
domain and inferring multiple Cartan frames at each point.
Our work is embedded in a Lie group framework where we
consider the notion of exponential curve fits via formal
variational methods. Exponential curves in the SE(d)-curved
geometry are the equivalents of straight1 lines in
Euclidean geometry. If d = 2, the spatial projections of these
1 Exponential curves are auto-parallels w.r.t. the Cartan connection, see
Appendix 1, Eq. (132).
exponential curves are osculating circles, which are used
for constructing the curvature consistency in [54], for
defining the tensor voting fields in [52], and for local modeling
association fields in [19]. If d = 3, the spatial projections of
exponential curves are spirals with constant curvature and
torsion. Based on cohelicity principles, similar spirals have
been used in neuroimaging applications [61] or for modeling
heart fibers [62]. In these works curve fits are obtained via
efficient discrete optimization techniques, which are beyond
the scope of this article.
In Fig. 3, we present an example for d = 2 of the overall
pipeline of including locally adaptive frames in a suitable
diffusion operator Φ acting in the lifted domain R2 ⋊ S1.
For d > 2 the same pipeline applies. Here, an exponential
curve fit γ˜gc∗(t) (in blue, with spatial projection in red) at a
group element g ∈ SE(d) is characterized by (g, c∗(g)), i.e.,
a starting point g and a tangent vector c∗(g) that should be
aligned with the structures of interest. In essence, this paper
explains in detail how to compute c∗(g), as this will be the
principal direction with which the differential frame is aligned,
and then gives appropriate conditions for fixing the remaining
directions in the frame.
The main contribution of this article is to provide a
general theory for finding locally adaptive frames in the
roto-translation group SE(d), for d = 2, 3. Some
preliminary work on exponential curve fits of the second order on
SE(2) has been presented in [34,35,63]. In this paper we
formalize these previous methods (Theorems 2 and 3) and
we extend them to first-order exponential curve fits
(Theorem 1). Furthermore, we generalize both approaches to the
case d = 3 (Theorems 4, 5, 6, 7, and 8). All theorems
contain new results except for Theorems 2 and 3. The key
ingredient is to consider the fits as formal variational curve
optimization problems with exact solutions derived by
spectral decomposition of structure tensors and Hessians of the
data U˜ on SE(d). In the SE(3)-case we show that in order
to obtain torsion-free exponential curve fits with well-posed
projection on R3 ⋊ S2, one must resort to a twofold
optimization algorithm. To show the potential of
these locally adaptive frames, we employ them in medical
image analysis applications, in improved differential
invariants and improved crossing-preserving diffusions. Here, we
provide for the first time coherence-enhancing diffusions via
3D invertible orientation scores [42,43], extending previous
methods [34,35,63] to the 3D Euclidean motion group.
1.1 Structure of the Article
We start the body of this article by reviewing preliminary
differential geometry tools in Sect. 2. Then, in Sect. 3 we describe
how a given exponential curve fit induces a locally adaptive
frame. In Sect. 4 we provide an introduction by
reformulating the standard gauge frame construction in images in
a group-theoretical setting. This gives a roadmap towards the
SE(2)-extensions explained in Sect. 5, where we deal with
exponential curve fits of the first order in Sect. 5.2, computed
via a structure tensor, and exponential curve fits of the second
order in Sect. 5.3, computed via the Hessian of the data U˜. In
the latter case we have two options for the curve
optimization problem: one solved by the symmetric sum, and one by
the symmetric product of the non-symmetric Hessian. The
curve fits in SE(2) in Sect. 5 are extended to curve fits in
SE(3) in Sect. 6. It starts with preliminaries on the quotient
(3) and then follows the same structure as the previous
section. Here we present the twofold algorithm for computing
the torsion-free exponential curve fits.

In Sect. 7 we consider experiments regarding
medical imaging applications and feasibility studies. We first
recall the theory of invertible orientation scores needed for
the applications. In the SE(2)-case we present
crossing-preserving multiscale vessel enhancing filters in retinal
imaging, and in the SE(3)-case we include a proof of
concept of crossing-preserving (coherence-enhancing)
diffusion steered by gauge frames via invertible 3D orientation
scores.
Finally, there are five appendices. Appendix 1 supplements
Sect. 3 by explaining the construction of the frame for
d = 2, 3. Appendix 2 describes the geometry of neighboring
exponential curves needed for formulating the variational
problems. Appendix 3 complements the twofold approach in
Sect. 6. Appendix 4 provides the definition of the Hessian
used in the paper. Finally, Table 1 in Appendix 5 contains a
list of symbols, their explanations, and references to the
equations in which they are defined. We advise the reader to keep
track of this table, especially in the more technical sections:
Sects. 5 and 6.

Fig. 3 At g = (x, y, θ): an exponential curve fit γ˜gc∗(t) (in blue, with spatial projection in red) with tangent γ˜˙gc∗(0) = c∗(g) = (c1, c2, c3)T at g. Based on this fit we construct for each g a local frame {B1|g, B2|g, B3|g}, which is used in our operators Φ on the lift (here Φ is a nonlinear diffusion operator) (Color figure online)
2 Differential Geometrical Tools
Relating our data to data on the Euclidean motion group, via
Eq. (1), allows us to use tools from Lie group theory and
differential geometry. In this section we explain those tools
that are important for our notion of an exponential curve
fit to smooth data U˜ : SE(d) → R. Often, we consider
the case d = 2 for basic illustration. Later on, in Sect. 6,
we consider the case d = 3, where extra technicalities on the
quotient structure will enter.

2.1 The Roto-Translation Group

The data U˜ : SE(d) → R is defined on the group SE(d) of
rotations and translations acting on Rd. As the concatenation
of two rigid body motions is again a rigid body motion, the
group SE(d) is equipped with the following group product:
g g′ = (x, R)(x′, R′) = (Rx′ + x, RR′),   (4)
with g = (x, R), g′ = (x′, R′) ∈ SE(d),

where we recognize the semidirect product structure SE(d)
= Rd ⋊ SO(d) of the translation group Rd with the rotation
group SO(d) = {R ∈ Rd×d | RT = R−1, det R = 1}. The
groups SO(d) and SE(d) have dimension

rd := dim(SO(d)) = (d − 1)d / 2,
nd := dim(SE(d)) = d(d + 1) / 2 = d + rd.   (5)

Note that n2 = 3, n3 = 6. One may represent elements g
from SE(d) by the following matrix representation

g ≡ M(g) = [ R  x ; 0T  1 ],   (6)

which indeed satisfies M(g g′) = M(g) M(g′).
We will often avoid this embedding into the set of invertible
(d + 1) × (d + 1) matrices, in order to focus on the geometry
rather than the algebra.
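To make the matrix embedding concrete for d = 2, here is a minimal numpy sketch (helper names are our own) checking that M is a group homomorphism for the product (4), and that the dimension formulas (5) hold:

```python
import numpy as np

def rot2(theta):
    """2D rotation matrix R_theta in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def M(g):
    """(d+1) x (d+1) matrix representation of g = (x, R_theta) in SE(2), Eq. (6)."""
    x, theta = g
    H = np.eye(3)
    H[:2, :2] = rot2(theta)
    H[:2, 2] = x
    return H

def product(g, gp):
    """Group product (x, R)(x', R') = (R x' + x, R R'), Eq. (4)."""
    (x, th), (xp, thp) = g, gp
    return (rot2(th) @ xp + x, th + thp)

g = (np.array([1.0, 2.0]), 0.3)
gp = (np.array([-0.5, 0.7]), 1.1)
# M is a homomorphism: M(g g') = M(g) M(g')
homomorphism_ok = np.allclose(M(product(g, gp)), M(g) @ M(gp))
```

The same pattern extends to d = 3 with R ∈ SO(3) in the upper-left block of a 4 × 4 matrix.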
2.2 Left-Invariant Operators

In image analysis applications operators U˜ → Φ˜(U˜) need
to be left-invariant and not right-invariant [24,34].
Left-invariant operators Φ˜ in the extended domain correspond
to rotation- and translation-invariant operators Υ in the image
domain, which is a desirable property. On the other hand,
right-invariance boils down to isotropic operators in the
image domain, which is an undesirable restriction. By
definition, Φ˜ is left-invariant and not right-invariant if it commutes
with the left-regular representation L (and not with the
right-regular representation R). Representations L, R are given by

(Lh U˜)(g) = U˜(h−1 g),   (Rh U˜)(g) = U˜(g h),   (7)

for all h, g ∈ SE(d). So operator Φ˜ must satisfy Φ˜ ◦ Lg =
Lg ◦ Φ˜, whereas in general Φ˜ ◦ Rg ≠ Rg ◦ Φ˜, for all g ∈ SE(d).

2.3 Left-Invariant Vector Fields and Dual Frame

A special case of left-invariant operators are left-invariant
derivatives. More precisely (see Remark 1 below), we need
to consider left-invariant vector fields g ↦ A|g, as the
left-invariant derivative A|g depends on the location g where it is
attached. Intuitively, the left-invariant vector fields {Ai}_{i=1}^{nd}
provide a local moving frame of reference in the tangent
bundle T(SE(d)), which comes in naturally when including
alignment of local orientations in the image processing of U˜.

Formally, the left-invariant vector fields are obtained by
taking a basis {Ai}_{i=1}^{nd} of Te(SE(d)), the tangent space at
the unity element e := (0, I), and then one uses the
pushforward (Lg)∗ of the left multiplication

Lg h = g h,   (8)

to obtain the corresponding tangent vectors in the tangent
space Tg(SE(d)). Thus, one associates to each Ai a
left-invariant vector field Ai given by

Ai|g = (Lg)∗ Ai, for all g ∈ SE(d), i = 1, . . . , nd,   (9)

where we consider each Ai|g as a differential operator on
smooth locally defined functions φ˜ given by

Ai|g φ˜ = ((Lg)∗ Ai) φ˜ := Ai(φ˜ ◦ Lg).   (10)

An explicit way to construct and compute the differential
operators Ai|g from Ai = Ai|e is via

Ai|g φ˜ = (Ai φ˜)(g) = lim_{t→0} ( φ˜(g e^{t Ai}) − φ˜(g) ) / t,   (11)

where A ↦ e^A = Σ_{k=0}^{∞} A^k / k! denotes the matrix
exponential from the Lie algebra Te(SE(d)) to the Lie group SE(d).
The differential operators {Ai}_{i=1}^{nd} induce a corresponding dual
frame {ω^i}_{i=1}^{nd}, which is a basis for the cotangent bundle
T∗(SE(d)). This dual frame is given by

⟨ω^i, Aj⟩ = δ^i_j for i, j = 1, . . . , nd,   (12)

where δ^i_j denotes the Kronecker delta. Then the derivative
of a differentiable function φ˜ : SE(d) → R is expressed as
follows:

dφ˜ = Σ_{i=1}^{nd} (Ai φ˜) ω^i ∈ T∗(SE(d)).

Remark 1 In differential geometry, there exist two equivalent
viewpoints [3, Ch. 2] on tangent vectors A|g ∈ Tg(SE(d)):
either one considers them as tangents to locally defined
curves, or one considers them as differential operators on
locally defined functions. The connection between these
viewpoints is as follows. We identify a tangent vector γ˜˙(t) ∈
Tγ˜(t)(SE(d)) with the differential operator (γ˜˙(t))(φ˜) :=
d/dt φ˜(γ˜(t)) for all locally defined, differentiable, real-valued
functions φ˜.

Next we express tangent vectors explicitly in the
left-invariant moving frame of reference, by taking a directional
derivative:

d/dt φ˜(γ˜(t)) = ⟨dφ˜(γ˜(t)), γ˜˙(t)⟩ = Σ_{i=1}^{nd} γ˜˙^i(t) Ai|γ˜(t) φ˜,   (13)

with γ˜˙(t) = Σ_{i=1}^{nd} γ˜˙^i(t) Ai|γ˜(t), and with φ˜ smooth and
defined on an open set around γ˜(t). Eq. (13) will play a
crucial role in Sect. 5 (exponential curve fits for d = 2) and
Sect. 6 (exponential curve fits for d = 3).

Example 1 For d = 2 we take A1 = ∂x|e, A2 = ∂y|e,
A3 = ∂θ|e. Then we have the left-invariant vector fields

A1|(x,y,θ) := cos θ ∂/∂x|(x,y,θ) + sin θ ∂/∂y|(x,y,θ),
A2|(x,y,θ) := − sin θ ∂/∂x|(x,y,θ) + cos θ ∂/∂y|(x,y,θ),   (14)
A3|(x,y,θ) := ∂/∂θ|(x,y,θ).

The dual frame is given by

ω^1 = cos θ dx + sin θ dy,
ω^2 = − sin θ dx + cos θ dy,   (15)
ω^3 = dθ.

For explicit formulas for left-invariant vector fields in SE(3)
see [18,30].

2.4 Exponential Curves in SE(d)

Let (c(1), c(2))T ∈ Rd+rd = Rnd be a given column vector,
where c(1) = (c1, . . . , cd) ∈ Rd denotes the spatial part and
c(2) = (cd+1, . . . , cnd) ∈ Rrd denotes the rotational part.
The unique exponential curve passing through g ∈ SE(d)
with initial velocity c(g) = Σ_{i=1}^{nd} ci Ai|g equals

γ˜gc(t) = g e^{t Σ_{i=1}^{nd} ci Ai},   (16)

with Ai = Ai|e denoting a basis of Te(SE(d)). In fact such
exponential curves satisfy

γ˜˙(t) = Σ_{i=1}^{nd} ci Ai|γ˜(t),   (17)

and thereby have constant velocity in the moving frame of
reference, i.e., γ˜˙^i = ci in Eq. (13). A way to compute the
exponentials is via matrix exponentials and (6).

Example 2 If d = 2 we have exponential curves:

γ˜g0c(t) = g0 e^{t(c1 A1 + c2 A2 + c3 A3)} = (x(t), y(t), θ(t)),   (18)

which are circular spirals with

x(t) = x0 + (c1/c3)(sin(c3 t + θ0) − sin(θ0)) + (c2/c3)(cos(c3 t + θ0) − cos(θ0)),
y(t) = y0 − (c1/c3)(cos(c3 t + θ0) − cos(θ0)) + (c2/c3)(sin(c3 t + θ0) − sin(θ0)),   (19)
θ(t) = θ0 + t c3,

for the case c3 ≠ 0 and all t ≥ 0, and straight lines with

x(t) = x0 + t (c1 cos θ0 − c2 sin θ0),
y(t) = y0 + t (c1 sin θ0 + c2 cos θ0),   (20)
θ(t) = θ0,

for the case c3 = 0, where g0 = (x0, y0, θ0) ∈ SE(2). See
the left panel in Fig. 4.

Example 3 For d = 3, the formulae for exponential curves
in SE(3) are given in for example [18,30]. Their spatial parts
are circular spirals with torsion τ(t) = ((c(1) · c(2)) / ‖c(1) × c(2)‖) κ(t) and
curvature

κ(t) = (1/‖c(1)‖^2) [ cos(t ‖c(2)‖) c(2) × c(1) + (sin(t ‖c(2)‖)/‖c(2)‖) c(2) × (c(2) × c(1)) ].   (21)

Note that their magnitudes are constant:

‖κ‖ = ‖c(1) × c(2)‖ / ‖c(1)‖^2  and  ‖τ‖ = (|c(1) · c(2)| / ‖c(1) × c(2)‖) ‖κ‖.   (22)

2.5 Left-Invariant Metric Tensor on SE(d)

We use the following (left-invariant) metric tensor:

Gμ|γ˜(γ˜˙, γ˜˙) = μ^2 Σ_{i=1}^{d} |γ˜˙^i|^2 + Σ_{i=d+1}^{nd} |γ˜˙^i|^2,   (23)

where γ˜˙ = Σ_{i=1}^{nd} γ˜˙^i Ai|γ˜, and with stiffness parameter μ,
along any smooth curve γ˜ in SE(d). Now, for the special
case of exponential curves, one has that γ˜˙^i = ci is constant. The
metric allows us to normalize the speed along the curves by
imposing a normalization constraint

‖c‖^2_μ := ‖Mμ c‖^2 = μ^2 Σ_{i=1}^{d} |ci|^2 + Σ_{i=d+1}^{nd} |ci|^2 = μ^2 ‖c(1)‖^2 + ‖c(2)‖^2 = 1,

with Mμ := [ μ Id  0 ; 0  Ird ] ∈ Rnd×nd.   (24)

We will use this constraint in the fitting procedure in order
to ensure that our exponential curves (17) are parameterized
by Riemannian arclength t.
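The closed-form SE(2) spiral (19) can be cross-checked against the matrix-exponential route via the embedding (6). Below is a numpy sketch (helper names are our own; a truncated power series stands in for a library matrix-exponential routine) that also imposes the normalization constraint (24):

```python
import numpy as np

def expm(A, terms=40):
    """Truncated matrix exponential (adequate for small ||A||)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def exp_curve_se2(g0, c, t):
    """Closed-form exponential curve in SE(2), Eq. (19), case c3 != 0."""
    x0, y0, th0 = g0
    c1, c2, c3 = c
    x = x0 + (c1 / c3) * (np.sin(c3 * t + th0) - np.sin(th0)) \
           + (c2 / c3) * (np.cos(c3 * t + th0) - np.cos(th0))
    y = y0 - (c1 / c3) * (np.cos(c3 * t + th0) - np.cos(th0)) \
           + (c2 / c3) * (np.sin(c3 * t + th0) - np.sin(th0))
    return x, y, th0 + c3 * t

def normalize_mu(c, mu):
    """Impose the constraint ||M_mu c|| = 1 of Eq. (24), for d = 2."""
    c = np.asarray(c, float)
    return c / np.sqrt(mu**2 * (c[0]**2 + c[1]**2) + c[2]**2)

# cross-check: closed form vs. g0 * exp(t (c1 A1 + c2 A2 + c3 A3))
x0, y0, th0 = 0.5, -0.2, 0.4
c1, c2, c3 = normalize_mu([1.0, 0.3, 0.7], mu=0.5)
t = 1.3
M0 = np.array([[np.cos(th0), -np.sin(th0), x0],
               [np.sin(th0),  np.cos(th0), y0],
               [0.0, 0.0, 1.0]])
# Lie algebra element in the matrix representation (6)
A = np.array([[0.0, -c3, c1], [c3, 0.0, c2], [0.0, 0.0, 0.0]])
G = M0 @ expm(t * A)
x, y, th = exp_curve_se2((x0, y0, th0), (c1, c2, c3), t)
```

Both routes agree, confirming that (19) is the spatial part of g0 exp(t Σ ci Ai).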
2.6 Convolution and Haar Measure on SE(d)

In general a convolution of data U˜ : SE(d) → R with kernel
K˜ : SE(d) → R is given by

(K˜ ∗ U˜)(g) = ∫_{SE(d)} K˜(h−1 g) U˜(h) dμ(h)
= ∫_{Rd} ∫_{SO(d)} K˜((R′)−1(x − x′), (R′)−1 R) U˜(x′, R′) dx′ dμSO(d)(R′),   (25)

with dμ(h) = dx′ dμSO(d)(R′), for all h = (x′, R′) ∈ SE(d),
where the Haar measure μ is the direct product of the usual
Lebesgue measure on Rd with the Haar measure on SO(d).

2.7 Gaussian Smoothing and Gradient on SE(d)

We define the regularized data

V˜ := G˜s ∗ U˜,   (26)

where s = (sp, so) are the spatial and angular scales,
respectively, of the separable Gaussian smoothing kernel defined by

G˜s(x, R) := G^{Rd}_{sp}(x) G^{Sd−1}_{so}(Ra).   (27)

This smoothing kernel is a product of the heat kernel
G^{Rd}_{sp}(x) = e^{−‖x‖^2/(4 sp)} / (4π sp)^{d/2} on Rd centered at 0 with spatial scale
sp > 0, and a heat kernel G^{Sd−1}_{so}(Ra) on Sd−1 centered
around a ∈ Sd−1 with angular scale so > 0.

By definition the gradient ∇U˜ of image data U˜ :
SE(d) → R is the Riesz representation vector of the
derivative dU˜:

∇U˜ := Gμ^{−1} dU˜ = Σ_{i=1}^{d} μ^{−2} (Ai U˜) Ai + Σ_{j=d+1}^{nd} (Aj U˜) Aj
≡ Mμ^{−2} (A1 U˜, . . . , And U˜)T,   (28)

relying on Mμ as defined in (24). Here, following standard
conventions in differential geometry, Gμ^{−1} denotes the inverse
of the linear map associated to the metric tensor (23). Then,
the Gaussian gradient is defined by

∇s U˜ := ∇V˜ = ∇(G˜s ∗ U˜) = (∇G˜s) ∗ U˜.   (29)

2.8 Horizontal Exponential Curves in SE(d)

Typically, in the distribution U˜ (e.g., if U˜ is an orientation
score of a grayscale image) the mass is concentrated around
so-called horizontal exponential curves in SE(d) (see Fig. 3).
Next we explain this notion of horizontal exponential curves.

A curve t ↦ (x(t), y(t)) ∈ R2 can be lifted to a curve
t ↦ γ˜(t) = (x(t), y(t), θ(t)) in SE(2) via

θ(t) = arg{x˙(t) + i y˙(t)}.   (30)

Generalizing to d ≥ 2, one can lift a curve t ↦ x(t) ∈ Rd
towards a curve t ↦ γ(t) = (x(t), n(t)) in Rd ⋊ Sd−1 by
setting

n(t) = ‖x˙(t)‖−1 x˙(t).   (31)

A curve t ↦ x(t) can be lifted towards a family of lifted
curves t ↦ γ˜(t) = (x(t), Rn(t)) into the roto-translation
group SE(d) by setting Rn(t) ∈ SO(d) such that it maps
reference axis a onto n(t):

Rn(t) a = n(t) = ‖x˙(t)‖−1 x˙(t).

Here we use Rn to denote any rotation that maps reference
axis a onto n. Clearly, the choice of rotation is not unique for
d > 2; e.g., if d = 3 then Rn Ra,α a = n regardless of the value
of α, where Ra,α denotes the counterclockwise 3D rotation
about axis a by angle α.

Next we study the implication of restriction (31) on the
tangent bundle of SE(d).

– For d = 2, we have the restriction x˙(t) = (x˙(t), y˙(t)) =
‖x˙(t)‖ (cos θ(t), sin θ(t)), i.e.,

γ˜˙ ∈ Δ|γ˜, with Δ := span{cos θ ∂x + sin θ ∂y, ∂θ} = span{A1, A3},   (32)

where Δ denotes the so-called horizontal part of the tangent
bundle T(SE(2)). See Fig. 4.

– For d = 3, we impose the constraint:

γ˜˙(t) ∈ Δ|γ˜(t), with Δ := span{A3, A4, A5},   (33)

where A3 = n · ∇R3, since then spatial transport is always
along n, which is required for (31).

Curves γ˜(t) satisfying the constraint (32) for d = 2, and (33)
for d = 3, are called horizontal curves. Note that dim(Δ) = d.

Next we study how the restriction applies to the particular
case of exponential curves on SE(d).

– For d = 2, horizontal exponential curves are obtained
from (18), (19), (20) by setting c2 = 0.
– For d = 3, we use a different reference axis a, and
horizontal exponential curves are obtained from (16) by
setting c1 = c2 = c6 = 0.

If exponential curves are not horizontal, then we indicate how
much the local tangent of the exponential curve points outside
the spatial part of Δ by a 'deviation from horizontality angle'
χ, which is given by

χ = arccos( (c(1) · a) / ‖c(1)‖ ).   (34)

Example 4 In case d = 2 we have n2 = 3, a = (1, 0)T.
The horizontal part of the tangent bundle is given by (32),
and horizontal exponential curves are obtained from (18) by
setting c2 = 0. For exponential curves in general, we have
deviation from horizontality angle

χ = arccos( c1 / √((c1)^2 + (c2)^2) ).   (35)

An exponential curve in SE(2) is horizontal if and only if
χ = 0. See Fig. 4, where on the left we have depicted a
horizontal exponential curve and on the right we have
visualized the distribution Δ.

Example 5 In case d = 3, we have n3 = 6, a = (0, 0, 1)T.
The horizontal part of the tangent bundle is given by (33), and
horizontal exponential curves are characterized by c3, c4, c5,
whereas c1 = c2 = c6 = 0. By Eq. (22) these curves have
zero torsion ‖τ‖ = 0 and constant curvature √((c4)^2 + (c5)^2)/c3, and
thus they are planar circles. For exponential curves in general,
we have deviation from horizontality angle

χ = arccos( c3 / ‖c(1)‖ ).

An exponential curve in SE(3) is horizontal if and only if
χ = 0 and c6 = 0.
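A small numpy sketch of the deviation-from-horizontality angle (34)-(35) (the helper name is ours):

```python
import numpy as np

def horizontality_angle(c_spatial, a):
    """chi = arccos( (c^(1) . a) / ||c^(1)|| ), Eq. (34)."""
    c_spatial = np.asarray(c_spatial, float)
    a = np.asarray(a, float)
    return float(np.arccos(np.dot(c_spatial, a) / np.linalg.norm(c_spatial)))

# d = 2, a = (1, 0)^T: a horizontal curve (c2 = 0) gives chi = 0, cf. Example 4
chi_horizontal = horizontality_angle([2.0, 0.0], [1.0, 0.0])
chi_diagonal = horizontality_angle([1.0, 1.0], [1.0, 0.0])
```

For d = 3 the same function applies with a = (0, 0, 1)T and the spatial part c(1) = (c1, c2, c3), cf. Example 5.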
3 From Exponential Curve Fits to Gauge Frames
on S E(d)
In Sects. 5 and 6 we will discuss techniques to find an
exponential curve γ˜gc(t) that fits the data U˜ : SE(d) → R locally.
Let c(g) = γ˜˙gc(0) be its tangent vector at g.

In this section we assume that the tangent vector c(g) =
(c(1)(g), c(2)(g))T ∈ Rd+rd = Rnd is given. From this vector
we will construct a locally adaptive frame {B1|g, . . . , Bnd|g},
orthonormal w.r.t. the Gμ-metric, in such a way that

1. the main spatial generator (A1 for d = 2 and Ad for
d > 2) is mapped onto B1|g = Σ_{i=1}^{nd} ci(g) Ai|g,
2. the spatial generators {Bi|g}_{i=2}^{d} are obtained from the
other left-invariant spatial generators {Ai|g}_{i=1}^{d} by a
planar rotation of a onto c(1)/‖c(1)‖ by angle χ. In particular, if
χ = 0, the other spatial generators do not change their
direction. This allows us to still distinguish spatial
generators and angular generators in our adapted frame.

Next we provide for each g ∈ SE(d) the explicit
construction of a rotation matrix Rc(g) and a scaling by Mμ−1
on Tg(SE(d)), which maps the frame {A1|g, . . . , And|g} onto
{B1|g, . . . , Bnd|g}.

The construction for d > 2 is technical and provided in
Theorem A in Appendix 1. However, the whole construction
of the rotation matrix Rc via a concatenation of two
subsequent rotations is similar to the case d = 2 that we
explain next.

Consider d = 2, where the frames {A1, A2, A3} and
{B1, B2, B3} are depicted in Fig. 5. The explicit relation
between the normalized gauge frame and the left-invariant
vector field frame is given by

B := (Rc)T Mμ−1 A,   (36)

with A := (A1, A2, A3)T, B := (B1, B2, B3)T, and with
rotation matrix

Rc = R2 R1 ∈ SO(3), with
R1 = [ cos χ  −sin χ  0 ; sin χ  cos χ  0 ; 0  0  1 ],
R2 = [ cos ν  0  sin ν ; 0  1  0 ; −sin ν  0  cos ν ],   (37)

where the rotation angles are the deviation from horizontality
angle χ and the spherical angle

ν = arcsin( c3 / ‖c‖μ ) ∈ [−π/2, π/2].

Recall that χ is given by (35). The multiplication by Mμ−1 A
ensures that each of the vector fields in the locally adaptive
frame is normalized w.r.t. the Gμ-metric, recall (23).

Remark 2 When imposing isotropy (w.r.t. the metric Gμ) in
the plane orthogonal to B1, there is a unique choice of Rc
mapping (1, 0, 0)T onto (μc1, μc2, c3)T such that it keeps the
other spatial generator in the spatial subspace of Tg(SE(2))
(and with χ = 0 ⇔ B2 = μ−1 A2). This choice is given by
(37).

The generalization to the d-dimensional case of the
construction of a locally adaptive frame {Bi}_{i=1}^{nd} from {Ai}_{i=1}^{nd} and the
tangent vector c of a given exponential curve fit γ˜gc(·) to data
U˜ : SE(d) → R is explained in Theorem 7 in Appendix 1.
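The two-rotation construction behind (36)-(37) can be sketched numerically. The sketch below builds, from signed variants of the angles χ and ν, a rotation that maps (1, 0, 0)T onto Mμc/‖Mμc‖ = (μc1, μc2, c3)T; the exact sign conventions of the paper's R1, R2 may differ, so treat this as an assumption-laden illustration rather than the paper's construction verbatim:

```python
import numpy as np

def planar_rot(i, j, angle, n=3):
    """Rotation by `angle` in the (i, j) coordinate plane of R^n."""
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(angle)
    R[i, j] = -np.sin(angle)
    R[j, i] = np.sin(angle)
    return R

def gauge_rotation(c, mu):
    """Concatenation of two planar rotations mapping (1,0,0)^T onto
    M_mu c / ||M_mu c||, cf. Eqs. (36)-(37). Signed angles are used;
    sign conventions are an assumption of this sketch."""
    v = np.diag([mu, mu, 1.0]) @ np.asarray(c, float)
    v = v / np.linalg.norm(v)
    chi = np.arctan2(v[1], v[0])   # signed in-plane version of chi
    nu = np.arcsin(v[2])           # spherical angle, cf. (37)
    return planar_rot(0, 1, chi) @ planar_rot(0, 2, nu)

c, mu = np.array([0.3, -0.4, 0.5]), 2.0
R = gauge_rotation(c, mu)
v = np.diag([mu, mu, 1.0]) @ c
v = v / np.linalg.norm(v)
```

One can check that R is a proper rotation and that R e1 is the normalized, μ-scaled tangent, which is the defining property stated in Remark 2.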
4 Exponential Curve Fits in Rd
In this section we reformulate the classical construction of
a locally adaptive frame to image f at location x ∈ Rd , in
a grouptheoretical way. This reformulation seems technical
at first sight, but helps in understanding the formulation of
projected exponential curve fits in the higher dimensional Lie
group S E (d).
4.1 Exponential Curve Fits in Rd of the First Order
We will take the structure tensor approach [9,48], which will
be shown to yield firstorder exponential curve fits.
The Gaussian gradient

∇s f := ∇(Gs ∗ f ) = (∇Gs) ∗ f,   (38)

with Gaussian kernel

Gs(x) = (4π s)−d/2 e^{−‖x‖^2/(4s)},   (39)

is used in the definition of the structure matrix:

Ss,ρ( f ) := Gρ ∗ (∇s f (∇s f )T),   (40)

with s = ½ σs^2, and ρ = ½ σρ^2 the scale of regularization,
typically yielding a non-degenerate and positive definite matrix.
In the remainder we use the short notation Ss,ρ := Ss,ρ( f ). The
structure matrix appears in solving the following
optimization problem, where for all x ∈ Rd we aim to find the optimal
tangent vector

c∗(x) = argmin_{c ∈ Rd, ‖c‖=1} ∫_{Rd} Gρ(x − x′) |∇s f (x′) · c|^2 dx′
= argmin_{c ∈ Rd, ‖c‖=1} cT Ss,ρ(x) c.   (41)

In this optimization problem we find the tangent c∗(x) which
minimizes a (Gaussian) weighted average of the squared
directional derivative |∇s f (x′) · c|^2 in the neighborhood of
x. The second identity in (41), which directly follows from
the definition of the structure matrix, allows us to solve
optimization problem (41) via the Euler–Lagrange equation

Ss,ρ(x) c∗(x) = λ1 c∗(x),   (42)

since the minimizer is found as the eigenvector c∗(x) with
the smallest eigenvalue λ1.

Now let us put Eq. (41) in group-theoretical form by
reformulating it as an exponential curve fitting problem. This is
helpful in our subsequent generalizations to SE(d). On Rd
exponential curves are straight lines:

γxc(t) = x + expRd(t c) = x + t c,   (43)

and on T(Rd) we impose the standard flat metric tensor
G(c, d) = Σ_{i=1}^{d} ci di. In (41) we replace the directional
derivative by a time derivative (at t = 0) when moving over
an exponential curve:

c∗(x) = argmin_{c ∈ Rd, ‖c‖=1} ∫_{Rd} Gρ(x − x′) | d/dt (Gs ∗ f )(γx′,xc(t)) |_{t=0} |^2 dx′,   (44)

where

t ↦ γx′,xc(t) = γxc(t) − x + x′ = γx′c(t).   (45)

Because in (41) we average over directional derivatives in the
neighborhood of x, we now average the time derivatives over
a family of neighboring exponential curves γx′,xc(t), which
are defined to start at the neighboring positions x′ but have the
same spatial velocity as γxc(t). In Rd the distinction between
γx′,xc(t) and γx′c(t) is not important, but it will be in the SE(d)-case.

Definition 1 Let c∗(x) ∈ Tx(Rd) be the minimizer in (44).
We say γx(t) = x + expRd(t c∗(x)) is the first-order
exponential curve fit to image data f : Rd → R at location x.

Fig. 5 Locally adaptive frame {B1|g, B2|g, B3|g} (in blue) in Tg(SE(2)) (with g placed at the origin) is obtained from frame {A1|g, A2|g, A3|g} (in red) and c(g), via normalization and two subsequent rotations Rc = R2 R1, see Eq. (36), revealing deviation from horizontality χ, and spherical angle ν in Eq. (37). Vector field A1 takes a spatial derivative in direction n, whereas B1 takes a derivative along the tangent c of the local exponential curve fit (Color figure online)
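A pure-numpy sketch of the first-order fit (41)-(42) on a synthetic stripe image. For brevity, the outer Gaussian smoothing Gρ of (40) is simplified to a single Gaussian-weighted aggregation around the center pixel, and finite differences stand in for Gaussian derivatives; all names are ours:

```python
import numpy as np

def structure_fit_center(f, rho=4.0):
    """Eigenvector of the structure matrix (40)-(42) with smallest
    eigenvalue, aggregated around the center pixel (simplified sketch)."""
    fy, fx = np.gradient(f)                      # image gradient (row, col)
    yy, xx = np.indices(f.shape)
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    w = np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2.0 * rho**2))
    S = np.array([[np.sum(w * fx * fx), np.sum(w * fx * fy)],
                  [np.sum(w * fx * fy), np.sum(w * fy * fy)]])
    vals, vecs = np.linalg.eigh(S)               # ascending eigenvalues
    return vecs[:, 0]                            # (c_x, c_y), cf. Eq. (42)

# stripes constant along direction d = (cos a, sin a): the fitted tangent
# c* should align with the stripe direction
a = 0.3
yy, xx = np.indices((64, 64))
f = np.sin(0.5 * (-np.sin(a) * xx + np.cos(a) * yy))
c_star = structure_fit_center(f)
d = np.array([np.cos(a), np.sin(a)])
```

Up to sign, c∗ recovers the stripe direction, i.e., the direction of vanishing directional derivative.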
For second-order exponential curve fits we need the Hessian matrix defined by

(H^s(f))(x) = [ ∂_{x^j} ∂_{x^i} (G_s ∗ f)(x) ],   (46)

with G_s the Gaussian kernel given in Eq. (39). From now on we use the short notation H^s := H^s(f). When using the Hessian matrix for curve fitting we aim to solve

c∗(x) = arg min_{c∈R^d, ‖c‖=1} c^T H^s(x) c.   (47)
In this optimization problem we find the tangent c∗(x) which minimizes the second-order directional derivative of the (Gaussian) regularized data G_s ∗ f. When all Hessian eigenvalues have the same sign we can solve optimization problem (47) via the Euler–Lagrange equation

H^s(x) c∗(x) = λ₁ c∗(x),   (48)

and the minimizer is found as the eigenvector c∗(x) with the smallest eigenvalue λ₁.
Now, we can again put Eq. (47) in group-theoretical form by reformulating it as an exponential curve fitting problem. This is helpful in our subsequent generalizations to SE(d). We again rely on exponential curves as defined in (43). In (47) we replace the second-order directional derivative by a second-order time derivative (at t = 0) when moving along an exponential curve:

c∗(x) = arg min_{c∈R^d, ‖c‖=1} d²/dt² (G_s ∗ f)(γ_x^c(t)) |_{t=0}.   (49)
Remark 3 In general the eigenvalues of the Hessian matrix H^s do not have the same sign. In this case we still take c∗(x) as the eigenvector with smallest absolute eigenvalue (representing minimal absolute principal curvature), though this no longer solves (47).
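The second-order fit admits an equally small sketch (again illustrative, not the paper's code): one symmetrizes the discrete Hessian and, following Remark 3, selects the eigenvector with the smallest absolute eigenvalue.

```python
# Second-order exponential curve fit in R^2 via the Hessian, cf. Eqs. (47)-(48).
import numpy as np

def hessian_fit(f, x):
    """Tangent with minimal absolute second-order directional derivative at x."""
    fy, fx = np.gradient(f)
    fyy, fyx = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    i, j = x[1], x[0]
    H = np.array([[fxx[i, j], fxy[i, j]],
                  [fyx[i, j], fyy[i, j]]])
    H = 0.5 * (H + H.T)                                  # symmetrize
    eigval, eigvec = np.linalg.eigh(H)
    return eigvec[:, np.argmin(np.abs(eigval))]          # smallest |eigenvalue| (Remark 3)

# A horizontal ridge: strong negative curvature across the ridge, ~zero along it,
# so the fit returns the horizontal direction.
f = np.exp(-0.5 * (np.arange(64)[:, None] - 32.0)**2 / 4.0) * np.ones((1, 64))
c = hessian_fit(f, x=(32, 32))
```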
Definition 2 Let c∗(x) ∈ T_x(R^d) be the minimizer in (49). We say γ_x(t) = x + exp_{R^d}(t c∗(x)) is the second-order exponential curve fit to image data f : R^d → R at location x.
Remark 4 In order to connect optimization problem (49) with the first-order optimization (44), we note that (49) can also be written as an optimization over the family of curves γ^c_{x′,x} defined in (45):
Fig. 6 Family of neighboring exponential curves, given a fixed point g ∈ SE(2) and a fixed tangent vector c = c(g) ∈ T_g(SE(2)). Left: our choice of family of exponential curves γ^c_{h,g} for neighboring h ∈ SE(2). Right: exponential curves γ_h^c with c = c(g) are not suited for local averaging in our curve fits. The red curves start from g (indicated with a dot), the blue curves from h ≠ g (Color figure online)
c∗(x) = arg min_{c∈R^d, ‖c‖=1} ∫_{R^d} G_s(x − x′) d²/dt² f(γ^c_{x′,x}(t)) |_{t=0} dx′,   (50)

because of the linearity of the second-order time derivative.
5 Exponential Curve Fits in SE(2)

As mentioned in the introduction, we distinguish between two approaches: a first-order optimization approach based on a structure tensor on SE(2), and a second-order optimization approach based on the Hessian on SE(2). The first-order approach is new, while the second-order approach formalizes the results in [28,34]. They also serve as an introduction to the new, more technical, SE(3) extensions in Sect. 6.
All curve optimization problems are based on the idea that a curve (or a family of curves) fits the data well if a certain quantity is preserved along the curve. This preserved quantity is the data U˜(γ˜(t)) for the first-order optimization, and the time derivative (d/dt) U˜(γ˜(t)) or the gradient ∇U˜(γ˜(t)) for the second-order optimization. After introducing a family of curves similar to the ones used in Sect. 4 we will, for all three cases, first pose an optimization problem, and then give its solution in a subsequent theorem.
In this section we rely on group-theoretical tools explained in Sect. 2 (only the case d = 2), listed in panels (a) and (b) of our table of notations presented in Appendix 5. Furthermore, we introduce notations listed in the first part of panel (c) of that table.
5.1 Neighboring Exponential Curves in SE(2)

Akin to (45) we fix a reference point g ∈ SE(2) and velocity components c = c(g) ∈ R³, and we shall rely on a family {γ˜^c_{h,g}} of neighboring exponential curves around the exponential curve

γ˜_g^c(t) = g exp_{SE(2)}( t Σ_{i=1}^{3} cⁱ A_i ),   (51)

with velocity components at neighboring points h coupled to c via the rotation matrix

R˜_{h⁻¹g} := ( R_{θ−θ′}  0 ; 0  1 ) ∈ SO(3), for g = (x, R_θ), h = (x′, R_{θ′}) ∈ SE(2).   (52)

As we will show in the subsequent Lemma 1, the neighboring curve γ˜^c_{h,g} departs from h and has the same spatial and rotational velocity as the curve γ˜_g^c departing from g. This geometric idea is visualized in Fig. 6, where it is intuitively explained why one needs the initial velocity vector R˜_{h⁻¹g} c, instead of c, in the following definition for the exponential curve departing from a neighboring point h close to g.
Definition 3 Let g ∈ SE(2) and c = c(g) ∈ R³ be given. Then we define the family {γ˜^c_{h,g}} of neighboring exponential curves by

γ˜^c_{h,g}(t) := h exp_{SE(2)}( t Σ_{i=1}^{3} (R˜_{h⁻¹g} c)ⁱ A_i ).   (53)

Lemma 1 The exponential curve γ˜^c_{h,g} departing from h ∈ SE(2) given by (53) has the same spatial and rotational velocity as the exponential curve γ˜_g^c departing from g; on the Lie algebra level the initial velocity components relate via c → R˜_{h⁻¹g} c.
Proof The proof follows from the proof of a more general theorem on the SE(3) case which follows later (in Lemma 3).

For the first step we use (51) and the fundamental property (17) of exponential curves, such that via application of (13):

| d/dt V˜(γ˜^c_{h,g}(t)) |_{t=0} |² = | ⟨ dV˜|_h , R˜_{h⁻¹g} c ⟩ |²,   (56)

where we use the short notation R˜_{h⁻¹g} c = Σ_{i=1}^{3} (R˜_{h⁻¹g} c)ⁱ A_i|_h.

In the second step we use the definition of the gradient (28) and the metric tensor (23) to rewrite this expression to

| ⟨ dV˜|_h , R˜_{h⁻¹g} c ⟩ |² = | G_μ|_h( ∇V˜(h), R˜_{h⁻¹g} c ) |².   (57)

Then, in the third step we write this in vector-matrix form and we obtain via (28)

| G_μ|_h( ∇V˜(h), R˜_{h⁻¹g} c ) |² = c^T M_μ² R˜^T_{h⁻¹g} ∇V˜(h) (∇V˜(h))^T R˜_{h⁻¹g} M_μ² c,   (58)

where we used the fact that M_μ² and R˜^T_{h⁻¹g} commute.

Finally, we use the structure tensor definition (55) to rewrite the convex optimization functional in (54) as

E(c) := c^T M_μ² S^{s,ρ}(g) M_μ² c,   (59)
Remark 5 Eq. (53) is the extension of Eq. (45) on R² to the SE(2) group. Additional geometric background is given in Appendix 2.
5.2 Exponential Curve Fits in SE(2) of the First Order

For first-order exponential curve fits we solve an optimization problem similar to (44), given by

c∗(g) = arg min_{c∈R³, ‖c‖_μ=1} ∫_{SE(2)} G˜_ρ(h⁻¹g) | d/dt V˜(γ˜^c_{h,g}(t)) |_{t=0} |² dμ(h),   (54)

with V˜ = G˜_s ∗ U˜, g = (x, R), h = (x′, R′) and dμ(h) = dx′ dμ_{SO(2)}(R′) = dx′ dθ′. Here we first regularize the data with spatial and angular scales s = (s_p, s_o), and then average over a family of curves where we use spatial and angular scales ρ = (ρ_p, ρ_o). Here s_p, ρ_p > 0 are isotropic scales on R² and s_o, ρ_o > 0 are scales on S¹ of separable Gaussian kernels, recall (27). Recall also (24) for the definition of the norm ‖·‖_μ. When solving this optimization problem the following structure matrix appears:

(S^{s,ρ}(U˜))(g) = ∫_{SE(2)} G˜_ρ(h⁻¹g) R˜^T_{h⁻¹g} ∇V˜(h) (∇V˜(h))^T R˜_{h⁻¹g} dμ(h).   (55)

In the remainder we use the short notation S^{s,ρ} := S^{s,ρ}(U˜). We assume that U˜, ρ, s, and g are chosen such that S^{s,ρ}(g) is a non-degenerate matrix. The optimization problem is solved in the next theorem.
Theorem 1 (First-Order Fit via Structure Tensor) The normalized eigenvector M_μ c∗(g) with smallest eigenvalue of the rescaled structure matrix M_μ S^{s,ρ}(g) M_μ provides the solution c∗(g) to optimization problem (54).
Proof We will apply four steps. In the first step we write the time derivative as a directional derivative; in the second step we express the directional derivative in terms of the gradient. In the third step we put the integrand in matrix-vector form. In the final step we express our optimization functional in terms of the structure tensor and solve the Euler–Lagrange equations.
while the boundary condition ‖c‖_μ = 1 can be written as

ϕ(c) := c^T M_μ² c − 1 = 0.   (60)

The Euler–Lagrange equation reads ∇E(c∗) = λ₁ ∇ϕ(c∗), with λ₁ the smallest eigenvalue of M_μ S^{s,ρ}(g) M_μ, and we have

M_μ S^{s,ρ}(g) M_μ (M_μ c∗(g)) = λ₁ (M_μ c∗(g)),   (61)

from which the result follows.
The next remark explains the frequent presence of the M_μ matrices in (69).
Remark 6 The diagonal M_μ matrices enter the functional due to the gradient definition (28), and they enter the boundary condition via ‖c‖²_μ = c^T M_μ² c = 1. In both cases they come from the metric tensor (23). The parameter μ, which controls the stiffness of the exponential curves, has physical dimension [Length]⁻¹. As a result, the normalized eigenvector M_μ c∗(g) is, in contrast to c∗(g), dimensionless.
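The role of the M_μ matrices in Theorem 1 can be made concrete with a small sketch (the 3 × 3 matrix below is a toy structure matrix, not data from the paper): one diagonalizes M_μ S^{s,ρ}(g) M_μ, takes the unit eigenvector v with smallest eigenvalue, and recovers c∗(g) = M_μ⁻¹ v, so that ‖c∗(g)‖_μ = ‖M_μ c∗(g)‖ = 1.

```python
# Solving the constrained problem of Theorem 1: minimize c^T M^2 S M^2 c
# subject to ||M c|| = 1, with M = M_mu = diag(mu, mu, 1) on SE(2).
import numpy as np

def first_order_fit_se2(S, mu):
    M = np.diag([mu, mu, 1.0])
    eigval, eigvec = np.linalg.eigh(M @ S @ M)   # S assumed symmetric
    v = eigvec[:, 0]                             # dimensionless eigenvector M c*, ||v|| = 1
    return np.linalg.solve(M, v)                 # c* = M^{-1} v, hence ||c*||_mu = 1

S = np.diag([4.0, 1.0, 9.0])                     # toy structure matrix
c = first_order_fit_se2(S, mu=2.0)               # c = (0, +-1/2, 0), so ||c||_mu = 1
```

Note how μ (of dimension [Length]⁻¹) trades spatial against angular velocity components, exactly as discussed in Remark 6.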
We now discuss the second-order optimization approach based on the Hessian matrix. At each g ∈ SE(2) we define a 3 × 3 non-symmetric Hessian matrix

(H^s(U˜))(g) = [ (A_j A_i V˜)(g) ], with V˜ = G˜_s ∗ U˜,   (62)

where i denotes the row index and j denotes the column index, and with G˜_s a Gaussian kernel with isotropic spatial part as described in Eq. (27). In the remainder we write H^s := H^s(U˜).
Remark 7 As the left-invariant vector fields are non-commutative, there are many ways to define the Hessian matrix on SE(2), since the ordering of the left-invariant derivatives matters. From a differential geometrical point of view our choice (62) is correct, as we motivate in Appendix 4.
For second-order exponential curve fits we consider two different optimization problems. In the first case we minimize the second-order derivative along the exponential curve:

c∗(g) = arg min_{c∈R³, ‖c‖_μ=1} d²/dt² V˜(γ˜_g^c(t)) |_{t=0}.   (63)

In the second case we minimize the norm of the first-order derivative of the gradient along the neighboring family of exponential curves:

c∗(g) = arg min_{c∈R³, ‖c‖_μ=1} ∫_{SE(2)} G˜_ρ(h⁻¹g) G_μ( d/dt ∇V˜(γ˜^c_{h,g}(t))|_{t=0} , d/dt ∇V˜(γ˜^c_{h,g}(t))|_{t=0} ) dμ(h).   (64)
Remark 8 Optimization problem (63) can also be written as an optimization problem over the neighboring family of curves, as it is equivalent to the problem

c∗(g) = arg min_{c∈R³, ‖c‖_μ=1} ∫_{SE(2)} G˜_s(h⁻¹g) d²/dt² U˜(γ˜^c_{h,g}(t)) |_{t=0} dμ(h).   (65)
In the next two theorems we solve these optimization problems.
Theorem 2 (Second-Order Fit via Symmetric Sum Hessian) Let g ∈ SE(2) be such that the eigenvalues of the rescaled symmetrized Hessian

M_μ⁻¹ (H^s(g) + (H^s(g))^T) M_μ⁻¹

have the same sign. Then the normalized eigenvector M_μ c∗(g) with smallest eigenvalue of the rescaled symmetrized Hessian matrix provides the solution c∗(g) of optimization problem (63).
Proof Similar to the proof of Theorem 1 we first write the time derivative as a directional derivative using Eq. (13). Since now we have a second-order derivative this step is applied twice:

d²/dt² V˜(γ˜_g^c(t)) |_{t=0} = Σ_{i=1}^{3} Σ_{j=1}^{3} cⁱ cʲ (A_j A_i V˜)(g) = c^T H^s(g) c.   (67)

Then we write the result in matrix-vector form and split the matrix into a symmetric and an antisymmetric part:

c^T H^s(g) c = ½ c^T (H^s(g) + (H^s(g))^T) c + ½ c^T (H^s(g) − (H^s(g))^T) c = ½ c^T (H^s(g) + (H^s(g))^T) c,   (68)

where only the symmetric part remains. Finally, the optimization functional in (63) (which is convex if the eigenvalues have the same sign) can be written as E(c) = ½ c^T (H^s(g) + (H^s(g))^T) c, which boils down to finding the eigenvector with minimal absolute eigenvalue λ₁, which gives our result.
Theorem 3 (Second-Order Fit via Symmetric Product Hessian) Let ρ_p, ρ_o, s_p, s_o > 0. The normalized eigenvector M_μ c∗(g) with smallest eigenvalue of the matrix

M_μ⁻¹ [ ∫_{SE(2)} G˜_ρ(h⁻¹g) R˜^T_{h⁻¹g} (H^s(h))^T M_μ⁻² H^s(h) R˜_{h⁻¹g} dμ(h) ] M_μ⁻¹

provides the solution c∗(g) of optimization problem (64).
Proof First we use the definition of the gradient (28) and then we again rewrite the time derivative as a directional derivative:

d/dt ∇V˜(γ˜^c_{h,g}(t)) |_{t=0} = ( μᵢ⁻² Σ_{j=1}^{3} (A_j A_i V˜)(h) c˜ʲ )_{i=1}^{3} = M_μ⁻² H^s(h) c˜,   (71)

for c˜ = R˜_{h⁻¹g} c, recall (52), and where μᵢ = μ for i = 1, 2 and μᵢ = 1 for i = 3. Here we use γ˜^c_{h,g}(0) = h, and the formula for left-invariant vector fields (10). Now insertion of (71) into the metric tensor G_μ (23) yields

G_μ( d/dt ∇V˜(γ˜^c_{h,g}(t))|_{t=0} , d/dt ∇V˜(γ˜^c_{h,g}(t))|_{t=0} ) = c˜^T (H^s(h))^T M_μ⁻² H^s(h) c˜ = c^T R˜^T_{h⁻¹g} (H^s(h))^T M_μ⁻² H^s(h) R˜_{h⁻¹g} c.
Finally, the convex optimization functional in (64) can be written as

E(c) := c^T [ ∫_{SE(2)} G˜_ρ(h⁻¹g) R˜^T_{h⁻¹g} (H^s(h))^T M_μ⁻² H^s(h) R˜_{h⁻¹g} dμ(h) ] c.

Again we have the boundary condition ϕ(c) = c^T M_μ² c − 1 = 0, and the result follows by application of the Euler–Lagrange formalism: ∇E(c∗) = λ₁ ∇ϕ(c∗).
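In contrast to the symmetric sum of Theorem 2, the "product Hessian" of Theorem 3 is positive semidefinite for any (non-symmetric) H^s, so no same-sign condition on the eigenvalues is needed. A single-point sketch (dropping the integral over neighbors h; the toy matrix and μ = 1 are illustrative, not from the paper):

```python
# Second-order fit via the symmetric product Hessian, cf. Theorem 3 (pointwise).
import numpy as np

def second_order_product_fit(H, mu):
    Minv = np.diag([1.0 / mu, 1.0 / mu, 1.0])
    Q = Minv @ (H.T @ Minv @ Minv @ H) @ Minv    # symmetric positive semidefinite
    eigval, eigvec = np.linalg.eigh(Q)
    return Minv @ eigvec[:, 0]                   # c* = M_mu^{-1} v

H = np.array([[2.0, 1.0, 0.0],                   # toy non-symmetric Hessian matrix
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.5]])
c = second_order_product_fit(H, mu=1.0)
```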
6 Exponential Curve Fits in SE(3)

In this section we generalize the exponential curve fit theory from the preceding chapter on SE(2) to SE(3). Because our data on the group SE(3) was obtained from data on the quotient R³ ⋊ S², we will also discuss projections of exponential curve fits onto the quotient.
We start in Sect. 6.1 with some preliminaries on the quotient structure (3). Here we will also introduce the concept of projected exponential curve fits. Subsequently, in Sect. 6.2, we provide basic theory on how to obtain the appropriate family of neighboring exponential curves. More details can be found in Appendix 2. In Sect. 6.3 we formulate exponential curve fits of the first order as a variational problem. For that we define the structure tensor on SE(3), which we use to solve the variational problem in Theorems 4 and 5. Then we present the twofold algorithm for achieving torsion-free exponential curve fits. In Sect. 6.4 we formulate exponential curve fits of the second order as a variational problem. Then we define the Hessian tensor on SE(3), which we use to solve the variational problem in Theorem 6. Again, torsion-free exponential curve fits are accomplished via a twofold algorithm.
Throughout this section we will rely on the differential geometrical tools of Sect. 2, listed in panels (a) and (b) of Table 1 in Appendix 5. We also generalize concepts on exponential curve fits introduced in the previous section to the case d = 3 (requiring additional notation); these are listed in panel (c) of the table in Appendix 5.
6.1 Preliminaries on the Quotient R³ ⋊ S²

Now let us set d = 3, let us assume input U is given, and let us first concentrate on its domain. This domain equals the joint space R³ ⋊ S² of positions and orientations of dimension 5, which we identified with a 5-dimensional group quotient of SE(3), where SE(3) is of dimension 6 (recall (3)). For including a notion of alignment it is crucial to include the non-commutative relation in (4) between rotations and translations, and not to consider the space of positions and orientations as a flat Cartesian product. Therefore, we model the joint space of positions and orientations as the Lie group quotient (3), where

SO(2) ≡ Stab(a) = {R ∈ SO(3) | Ra = a}

for reference axis a = e_z = (0, 0, 1)^T. Within this quotient structure two rigid body motions g = (x, R), g′ = (x′, R′) ∈ SE(3) are equivalent if

g ∼ g′ ⇔ (g′)⁻¹ g ∈ {0} × SO(2) ⇔ x − x′ = 0 and ∃ R_{e_z,α} ∈ SO(2) : (R′)⁻¹ R = R_{e_z,α}.
Furthermore, one has the action ⊙ of g = (x, R) ∈ SE(3) onto (y, n) ∈ R³ × S², which is defined by

g ⊙ (y, n) = (x, R) ⊙ (y, n) := (x + Ry, Rn).   (74)

As a result we have

g ∼ g′ ⇔ g ⊙ (0, a) = g′ ⊙ (0, a).

Thereby, a single element in R³ ⋊ S² can be considered as the equivalence class of all rigid body motions that map the reference position and orientation (0, a) onto (x, n). Similar to the common identification S² ≡ SO(3)/SO(2), we denote elements of the Lie group quotient R³ ⋊ S² by (x, n).
6.1.1 Legal Operators

Let us recall from Sect. 3 that exponential curve fits induce gauge frames. Note that both the induced gauge frame {B₁, . . . , B₆} and the non-adaptive frame {A₁, . . . , A₆} are defined on the Lie group SE(3), and cannot be defined on the quotient. Nevertheless, combinations of them can be well defined on R³ ⋊ S² (e.g., Δ_{R³} = A₁² + A₂² + A₃² is well defined on the quotient). This brings us to the definition of so-called legal operators, as shown in [29, Thm. 1]. In short, the operator U˜ → Φ˜(U˜) is legal (left-invariant and well defined on the quotient) if and only if

Φ˜ ◦ L_g = L_g ◦ Φ˜ for all g ∈ SE(3),   (75)
Φ˜ = Φ˜ ◦ R_{h_α} for all α ∈ [0, 2π),   (76)

recall (7), where

h_α := (0, R_{e_z,α}),

with R_{e_z,α} the counterclockwise rotation about e_z. Such legal operators relate one-to-one to operators Φ : L₂(R³ ⋊ S²) → L₂(R³ ⋊ S²) via

U → Φ(U) ↔ U˜ → Φ˜(U˜) = (Φ(U))˜,

relying consequently on (1).
6.1.2 Projected Exponential Curve Fits

Action (74) allows us to map a curve γ˜(·) = (x(·), R(·)) in SE(3) onto a curve (x(·), n(·)) in R³ ⋊ S² via

(x(t), n(t)) := γ˜(t) ⊙ (0, e_z) = (x(t), R(t) e_z).   (77)

This can be done with exponential curve fits γ˜_g^{c∗(g)}(t) to define projected exponential curve fits.

Definition 4 We define for g = (x, R_n) the projected exponential curve fit

γ∗_{(x,n)}(t) := γ˜_g^{c∗(g)}(t) ⊙ (0, e_z).   (78)
Lemma 2 The projected exponential curve fit is well defined on the quotient, i.e., the right-hand side of (78) is independent of the choice of R_n s.t. R_n e_z = n, if the optimal tangent found in our fitting procedure satisfies

c∗(g h_α) = Z_α^T c∗(g), for all α ∈ [0, 2π],   (79)

and for all g ∈ SE(3), with

Z_α := ( R_{e_z,α}  0 ; 0  R_{e_z,α} ) ∈ SO(6).   (80)

Proof For well-posed projected exponential curve fits we need the right-hand side of (78) to be independent of R_n s.t. R_n e_z = n, i.e., it should be invariant under g → g h_α. Therefore, we have the following constraint on the fitted curves:

γ˜_g^{c∗(g)}(t) ⊙ (0, e_z) = γ˜_{g h_α}^{c∗(g h_α)}(t) ⊙ (0, e_z).   (81)

Then the constraint on the optimal tangent (79) follows from the fundamental identity

γ˜_{g h_α}^{c}(·) = γ˜_g^{Z_α c}(·) h_α,   (82)

which holds² for all h_α. We apply this identity (82) to the right-hand side of (81) and use the definition of ⊙ defined in (74), yielding:

γ˜_g^{c∗(g)}(t) ⊙ (0, e_z) = γ˜_g^{Z_α c∗(g h_α)}(t) h_α ⊙ (0, e_z)
⇔ γ˜_g^{c∗(g)}(t) ⊙ (0, e_z) = γ˜_g^{Z_α c∗(g h_α)}(t) ⊙ (0, e_z)
⇔ c∗(g) = Z_α c∗(g h_α).   (83)

Finally, our constraint (79) follows from Z_α^T = Z_α⁻¹.
6.2 Neighboring Exponential Curves in SE(3)

Here we generalize the concept of a family of neighboring exponential curves ((45) in the R^d case, and Definition 3 in the SE(2) case) to the SE(3) case.
Definition 5 Given a fixed reference point g ∈ SE(3) and velocity components c = c(g) = (c^(1)(g), c^(2)(g)) ∈ R⁶, we define the family {γ˜^c_{h,g}(·)} of neighboring exponential curves by

γ˜^c_{h,g}(t) := h exp_{SE(3)}( t Σ_{i=1}^{6} (R˜_{h⁻¹g} c)ⁱ A_i ),   (84)

² Equation (82) follows from (122) in Appendix 2, by setting Q = Z_α.
Fig. 7 Illustration of the family of curves γ˜^c_{h,g} in SE(3). Left: the (spatially projected) exp-curve t → P_{R³} γ˜_g^c(t), with g = (x, R_n), in red. The frames indicate the rotation part P_{SO(3)} γ˜_g^c(t), which for clarity we depicted only at two time instances t. Middle: the neighboring exp-curve t → γ˜^c_{h,g}(t) with h = (x′, R_n), x′ ≠ x, in blue, i.e., the neighboring

with rotation matrix R˜_{h⁻¹g} ∈ SO(6) defined by

R˜_{h⁻¹g} := ( (R′)^T R  0 ; 0  (R′)^T R )   (85)

for all g = (x, R), h = (x′, R′) ∈ SE(3).
The next lemma motivates our specific choice of neighboring exponential curves. The geometric idea is visualized in Fig. 7 and is in accordance with Fig. 6 for the SE(2) case.

Lemma 3 Exponential curve γ˜^c_{h,g} departing from h = (x′, R′) ∈ SE(3) given by (84) has the same spatial and rotational velocity as exponential curve γ˜_g^c departing from g = (x, R) ∈ SE(3).

On the Lie algebra level, we have that the initial velocity component vectors of the curves γ˜_g^c and γ˜^c_{h,g} relate via c → R˜_{h⁻¹g} c.

On the Lie group level, we have that the curves themselves, γ˜_g^c(·) = (x_g(·), R_g(·)) and γ˜^c_{h,g}(·) = (x_h(·), R_h(·)), relate via

x_h(t) = x_g(t) − x + x′,  R_h(t) = R_g(t) R⁻¹ R′.

Proof See Appendix 2.
Remark 9 Lemma 3 extends Lemma 1 to the SE(3) case. When projecting the curves γ˜_g^c and γ˜^c_{h,g} into the quotient, one has that the curves γ˜_g^c ⊙ (0, a) and γ˜^c_{h,g} ⊙ (0, a) in R³ ⋊ S² carry the same spatial and angular velocity.

Remark 10 In order to construct the family of neighboring exponential curves in SE(3), one applies the transformation c → R˜_{h⁻¹g} c in the Lie algebra. Such a transformation preserves the left-invariant metric:
exp-curve departing with the same orientation and a different position. Right: the exp-curve t → γ˜^c_{h,g}(t) with h = (x, R_{n′}), n′ ≠ n, i.e., the neighboring exp-curve departing with the same position and a different orientation (Color figure online)
1 = G_{γ˜_g^c(t)}( γ˜̇_g^c(t), γ˜̇_g^c(t) ) = G_{γ˜^c_{h,g}(t)}( γ˜̇^c_{h,g}(t), γ˜̇^c_{h,g}(t) ),

for all h ∈ SE(3) and all t ∈ R. For further differential geometrical details see Appendix 2.
6.3 Exponential Curve Fits in SE(3) of the First Order

Now let us generalize the first-order exponential curve fits of Theorem 1 to the setting of R³ ⋊ S². Here we first consider the following optimization problem on SE(3) (generalizing (44)):

c∗(g) = arg min_{c∈R⁶, ‖c‖_μ=1, c⁶=0} ∫_{SE(3)} G˜_ρ(h⁻¹g) | d/dt V˜(γ˜^c_{h,g}(t)) |_{t=0} |² dμ(h).   (88)

Recall that ‖·‖_μ was defined in (24), V˜ in (26), and μ in (25). The reason for including the condition c⁶ = 0 will become clear after defining the structure matrix.
6.3.1 The Structure Tensor on SE(3)

We define the structure matrices S^{s,ρ} of U˜ by

(S^{s,ρ}(U˜))(g) = ∫_{SE(3)} G˜_ρ(h⁻¹g) R˜^T_{h⁻¹g} ∇_s U˜(h) (∇_s U˜(h))^T R˜_{h⁻¹g} dμ(h),   (89)

where we use the matrix R˜_{h⁻¹g} defined in Eq. (85). Again we use the short notation S^{s,ρ} := S^{s,ρ}(U˜).
Remark 11 By construction ((1) and (10)) we have

(A₆ U˜)(g) = lim_{h↓0} h⁻¹ ( U˜(g e^{h A₆}) − U˜(g) ) = 0,

so the null space of our structure matrix includes

N := span{(0, 0, 0, 0, 0, 1)^T}.   (90)
Remark 12 We assume that s = (s_p, s_o) and the function U˜ are chosen in such a way that the null space of the structure matrix is precisely equal to N (and not larger).

Due to the assumption in Remark 12, we need to impose the condition

c⁶ = 0 ⇔ span{γ˜̇_g^c(0)} ∩ N = {0}   (91)

in our exponential curve optimization to avoid non-uniqueness of solutions. To clarify this, we note that the optimization functional in (88) can be rewritten as

E(c) := c^T M_μ² S^{s,ρ}(g) M_μ² c,   (92)

as we will show in the next theorem, where we solve the optimization problem for first-order exponential curve fits. Indeed, for uniqueness we need (91), as otherwise we would have E(c + M_μ⁻² c₀) = E(c) for all c₀ ∈ N.
Theorem 4 (First-Order Fit via Structure Tensor) The normalized eigenvector M_μ c∗(g) with smallest nonzero eigenvalue of the rescaled structure matrix M_μ S^{s,ρ}(g) M_μ provides the solution c∗(g) to optimization problem (88).

Proof All steps (except for the final step of this proof, where the additional constraint c⁶ = 0 enters the problem) are analogous to the proof of the first-order method in the SE(2) case: the proof of Theorem 1. We now shortly repeat these first steps. First we rewrite the time derivative as a directional derivative, which is then rewritten in terms of the gradient:

| d/dt V˜(γ˜^c_{h,g}(t)) |_{t=0} |² = c^T M_μ² R˜^T_{h⁻¹g} (∇V˜(h)) (∇V˜(h))^T R˜_{h⁻¹g} M_μ² c.   (93)

This again yields the following optimization functional

E(c) = ∫_{SE(3)} G˜_ρ(h⁻¹g) c^T M_μ² R˜^T_{h⁻¹g} ∇V˜(h) (∇V˜(h))^T R˜_{h⁻¹g} M_μ² c dμ(h) = c^T M_μ² S^{s,ρ}(g) M_μ² c.   (94)

So, just as in the SE(2) case, we have the following Euler–Lagrange equations:

M_μ² S^{s,ρ}(g) M_μ² c∗(g) = λ₁ M_μ² c∗(g)   (95)
⇔ M_μ S^{s,ρ}(g) M_μ (M_μ c∗(g)) = λ₁ (M_μ c∗(g)).   (96)

Again the second equality, (96), follows from the first, (95), by multiplication by M_μ⁻¹.

Finally, the constraint c⁶ = 0 is included in our optimization problem (88) to exclude the null space (90) from the optimization; therefore, we take the eigenvector with the smallest nonzero eigenvalue, providing us the final result.
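Numerically, the "smallest nonzero eigenvalue" selection of Theorem 4 needs a tolerance, since the null direction e₆ of the structure matrix is only zero up to machine precision. A small sketch (the 6 × 6 matrix is a toy example with null space span{e₆}, cf. Eq. (90), not data from the paper):

```python
# Eigenvalue selection of Theorem 4: skip the null space, take the eigenvector
# of the smallest remaining eigenvalue.
import numpy as np

def smallest_nonzero_eigvec(S, tol=1e-10):
    eigval, eigvec = np.linalg.eigh(S)          # ascending eigenvalues
    idx = np.flatnonzero(eigval > tol)[0]       # first (numerically) nonzero one
    return eigvec[:, idx]

S = np.diag([5.0, 4.0, 0.5, 3.0, 2.0, 0.0])     # toy matrix with N = span{e6}
v = smallest_nonzero_eigvec(S)                  # eigenvector for eigenvalue 0.5
```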
6.3.2 Projected Exponential Curve Fits in R³ ⋊ S²

In reducing the problem to R³ ⋊ S² we first note that

S^{s,ρ}(g h_α) = Z_α^T S^{s,ρ}(g) Z_α,

with Z_α defined in Eq. (80), and where we recall h_α = (0, R_{e_z,α}).

In the following theorem we summarize the well-posedness of our projected curve fits on data U : R³ ⋊ S² → R and use the quotient structure to simplify the structure tensor.
Theorem 5 (First-Order Fit and Quotient Structure) Let g = (x, R_n) and h = (x′, R_{n′}), where R_n and R_{n′} denote any rotations which map e_z onto n and n′, respectively. Then the structure tensor defined by (89) can be expressed as

S^{s,ρ}(g) = 2π ∫_{R³} ∫_{S²} G_{ρ_p}(x − x′) G^{S²}_{ρ_o}(R_n^T n′) R˜^T_{h⁻¹g} ∇V˜(h) (∇V˜(h))^T R˜_{h⁻¹g} dσ(n′) dx′,   (97)

which is independent of the choice of R_n and R_{n′}. The normalized eigenvector M_μ c∗(x, R_n) with smallest nonzero eigenvalue of the rescaled structure matrix M_μ S^{s,ρ}(g) M_μ provides the solution of (88) and defines a projected curve fit in R³ ⋊ S²:

γ∗_{(x,n)}(t) = γ˜_{(x,R_n)}^{c∗(x,R_n)}(t) ⊙ (0, e_z),   (98)
Fig. 8 Volume rendering of a 3D test image. The real part of the orientation score (cf. Sect. 7.1) provides us a density U on R³ ⋊ S². Left: spatial parts of exponential curves (in black) aligned with the spatial generator A₃|_{(x, R_{n_max(x)})}, where n_max(x) = arg max_{n∈S²} U(x, n). Right: spatial parts of our exponential curve fits, Eq. (104), computed via the algorithm in Sect. 6.3.3, which better follow the curvilinear structures
Proof The proof consists of two parts. First we prove that (97) follows from the structure tensor defined in (89). Then we use Lemma 2 to prove that our projected exponential curve fit (98) is well defined. For both we use Theorem 4 as our starting point.

For the first part of the proof we note that the integrand in the structure tensor definition Eq. (89) is invariant under the substitution h → h h_α = h (0, R_{e_z,α}) of the integration variable. To show this we first note that Z_α, defined in (80), satisfies Z_α (Z_α)^T = I. Furthermore, we have

∇V˜(h h_α) = Z_α^T ∇V˜(h),  R˜^T_{(h h_α)⁻¹g} = R˜^T_{h⁻¹g} Z_α,

and G˜_ρ(h h_α) = G˜_ρ(h). Therefore, integration over the third Euler angle α is no longer needed in the definition of the structure tensor (89), as it just produces a constant factor 2π.

For the second part we apply Lemma 2, and thereby it remains to be shown that the condition c∗(g h_α) = Z_α^T c∗(g) is satisfied. This directly follows from (96):

S^{s,ρ}(g h_α) c∗(g h_α) = λ₁ c∗(g h_α)
⇔ Z_α^T S^{s,ρ}(g) Z_α c∗(g h_α) = λ₁ c∗(g h_α)
⇔ S^{s,ρ}(g) Z_α c∗(g h_α) = λ₁ Z_α c∗(g h_α)
⇔ Z_α c∗(g h_α) = c∗(g),   (99)

which shows our condition.
6.3.3 Torsion-Free Exponential Curve Fits of the First Order via a Twofold Approach

Theorem 4 provides us exponential curve fits that possibly carry torsion. From Eq. (22) we deduce that the torsion norm of such an exponential curve fit is given by

‖τ‖ = ‖c^(1)‖⁻¹ (c¹c⁴ + c²c⁵ + c³c⁶) κ.

Together with the fact that we exclude the null space N from our optimization domain by including the constraint c⁶ = 0, this results in insisting on zero torsion along horizontal exponential curves, where c¹ = c² = 0. Along other exponential curves torsion appears if c¹c⁴ + c²c⁵ ≠ 0.

Now the problem is that insisting, a priori, on zero torsion for horizontal curves while allowing nonzero torsion for other curves is undesirable. On top of this, torsion is a higher-order, less stable feature than curvature. Therefore, we would like to exclude it altogether from our exponential curve fits presented in Theorems 4 and 5, by a different theory and algorithm. The results of the algorithm show that even if structures do have torsion, the local exponential curve fits do not need to carry torsion in order to achieve good results in the local frame adaptation, see, e.g., Fig. 8.
The constraint of zero torsion forces us to split our exponential curve fit into a twofold algorithm:

Step 1 Estimate at g ∈ SE(3) the spatial velocity part c^(1)(g) from the spatial part of the structure tensor.
Step 2 Move to a different location g_new ∈ SE(3) where a horizontal exponential curve fit makes sense, and then estimate the angular velocity c^(2) from the rotation part of the structure tensor over there.

This forced splitting is a consequence of the next lemma.
Lemma 4 Consider the class of exponential curves with nonzero spatial velocity c^(1) ≠ 0 such that their spatial projections do not have torsion. Within this class the constraint c⁶ = 0 does not impose constraints on curvature if and only if the exponential curve is horizontal.

Proof For a horizontal curve γ˜_g^c(t) we have χ = 0 ⇔ c¹ = c² = 0, and indeed ‖τ‖ = ‖c^(1)‖⁻¹ (c^(1) · c^(2)) κ = c⁶ κ = 0, and we see that the constraints c⁶ = 0 and ‖τ‖ = 0 reduce to only one constraint. The curvature magnitude stays constant along the exponential curve, and the curvature vector at t = 0, recall Eq. (21), is in this case given by

κ(0) = (1/c³) ( c⁵c³, −c⁴c³, 0 )^T,

which can be any vector orthogonal to the spatial velocity c^(1) = (0, 0, c³)^T. Now let us check whether the condition is necessary. Suppose t → γ˜_g^c(t) is not horizontal, and suppose it is torsion-free with c⁶ = 0. Then we have c¹c⁴ + c²c⁵ = 0, and as a result the initial curvature

κ(0) = (1/c³) ( c⁵c³, −c⁴c³, c⁴c² − c¹c⁵ )^T

is both orthogonal to the vector c^(1) = (c¹, c², c³)^T and orthogonal to (−c², c¹, 0)^T, and is thereby constrained to a one-dimensional subspace.
From these observations we draw the following conclusion for our exponential curve fit algorithms.

Conclusion In order to allow for all possible curvatures in our torsion-free exponential curve fits, we must relocate the exponential curve optimization at g ∈ SE(3) in U˜ : SE(3) → R to a position g_new ∈ SE(3) where a horizontal exponential curve can be expected. Subsequently, we can use Lemma 3 to transport the horizontal and torsion-free curve through g_new back to a torsion-free exponential curve through g.

This conclusion is the central idea behind our following twofold algorithm for exponential curve fits.
Algorithm Twofold Approach

The algorithm follows the subsequent steps:

Step 1a Initialization. Compute the structure tensor S^{s,ρ}(g) from the input image U : R³ × S² → R⁺ via Eq. (97).

Step 1b Find the optimal spatial velocity:

c^(1)(g) = arg min_{c^(1)∈R³, ‖c^(1)‖=μ⁻¹} ( (c^(1))^T , 0^T ) M_μ² S^{s,ρ}(g) M_μ² ( c^(1) ; 0 ),   (100)

for g = (x, R_n), which boils down to finding the eigenvector with minimal eigenvalue of the 3 × 3 spatial submatrix of the structure tensor (89).
Step 2a Given c^(1)(g) we aim for an auxiliary set of coefficients, where we also take into account the rotational velocity. To achieve this in a stable way we move to a different location in the group:

g_new = (x, R_{n_new}), with n_new = R_n c^(1),   (101)

and apply the transport of Lemma 3 afterwards. At g_new we enforce horizontality, see Remark 13 below, and we consider the auxiliary optimization problem

c_new(g_new) = arg min_{c∈R⁶, ‖c‖_μ=1, c¹=c²=c⁶=0} c^T M_μ² S^{s,ρ}(g_new) M_μ² c.   (102)

Here zero deviation from horizontality (34) and zero torsion (22) are equivalent to the imposed constraint:

χ = 0 and ‖τ‖ = 0 ⇔ c¹ = c² = c⁶ = 0.
Step 2b The auxiliary coefficients c_new(g_new) = (0, 0, c³(g_new), c⁴(g_new), c⁵(g_new), 0)^T provide a torsion-free, horizontal exponential curve fit γ˜_{g_new}^{c_new} through g_new. Now we apply transport (via Lemma 3) of this horizontal exponential curve fit towards the corresponding exponential curve through g:

c∗_final(g) = ( R_n^T R_{n_new}  0 ; 0  R_n^T R_{n_new} ) c_new(g_new).   (103)

This gives the final, torsion-free exponential curve fit t → γ˜_g^{c∗_final(g)}(t) in SE(3), yielding the final output projected curve fit

t → γ˜_g^{c∗_final(g)}(t) ⊙ (0, e_z) ∈ R³ × S²,   (104)

with g = (x, R_n), recall Eq. (74).
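The twofold algorithm above can be condensed into a short numerical sketch. Everything below is illustrative, not the authors' implementation: `S_at` stands for a precomputed structure tensor field evaluated at the current rotation, `rotation_ez_to` is one admissible choice of R_n with R_n e_z = n, and the sign convention and unit normalization of c^(1) are simplifications of ours.

```python
# Twofold, torsion-free first-order curve fit on SE(3), cf. Steps 1a-2b.
import numpy as np

def rotation_ez_to(n):
    """A rotation R_n with R_n e_z = n (Rodrigues' formula; n unit, n != -e_z)."""
    ez = np.array([0.0, 0.0, 1.0])
    v, cos = np.cross(ez, n), float(np.dot(ez, n))
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + cos)

def twofold_fit(S_at, Rn, mu):
    # Step 1b: spatial velocity from the 3x3 spatial block of S (Eq. (100)).
    w, V = np.linalg.eigh(S_at(Rn)[:3, :3])
    c1 = V[:, 0] if V[2, 0] >= 0 else -V[:, 0]   # fits are defined up to time reversal
    # Step 2a: jump to g_new with n_new = R_n c^(1) (Eq. (101)); fit a horizontal,
    # torsion-free curve: c1 = c2 = c6 = 0, only (c3, c4, c5) free (Eq. (102)).
    Rn_new = rotation_ez_to(Rn @ c1)
    P = np.zeros((6, 3)); P[2, 0] = P[3, 1] = P[4, 2] = 1.0   # embeds (c3, c4, c5)
    D = np.diag([mu, 1.0, 1.0])                               # M_mu on (c3, c4, c5)
    w2, V2 = np.linalg.eigh(D @ P.T @ S_at(Rn_new) @ P @ D)
    c_new = P @ np.linalg.solve(D, V2[:, 0])                  # horizontal fit at g_new
    # Step 2b: transport back to g via the block rotation of Eq. (103).
    B = Rn.T @ Rn_new
    return np.block([[B, np.zeros((3, 3))], [np.zeros((3, 3)), B]]) @ c_new

S_field = lambda R: np.diag([2.0, 3.0, 0.5, 1.0, 4.0, 0.0])   # toy, constant field
c_star = twofold_fit(S_field, Rn=np.eye(3), mu=1.0)
```

For a structure tensor whose spatial block already has e_z as its weakest direction, the jump is trivial (g_new = g) and the output is a horizontal tangent with c¹ = c² = c⁶ = 0, as constraint (34)/(22) requires.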
Remark 13 In Step 2a of our algorithm we jump to a new location g_new = (x, R_{n_new}) with possibly different orientation n_new, such that the spatial tangent vector

Σ_{i=1}^{3} cⁱ A_i|_{(x, R_{n_new})}

points in the same direction as n_new ∈ S², recall Eq. (31), from which it follows that n_new is indeed given by (101). If c^(1) = (c¹, c², c³)^T = a = (0, 0, 1)^T then n_new = n.
Lemma 5 The preceding algorithm is well defined on the quotient R³ ⋊ S² = SE(3)/({0} × SO(2)).

Proof To show that the preceding algorithm is well defined on the quotient, we need to show that the final result (104) is independent of both the choice of R_n ∈ SO(3) s.t. R_n e_z = n and the choice of R_{n_new} ∈ SO(3) s.t. R_{n_new} e_z = n_new.

First, we show independence of the choice of R_n. We apply Lemma 2, and thereby it remains to be shown that the condition c∗_final(g h_α) = Z_α^T c∗_final(g) is satisfied. This follows directly from Eq. (103), as long as the n_new found in Step 2a is independent of the choice of R_n. This property indeed follows from c^(1)(g h_α) = R^T_{e_z,α} c^(1)(g), which can be proven analogously to (99). Then we have

n_new(g h_α) = R_n R_{e_z,α} c^(1)(g h_α) = R_n R_{e_z,α} R^T_{e_z,α} c^(1)(g) = n_new(g).   (105)

So we conclude that (104) is indeed independent of the choice of R_n.

Finally, Eq. (104) is independent of the choice of R_{n_new}. This follows from c_new(g h_α) = Z_α^T c_new(g) in Step 2a. Then c∗_final in Eq. (103) is independent of the choice of R_{n_new}, because the factor Z_α^T in c_new → Z_α^T c_new is canceled by R_{n_new} → R_{n_new} R_{e_z,α} in Eq. (103).
In Fig. 8 we provide an example of spatially projected exponential curve fits in SE(3) via the twofold approach. Here we see that the resulting gauge frames better follow the curvilinear structures of the data (in comparison with the normal left-invariant frame).
6.4 Exponential Curve Fits in SE(3) of the Second Order

In this section we will generalize Theorem 2 to the case d = 3, where again we include the restriction to torsion-free exponential curves.

6.4.1 The Hessian on SE(3)

For second-order curve fits we consider the following optimization problem:

c∗(g) = arg min_{c∈R⁶, ‖c‖_μ=1, c⁶=0} d²/dt² V˜(γ˜_g^c(t)) |_{t=0},   (106)

with V˜ = G˜_s ∗ U˜. Before solving this optimization problem in Theorem 6, we first define the 6 × 6 non-symmetric Hessian matrix by

(H^s(U˜))(g) = [ (A_j A_i V˜)(g) ], with V˜ = G˜_s ∗ U˜,   (107)

where i = 1, . . . , 6 denotes the row index and j = 1, . . . , 6 denotes the column index. Again we write H^s := H^s(U˜).
Theorem 6 (Second-Order Fit via Symmetric Sum Hessian) Let g ∈ SE(3) be such that the symmetrized Hessian matrix ½ M_μ⁻¹ (H^s(g) + (H^s(g))^T) M_μ⁻¹ has eigenvalues with the same sign. Then the normalized eigenvector M_μ c∗(g) with smallest absolute nonzero eigenvalue of the symmetrized Hessian matrix provides the solution c∗(g) of optimization problem (106).

Proof Similar to the proof of Theorem 2 (only now with summations from 1 to 5). Again we include our additional constraint c⁶ = 0 by taking the smallest nonzero eigenvalue.
Remark 14 The restriction to g ∈ SE(3) such that the eigenvalues of the symmetrized Hessian carry the same sign is necessary for a unique solution of the optimization. Note that in case of our first-order approach via the positive definite structure tensor, no such cumbersome constraints arise. In case g ∈ SE(3) is such that the eigenvalues of the symmetrized Hessian have different signs, there are two options:

1. Move towards a neighboring point where the Hessian eigenvalues have the same sign, and apply transport (Lemma 3, Fig. 7) of the exponential curve fit at that neighboring point.
2. Still take c∗(g) as the eigenvector with smallest absolute eigenvalue (representing minimal absolute principal curvature), though this no longer solves (106).
In order to obtain torsion-free exponential curve fits of the
second order via our twofold algorithm, we follow the same
algorithm as in Subsection 6.3.3, but now with the Hessian
field H^s (107) instead of the structure tensor field.
Step 1a Initialization. Compute the Hessian H^s(g) from the input
data U : R^3 × S^2 → R^+ via Eq. (107).
Step 1b Find the optimal spatial velocity by (100), where we
replace M_μ^2 S^{s,ρ}(g) M_μ^2 by H^s(g).
Step 2a We again fit a horizontal curve at g_new given by (101).
The procedure is done via (102), where we again replace
M_μ^2 S^{s,ρ}(g) M_μ^2 by H^s(g).
Step 2b Remains unchanged. We again apply Eq. (103) and
Eq. (104).
There are some serious computational technicalities in the
efficient computation of the entries of the Hessian for discrete
input data, but this is outside the scope of this article and will
be pursued in future work.
Remark 15 In Appendix 3 we propose another twofold
second-order exponential curve fit method. Here one solves
a variational problem for exponential curve fits where the
exponentials are factorized over, respectively, spatial and angular
part. Empirically, this approach performs well (see, e.g.,
Fig. 9).

Fig. 9 In black the spatially projected part of exponential curve fits
t → γ_g^c(t) of the second kind (fitted to the real part of the 3D invertible
orientation score, for details see Fig. 10) of the 3D image visualized via
volume rendering. Left output of the twofold approach outlined in
Sect. 6.4.2. Right output of the twofold approach outlined in
Appendix 3 with s_p = 1/2, s_o = (1/2)(0.4)^2, μ = 10
7 Image Analysis Applications

In this section we present examples of applications where
gauge frames in S E(d), obtained via exponential curve
fits, are used for defining data-adaptive left-invariant operators.
Before presenting the applications, we start by briefly
summarizing the invertible orientation score theory in Sect. 7.1.
In case d = 2 the application presented is the enhancement
of the vascular tree structure in 2D retinal images via
differential invariants based on gauge frames. This is achieved by
extending the classical Frangi vesselness filter [36] to
distributions U˜ on S E(2). Gauge frames in S E(2) can also be
used in nonlinear multiple-scale crossing-preserving
diffusions as demonstrated in [63], but we will not discuss this
application in this paper.
In case d = 3 the envisioned applications include blood
vessel detection in 3D MR angiography, e.g., the detection
of the Adamkiewicz vessel, relevant for surgery planning.
Also in extensions towards fiber enhancement of
diffusion-weighted MRI [29, 30] the nonlinear diffusions are of
interest. Some preliminary practical results have been obtained
on such 3D datasets [21, 23, 42], but here we shall restrict
ourselves to very basic artificial 3D datasets to show a proof of
concept, and leave these three applications for future work.
7.1 Invertible Orientation Scores

In the image analysis applications discussed in this section
our function U : R^d ⋊ S^{d−1} → R is given by the real part
of an invertible orientation score:

U(x, n) = Re{(W_ψ f)(x, R_n)},   (108)

where R_n is any rotation mapping reference axis a onto n ∈
S^{d−1}, where f ∈ L_2(R^d) denotes an input image, and where
ψ is a so-called 'cake-wavelet', and with

(W_ψ f)(x, R_n) = ∫_{R^d} ψ(R_n^{−1}(y − x)) f(y) dy.   (109)
For d > 2 we restrict ourselves to wavelets ψ satisfying

ψ(R_{a,α}^{−1} x) = ψ(x), for all x ∈ R^d,   (110)

and for all rotations R_{a,α} ∈ Stab(a) (for d = 3 this means
all rotations about the axis a, Eq. (2)). As a result U is well
defined on the left cosets R^d ⋊ S^{d−1} = S E(d)/({0} × S O(d −
1)), as the choice of R_n ∈ S O(d) mapping a onto n is
irrelevant. See Fig. 10 for an example of a 3D orientation score.
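For intuition, a transform of type (108)-(109) can be mimicked in a few lines of Python. We stress that the paper uses cake-wavelets [42]; the anisotropic Gaussian second-derivative kernel below is only a hypothetical stand-in (all names are ours) to illustrate the data structure U(x, θ_j):

```python
import numpy as np
from scipy import ndimage

def orientation_score(f, n_orientations=8, sigma=2.0, elongation=3.0):
    """Toy orientation score U(theta_j, x): correlate image f with rotated
    anisotropic line-detecting kernels (a stand-in for cake-wavelets)."""
    ys, xs = np.mgrid[-10:11, -10:11]
    U = np.empty((n_orientations,) + f.shape)
    for j in range(n_orientations):
        th = np.pi * j / n_orientations
        u = np.cos(th)*xs + np.sin(th)*ys        # coordinate along the line
        v = -np.sin(th)*xs + np.cos(th)*ys       # coordinate across the line
        g = np.exp(-(u**2/(elongation*sigma)**2 + v**2/sigma**2)/2)
        psi = (v**2/sigma**2 - 1) * g            # second derivative across v
        psi -= psi.mean()                        # zero-mean kernel
        U[j] = ndimage.correlate(f, psi, mode='nearest')
    return U
```

A horizontal line in the image then responds most strongly (in absolute value) in the orientation layer θ = 0, which is the crossing-preserving splitting over orientations that the operators in the next sections act on.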
If we restrict ourselves to disk-limited images, exact reconstruction
is performed via the adjoint:

f = W_ψ^∗ W_ψ f
  = F^{−1}[ ω → ((2π)^{d/2} M_ψ(ω))^{−1} ∫_{S O(d)} F[W_ψ f(·, R)](ω) (F ψ)(R^{−1}ω) dμ_{S O(d)}(R) ],

if ψ is an admissible wavelet. The conditions for admissibility
of wavelets ψ are given in [24]. In this article, the wavelets
ψ are given either by the 2D 'cake-wavelets' used in [6, 23]
or by their recent 3D equivalents given in [42]. Detailed
formulas and recipes to construct such wavelets efficiently can
be found in [42], and in order to provide the global intuitive
picture they are depicted in Fig. 11.
In the subsequent sections we consider two types of
operators acting on the invertible orientation scores (recall Φ in
the commuting diagram of Fig. 3):

1. for d = 2, differential invariants on orientation scores
based on gauge frames {B1, B2, B3}.
2. for d = 2, 3, nonlinear adaptive diffusions steered along
the gauge frames, i.e.,

W(x, n, t) = W˜(x, R_n, t) = Φ_t(U˜)(x, R_n),   (111)

where W˜(g, t), with t ≥ 0, is the solution of

∂W˜/∂t (g, t) = Σ_i D_{ii} (B_i|_g)^2 W˜(g, t),   (112)

where the gauge frame is induced by an exponential curve
fit to data U˜ at location g ∈ S E(d).
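A minimal sketch of one explicit time step of a diffusion of type (112), here on a 2D grid with a precomputed per-pixel frame (all names are ours; the actual implementation acts on S E(d) with left-invariant derivatives, this only illustrates diffusion steered along adapted directions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def frame_diffusion_step(W, frames, D, dt=0.2):
    """One explicit Euler step of dW/dt = sum_i D_ii (B_i)^2 W on a 2D grid.
    frames : array (k, 2, H, W) of unit vectors b_i(x), an adapted frame.
    D      : length-k diagonal diffusivities.
    Second directional derivatives use bilinearly interpolated W(x +/- b_i)."""
    H_, W_ = W.shape
    yy, xx = np.mgrid[0:H_, 0:W_].astype(float)
    lap = np.zeros_like(W)
    for Dii, b in zip(D, frames):
        by, bx = b
        Wp = map_coordinates(W, [yy + by, xx + bx], order=1, mode='nearest')
        Wm = map_coordinates(W, [yy - by, xx - bx], order=1, mode='nearest')
        lap += Dii * (Wp - 2*W + Wm)   # (B_i)^2 W with step h = 1
    return W + dt * lap
```

Data that are constant along a frame direction are left untouched by diffusion in that direction, which is exactly the crossing-preserving behavior one wants from steering along the gauge frame.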
7.2 Experiments in S E(2)

We consider the application of enhancing and detecting the
vascular tree structure in retinal images. Such an image
processing task is highly relevant, as the retinal vasculature allows
noninvasive observation of the vascular system. A variety
of diseases such as glaucoma, age-related macular
degeneration, diabetes, hypertension, arteriosclerosis, or Alzheimer's
affect the vasculature and may cause functional or
geometric changes [41]. Automated quantification of these defects
promises massive screenings for vascular-related diseases on
the basis of fast and inexpensive retinal photography. To
automatically assess the state of the retinal vascular tree,
vessel segmentation is needed. Because retinal images
usually suffer from low contrast on small scales, the vasculature
in the images needs to be enhanced prior to the segmentation.

Fig. 11 Visualization of cake-wavelets in 2D (top) and 3D (bottom).
In 2D we fill up the 'pie' of frequencies with overlapping "cake
pieces," and application of an inverse DFT (see [6]) provides wavelets
whose real and imaginary parts are, respectively, line and edge
detectors. In 3D we include antisymmetrization and the Funk
transform [22] on L_2(S^2) to obtain the same, see [42]. The idea is to
redistribute spherical data from orientations towards circles lying in
planes orthogonal to those orientations. Here, we want the real part of
our wavelets to be line detectors (and not plate detectors) in the
spatial domain. In the figure one positive isolevel is depicted in
orange and one negative isolevel is depicted in blue (Color figure
online)

One well-established approach is the Frangi vesselness
filter [36], which is used in robust retinal vessel
segmentation methods [14, 51]. However, a drawback of the Frangi
filter is that it cannot handle crossings or bifurcations, which
make up an important part of the vascular network. This is
precisely where the orientation score framework and the
presented locally adaptive frame theory come into play.
The S E(2)-vesselness filter, extending Frangi vesselness
[36] to S E(2) (cf. [40]) and based on the locally adapted
frame {B1, B2, B3}, is given by the following left-invariant
operator:

Φ(U˜) = { e^{−R^2/(2σ_1^2)} (1 − e^{−S/(2σ_2^2)})   if Q ≥ 0,
        { 0                                         if Q < 0,   (113)

with

anisotropy: R = (B_1^2 U˜)/(B_2^2 U˜ + B_3^2 U˜),
structureness: S = (B_1^2 U˜)^2 + (B_2^2 U˜ + B_3^2 U˜)^2,
convexity: Q = B_2^2 U˜ + B_3^2 U˜,

and with σ_1 = 1/2 and σ_2 = 0.2 ‖B_2^2 U˜ + B_3^2 U˜‖_∞. Here the
decomposition of the vesselness into structureness, anisotropy, and
convexity follows the same general principles as the Frangi
vesselness. As vessels are line-like structures, we use the
exponential curve fits of second order obtained via the
symmetric product of the Hessian (i.e., solving the optimization
problem in Theorem 3).
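Given the second-order gauge derivatives, the vesselness expression can be sketched in numpy as follows. This is our own illustrative helper; in particular, the structureness factor 1 − e^{−S/(2σ₂²)} reflects a Frangi-style reading and should be taken as an assumption:

```python
import numpy as np

def se2_vesselness(B1B1U, Qarr, sigma1=0.5):
    """Sketch of the SE(2) vesselness from second-order gauge derivatives.
    B1B1U : array of B_1^2 U~ values over (x, y, theta).
    Qarr  : array of the convexity Q = B_2^2 U~ + B_3^2 U~."""
    S = B1B1U**2 + Qarr**2                         # structureness
    R = np.where(Qarr != 0, B1B1U / np.where(Qarr != 0, Qarr, 1.0), 0.0)
    sigma2 = 0.2 * np.max(np.abs(Qarr))            # 0.2 ||B2^2U~ + B3^2U~||_inf
    V = np.exp(-R**2/(2*sigma1**2)) * (1 - np.exp(-S/(2*sigma2**2)))
    return np.where(Qarr >= 0, V, 0.0)             # suppress concave responses
```

Responses are suppressed wherever the convexity Q is negative, mirroring the case distinction in the filter above.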
Similarly to the vesselness filter [36], we need a
mechanism to robustly deal with vessels of different width. This is
why for this application we extend the (all-scale) orientation
scores to multiple-scale invertible orientation scores. Such
multiple-scale orientation scores [63] coincide with wavelet
transforms on the similitude group S I M(2) = R^2 ⋊ S O(2) ×
R^+, where one uses a B-spline [32, 69] basis decomposition
along the log-radial axis in the Fourier domain. In our
experiments we used N = 4, 12, or 20 orientation layers and a
decomposition centered around M = 4 discrete scales a_l
given by

a_l = a_min e^{l (M−1)^{−1} log(a_max/a_min)},  l = 0, ..., M − 1,   (114)

where a_max is inversely proportional to
the Nyquist frequency ρ_n and a_min is close to the inner scale
[33] induced by sampling (see [63] for details). Then, the
multiple-scale orientation score is given by the following
wavelet transform W_ψ f : S I M(2) → C:

(W_ψ f)(x, θ, a) = ∫_{R^d} ψ(a^{−1} R_θ^{−1}(y − x)) f(y) dy,   (115)
and we again set U := Re{W_ψ f}. Finally we define the total
integrated multiple-scale S I M(2)-vesselness by

(Φ^{S I M(2)}(U))(x) = μ_∞^{−1} Σ_{i=1}^{M} μ_{i,∞}^{−1} Σ_{j=1}^{N} (Φ(U(·, ·, ·, a_i)))(x, θ_j),   (116)

where the S E(2)-vesselness operator Φ is given by Eq. (113),
and where μ_∞ and μ_{i,∞} denote maxima w.r.t. the sup-norm ‖·‖_∞
taken over the subsequent terms.
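The scale sampling a_l used above is simply a geometric progression between a_min and a_max, e.g.:

```python
import numpy as np

def scale_levels(a_min, a_max, M=4):
    """Scales a_l = a_min * exp(l/(M-1) * log(a_max/a_min)), l = 0..M-1:
    a geometric progression from a_min to a_max."""
    l = np.arange(M)
    return a_min * np.exp(l / (M - 1) * np.log(a_max / a_min))
```

Each successive scale is a constant factor (a_max/a_min)^{1/(M−1)} larger than the previous one, so the four scales cover the vessel-width range evenly on a logarithmic axis.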
Note that another option for constructing a S I M(2)-vesselness
is to use the nonadaptive left-invariant frame
{A1, A2, A3} instead of the gauge frame. This nonadaptive
S E(2)-vesselness operator is obtained by simply
replacing the B_i operators by the A_i operators in Eq. (113)
accordingly.
The aim of the experiments presented in this section is to
show the following advantages:

Advantage 1 The improvement of considering the
multiple-scale vesselness filter via gauge frames
in S E(2), compared to multiple-scale
vesselness [36] acting directly on images.
Advantage 2 Further improvement when using the gauge
frames instead of the left-invariant
vector fields in the S E(2)-vesselness (113).
In the following experiment, we test these three
techniques (Frangi vesselness [36], S I M(2)-vesselness via the
nonadaptive left-invariant frame, and the newly proposed
S I M(2)-vesselness via gauge frames) on the publicly
available3 High-Resolution Fundus (HRF) dataset [47],
containing vascular trees manually segmented by medical
experts. The HRF dataset consists of wide-field fundus
photographs for a healthy, a diabetic retinopathy, and a glaucoma
group (15 images each). A comparison of the three vesselness
filters on a small patch is depicted in Fig. 12. Here, we see
that our method performs better both at crossing and
noncrossing structures.
To perform a quantitative comparison, we devised a simple
segmentation algorithm to turn a vesselness-filtered image
V(f) into a segmentation. First an adaptive thresholding is
applied, yielding a binary image

f_B = Θ[V(f) − G_γ ∗ V(f) − h],   (117)

where Θ is the unit step function, G_γ is a Gaussian of
scale γ, and h is a threshold parameter. In
a second step, the connected morphological components in
f_B are subjected to size and elongation constraints.
Components counting fewer than τ pixels or showing elongations
below a threshold ν are removed. Parameters γ, τ, and ν
are fixed at 100 px, 500 px, and 0.85, respectively. The
vesselness map V(f) : R^2 → R is one of the three methods
considered.
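The two-step segmentation described above can be sketched as follows. The inertia-tensor-based elongation measure is our own stand-in, as its exact definition is not spelled out here:

```python
import numpy as np
from scipy import ndimage

def segment(V, h, gamma_px=100.0, tau=500, nu=0.85):
    """Adaptive thresholding of a vesselness map V against a Gaussian-smoothed
    background (Eq. (117)), followed by removal of connected components that
    are too small or insufficiently elongated."""
    fB = (V - ndimage.gaussian_filter(V, gamma_px)) > h   # Theta[... - h]
    labels, n = ndimage.label(fB)
    out = np.zeros_like(fB)
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size < tau:
            continue                          # size constraint: < tau pixels
        # elongation from eigenvalues of the 2x2 coordinate covariance
        ev = np.sort(np.linalg.eigvalsh(np.cov(np.vstack([ys, xs]))))
        if 1.0 - ev[0] / max(ev[1], 1e-12) >= nu:
            out[labels == lab] = True         # keep elongated components
    return out
```

With smaller test parameters, an elongated line survives both constraints while a compact blob is removed by the size test, which is the intended behavior on vessel-like versus noise-like components.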
The segmentation algorithm described above is evaluated
on the HRF dataset. Average sensitivity and accuracy over
the whole dataset are shown in Fig. 13 as a function of
the threshold value h. It can be observed that our method
performs considerably better than the one based on the
multi-scale Frangi filter. The segmentation results obtained with the
S I M(2)-vesselness (116) based on gauge frames are more
stable w.r.t. variations in the threshold h, and the performance
on the small vasculature has improved as measured via the
sensitivity. Average sensitivity and accuracy at a threshold
of h = 0.05 compare well with other segmentation
methods evaluated on the HRF dataset for the healthy cases (see
[14, Tab. 5], [40]). On the diabetic retinopathy and glaucoma
group, our method even outperforms existing segmentation
methods.

3 cf. http://www5.cs.fau.de/research/data/fundusimages/.

Fig. 13 ... correspond to ±1 σ. Right comparison of S I M(2)-vesselness
with and without including the gauge frame (i.e., using {A1, A2, A3}
in Eq. (113))

Fig. 14 Center original image from the HRF dataset (healthy subject
nr. 5). Rows the soft-segmentation (left) and the corresponding
performance maps (right) based on the hard segmentation (117). In green true
positives, in blue true negatives, in red false positives, compared to
manual segmentation by an expert. First row S I M(2)-vesselness (116) based
on the nonadaptive frame {A1, A2, A3}. Second row S I M(2)-vesselness
(116) based on the gauge frame (Color figure online)
Finally, regarding the second advantage we refer to
Fig. 14, where the S I M(2)-vesselness filtering via the locally
adaptive frame produces a visually much more
appealing soft-segmentation of the blood vessels than S I M(2)-vesselness
filtering via the nonadaptive frame. It therefore
also produces a more accurate segmentation, as can be
deduced from the comparison in Fig. 13. For comparison,
the multi-scale Frangi vesselness filter is also computed via
summation over single-scale results and max-normalized.
Generally, we conclude from the experiments that the locally
adaptive frame approach better reduces background noise,
showing far fewer false positives in the final segmentation
results. This can be seen from the typical segmentation results
on relatively challenging patches in Fig. 15.
7.3 Experiments in S E(3)

We now show first results of the extension of
coherence-enhancing diffusion via invertible orientation scores (CEDOS
[35]) of 2D images to the 3D setting. Again, data are
processed according to Fig. 3. First, we construct an
orientation score according to (108), using the 3D cake-wavelets
(Fig. 11). For determining the gauge frame we use the
first-order structure tensor method in combination with Eq. (118)
in Appendix 1. In CEDOS we have Φ = Φ_t , as defined in
(111) and (112), which is a diffusion along the gauge frame.
Fig. 15 Two challenging
patches (one close to the optic
disk and one far away from the
optic disk processed with the
same parameters). The gauge
frame approach typically
reduces false positives (red) on
the small vessels, and increases
false negatives (blue) at the
larger vessels. The top patch
shows a missing hole at the top
in the otherwise reasonable
segmentation by the expert
(Color figure online)
The diffusion in CEDOS can enhance elongated
structures in 3D data while preserving the crossings, as can be
seen in the two examples in Fig. 16. In these experiments,
as well as in the examples used in Figs. 8, 9, and 10, we
used the following 3D cake-wavelet parameters for
constructing the 3D invertible orientation scores: N_0 = 42, s_φ =
0.7, k = 2, N = 20, γ = 0.85, L = 16, evaluated on a
grid of 21 × 21 × 21 pixels; for details see [42]. The
settings for tangent vector estimation using the structure tensor
are s_p = (1/2)(1.5)^2, s_o = 0, ρ_o = (1/2)(0.8)^2, and μ = 0.5.
We used ρ_p = (1/2)(2)^2 for the first dataset (Fig. 16, top), and
ρ_p = (1/2)(3.5)^2 for the second dataset (Fig. 16, bottom). For
the diffusion we used t = 2.5, D^{11} = D^{22} = 0.01, D^{33} =
1, D^{44} = D^{55} = D^{66} = 0.04, where the diffusion matrix is
given w.r.t. the gauge frame {B1, B2, B3, B4, B5, B6} and the
normalized frame {μ^{−1}A1, μ^{−1}A2, μ^{−1}A3, A4, A5, A6}.
The advantages of including the gauge frames w.r.t. the
nonadaptive frame can be better appreciated in Fig. 17. Here,
we borrow from the neuroimaging community the glyph
visualization, a standard technique for displaying
distributions U : R^3 × S^2 → R^+. In such visualizations every
voxel contains a spherical surface plot (a glyph) in which
the radial component is proportional to the output value of
the distribution at that orientation, and the colors indicate the
orientations. One can observe that diffusion along the gauge
frames includes better adaptation for curvature. This is mainly
due to the angular part in the B3-direction, cf. Fig. 18, which
includes curvature, in contrast to the A3-direction. The angular
part in B3 causes some additional angular blurring, leading to
more isotropic glyphs.
8 Conclusion

Locally adaptive frames ('gauge frames') on images, based
on the structure tensor or Hessian of the images, are ill-posed
in the vicinity of complex structures. Therefore, we create
locally adaptive frames on distributions on S E(d), d = 2, 3,
that extend the image domain (with positions and
orientations). This gives rise to a whole family of local frames per
position, enabling us to deal with crossings and bifurcations.
In order to generalize gauge frames in the image domain
to gauge frames in S E(d), we have shown that exponential
curve fits give rise to suitable gauge frames. We
distinguished between exponential curve fits of the first order and
of the second order:

1. Along the first-order exponential curve fits, the first-order
variation of the data (on S E(d)) along the exponential
curve is locally minimal. The Euler–Lagrange equations
are solved by finding the eigenvector of the structure
tensor of the data with smallest eigenvalue.
2. Along the second-order exponential curve fits, a
second-order variation of the data (on S E(d)) along the
exponential curve is locally minimal. The Euler–Lagrange
equations are solved by finding the eigenvector of the
Hessian of the data with smallest eigenvalue.
In S E(2), the first-order approach is new, while the
second-order approach formalizes previous results. In S E(3), these
two approaches are presented for the first time. Here, it is
necessary to include a restriction to torsion-free exponential
curve fits in order to be compatible with both the null space
of the structure/Hessian tensors and the quotient structure of
R^3 ⋊ S^2. We have presented an effective twofold algorithm
to compute such torsion-free exponential curve fits.
Experiments on artificial datasets show that even if the elongated
structures have torsion, the gauge frame is well adapted to
the local structure of the data.
Finally, we considered the application of a differential
invariant for enhancing retinal images. Experiments show
clear advantages over the classical vesselness filter [36].
Furthermore, we also show clear advantages of including the
gauge frame over the standard left-invariant frame in S E(2).
Fig. 16 Results of CEDOS
with and without the use of
gauge frames, on 3D artificial
datasets containing highly
curved structures. Gauge frames
are obtained, see Appendix 1,
via first-order exponential curve
fits using the twofold algorithm
of Sect. 6.3.3. a 3D data. b Slice
of data. c Curve fits. d
Slice+Noise. e Gauge. f No
gauge. g 3D data. h Data slice. i
Curve fits. j Slice+Noise. k
Gauge. l No gauge

Regarding 3D image applications, we managed to construct
and implement crossing-preserving coherence-enhancing
diffusion via invertible orientation scores (CEDOS), for the
first time. However, it has only been tested on artificial
datasets. Therefore, in future work we will study the use
of locally adaptive frames in real 3D medical imaging
applications, e.g., in 3D MR angiography [43]. Furthermore, in
future work we will apply the theory of this work and focus
on the explicit algorithms, where we plan to release
Mathematica implementations of locally adaptive frames in S E(3).
Acknowledgments The authors wish to thank J.M. Portegies for
fruitful discussions on the construction of gauge frames in S E(3) and
T.C.J. Dela Haije for help in optimizing code for the S E(3) case.
Finally, we would like to thank the reviewers and Dr. A.J.E.M. Janssen
for careful reading and valuable suggestions on the structure of the
paper. The research leading to these results has received funding from
the European Research Council under the European Community's
Seventh Framework Programme (FP7/2007-2013) / ERC Grant Lie
Analysis, agr. nr. 335555.
Open Access This article is distributed under the terms of the Creative
Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit
to the original author(s) and the source, provide a link to the Creative
Commons license, and indicate if changes were made.
Appendix 1: Construction of the Locally Adaptive
Frame from an Exponential Curve Fit
Let γ˜_g^c(t) = g e^{t Σ_{i=1}^{n_d} c^i A_i} be an exponential curve through g
that fits data U˜ : S E(d) → R at g ∈ S E(d), in the Lie group
S E(d) of dimension n_d = d(d + 1)/2. In Sect. 5 (d = 2)
and in Sect. 6 (d = 3) we provide theory and algorithms to
derive such curves. In this section we assume γ˜_g^c(·) is given.
Recall from (17) that the (physical) velocity at time t of the
exponential curve γ˜_g^c equals (γ˜_g^c)'(t) = Σ_{i=1}^{n_d} c^i A_i|_{γ˜_g^c(t)}.
Recall that the spatial and, respectively, rotational
components of the velocity are stored in the vectors

c^(1) = (c^1, ..., c^d)^T ∈ R^d,  c^(2) = (c^{d+1}, ..., c^{n_d})^T ∈ R^{r_d}.

Let us write c = ((c^(1))^T, (c^(2))^T)^T ∈ R^{n_d}, with n_d = d + r_d.
Akin to the case d = 2 discussed in the introduction, we
define the gauge frame via B := (R_c)^T M_μ^{−1} A, but now with

B = (B_1, ..., B_{n_d})^T,  A = (A_1, ..., A_{n_d})^T,
M_μ = ( μI_d 0 ; 0 I_{r_d} ),  and  R_c = R_2 R_1 ∈ S O(n_d).

For explicit formulae of the left-invariant vector fields in the
d-dimensional case we refer to [31].
Now R_1 is the counterclockwise rotation that rotates the
spatial reference axis (a^T, 0^T)^T, recall our convention (2), onto
(‖μc^(1)‖ a^T, (c^(2))^T)^T strictly within the 2D plane spanned by these
two vectors. Rotation R_2 is the counterclockwise rotation
that rotates (‖μc^(1)‖ a^T, (c^(2))^T)^T onto ((μc^(1))^T, (c^(2))^T)^T strictly
within the 2D plane spanned by these two vectors.   (119)

As a result one has

R_2 R_1 (a^T, 0^T)^T = ((μc^(1))^T, (c^(2))^T)^T = M_μ c.

Theorem 7 (Construction of the gauge frame) Let c(g)
denote the local tangent components of the exponential curve
fit t → γ˜_g^{c(g)}(t) at g = (x, R) ∈ S E(d) in the data given by
U˜(x, R) = U(x, Ra). Consider the mapping of the frame
of left-invariant vector fields A|_g to the locally adaptive
frame:

B|_g := (R_{c(g)})^T M_μ^{−1} A|_g,   (120)

with R_c = R_2 R_1 ∈ S O(n_d), with subsequent
counterclockwise planar rotations R_1, R_2 given by (119). Then the
mapping A|_g → B|_g has the following properties:

– The main spatial tangent direction (L_g)_∗ ((a^T, 0^T)^T · A|_e) is
mapped to the exponential curve fit direction c^T(g) · A|_g.
– Spatial left-invariant vector fields that are G_μ-orthogonal
to this main spatial direction stay in the spatial part of
the tangent space T_g(S E(d)) under rotation R_c, and they
are invariant up to normalization under the action (120),
if and only if the exponential curve fit is horizontal.

Proof Regarding the first property we note that

(L_g)_∗ ((a^T, 0^T)^T · A|_e) = (a^T, 0^T)^T · A|_g,   (118)

as left-invariant vector fields are obtained by push-forward of
the left multiplication. Furthermore, by Eqs. (120) and (119)
we have

(a^T, 0^T)^T · B = M_μ^{−1} R_c (a^T, 0^T)^T · A = c · A.

Regarding the second property, we note that if b · a = 0 then

R_2 R_1 (b^T, 0^T)^T = R_2 (b^T, 0^T)^T,

since R_1 rotates in the plane spanned by (a^T, 0^T)^T and (0^T, (c^(2))^T)^T,
both orthogonal to (b^T, 0^T)^T. Moreover, γ˜_g^c is horizontal iff
c^(1)/‖c^(1)‖ = a, in which case the planar rotation R_2 reduces to
the identity, so that R_2 R_1 (b^T, 0^T)^T = (b^T, 0^T)^T and only spatial
normalization by μ^{−1} is applied.
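Our reading of this construction can be checked numerically: build the planar rotations R_1, R_2 and verify the main-direction property (a^T, 0^T)^T · B = c · A. The function names and the explicit planar-rotation formula are ours:

```python
import numpy as np

def planar_rotation(u, v, eps=1e-12):
    """Rotation in SO(n) mapping unit vector u onto unit vector v,
    acting only in the 2D plane spanned by u and v."""
    c = float(u @ v)
    w = v - c * u                       # component of v orthogonal to u
    nw = np.linalg.norm(w)
    if nw < eps:                        # u ~ v (antiparallel case is ill-posed)
        return np.eye(len(u))
    e2 = w / nw
    R = np.eye(len(u))
    R += nw * (np.outer(e2, u) - np.outer(u, e2))          # sin(theta) part
    R += (c - 1.0) * (np.outer(u, u) + np.outer(e2, e2))   # (cos(theta)-1) part
    return R

def gauge_frame_matrix(c, mu, a=np.array([0., 0., 1.])):
    """Matrix (R_c)^T M_mu^{-1} expressing the gauge frame B in the
    left-invariant frame A (our reading of Theorem 7, d = 3);
    c must satisfy |M_mu c| = 1."""
    Mmu = np.diag([mu]*3 + [1.]*3)
    v = Mmu @ c                                      # (mu c^(1); c^(2)), unit
    a6 = np.concatenate([a, np.zeros(3)])
    mid = np.concatenate([np.linalg.norm(v[:3]) * a, v[3:]])
    Rc = planar_rotation(mid, v) @ planar_rotation(a6, mid)   # R_2 R_1
    return Rc.T @ np.linalg.inv(Mmu)
```

By construction R_2 R_1 maps (a^T, 0^T)^T onto M_μc, so the row of the resulting matrix belonging to the main spatial direction reproduces the fitted components c.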
Remark 16 For d = 2 and a = (1, 0)^T the above theorem
can be observed in Fig. 5, where the main spatial direction A_1 =
cos θ ∂_x + sin θ ∂_y is mapped onto B_1 = c · A and where A_2
is mapped onto B_2 = μ^{−1}(− sin χ A_1 + cos χ A_2).

Remark 17 For d = 3 and a = (0, 0, 1)^T the above theorem
can be observed in Fig. 18, where the main spatial direction A_3 =
n · ∇_{R^3} is mapped onto B_3 = c · A, and where A_1 and A_2
are mapped to the strictly spatial generators B_1 and B_2. For
further details see [43].
Appendix 2: The Geometry of Neighboring Exponential Curves

In this appendix we provide some differential geometry
underlying the family of neighboring exponential curves.
First we prove Lemma 3 on the construction of the family
{γ˜_{h,g}^c} of neighboring exponential curves in S E(3), recall
Fig. 7, and then we provide an alternative coordinate-free
definition of γ˜_{h,g}^c in addition to our Definition 5.
For the proof of Lemma 3 we will just show the equalities
(86), as from this equality it directly follows by differentiation
w.r.t. t that the exponential curves γ˜_{h,g}^c(·) = (x_h(·), R_h(·))
and γ˜_g^c(·) = (x_g(·), R_g(·)) have the same spatial and angular
velocity. For the spatial velocities it is obvious; for the angular
velocities we note that the rotational velocity matrices Ω_h and
Ω_g are indeed equal:

R_h(t) = R_g(t) R^{−1} R'  ⇒
Ω_h := (d/dt R_h(t))|_{t=0} (R_h(0))^{−1}
     = (d/dt R_g(t))|_{t=0} (R_g(0))^{−1} = Ω_g,

where we note that R_g(t) = e^{tΩ_g} R_g(0) = e^{tΩ_g} R, and
R_h(t) = e^{tΩ_h} R_h(0) = e^{tΩ_h} R'.

Regarding the remaining derivation of (86), we note that
it is equivalent to

γ˜_{h,g}^c(t) = h (0, (R')^T R) (g^{−1} γ˜_g^c(t)) (0, (R')^T R)^{−1},   (121)

by the group product (4). So we focus on the derivation of this
identity. Let us set Q := (R')^T R. Now relying on the matrix
representation (6) and the matrix exponential, we deduce the
following identity for h = e = (0, I):

γ˜_{e,g}^c(t) = (0, Q) γ˜_e^c(t) (0, Q^{−1}),   (122)

which holds for all Q ∈ S O(3), in particular for Q =
(R')^T R. Consequently, we have

γ˜_{h,g}^c(t) = h γ˜_{e,g}^c(t) = h (0, Q) (g^{−1} γ˜_g^c(t)) (0, Q^{−1}),

from which the result follows.

Fig. 18 Visualization of the mapping of the left-invariant frame
{A_1, ..., A_6}|_g onto the locally adaptive frame {B_1, ..., B_6}|_g,
with γ˜_g^c(·) a non-horizontal and torsion-free exponential curve passing
through g = e = (0, I). The top row indicates the spatial part in R^3,
whereas the bottom row indicates the angular part in S^2. The top black
curve is the spatial projection of γ˜_g^c(·), and the bottom black curve is
the angular projection of the exponential curve. After application of R_2^T
the exponential curve is horizontal w.r.t. the frame {Aˆ_i}; subsequently
R_1^T leaves spatial generators orthogonal to a horizontal curve invariant,
so that Aˆ_1 = B_1 and Aˆ_2 = B_2 are strictly spatial, as is in accordance
with Theorem 7. The angular part of B_3 is shown via the curvature at
the blue arrow. The spatial part of B_4 and B_5 is depicted in the center
of the ball (Color figure online)
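The first step of the proof of Lemma 3, equality of the rotational velocity matrices of γ˜_g^c and the transported curve, is easy to verify numerically (a toy check with arbitrary rotations; all names are ours):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """so(3) matrix of the rotation-velocity vector w."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

Omega_g = skew([0.3, -0.1, 0.2])
R  = expm(skew([0.5, 0.4, -0.2]))          # R_g(0) = R
Rp = expm(skew([-0.3, 0.1, 0.7]))          # R_h(0) = R'
Rg = lambda t: expm(t * Omega_g) @ R       # rotational part of gamma~_g^c
Rh = lambda t: Rg(t) @ R.T @ Rp            # rotational part of transported curve

# central-difference estimate of Omega_h = (d/dt R_h)(0) (R_h(0))^{-1}
dt = 1e-6
Omega_h = (Rh(dt) - Rh(-dt)) / (2*dt) @ np.linalg.inv(Rh(0.0))
assert np.allclose(Omega_h, Omega_g, atol=1e-6)
```

The check succeeds because R R^{−1} cancels inside R_h(t) = e^{tΩ_g} R R^{−1} R' = e^{tΩ_g} R', so both curves share the same velocity matrix Ω.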
We conclude from Lemma 3 that our Definition 5 is indeed
the right definition for our purposes, but as it is a definition
expressed in left-invariant coordinates, it also leaves the
question what the underlying coordinate-free unitary map from
T_g(S E(3)) to T_h(S E(3)) actually is. Next we answer this
question, where we keep Eq. (121) in mind.

Definition 6 Let us define the unitary operator U_{h,g} :
T_g(S E(3)) → T_h(S E(3)) by

U_{h,g} := (L_h)_∗ ◦ Ad(0, (R')^T R) ◦ (L_{g^{−1}})_∗,

for each pair g = (x, R), h = (x', R') ∈ S E(3).
Remark 18 From (87) it follows that the unitary
correspondence between T_{γ˜_g^c(t)} and T_{γ˜_{h,g}^c(t)} is preserved for all t ∈ R.

Definition 7 The coordinate-free definition of γ˜_{h,g}^c is that it
is the unique exponential curve passing through h at t = 0
with

(γ˜_{h,g}^c)'(0) = U_{h,g}((γ˜_g^c)'(0)).

Remark 19 The previous definition (Definition 5) follows
from the coordinate-free definition (Definition 7). This can
be shown via identity (121), which can be rewritten as

γ˜_{h,g}^c(t) = (L_h ◦ conj(0, Q) ◦ L_{g^{−1}}) γ˜_g^c(t),   (123)

which indeed yields

(L_h ◦ conj(0, Q) ◦ L_{g^{−1}})_∗ = (L_h)_∗ ◦ Ad(0, Q) ◦ (L_{g^{−1}})_∗,

again with Q = (R')^T R, conj(g)h = ghg^{−1}, and Ad(g) =
(conj(g))_∗ the adjoint representation [44].
Appendix 3: Exponential Curve Fits on S E(3) of the Second Order via Factorization

Instead of applying a second-order exponential curve fit (106)
containing a single exponential, one can factorize the
exponentials, and consider the following optimization:

c∗(g) = argmin_{c = (c^1,...,c^5,0)^T, c^T M_μ^2 c = 1} | (d²/dt²) V˜(g e^{t(c^1 A_1 + c^2 A_2 + c^3 A_3)} e^{t(c^4 A_4 + c^5 A_5)}) |_{t=0} |,   (124)

with V˜ = G˜_s ∗ U˜. As shown in Theorem 8, the Euler–Lagrange equations
are solved by spectral decomposition of the symmetric Hessian
H̄^s(g), with V˜ = G˜_s ∗ U˜. This Hessian differs from the consistent
Hessian in Appendix 4.
Theorem 8 (Second-Order Fit via Factorization) Let g ∈
S E(3) be such that the Hessian matrix M_μ^{−1}(H̄^s(g))M_μ^{−1} has
two eigenvalues with the same sign. Then the normalized
eigenvector M_μ c∗(g) with smallest eigenvalue provides the
solution c∗(g) of the optimization problem (124).

Proof Define F_1 := c^(1) · A^(1) ∈ T_e(S E(3)) with A^(1) :=
(A_1, A_2, A_3)^T. Define F_2 := c^(2) · A^(2) ∈ T_e(S E(3))
with A^(2) := (A_4, A_5, A_6)^T. Define the vector fields F_1|_g :=
(L_g)_∗ F_1, F_2|_g := (L_g)_∗ F_2. Then
(d²/dt²) V˜(g e^{tF_1} e^{tF_2})|_{t=0}
= lim_{h→0} [V˜(g e^{hF_1} e^{hF_2}) − 2V˜(g) + V˜(g e^{−hF_1} e^{−hF_2})]/h²
= F_1 F_1 V˜(g) + F_2 F_2 V˜(g) + 2 F_1 F_2 V˜(g)
= (c(g))^T H̄^s(g) c(g).   (125)

This follows by direct computation and the formula

V˜(q e^{hF_k}) = V˜(q) + h F_k V˜(q) + (h²/2) F_k² V˜(q) + O(h³),

applied for (q = g e^{hF_1}, k = 2) and (q = g, k = 1).
Therefore, we can express the optimization functional as

E(c) = (d²/dt²) V˜(g e^{t(c^1 A_1 + c^2 A_2 + c^3 A_3)} e^{t(c^4 A_4 + c^5 A_5)})|_{t=0},   (126)

with again the boundary condition ϕ(c) = c^T M_μ^2 c = 1, from
which the result follows via Euler–Lagrange ∇E = λ∇ϕ
and left multiplication by M_μ^{−1}.
This approach can again be decomposed into the twofold
approach. Effectively, this means that in Sect. 6.4.2 the upper
triangle of the Hessian H^s is replaced by the transposed lower
triangle, whereas the lower triangle is maintained. This approach
performs well in practice; see, e.g., Fig. 9, where the results of
the exponential curve fits of second order are similar to the
exponential curve fits of first order.
Appendix 4: The Hessian Induced by the Left Cartan Connection

In this appendix we provide a formal differential geometric
underpinning for our choice of Hessian matrix

H(U˜) = [A_j(A_i U˜)],   (127)

where i denotes the row index and j the column index,
on S E(d); recall the case d = 2 in (62) and the
case d = 3 in (107). Recall from Theorems 2, 3 and
6 that this Hessian naturally appears via direct sums or
products in our exponential curve fits of second order
on S E(d).
Furthermore, we relate our exponential curve fit theory to
the theory in [44], where the same idea of second-order fits of
autoparallel curves to a given smooth function U˜ : M → R
on a Riemannian manifold is visible in [44, Eq. 3.3.50]. Here
we stress that in the book of Jost [44, Eq. 3.3.50] this is done
in the very different context of the torsion-free Levi-Civita
connections, instead of the left Cartan connection, which does
have nonvanishing torsion.
Let us start with the coordinate-free definition of the
Hessian induced by a given connection ∇∗ on the cotangent
bundle.

Definition 8 (Coordinate-free definition of the Hessian) On a
Riemannian manifold (M, G) with connection ∇∗ on T∗(M),
the Hessian of a smooth function U˜ : M → R is defined
coordinate independently [44, Def. 3.3.5] by ∇∗dU˜.

In coordinate-free form one has (cf. [44, Eq. 3.3.50])

(∇∗dU˜)(X_p, X_p) = (d²/dt²) U˜(γ(t))|_{t=0},

for the autoparallel (i.e., ∇_γ˙ γ˙ = 0) curve γ(t) with tangent
γ˙(0) = X_p passing through γ(0) = p at time zero.
Remark 20 In many books on differential geometry ∇∗ is
again denoted by ∇ (we also did this in our previous works
[25,28]). In this appendix, however, we distinguish between
the connection ∇ on the tangent bundle and its adjoint
connection ∇∗ on the cotangent bundle T ∗(S E (d)).
Let us recall that the structure constants of the Lie algebra
are given by

[A_i, A_j] = A_i A_j − A_j A_i = Σ_{k=1}^{n_d} c_{ij}^k A_k.   (128)

As shown in previous work [28] the left Cartan connection4 ∇
on M = (S E(d), G_μ) is the (metric compatible) connection
whose Christoffel symbols, expressed in the left-invariant
moving (co)frame of reference, are equal to the structure
constants of the Lie algebra:

Γ_{ij}^k = c_{ji}^k = −c_{ij}^k ∈ {−1, 0, 1}.   (129)

More precisely, this means that we can compute the covariant
derivative of a vector field Y = Σ_{k=1}^{n_d} y^k A_k (i.e., a section
in T(S E(d))) along the tangent γ˜˙(t) = Σ_{i=1}^{n_d} γ˜˙^i(t) A_i|_{γ˜(t)}
of some smooth curve t → γ˜(t) in S E(d). This is done as
follows:

∇_{γ˜˙} Y = Σ_{k=1}^{n_d} ( y˙^k − Σ_{i,j=1}^{n_d} c_{ij}^k γ˜˙^i y^j ) A_k,   (130)

where we follow the notation in Jost's book [44, p. 108]
and define y˙^k(t) := (d/dt) y^k(γ˜(t)). By duality this induces the

4 Also known as the minus Cartan connection.
following (adjoint) covariant derivative of a covector field λ
(i.e., a section in T ∗(S E (d))):
(131)
∇γ∗˙ λ =
˜
⎝ λ˙i +
nd ⎛
k, j=1
⎞
cikj λk γ˜˙ j ⎠
with λ̇_i(t) = d/dt λ_i(γ˜(t)). Then by antisymmetry of the
structure constants it directly follows (see, e.g., [25]) that the
auto-parallel curves are the exponential curves:

∇_{γ˜˙} γ˜˙ = 0 and γ˜˙(0) = c and γ˜(0) = g ⇔ γ˜ = γ˜_g^c.   (132)
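The cancellation behind (132) is elementary: along γ˜_g^c the tangent components in the left-invariant frame are constant, y^k(t) = c^k, so ẏ^k = 0, and the remaining term Σ_{i,j} c^k_{ij} c^i c^j vanishes by antisymmetry of c^k_{ij} in (i, j). A minimal numpy sketch for se(2) (our own illustration, with 0-based indices):

```python
import numpy as np

# Structure constants of se(2): [A_3, A_1] = A_2, [A_3, A_2] = -A_1,
# stored as C[k, i, j] = c^k_ij with [A_i, A_j] = sum_k c^k_ij A_k.
C = np.zeros((3, 3, 3))
C[1, 2, 0], C[1, 0, 2] = 1.0, -1.0   # c^2_31 = 1, c^2_13 = -1 (0-based indices)
C[0, 2, 1], C[0, 1, 2] = -1.0, 1.0   # c^1_32 = -1, c^1_23 = 1

rng = np.random.default_rng(0)
c = rng.standard_normal(3)           # constant velocity components of the curve

# Covariant derivative (130) along the exponential curve: y^k = c^k is constant,
# so nabla_{gamma-dot} gamma-dot has components -sum_ij c^k_ij c^i c^j.
residual = -np.einsum('kij,i,j->k', C, c, c)
assert np.allclose(residual, 0.0)    # exponential curves are auto-parallel
```

The check succeeds for any velocity c precisely because C is antisymmetric in its last two indices.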
Remark 21 Due to the torsion of the left Cartan connection, the
auto-parallel curves do not coincide with the geodesics w.r.t.
the metric tensor G_μ. This is in contrast to the Levi-Civita
connection (see, for example, Jost's book [44, Ch. 3.3, Ch. 4]),
where the auto-parallel curves are precisely the geodesics (see [44,
Ch. 4.1]).

Intuitively speaking, this means that in the curved
geometry of the left Cartan connection on SE(d) (which is present in
the domain of an orientation score, see Fig. 3) the 'straight
curves' (i.e., the auto-parallel curves) do not coincide with
the 'shortest curves' (i.e., the Riemannian distance
minimizers).
The left Cartan connection is the consistent connection on
SE(d) in the sense that its auto-parallel curves are the
exponential curves studied in this article. Therefore, the consistent
Hessian form on SE(d) is induced by the left Cartan
connection. Expressing it in the left-invariant frame yields
∇∗dU˜(A_i, A_j) := (∇∗_{A_i} dU˜)(A_j)
  = ( A_i A_j U˜ + Σ_{k=1}^{n_d} c^k_{ji} A_k U˜ ) ω^j(A_j)   (by (131))
  = A_i A_j U˜ + Σ_{k=1}^{n_d} c^k_{ji} A_k U˜   (by (11))
  = A_i(A_j U˜) + (A_j A_i − A_i A_j) U˜   (by (129))
  = A_j(A_i U˜),   (133)
where i denotes the row index and j the column index. So
we conclude from this computation that (127) is the correct
consistent Hessian on S E (d) for our purposes.
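For d = 2 this computation can be replayed symbolically. The sketch below is our own illustration, not part of the paper: it encodes the left-invariant vector fields on SE(2) and the nonzero structure constants, and verifies the chain of equalities leading to (133) for an arbitrary smooth U˜:

```python
import sympy as sp

x, y, q = sp.symbols('x y theta')
U = sp.Function('U')(x, y, q)   # arbitrary smooth function on SE(2)

# Left-invariant vector fields on SE(2):
def A(i, f):
    if i == 1:
        return sp.cos(q)*sp.diff(f, x) + sp.sin(q)*sp.diff(f, y)
    if i == 2:
        return -sp.sin(q)*sp.diff(f, x) + sp.cos(q)*sp.diff(f, y)
    return sp.diff(f, q)

# Nonzero structure constants of se(2): c^2_31 = 1 = -c^2_13, c^1_23 = 1 = -c^1_32.
c = {(2, 3, 1): 1, (2, 1, 3): -1, (1, 2, 3): 1, (1, 3, 2): -1}

# Hessian entries (133): A_i A_j U + sum_k c^k_ji A_k U  ==  A_j A_i U.
diffs = [sp.simplify(
            A(i, A(j, U))
            + sum(c.get((k, j, i), 0)*A(k, U) for k in (1, 2, 3))
            - A(j, A(i, U)))
         for i in (1, 2, 3) for j in (1, 2, 3)]
assert all(d == 0 for d in diffs)
```

Every entry cancels exactly, which is the commutator relation (129) in disguise: the correction term Σ_k c^k_{ji} A_k U˜ is precisely (A_j A_i − A_i A_j) U˜.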
Remark 22 The left Cartan connection has torsion and is not
the same as the standard torsion-free Cartan–Schouten
connection on Lie groups, which also has many applications in
image analysis and statistics on Lie groups, cf. [55,56]. Recall
that within the orientation score framework, right-invariance
is undesirable.
Appendix 5: Table of Notations
See Table 1.
Table 1 Table of notations, with five panels. In (a) we provide notations
regarding the input data and the spaces on which this data lives. In (b) we
provide notations for the tools from differential geometry. In (c) we
provide notation regarding exponential curves, where we split between the
general case d ≥ 2 (Part I) and specific notation for the case d = 3 (Part
II). In (d) we provide notations related to image processing applications
via invertible orientation scores. In (e) we provide notations regarding
Cartan connections. References are given in square brackets.

(a) Spaces and input data
SE(d): the group of rotations and translations on R^d [Sects. 1, 2.1, and (4)]
R^d ⋊ S^{d−1}: space of positions & orientations as a group quotient in SE(d) [(3), and Sect. 6.1]
U˜: input data U˜ : SE(d) → R [(1), and Sect. 2.1]
U: input data U : R^d ⋊ S^{d−1} → R [(1), (3), and Sect. 6.1]
V˜: Gaussian smoothed input data V˜ = G˜_s ∗ U˜ [(26), and (27)]

(b) Tools from differential geometry
A_i|_g: left-invariant vector field A_i restricted to g ∈ SE(d) [Sect. 2.3, and (10), (9)]
B_i|_g: gauge vector field B_i restricted to g ∈ SE(d) [Sect. 3, and (36)]
G_μ|_g: metric tensor G_μ restricted to g ∈ SE(d) [Sect. 2.5, and (23)]
‖·‖_μ: μ-norm on R^{n_d}, with n_d = dim(SE(d)) = d(d+1)/2 [Sect. 2.5, and (24)]
M_μ: matrix M_μ := blockdiag(μ I_d, I_{r_d}) used in the definition of the μ-norm [Sect. 2.5, and (24)]
dU˜(g): derivative of U˜ at g, which is a covector in T_g∗(SE(d)) [Sect. 2.7, and (28)]
∇U˜(g): gradient of U˜ at g, which is a vector in T_g(SE(d)) [(12)]
Deviation from horizontality angle [(34)]
L_g: left-regular representation, given by L_g U˜(h) = U˜(g^{−1}h) [(7)]
R_g: right-regular representation, given by R_g U˜(h) = U˜(hg) [(7)]
L_g: left multiplication L_g h = gh [(8)]

(c) Part I: Exponential curves and exponential curve fits on SE(d)
γ˜_g^c: exponential curve starting from g with velocity c = (c^(1), c^(2)) [(16), and Sect. 2.4]
γ˜_{h,g}^c: neighboring exponential curve starting at h ∈ SE(d) with the same velocity as curve γ˜_g^c [(51) and Sect. 5.1, (84) and Sect. 6.2]
γ˜_g^{c∗}: exponential curve fit to data U˜ at g ∈ SE(d) [(16), Theorems 1, 2, 3, 4, 5, 6, and Fig. 3]
c∗(g): local tangent vector to exponential curve fit γ˜_g^{c∗} to data U˜ [(54), (63), (64), (88), and (106)]
R˜_{h^{−1}g}: rotation in T_h(SE(d)) arising in the construction of γ˜_{h,g}^c [(52), and (85)]
H_s: Gaussian Hessian H_s := H_s(U˜) = H(V˜) of input data U˜ [(9), (89), and (97)]
S_{s,ρ}: structure tensor S_{s,ρ} := S_{s,ρ}(U˜) of input data U˜ [(62), and (107)]

(c) Part II: Exponential curve fits on SE(3) with projections in R³ ⋊ S²
R_{a,φ}: counterclockwise 3D rotation about axis a by angle φ [(76)]
R_n: any 3D rotation that maps a = (0, 0, 1)^T onto n ∈ S² [(80)]
h_α: element h_α = (0, R_{a,α}) of the subgroup ≡ {0} × SO(2) [(90)]
Symbol denoting the action of SE(3) onto R³ ⋊ S² [text below (31)]
Z_α: rotation matrix in SO(6) that arises in ∇U˜ if g → g h_α [(31), and Theorem 5]
N: null space of the structure tensor S_{s,ρ} [(74), and Sect. 6.1]
Projected exponential curve fit to data U at (y, n) ∈ R³ ⋊ S² [(9), and (98)]
Location in SE(3) for horizontal exponential curve fit [(101)]

(d) Image processing applications via invertible orientation scores
f: input grayscale image f : R^d → R [(108), and (110)]
W_ψ f: orientation score of grayscale image f via cake-wavelet ψ [(108), and Figs. 3, 10, 11]
Φ_t: nonlinear diffusion operator (diagonal diffusion in the gauge frame) [(112)]
W˜(g, t): scale space representation of U˜ at g ∈ SE(d) and scale t > 0 [(112)]
Φ: vesselness operator [(113)]

(e) Cartan connections (Appendix)
∇_{γ˜˙} Y: covariant derivative of vector field Y along γ˜˙ w.r.t. the left Cartan connection ∇ on T(SE(d)) [(130), and Appendix 4]
∇∗_{γ˜˙} ω: covariant derivative of covector field ω along γ˜˙ w.r.t. the adjoint left Cartan connection ∇∗ on T∗(SE(d)) [(131), and Appendix 4]
∇∗dU˜: coordinate-free definition of the Hessian [(128), (133), and Appendix 4]
R. Duits received his M.Sc. degree (cum laude) in Mathematics in 2001 at the TU/e,
The Netherlands. He received his Ph.D. degree (cum laude) at the Department of
Biomedical Engineering at the TU/e. Currently he is an associate professor at the
Department of Applied Mathematics & Computer Science and affiliated with the
Biomedical Image Analysis group at the Biomedical Engineering Department. He has
received an ERC-StG 2013 personal grant (Lie analysis, no. 335555). His research
interests include group theory, functional analysis, differential geometry, PDEs,
harmonic analysis, geometric control, and their applications to biomedical imaging
and vision.
G. R. Sanguinetti received his
E. Eng. and Ph.D. degrees at
Universidad de la República,
Uruguay, in 2005 and 2011,
respectively. He held a
visiting position at Università di
Bologna, Italy and he was a
postdoctoral fellow at INRIA, Sophia
Antipolis, France. Currently he is
a postdoctoral fellow at the TU/e,
The Netherlands. His research
interests include image
processing, applied analysis, and
computational neurosciences.
1. Aganj, I., Lenglet, C., Sapiro, G., Yacoub, E., Ugurbil, K., Harel, N.: Reconstruction of the orientation distribution function in single and multiple shell q-ball imaging within constant solid angle. MRM 64(2), 554–566 (2010)
2. Ali , S.T. , Antoine , J.P. , Gazeau , J.P. : Coherent States, Wavelets and Their Generalizations . Springer, New York ( 2000 )
3. Aubin , T.: A Course in Differential Geometry . Graduate Studies in Mathematics , vol. 27 . American Mathematical Society, Providence ( 2001 )
4. August , J. , Zucker , S.W.: The curve indicator random field: curve organization and correlation . Perceptual Organization for Artificial Vision Systems , pp. 265  288 . Kluwer Academic, Boston ( 2000 )
5. Barbieri , D. , Citti , G. , Sanguinetti , G. , Sarti , A. : An uncertainty principle underlying the functional architecture of V1 . J. Physiol. Paris 106 (5 6 ), 183  193 ( 2012 )
6. Bekkers, E., Duits, R., Berendschot, T., ter Haar Romeny, B.M.: A multi-orientation analysis approach to retinal vessel tracking. J. Math. Imaging Vis. 49, 583–610 (2014)
7. BenShahar , O. , Zucker , S.W.: The perceptual organization of texture flow: a contextual inference approach . IEEE Trans. PAMI 25 ( 4 ), 401  417 ( 2003 )
8. Bergholm , F. : Edge focussing . IEEE Trans. PAMI 9 ( 6 ), 726  741 ( 1987 )
9. Bigun , J. , Granlund , G.: Optimal orientation detection of linear symmetry . In: ICCV , pp. 433  438 ( 1987 )
10. Blom , J.: Topological and geometrical aspects of image structure . PhD thesis , University of Utrecht ( 1992 )
11. Boscain , U. , Chertovskih , R.A. , Gauthier, J.P. , Remizov , A.O. : Hypoelliptic diffusion and human vision: a semidiscrete new twist . SIAM J. Imaging Sci . 7 ( 2 ), 669  695 ( 2014 )
12. Breuss , M. , Burgeth , B. , Weickert , J.: Anisotropic continuousscale morphology . IbPRIA. LNCS , vol. 4478 , pp. 515  522 . Springer, Heidelberg ( 2007 )
13. Burgeth, B., Breuss, M., Didas, S., Weickert, J.: PDE-based morphology for matrix fields: numerical solution schemes. In: Aja-Fernandez, S., de Luis-Garcia, R., Tao, D., Li, X. (eds.) Tensors in Image Processing and Computer Vision, pp. 125–150. Springer, London (2009)
14. Budai , A. , Bock , R. , Maier , A. , Hornegger , J. , Michelson , G.: Robust vessel segmentation in fundus images . Int. J. Biomed. Imaging ( 2013 )
15. Cao , F. : Geometric Curve Evolution and Image Processing . Springer, Heidelberg ( 2003 )
16. Caselles , V. , Kimmel , R. , Sapiro , G.: Geodesic active contours . Int. J. Comput. Vis . 22 ( 1 ), 61  79 ( 1997 )
17. Chirikjian , G.S. : Stochastic Models, Information Theory, and Lie Groups. Analytic Methods and Modern Applications , vol. 2 . Birkhäuser, Boston ( 2011 )
18. Chirikjian , G.S. , Kyatkin , A.B. : Engineering Applications of Noncommutative Harmonic Analysis: With Emphasis on Rotation and Motion Groups . CRC, Boca Raton ( 2000 )
19. Citti , G. , Sarti , A. : A cortical based model of perceptual completion in the rototranslation space . J. Math. Imaging Vis . 24 ( 3 ), 307  326 ( 2006 )
20. Citti , G. , Franceschiello , B. , Sanguinetti , G. , Sarti , A. : SubRiemannian mean curvature flow for image processing . Preprint on arXiv:1504.03710 ( 2015 )
21. Creusen , E.J. , Duits , R. , Vilanova , A. , Florack , L.M.J.: Numerical schemes for linear and nonlinear enhancement of DWMRI . NMTMA 6 ( 1 ), 138  168 ( 2013 )
22. Descoteaux , M. , Angelino , E. , Fitzgibbons , S. , Deriche , R.: Regularized, fast, and robust analytical Qball imaging . Magn. Reson. Med . 58 ( 3 ), 497  510 ( 2007 )
23. Duits , R. , Felsberg , M. , Granlund , G., ter Haar Romeny, B.M. : Image analysis and reconstruction using a wavelet transform constructed from a reducible representation of the Euclidean motion group . Int. J. Comput. Vis . 79 ( 1 ), 79  102 ( 2007 )
24. Duits , R.: Perceptual organization in image analysis, a mathematical approach based on scale, orientation and curvature . PhDthesis , TU/e, Eindhoven ( 2005 )
25. Duits, R., Boscain, U., Rossi, F., Sachkov, Y.: Association fields via cuspless sub-Riemannian geodesics in SE(2). J. Math. Imaging Vis. 49(2), 384–417 (2014)
26. Duits , R., van Almsick, M.A. : The explicit solutions of linear leftinvariant second order stochastic evolution equations on the 2DEuclidean motion group . Q. Appl. Math. AMS 66 ( 1 ), 27  67 ( 2008 )
27. Duits , R. , Franken , E.M.: Left invariant parabolic evolution equations on S E (2) and contour enhancement via invertible orientation scores, part I: linear leftinvariant diffusion equations on S E (2 ). Q. Appl. Math. AMS 68 , 255  292 ( 2010 )
28. Duits , R. , Franken , E.M.: Left invariant parabolic evolution equations on S E (2) and contour enhancement via invertible orientation scores, part II: nonlinear leftinvariant diffusions on invertible orientation scores . Q. Appl. Math. AMS 68 , 293  331 ( 2010 )
29. Duits, R., Dela Haije, T.C.J., Creusen, E.J., Ghosh, A.: Morphological and linear scale spaces for fiber enhancement in DW-MRI. J. Math. Imaging Vis. 46(3), 326–368 (2013)
30. Duits , R. , Franken , E.M. : Leftinvariant diffusions on the space of positions and orientations and their application to crossing preserving smoothing of HARDI images . Int. J. Comput. Vis . 92 , 231  264 ( 2011 )
31. Duits , R. , Ghosh , A. , Dela Haije , T.C.J. , Sachkov , Y.L. : Cuspless subRiemannian geodesics within the Euclidean motion group SE(d) . Neuromathematics of Vision . Springer Series Lecture Notes in Morphogenesis, pp. 173  240 . Springer, Berlin ( 2014 )
32. Felsberg , M. : Adaptive filtering using channel representations . In: Florack, L. , et al. (eds.) Mathematical Methods for Signal and Image Analysis and Representation . Computational Imaging and Vision , vol. 41 , pp. 35  54 . Springer, London ( 2012 )
33. Florack , L.M.J.: Image Structure. KAP , Dordrecht ( 1997 )
34. Franken , E.M.: Enhancement of crossing elongated structures in images . PhDthesis , Department of Biomedical Engineering, Eindhoven University of Technology ( 2008 )
35. Franken, E.M., Duits, R.: Crossing-preserving coherence-enhancing diffusion on invertible orientation scores. Int. J. Comput. Vis. 85(3), 253–278 (2009)
36. Frangi , A.F. , Niessen , W.J. , Vincken , K.L. , Viergever , M.A. : Multiscale Vessel Enhancement Filtering . LNCS , vol. 1496 , pp. 130  137 . Springer, Berlin ( 1998 )
37. van Ginkel, M., van de Weijer, J., van Vliet, L.J. , Verbeek , P.W. : Curvature estimation from orientation fields . Proc. 11th Scand. Conf. Image Anal . 2 , 545  552 ( 1999 )
38. Guichard , F. , Morel , J.M.: Geometric partial differential equations and iterative filtering . In: Heymans, H.J.A.M. , Roerdink , J.B.T.M. (eds.) Mathematical Morphology and Its Applications to Image and Signal Processing , pp. 127  138 . KAP, Dordrecht ( 1998 )
39. ter Haar Romeny, B.M. : FrontEnd Vision and MultiScale Image Analysis , Computational Imaging and Vision , vol. 27 . Springer, Berlin ( 2003 )
40. Hannink , J. , Duits , R. , Bekkers , E.J. : Multiple scale crossing preserving vesselness . In: MICCAI Proc., LNCS 8674 , pp. 603  610 ( 2014 )
41. Ikram , M.K. , Ong , Y.T. , Cheung , C.Y. , Wong , T.Y.: Retinal vascular caliber measurements: clinical significance, current knowledge and future perspectives . Ophthalmologica 229 ( 3 ), 125  136 ( 2013 )
42. Janssen , M.H.J. , Duits , R. , Breeuwer , M. : Invertible orientation scores of 3D images . In: SSVM 2015. LNCS 9087 , pp. 563  575 ( 2015 )
43. Janssen , M.H.J.: 3D orientation scores applied to MRA vessel analysis . Master Thesis , Department of Biomedical Image Analysis, Eindhoven University of Technology, The Netherlands ( 2014 )
44. Jost , J.: Riemannian Geometry and Geometric Analysis, 4th edn . Springer, Berlin ( 2005 )
45. Kindlmann, G., Ennis, D.E., Whitaker, R.T., Westin, C.F.: Diffusion tensor analysis with invariant gradients and rotation tangents. IEEE Trans. Med. Imaging 23(11), 1483–1499 (2007)
46. Kindlmann , G. , Estepar , R.S.J. , Smith , S.M. , Westin , C.F. : Sampling and visualization creases with scalespace particles . IEEE Trans. VCG 15 ( 6 ), 1415  1424 ( 2010 )
47. Kohler , T. , Budai , A. , Kraus , M.F. , Odstrcilik , J. , Michelson , G. , Hornegger , J. : Automatic noreference quality assessment for retinal fundus images using vessel segmentation . In: IEEE 26th Symposium on CBMS , pp. 95  100 ( 2013 )
48. Knutsson , H.: Representing local structure using tensors . In: Scandinavian Conference on Image Analysis , pp. 244  251 ( 1989 )
49. Lawlor , M. , Zucker , S.W. : Third order edge statistics: contour continuation, curvature, and cortical connections . In: NIPS , pp. 1763  1771 ( 2013 )
50. Lindeberg , T. : ScaleSpace Theory in Computer Vision . The Springer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Dordrecht ( 1994 )
51. Lupascu , C.A. , Tegolo , D. , Trucco , E.: FABC: retinal vessel segmentation using AdaBoost . IEEE Trans. Inf. Technol . 14 ( 5 ), 1267  1274 ( 2010 )
52. Medioni , G. , Lee , M.S. , Tang , C.K. : A Computational Framework for Feature Extraction and Segmentation . Elsevier, Amsterdam ( 2000 )
53. Mumford , D. : Elastica and computer vision . In: Bajaj, C.L . (ed.) Algebraic Geometry and Its Applications . Springer, New York ( 1994 )
54. Parent , P. , Zucker , S.W. : Trace inference, curvature consistency, and curve detection . IEEE Trans. PAMI 11 ( 8 ), 823  839 ( 1989 )
55. Pennec , X. , Fillard , P. , Ayache , N.: Invariant metric on SPD matrices and use of Frechet mean to define manifoldvalued image processing algorithms. A Riemannian framework for tensor computing . Int. J. Comput. Vis . 66 ( 1 ), 41  66 ( 2006 )
56. Pennec , X. , Arsigny , V. : Exponential Barycenters of the Canonical Cartan connection and invariant means on Lie groups . Matrix Information Geometry , pp. 123  166 . Springer, Heidelberg ( 2012 )
57. Petitot , J.: The neurogeometry of pinwheels as a subRiemannian contact structure . J. Phys. Paris 97 (2 3 ), 265  309 ( 2003 )
58. Sanguinetti , G.: Invariant models of vision between phenomenology, image statistics and neurosciences . PhD thesis , Universidad de la Republica, Uruguay ( 2011 )
59. Sanguinetti , G. , Citti , G. , Sarti , A. : A model of natural image edge cooccurrence in the rototranslation group . J Vis 10 ( 14 ), 37 ( 2010 )
60. Sapiro , G.: Geometric Partial Differential Equations and Image Analysis . Cambridge University Press, Cambridge ( 2001 )
61. Savadjiev , P., Campbell , J.S.W. , Pike , G.B. , Siddiqi , K. : 3D curve inference for diffusion MRI regularization and fibre tractography . Med . Image Anal. 10 ( 5 ), 799  813 ( 2006 )
62. Savadjiev , P. , Strijkers , G.J. , Bakermans , A.J. , Piuze , E. , Zucker , S.W. , Siddiqi , K. : Heart wall myofibers are arranged in minimal surfaces to optimize organ function . PNAS 109 ( 24 ), 9248  9253 ( 2012 )
63. Sharma , U. , Duits , R.: Leftinvariant evolutions of wavelet transforms on the similitude group . Appl. Comput. Harm. Anal . 39 , 110  137 ( 2015 )
64. MomayyezSiahkal , P. , Siddiqi , K. : 3D Stochastic completion fields: a probabilistic view of brain connectivity . IEEE Trans. PAMI 35 ( 4 ), 983  995 ( 2013 )
65. Sinnaeve, D.: The Stejskal–Tanner equation generalized for any gradient shape: an overview of most pulse sequences measuring free diffusion. Concepts Magn. Reson. Part A 40A(2), 39–65 (2012)
66. Staal , J. , Abramoff , M.D. , Viergever , M.A., van Ginneken , B. : Ridgebased vessel segmentation in color images of the retina . IEEE Trans. Med. Imaging 23 , 501  509 ( 2004 )
67. Tournier , J.D. , Yeh , C.H. , Calamante , F. , Cho , K.H. , Connolly , A. , Lin , C.P. : Resolving crossing fibres using constrained spherical deconvolution: validation using diffusionweighted imaging phantom data . NeuroImage 42 , 617  625 ( 2008 )
68. Tuch , D.S. , Reese , T.G. , Wiegell , M.R. , Makris , N. , Belliveau , J.W. , Wedeen , V.J.: High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity . MRM 48 , 577  582 ( 2002 )
69. Unser , M. , Aldroubi , A. , Eden , M.: BSpline signal processing: part Itheory . IEEE Trans. Signal Proc . 41 , 831  833 ( 1993 )
70. van Almsick, M. : Context models of lines and contours . PhD thesis , Department of Biomedical Engineering, Eindhoven University of Technology, the Netherlands ( 2007 )
71. Weickert , J.: Anisotropic diffusion in image processing . ECMI Series , TeubnerVerlag, Stuttgart ( 1998 )
72. Welk , M. : Families of generalised morphological scale spaces. Scale Space Methods in Computer Vision . LNCS, vol. 2695 , pp. 770  784 . Springer, Berlin ( 2003 )
73. Zweck , J. , Williams , L.R. : Euclidean group invariant computation of stochastic completion fields using shiftabletwistable functions . J. Math. Imaging Vis . 21 ( 2 ), 135  154 ( 2004 ) J. Hannink received his M.Sc . degree in Physics in 2014 at University of Göttingen, Germany. His graduation project was conducted at TU/e, The Netherlands . Currently he is a Ph.D. student in the Digital Sports Group at the Pattern Recognition Lab of the University of ErlangenNürnberg (FAU), Germany. He is working on the topic of individualized, sensorbased diagnostics and therapy monitoring for mobility impairment analysis . His research interests include