This special issue comprises thirteen recent and relevant works on different aspects of the computer analysis of images and patterns. We cover important topics such as stereo image processing, image and shape representation, color texture classification, parallel systems and membrane computing, face recognition, pedestrian classification, estimation of photometry and analysis of...

We consider morphological and linear scale spaces on the space ℝ³ ⋊ S² of 3D positions and orientations naturally embedded in the group SE(3) of 3D rigid body movements. The general motivation for these (convection-)diffusions and erosions is to obtain crossing-preserving fiber enhancement on probability densities defined on the space of positions and orientations. The strength of...

In this paper we present a spatially-adaptive method for image reconstruction that is based on the concept of statistical multiresolution estimation as introduced in Frick et al. (Electron. J. Stat. 6:231–268, 2012). It constitutes a variational regularization technique that uses an ℓ∞-type distance measure as data fidelity combined with a convex cost functional. The resulting...

Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under...
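As a minimal illustration of the scale-selection idea behind such detectors (not the paper's generalized framework), the scale-normalized Laplacian response of a Gaussian blob is extremal at the scale matching the blob's own width. The sketch below assumes scipy is available:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic bright blob of standard deviation sigma0, centered on the grid.
sigma0 = 4.0
y, x = np.mgrid[-32:33, -32:33]
blob = np.exp(-(x**2 + y**2) / (2.0 * sigma0**2))

# Scan candidate scales; the scale-normalized Laplacian sigma^2 * Lap(L)
# has its strongest (most negative) central response at sigma = sigma0.
sigmas = np.arange(2.0, 7.01, 0.25)
responses = [s**2 * gaussian_laplace(blob, s)[32, 32] for s in sigmas]
best_sigma = float(sigmas[int(np.argmin(responses))])
print(best_sigma)  # close to 4.0
```

The selected scale tracks the blob size, which is the property that makes such interest points useful for matching across images taken at different resolutions.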

We present an analysis of sets of matrices with rank less than or equal to a specified number s. We provide a simple formula for the normal cone to such sets, and use this to show that these sets are prox-regular at all points with rank exactly equal to s. The normal cone formula appears to be new. This allows for easy application of prior results guaranteeing local linear...
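For orientation, the standard projection onto the set of matrices of rank at most s is the truncated SVD (the Eckart–Young theorem); the normal-cone formula of the paper itself is not reproduced here. A minimal sketch:

```python
import numpy as np

def project_rank(A, s):
    """Nearest matrix (in Frobenius norm) of rank <= s, via truncated SVD."""
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    sv[s:] = 0.0  # keep only the s largest singular values
    return (U * sv) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
B = project_rank(A, 2)
print(np.linalg.matrix_rank(B))  # 2
```

Alternating this projection with the projection onto a second constraint set is the kind of scheme whose local linear convergence such prox-regularity results address.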

In this paper, we propose a new self-calibration algorithm for upgrading projective space to Euclidean space. The proposed method aims to combine the most commonly used metric constraints, including zero skew and unit aspect-ratio by formulating each constraint as a cost function within a unified framework. Additional constraints, e.g., constant principal points, can also be...

Most automatic focusing methods are based on a sharpness function, which delivers a real-valued estimate of image quality. In this paper, we study an L²-norm derivative-based sharpness function, which has previously been used on heuristic grounds. We give a more solid mathematical foundation for this function and gain better insight into its analytical properties...
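The basic idea of an L²-norm derivative-based sharpness function can be sketched in a few lines: the sum of squared finite differences of an image drops when the image is defocused, so a focus routine maximizes it. This is an illustrative stand-in, not the analysis of the paper:

```python
import numpy as np

def l2_derivative_sharpness(img):
    """L2-norm derivative-based sharpness: sum of squared finite differences."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float(np.sum(gx**2) + np.sum(gy**2))

# An in-focus pattern scores higher than a box-blurred (defocused) copy of it.
x = np.linspace(0, 8 * np.pi, 128)
sharp = np.sin(x)[None, :] * np.sin(x)[:, None]
kernel = np.ones(9) / 9.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
print(l2_derivative_sharpness(sharp) > l2_derivative_sharpness(blurred))  # True
```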

This paper presents two new higher order diffusion models for removing noise from images. The models employ fractional derivatives and are modifications of an existing fourth order partial differential equation (PDE) model which was developed by You and Kaveh as a generalization of the well-known second order Perona-Malik equation. The modifications serve to cure the ill...
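The You–Kaveh fourth-order equation that the new models modify, u_t = −Δ(c(|Δu|)Δu), can be stepped explicitly with a five-point Laplacian; the sketch below uses periodic boundaries and an illustrative choice of diffusivity and step size, not the fractional-derivative modifications of the paper:

```python
import numpy as np

def laplacian(u):
    # five-point Laplacian with periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def you_kaveh_step(u, dt=0.01, k=1.0):
    """One explicit step of u_t = -Lap( c(|Lap u|) * Lap u ), c(s) = 1/(1+(s/k)^2)."""
    lap = laplacian(u)
    c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)
    return u - dt * laplacian(c * lap)

rng = np.random.default_rng(1)
u0 = rng.standard_normal((64, 64))
u = u0.copy()
for _ in range(100):
    u = you_kaveh_step(u)
print(np.var(u) < np.var(u0))  # True: the flow dissipates noise energy
```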

We advocate the use of an alternative calculus in biomedical image analysis, known as multiplicative (a.k.a. non-Newtonian) calculus. It provides a natural framework for problems in which positive images or positive definite matrix fields and positivity-preserving operators are of interest. Indeed, its merit lies in the fact that preservation of positivity under basic but...
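The central object of multiplicative calculus is the multiplicative (geometric) derivative f*(x) = lim_{h→0} (f(x+h)/f(x))^{1/h} = exp((ln f)′(x)), which is well defined exactly when f is positive. A small numerical sketch:

```python
# Multiplicative (geometric) derivative via a finite-difference quotient.
def multiplicative_derivative(f, x, h=1e-6):
    """f*(x) = (f(x+h)/f(x))^(1/h), the multiplicative analogue of f'(x)."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# For f(x) = 2^x the multiplicative derivative is the constant 2, just as
# the ordinary derivative of a linear function is constant.
f = lambda x: 2.0 ** x
print(multiplicative_derivative(f, 0.3))  # ~2.0
```

Because only ratios of (positive) function values enter, every operation stays within the positive reals, which is the positivity-preservation property the abstract refers to.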

We introduce a new framework based on Riemann-Finsler geometry for the analysis of 3D images with spherical codomain, more precisely, for which each voxel contains a set of directional measurements represented as samples on the unit sphere (antipodal points identified). The application we consider here is in medical imaging, notably in High Angular Resolution Diffusion Imaging...

This paper presents a moments-based fast wedgelet transform. To perform the classical wedgelet transform, one searches the whole wedgelet dictionary for the best match. In the proposed method, by contrast, the parameters of a wedgelet are computed directly from the image based on moment computations. These parameters describe the wedgelet reflecting the edge...

The Beltrami flow is an efficient nonlinear filter that has been shown to be effective for color image processing. The corresponding anisotropic diffusion operator strongly couples the spectral components. Usually, this flow is implemented by explicit schemes, which are stable only for very small time steps and therefore require many iterations. In this paper we introduce a semi...
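The stability issue the abstract alludes to is easy to demonstrate on the simplest diffusion, the 1D heat equation: an explicit step blows up once dt exceeds dx²/2, while a backward-Euler (implicit) step stays stable for any dt. This is a generic illustration, not the paper's semi-implicit Beltrami scheme:

```python
import numpy as np

n, dt = 32, 1.0  # dt well above the explicit stability limit dt <= dx^2/2 = 0.5

# 1D discrete Laplacian with homogeneous Dirichlet boundaries
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

rng = np.random.default_rng(0)
u_exp = rng.standard_normal(n)
u_imp = u_exp.copy()
A = np.eye(n) - dt * L  # backward-Euler system matrix (I - dt*L) u_new = u_old

for _ in range(30):
    u_exp = u_exp + dt * (L @ u_exp)   # explicit step: unstable for dt > 0.5
    u_imp = np.linalg.solve(A, u_imp)  # implicit step: unconditionally stable

print(np.max(np.abs(u_exp)) > 1e6, np.max(np.abs(u_imp)) < 10.0)
```

The implicit solve costs more per step, but since it tolerates large time steps it needs far fewer iterations overall, which is precisely the trade-off a semi-implicit scheme exploits.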

We propose a deblurring algorithm that explicitly takes into account the sparse characteristics of natural images and does not entail solving a numerically ill-conditioned backward diffusion. The key observation is that the sparse coefficients that encode a given image with respect to an over-complete basis are the same as those that encode a blurred version of the image with respect to a...
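The key observation can be sketched in one dimension: if the sharp signal is sparse in a dictionary D, its blurred version is sparse with the same coefficients in the blurred dictionary HD, so one can run a greedy sparse coder (here a minimal orthogonal matching pursuit, an illustrative stand-in for the paper's algorithm) on the blurred atoms and then reconstruct with the sharp ones:

```python
import numpy as np

n = 64
grid = np.arange(n)
kernel = np.exp(-0.5 * ((grid - n // 2) / 2.0) ** 2)
kernel /= kernel.sum()

# Dictionary D = identity (spikes); blurred dictionary HD has blurred spikes as atoms.
HD = np.stack([np.roll(kernel, j - n // 2) for j in range(n)], axis=1)

# Sparse ground truth and its blurred observation
x_true = np.zeros(n)
x_true[15], x_true[40] = 1.0, 0.8
y = HD @ x_true

# Orthogonal matching pursuit over the *blurred* atoms
support, residual = [], y.copy()
for _ in range(2):
    j = int(np.argmax(np.abs(HD.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(HD[:, support], y, rcond=None)
    residual = y - HD[:, support] @ coef

# Reconstruct with the *sharp* dictionary (identity): spikes, not blurred bumps
x_hat = np.zeros(n)
x_hat[support] = coef
print(sorted(support))  # [15, 40]
```

No backward diffusion is solved anywhere: the ill-conditioned deconvolution is replaced by a well-posed sparse coding step.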

Single-shell high angular resolution diffusion imaging (HARDI) data may be decomposed into a sum of eigenpolynomials of the Laplace-Beltrami operator on the unit sphere. The resulting representation combines the strengths hitherto offered by higher order tensor decomposition in a tensorial framework and spherical harmonic expansion in an analytical framework, but removes some of...
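To illustrate the flavor of such an expansion (greatly simplified: only zonal, m = 0 terms, i.e. Legendre polynomials in the cosine of the polar angle, which are automatically antipodally symmetric for even order), a band-limited spherical signal can be fitted by least squares and its coefficients recovered exactly:

```python
import numpy as np

# Even-order zonal basis: P_n(-c) = P_n(c) for even n (antipodal symmetry).
def P0(c): return np.ones_like(c)
def P2(c): return 0.5 * (3.0 * c**2 - 1.0)
def P4(c): return 0.125 * (35.0 * c**4 - 30.0 * c**2 + 3.0)

rng = np.random.default_rng(0)
polar = np.arccos(rng.uniform(-1.0, 1.0, 200))  # 200 sample directions
c = np.cos(polar)

true_coeffs = np.array([0.5, 0.3, 0.1])
B = np.stack([P0(c), P2(c), P4(c)], axis=1)     # design matrix
f = B @ true_coeffs                             # synthetic band-limited signal

coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)
print(np.round(coeffs, 6))  # [0.5 0.3 0.1]
```

A full HARDI fit uses the complete (non-zonal) even-order spherical harmonic basis in the same least-squares pattern.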

In 2006, Saito and Remy proposed a new transform for image processing, called the Laplace Local Sine Transform (LLST), which proceeds as follows. Let f be a twice continuously differentiable function on a domain Ω. First we approximate f by a harmonic function u such that the residual component v = f − u vanishes on the boundary of Ω. Next, we take the odd extension of v, and then the periodic...
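The point of subtracting the harmonic component can be seen already in one dimension, where the harmonic function interpolating the boundary values is simply a line: the residual vanishes at both endpoints, so after odd extension its sine coefficients decay much faster than those of the raw function. A numerical sketch:

```python
import numpy as np

n = 2048
x = np.linspace(0.0, 1.0, n + 1)
f = np.exp(x)                  # smooth, but nonzero on the boundary
u = f[0] + (f[-1] - f[0]) * x  # 1D harmonic function = linear interpolant
v = f - u                      # residual, vanishes at x = 0 and x = 1

def sine_coeff(g, k):
    # trapezoidal rule; the sine factor vanishes at both endpoints
    return 2.0 * np.sum(g * np.sin(k * np.pi * x)) / n

k = 40
coef_f = sine_coeff(f, k)
coef_v = sine_coeff(v, k)
print(abs(coef_f), abs(coef_v))  # the residual's coefficient is far smaller
```

The O(1/k) boundary terms in the sine expansion of f are exactly what the harmonic subtraction removes, leaving a much more compressible residual.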

In this paper, we are interested in texture modeling with functional analysis spaces. We focus on the case of color image processing, and in particular color image decomposition. The problem of image decomposition consists in splitting an original image f into two components u and v. u should contain the geometric information of the original image, while v should be made of the...

In this paper, we propose a variational soft segmentation framework inspired by the level set formulation of the multiphase Chan-Vese model. We use soft membership functions valued in [0,1] to replace the Heaviside functions of level sets (or characteristic functions), so that we obtain a representation of regions by soft membership functions which automatically satisfies the sum to...
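The membership-function idea can be sketched with a fuzzy-clustering toy on a 1D intensity image: memberships in [0,1] that sum to one per pixel (here via a softmax of Chan-Vese-style data terms; the model's regularization term is omitted), alternated with weighted mean updates:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.concatenate([0.2 + 0.02 * rng.standard_normal(500),
                      0.8 + 0.02 * rng.standard_normal(500)])

c = np.array([0.0, 1.0])  # initial region means
tau = 0.01                # softness temperature
for _ in range(20):
    d = (img[:, None] - c[None, :]) ** 2                # data fidelity per class
    w = np.exp(-d / tau)
    u = w / w.sum(axis=1, keepdims=True)                # memberships, rows sum to 1
    c = (u * img[:, None]).sum(axis=0) / u.sum(axis=0)  # weighted region means

print(np.round(np.sort(c), 2))  # close to [0.2, 0.8]
```

As tau shrinks the memberships approach the hard characteristic functions of the original Chan-Vese regions.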

Local feature matching is an essential component of many image and object retrieval algorithms. Euclidean and Mahalanobis distances are most commonly used to quantify the similarity of two given feature vectors. The Euclidean distance is inappropriate in the typical case where the components of the feature vector are incommensurable entities, and indeed yields...
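The failure mode is easy to exhibit: when one feature component has a much larger natural scale than another, the Euclidean distance is dominated by the large-scale component, while the Mahalanobis distance normalizes by the (here assumed, diagonal) feature covariance:

```python
import numpy as np

# Assumed feature covariance: component 1 varies on a scale of 100,
# component 2 on a scale of 1 (e.g. a length in pixels and a ratio).
cov = np.diag([100.0**2, 1.0**2])
cov_inv = np.linalg.inv(cov)

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def mahalanobis(a, b):
    d = a - b
    return float(np.sqrt(d @ cov_inv @ d))

a = np.array([500.0, 1.0])
b = np.array([500.0, 4.0])  # differs by 3 standard deviations in component 2
c = np.array([600.0, 1.0])  # differs by 1 standard deviation in component 1

print(euclidean(a, b), euclidean(a, c))      # 3.0, 100.0 -> c looks "farther"
print(mahalanobis(a, b), mahalanobis(a, c))  # 3.0, 1.0   -> b actually is
```

Under the Euclidean metric the statistically minor difference in component 1 swamps the statistically large difference in component 2; the Mahalanobis metric reverses the ranking.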