
Independent Component Analysis and Signal Separation: 7th International Conference, ICA 2007, London, UK, September 9-12, 2007. Proceedings

Mike E. Davies; Christopher J. James; Samer A. Abdallah; Mark D. Plumbley (eds.)

Conference: 7th International Conference on Independent Component Analysis and Signal Separation (ICA). London, UK. September 9-12, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability
Institution detected: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74493-1

Electronic ISBN

978-3-540-74494-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Compressed Sensing and Source Separation

Thomas Blumensath; Mike Davies

Separation of underdetermined mixtures is an important problem in signal processing that has attracted a great deal of attention over the years. Prior knowledge is required to solve such problems and one of the most common forms of structure exploited is sparsity.

Another central problem in signal processing is sampling. Recently, it has been shown that it is possible to sample well below the Nyquist limit whenever the signal has additional structure. This theory is known as compressed sensing or compressive sampling and a wealth of theoretical insight has been gained for signals that permit a sparse representation.

In this paper we point out several similarities between compressed sensing and source separation. Here we mainly assume that the mixing system is known, i.e. we do not study blind source separation. With a particular view towards source separation, we extend some of the results in compressed sensing to more general overcomplete sparse representations and study the sensitivity of the solution to errors in the mixing system.

- Sparse Methods | Pp. 341-348
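
For orientation, the ℓ¹ recovery principle underlying compressed sensing can be sketched in a few lines. The following Python/SciPy snippet is a minimal basis pursuit demo (min ‖s‖₁ subject to As = x); the problem sizes and the Gaussian sensing matrix are illustrative choices, not the paper's setup.

```python
# Minimal basis pursuit demo (illustrative sizes, Gaussian sensing matrix):
# recover a k-sparse signal from m < n linear measurements by solving
# min ||s||_1 s.t. A s = x as a linear program with the split s = u - v.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 24, 4                     # signal length, measurements, sparsity
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # known sensing/mixing matrix
x = A @ s_true                                  # underdetermined observations

c = np.ones(2 * n)                      # minimize sum(u) + sum(v) = ||s||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=x, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(s_hat - s_true).max())
```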

Morphological Diversity and Sparsity in Blind Source Separation

J. Bobin; Y. Moudden; J. Fadili; J.-L. Starck

This paper describes a new blind source separation method for instantaneous linear mixtures. The new method, coined GMCA (Generalized Morphological Component Analysis), relies on morphological diversity. It provides new insights into the use of sparsity for blind source separation in a noisy environment. GMCA takes advantage of the sparse representation of structured data in large overcomplete signal dictionaries to separate sources based on their morphology. In this paper, we define morphological diversity and focus on its ability to serve as a helpful source of diversity between the signals we wish to separate. We introduce the blind GMCA algorithm and show that it leads to good results in the overdetermined blind source separation problem from noisy mixtures. Both theoretical and algorithmic comparisons between morphological diversity and independence-based separation techniques are given. The effectiveness of the proposed scheme is confirmed in several numerical experiments.

- Sparse Methods | Pp. 349-356
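
As a loose illustration of the alternating scheme GMCA-type methods use, here is a toy sketch that soft-thresholds source estimates and refits the mixing matrix by least squares. It assumes sources sparse in the sample domain (identity dictionary) rather than the paper's large overcomplete dictionaries, and the threshold schedule is a placeholder.

```python
# Toy GMCA-like alternation (illustrative, not the paper's algorithm):
# soft-threshold the source estimates, then refit the mixing matrix by
# least squares, with a decreasing threshold over iterations.
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def gmca_like(X, n_sources, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for it in range(iters):
        t = 3.0 * (1 - it / iters) + 0.1      # placeholder threshold schedule
        S = soft(np.linalg.pinv(A) @ X, t)    # sparse source estimate
        A = X @ np.linalg.pinv(S)             # least-squares mixing update
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, S
```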

Identifiability Conditions and Subspace Clustering in Sparse BSS

Pando Georgiev; Fabian Theis; Anca Ralescu

We give general identifiability conditions on the source matrix in the Blind Signal Separation problem, refining some previously known ones. We develop a subspace clustering algorithm, a generalization of the k-plane clustering algorithm, which is suitable for separation of sparse mixtures with higher sparsity (i.e., when the number of sensors exceeds the number of non-zero elements in most columns of the source matrix by at least 2). We demonstrate our algorithm with examples in the square and underdetermined cases. The latter confirms the new identifiability conditions, which require fewer hyperplanes in the data for full recovery of the sources and the mixing matrix.

- Sparse Methods | Pp. 357-364
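
Since the algorithm generalizes k-plane clustering, a bare-bones hyperplane-clustering sketch may help fix ideas. The random initialization, iteration count, and SVD-based refit below are generic choices of this sketch, not the paper's subspace algorithm.

```python
# Plain k-plane (hyperplane) clustering sketch: alternate between assigning
# points to the nearest hyperplane through the origin and refitting each
# hyperplane's unit normal from its assigned points.
import numpy as np

def k_plane_clustering(X, k, iters=50, seed=0):
    """X: (m, T) data; fit k hyperplanes through the origin (unit normals)."""
    rng = np.random.default_rng(seed)
    m, _ = X.shape
    normals = rng.standard_normal((k, m))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    for _ in range(iters):
        dist = np.abs(normals @ X)            # (k, T) point-to-plane distances
        labels = dist.argmin(axis=0)
        for j in range(k):
            pts = X[:, labels == j]
            if pts.shape[1] >= m - 1:
                # refit: normal = left singular vector of smallest singular value
                u, _, _ = np.linalg.svd(pts, full_matrices=True)
                normals[j] = u[:, -1]
    return normals, labels
```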

Two Improved Sparse Decomposition Methods for Blind Source Separation

B. Vikrham Gowreesunker; Ahmed H. Tewfik

In underdetermined blind source separation problems, it is common practice to exploit the underlying sparsity of the sources for demixing. In this work, we propose two sparse decomposition algorithms for the separation of linear instantaneous speech mixtures. We also show how a properly chosen dictionary can improve the performance of such algorithms by improving the sparsity of the underlying sources. The first algorithm uses a single-channel Bounded Error Subset Selection (BESS) method to robustly estimate the mixing matrix. The second is a decomposition method that performs a constrained decomposition of the mixtures over a stereo dictionary.

- Sparse Methods | Pp. 365-372

Probabilistic Geometric Approach to Blind Separation of Time-Varying Mixtures

Ran Kaftory; Yehoshua Y. Zeevi

We consider the problem of blindly separating time-varying instantaneous mixtures. It is assumed that the arbitrary time dependence of the mixing coefficients is known up to a finite number of parameters. Using sparse (or sparsified) sources, we geometrically identify samples of the curves representing the parametric model. The parameters are found using a probabilistic approach that estimates the maximum likelihood of a curve given the data. After identifying the model parameters, the mixing system is inverted to estimate the sources. The new approach to blind separation of time-varying mixtures is demonstrated using both synthetic and real data.

- Sparse Methods | Pp. 373-380

Infinite Sparse Factor Analysis and Infinite Independent Components Analysis

David Knowles; Zoubin Ghahramani

A nonparametric Bayesian extension of Independent Components Analysis (ICA) is proposed where observed data is modelled as a linear superposition, G, of a potentially infinite number of hidden sources, X. Whether a given source is active for a specific data point is specified by an infinite binary matrix, Z. The resulting sparse representation allows increased data reduction compared to standard ICA. We define a prior on Z using the Indian Buffet Process (IBP). We describe four variants of the model, with Gaussian or Laplacian priors on X and the one- or two-parameter IBPs. We demonstrate Bayesian inference under these models using a Markov Chain Monte Carlo (MCMC) algorithm on synthetic and gene expression data and compare to standard ICA algorithms.

- Sparse Methods | Pp. 381-388
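
The Indian Buffet Process prior on Z has a simple generative description: each data point takes an existing source with probability proportional to its popularity, then activates a Poisson number of new ones. A small sampler, for intuition only and not the paper's MCMC inference:

```python
# Draw the binary activity matrix Z from the one-parameter Indian Buffet
# Process prior (generative sampling only).
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    rng = np.random.default_rng(seed)
    dishes = []                        # dishes[k] = customers sharing dish k
    for i in range(1, n_customers + 1):
        for takers in dishes:
            if rng.random() < len(takers) / i:   # popularity-proportional pick
                takers.append(i)
        for _ in range(rng.poisson(alpha / i)):  # Poisson(alpha/i) new dishes
            dishes.append([i])
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[np.array(takers) - 1, k] = 1
    return Z
```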

Fast Sparse Representation Based on Smoothed ℓ⁰ Norm

G. Hosein Mohimani; Massoud Babaie-Zadeh; Christian Jutten

In this paper, a new algorithm for Sparse Component Analysis (SCA) or atomic decomposition on over-complete dictionaries is presented. The algorithm is essentially a method for obtaining sufficiently sparse solutions of underdetermined systems of linear equations. The solution obtained by the proposed algorithm is compared with the minimum ℓ¹-norm solution achieved by Linear Programming (LP). It is experimentally shown that the proposed algorithm is about two orders of magnitude faster than the state-of-the-art ℓ¹-magic, while providing the same (or better) accuracy.

- Sparse Methods | Pp. 389-396
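
The abstract does not spell out the iteration, but the smoothed-ℓ⁰ idea is to replace the ℓ⁰ norm by a smooth Gaussian surrogate, take gradient steps on it projected onto the feasible set {s : As = x}, and anneal the smoothing width toward zero. A compact sketch, with step sizes and schedule as guesses:

```python
# Smoothed-l0 sketch: maximize sum(exp(-s^2 / (2 sigma^2))) over the
# solution set of A s = x while shrinking sigma geometrically.
import numpy as np

def sl0(A, x, sigma_decrease=0.7, mu=2.0, inner_iters=3, sigma_min=1e-4):
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                      # minimum-l2 initial solution
    sigma = 2.0 * np.abs(s).max()
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient step on the smooth l0 surrogate
            s = s - mu * s * np.exp(-s**2 / (2 * sigma**2))
            # project back onto the feasible set A s = x
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decrease
    return s
```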

Estimating the Mixing Matrix in Sparse Component Analysis Based on Converting a Multiple Dominant to a Single Dominant Problem

Nima Noorshams; Massoud Babaie-Zadeh; Christian Jutten

We propose a new method for estimating the mixing matrix, A, in the linear model x = As, for the problem of underdetermined Sparse Component Analysis (SCA). Contrary to most previous algorithms, there can be more than one dominant source at each instant (we call this the “multiple dominant” problem). The main idea is to convert the multiple dominant problem into a series of single dominant problems, which may be solved by well-known methods. Each of these single dominant problems determines some columns of A. This results in a huge decrease in computation, which lets us solve higher-dimensional problems that were not tractable before.

- Sparse Methods | Pp. 397-405
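
The single dominant subproblems this method reduces to are classically solved by direction clustering: when one source dominates, mixture samples align with a column of A. A toy version of that step (not the paper's algorithm), using k-means on normalized observations:

```python
# Toy single-dominant mixing-matrix estimation: cluster normalized mixture
# samples; centroids approximate columns of A up to sign, scale, permutation.
import numpy as np
from scipy.cluster.vq import kmeans2

def estimate_mixing_single_dominant(X, n_sources, seed=0):
    """X: (m, T) mixtures; returns an (m, n_sources) estimate of A."""
    norms = np.linalg.norm(X, axis=0)
    D = X[:, norms > 0.1 * norms.max()]       # drop low-energy samples
    D = D / np.linalg.norm(D, axis=0)         # project onto the unit sphere
    D = D * np.where(D[0] >= 0, 1.0, -1.0)    # fold antipodal directions
    centroids, _ = kmeans2(D.T, n_sources, minit='++', seed=seed)
    A_hat = centroids.T
    return A_hat / np.linalg.norm(A_hat, axis=0)
```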

Dictionary Learning for L1-Exact Sparse Coding

Mark D. Plumbley

We have derived a new algorithm for dictionary learning for sparse coding in the ℓ¹ exact sparse framework. The algorithm does not rely on an approximation residual to operate, but rather uses the special geometry of the ℓ¹ exact sparse solution to give a computationally simple yet conceptually interesting algorithm. A self-normalizing version of the algorithm is also derived, which uses negative feedback to ensure that basis vectors converge to unit norm. The operation of the algorithm is illustrated on a simple numerical example.

- Sparse Methods | Pp. 406-413
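
For contrast with the residual-free approach described above, here is the conventional residual-based dictionary update it departs from, including the unit-norm renormalization that the paper's self-normalizing variant achieves via negative feedback. This is a generic baseline sketch, not the paper's algorithm.

```python
# Generic residual-based dictionary learning step (NOT the paper's
# residual-free method): gradient descent on ||X - D S||_F^2 followed by
# renormalizing each atom to unit norm.
import numpy as np

def dictionary_step(D, S, X, eta=0.01):
    """D: (m, n) dictionary; S: (n, T) sparse codes; X: (m, T) data."""
    D = D + eta * (X - D @ S) @ S.T          # gradient step on the residual
    return D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
```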

Supervised and Semi-supervised Separation of Sounds from Single-Channel Mixtures

Paris Smaragdis; Bhiksha Raj; Madhusudana Shashanka

In this paper we describe a methodology for model-based single-channel separation of sounds. We present a sparse latent variable model that can learn sounds based on their distribution of time/frequency energy. This model can then be used to extract known types of sounds from mixtures in two scenarios: one where all sound types in the mixture are known, and one where only the target or the interference models are known. The model we propose has close ties to non-negative decompositions and latent variable models commonly used for semantic analysis.

- Sparse Methods | Pp. 414-421
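
A hedged sketch of the general supervised recipe: learn non-negative spectral bases per source class, then explain a mixture spectrogram with the concatenated bases and mask accordingly. This uses plain multiplicative-update NMF in place of the paper's sparse latent variable model; all sizes and constants are illustrative.

```python
# Supervised separation via NMF (illustrative stand-in for the paper's
# latent variable model): learn bases per source, fix them, fit activations
# on the mixture, and apply a Wiener-style soft mask.
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Multiplicative-update NMF for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-9
    H = rng.random((r, V.shape[1])) + 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def separate(V_mix, W_a, W_b, iters=200, seed=0):
    """V_mix: mixture magnitude spectrogram; W_a/W_b: per-source bases."""
    W = np.hstack([W_a, W_b])                 # concatenated, kept fixed
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V_mix.shape[1])) + 1e-9
    for _ in range(iters):                    # update activations only
        H *= (W.T @ V_mix) / (W.T @ W @ H + 1e-9)
    Va = W_a @ H[:W_a.shape[1]]
    Vb = W_b @ H[W_a.shape[1]:]
    mask = Va / (Va + Vb + 1e-9)              # Wiener-style soft mask
    return mask * V_mix, (1 - mask) * V_mix
```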