Publications catalog - books



Independent Component Analysis and Signal Separation: 7th International Conference, ICA 2007, London, UK, September 9-12, 2007. Proceedings

Mike E. Davies; Christopher J. James; Samer A. Abdallah; Mark D. Plumbley (eds.)

Conference: 7th International Conference on Independent Component Analysis and Signal Separation (ICA). London, UK. September 9-12, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Institution detected: none. Year of publication: 2007. Browse: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-74493-1

Electronic ISBN

978-3-540-74494-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Independent Process Analysis Without a Priori Dimensional Information

Barnabás Póczos; Zoltán Szabó; Melinda Kiszlinger; András Lőrincz

Recently, several algorithms have been proposed for independent subspace analysis where the hidden variables are i.i.d. processes. We show that these methods can be extended to certain AR, MA, ARMA and ARIMA tasks. Central to our paper is a cascade of algorithms that aims to solve these tasks without prior knowledge of the number and dimensions of the hidden processes. Our claim is supported by numerical simulations. As an illustrative application where the dimensions of the hidden variables are unknown, we search for subspaces of facial components.

- Algorithms | Pp. 252-259

An Evolutionary Approach for Blind Inversion of Wiener Systems

Fernando Rojas; Jordi Solé-Casals; Carlos G. Puntonet

The problem of blind inversion of Wiener systems can be considered a special case of blind separation of post-nonlinear instantaneous mixtures. In this paper, we present an approach for nonlinear deconvolution of one signal using a genetic algorithm. The original signal is recovered by maximizing an estimate of the mutual information based on higher-order statistics. The experimental results show that genetic algorithms are appropriate when the number of samples of the convolved signal is low, where gradient-like methods may fail because of poor estimation of the statistics.

- Algorithms | Pp. 260-267
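
The GA-based deconvolution idea can be sketched as a toy in a few lines. This is an illustration, not the authors' algorithm: the contrast here is the magnitude of the output kurtosis (a cheap higher-order-statistics proxy for the paper's mutual-information estimate), the Wiener system is reduced to its linear convolution part, and the population size, mutation rate and filter length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system: an i.i.d. uniform source convolved with a short FIR filter.
s = rng.uniform(-1, 1, 2000)
h = np.array([1.0, 0.6, 0.3])                  # unknown mixing filter
x = np.convolve(s, h, mode="full")[:len(s)]    # observed convolved signal

def kurtosis(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

def fitness(w):
    # Contrast proxy: deconvolved output should be far from Gaussian.
    y = np.convolve(x, w, mode="full")[:len(x)]
    return abs(kurtosis(y))

L, pop_size, gens = 5, 40, 60
pop = rng.normal(0.0, 1.0, (pop_size, L))      # population of inverse filters

for _ in range(gens):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
    children = elite[rng.integers(0, len(elite), pop_size // 2)]
    children = children + rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)                   # best inverse filter found
```

Because kurtosis-style contrasts need no gradient, the GA only ever evaluates the contrast, which is the property the abstract highlights for short data records.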

A Complexity Constrained Nonnegative Matrix Factorization for Hyperspectral Unmixing

Sen Jia; Yuntao Qian

Hyperspectral unmixing, as a blind source separation (BSS) problem, has been intensively studied from the independence perspective in recent years. However, independent component analysis (ICA) cannot fully unmix all the materials because the sources (abundance fractions) are not statistically independent. In this paper a complexity constrained nonnegative matrix factorization (CCNMF) for simultaneously recovering both constituent spectra and corresponding abundances is proposed. Three important facts are exploited: first, the spectral data are nonnegative; second, the variation of the material spectra and abundance images is smooth in time and space, respectively; third, in most cases, both the material spectra and the abundances are localized. Experiments on real data are provided to illustrate the algorithm’s performance.

- Algorithms | Pp. 268-276
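
The matrix-factorization core can be sketched as follows. This is a minimal toy using the standard Lee-Seung multiplicative updates for a plain nonnegative factorization X ≈ WH; the complexity (smoothness and locality) penalties that distinguish CCNMF are noted in comments but omitted, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "hyperspectral" data: 3 nonnegative endmember spectra (rows of H_true)
# mixed by nonnegative per-pixel abundances (W_true); X = W_true @ H_true.
n_pixels, n_bands, k = 100, 50, 3
W_true = rng.random((n_pixels, k))
H_true = rng.random((k, n_bands))
X = W_true @ H_true

# Plain multiplicative NMF updates (Lee-Seung).  CCNMF would add smoothness
# and locality penalty terms to these updates; they are omitted here.
W = rng.random((n_pixels, k))
H = rng.random((k, n_bands))
eps = 1e-9
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # update spectra
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # update abundances

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative form guarantees that W and H stay nonnegative, which is the first of the three facts the abstract exploits.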

Smooth Component Analysis as Ensemble Method for Prediction Improvement

Ryszard Szupiluk; Piotr Wojewnik; Tomasz Ząbkowski

In this paper we apply a novel smooth component analysis algorithm as an ensemble method for prediction improvement. When many prediction models are tested, we can treat their results as a multivariate variable whose latent components have a constructive or destructive impact on the prediction. We show that eliminating the destructive components and properly mixing the constructive ones can improve the final prediction results. The validity and high performance of our concept are demonstrated on the problem of energy load prediction.

- Algorithms | Pp. 277-284
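
The ensemble idea can be illustrated with a toy decomposition. This sketch substitutes PCA (SVD) for the paper's smooth component analysis and flags "destructive" components with a simple roughness measure (mean squared first difference); the decomposition, the threshold, and the data are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target series and three model predictions sharing the signal but each
# carrying its own noisy ("destructive") component.
t = np.linspace(0, 8 * np.pi, 500)
target = np.sin(t)
preds = np.stack([target + 0.3 * rng.normal(size=t.size) for _ in range(3)])

# Decompose the stacked predictions into latent components.
means = preds.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(preds - means, full_matrices=False)

# Keep only the smooth (constructive) components, then remix.
roughness = np.mean(np.diff(Vt, axis=1) ** 2, axis=1)
keep = roughness < roughness.mean()
recon = (U[:, keep] * S[keep]) @ Vt[keep] + means
ensemble = recon.mean(axis=0)

mse_single = np.mean((preds[0] - target) ** 2)
mse_smca = np.mean((ensemble - target) ** 2)
```

Dropping the rough components before remixing is what distinguishes this scheme from plain model averaging.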

Comparative Speed Analysis of FastICA

Vicente Zarzoso; Pierre Comon

FastICA is arguably one of the most widespread methods for independent component analysis. We focus on its deflation-based implementation, where the independent components are extracted one after another. The present contribution evaluates the method’s speed in terms of the overall computational complexity required to reach a given source extraction performance. FastICA is compared with a simple modification referred to as RobustICA, which merely consists of performing exact line search optimization of the kurtosis-based contrast function. Numerical results illustrate the speed limitations of FastICA.

- Algorithms | Pp. 293-300
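
For reference, the deflation-based kurtosis fixed point that both methods build on can be sketched as below. This shows only the plain one-unit FastICA update on whitened data with Gram-Schmidt deflation; RobustICA's exact line search of the kurtosis contrast is not reproduced here, and the two-source mixture is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# One sub-Gaussian and one super-Gaussian source, linearly mixed.
n = 5000
S = np.vstack([np.sqrt(3) * rng.uniform(-1, 1, n),   # sub-Gaussian
               rng.laplace(size=n)])                 # super-Gaussian
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

def one_unit(w, constraints, iters=200):
    """One-unit kurtosis-based FastICA with deflation."""
    for _ in range(iters):
        y = w @ Z
        w_new = (Z * y ** 3).mean(axis=1) - 3.0 * w   # fixed-point update
        for v in constraints:                         # Gram-Schmidt deflation
            w_new = w_new - (w_new @ v) * v
        w_new = w_new / np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-10:         # converged (up to sign)
            return w_new
        w = w_new
    return w

w1 = one_unit(np.array([1.0, 0.0]), [])
w2 = one_unit(np.array([0.0, 1.0]), [w1])
Y = np.vstack([w1, w2]) @ Z

# Absolute correlations between extracted and true sources.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
```

The per-iteration cost of this update is what the complexity comparison in the chapter counts against the source-extraction quality achieved.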

Kernel-Based Nonlinear Independent Component Analysis

Kun Zhang; Laiwan Chan

We propose the kernel-based nonlinear independent component analysis (ICA) method, which consists of two separate steps. First, we map the data to a high-dimensional feature space and perform dimension reduction to extract the effective subspace; this is achieved by kernel principal component analysis (PCA) and can be considered a pre-processing step. Second, we adjust a linear transformation in this subspace to make the outputs as statistically independent as possible. In this way, nonlinear ICA, a complex nonlinear problem, is decomposed into two relatively standard procedures. Moreover, to overcome the ill-posedness of nonlinear ICA solutions, we apply the minimal nonlinear distortion (MND) principle for regularization, in addition to a smoothness regularizer. The MND principle states that we prefer the nonlinear ICA solution whose mixing system has minimal nonlinear distortion, since in practice the nonlinearity in the data generation procedure is usually not very strong.

- Algorithms | Pp. 301-308
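
The two-step structure can be sketched with off-the-shelf components. This assumes scikit-learn's `KernelPCA` and `FastICA` as stand-ins for the two steps; the RBF kernel, `gamma`, the 4-dimensional subspace and the toy post-nonlinear mixture are all illustrative choices, and the MND/smoothness regularization that the paper adds is omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA, FastICA

rng = np.random.default_rng(4)

# Two independent sources passed through a mildly nonlinear mixture.
n = 1000
S = np.column_stack([rng.uniform(-1, 1, n), rng.laplace(size=n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
M = S @ A.T
X = M + 0.2 * M ** 3          # mild componentwise nonlinear distortion

# Step 1: kernel PCA extracts an effective low-dimensional feature subspace.
F = KernelPCA(n_components=4, kernel="rbf", gamma=0.5).fit_transform(X)

# Step 2: linear ICA inside that subspace.
Y = FastICA(n_components=2, random_state=0, max_iter=500).fit_transform(F)
```

The point of the decomposition is that each step is a standard, well-understood procedure; the hard nonlinear problem is confined to the choice of kernel and subspace.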

Linear Prediction Based Blind Source Extraction Algorithms in Practical Applications

Zhi-Lin Zhang; Liqing Zhang

Blind source extraction (BSE) has advantages over blind source separation (BSS) when only some of the underlying source signals must be obtained from high-dimensional observed signals. Among the variety of BSE algorithms, a large number are based on linear prediction (LP-BSE). In this paper we analyze them from a practical point of view. We reveal that they are, in nature, minor component analysis (MCA) algorithms, and thus inherit some problems of MCA algorithms. We also identify a switch phenomenon of online LP-BSE algorithms, showing that different parts of a single extracted signal can be counterparts of different source signals. Both issues should be kept in mind when applying these algorithms in practice. Computer simulations are given to confirm these observations.

- Algorithms | Pp. 309-316
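
The MCA connection can be made concrete with a toy extraction. This sketch alternates a least-squares refit of a lag-1 predictor with a minimum-eigenvector step on the prediction-error covariance of prewhitened mixtures; it illustrates the LP-BSE principle (minimizing linear prediction error extracts the most predictable source, via a minor component), and is not any specific published algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Source 1: strongly predictable AR(1); source 2: white noise.
n = 5000
s1 = np.zeros(n)
for t in range(1, n):
    s1[t] = 0.9 * s1[t - 1] + rng.normal()
s1 /= s1.std()
S = np.vstack([s1, rng.normal(size=n)])
X = np.array([[1.0, 0.8], [0.5, 1.0]]) @ S

# Prewhiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

R1 = (Z[:, 1:] @ Z[:, :-1].T) / (n - 1)
R1 = (R1 + R1.T) / 2.0                      # symmetrized lag-1 covariance

b, w = 0.5, np.array([1.0, 0.0])
for _ in range(10):
    # Covariance of the prediction error y(t) - b*y(t-1); its minimum
    # eigenvector (an MCA step) is the extraction vector.
    Mb = (1.0 + b ** 2) * np.eye(2) - 2.0 * b * R1
    w = np.linalg.eigh(Mb)[1][:, 0]
    y = w @ Z
    b = (y[1:] @ y[:-1]) / (y @ y)          # refit the lag-1 predictor
y = w @ Z
```

The extraction vector is literally a minor eigenvector of a covariance matrix, which is the structural point the chapter makes about LP-BSE algorithms.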

Blind Audio Source Separation Using Sparsity Based Criterion for Convolutive Mixture Case

A. Aïssa-El-Bey; K. Abed-Meraim; Y. Grenier

In this paper, we are interested in the separation of audio sources from their instantaneous or convolutive mixtures. We propose a new separation method that exploits the sparsity of the audio signals via an ℓ1-norm based contrast function. A simple and efficient natural gradient technique is used to optimize the contrast function in the instantaneous mixture case. We then extend the method to the convolutive mixture case by exploiting properties of the Fourier transform. The resulting algorithm is shown to outperform existing techniques in terms of separation quality and computational cost.

- Sparse Methods | Pp. 317-324
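
For the instantaneous case, a natural-gradient descent on an ℓ1-type contrast can be sketched as follows. The score function sign(y) corresponds to a Laplacian (sparse) source model; the step size, iteration count and mixture are illustrative assumptions, and the convolutive/frequency-domain extension is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two sparse (Laplacian) sources, instantaneous mixture.
n = 5000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.7], [0.5, 1.0]])
X = A @ S

W = np.eye(2)
mu = 0.05
for _ in range(300):
    Y = W @ X
    phi = np.sign(Y)                                  # score of the l1 contrast
    W = W + mu * (np.eye(2) - (phi @ Y.T) / n) @ W    # natural gradient step
Y = W @ X

# Absolute correlations between separated outputs and true sources.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
```

The natural-gradient form avoids a matrix inversion per step, which is where the low computational cost claimed in the abstract comes from.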

Maximization of Component Disjointness: A Criterion for Blind Source Separation

Jörn Anemüller

Blind source separation is commonly based on maximizing measures related to the independence of the estimated sources, such as mutual statistical independence assuming non-Gaussian distributions, decorrelation at different time-lags assuming spectral differences, or decorrelation assuming source non-stationarity.

Here, an alternative model for source separation is explored, based on the assumption that sources emit signal energy at mutually different times. In the limiting case, this corresponds to only a single source being “active” at each point in time, resulting in mutual disjointness of the source signal supports and mutual correlations of the source signal envelopes. This assumption will not be fulfilled perfectly for real signals; however, by maximizing the disjointness of the estimated sources (under a linear mixing/demixing model) we demonstrate that source separation is nevertheless achieved when the assumption is only partially fulfilled.

The conceptual benefits of the disjointness assumption are that (1) in certain applications it may be desirable to explain observed data in terms of mutually disjoint “parts” and (2) the method presented here preserves the special physical information assigned to amplitude zero of a signal which corresponds to the absence of energy (rather than subtracting the signal mean prior to analysis which for non zero-mean sources destroys this information).

The method, termed DCA, is derived, and it is shown that its update equations bear remarkable similarities to those of maximum likelihood independent component analysis (ICA). Sources with systematically varied degrees of disjointness are constructed and processed by DCA and by the Infomax and JADE ICA algorithms. The results illustrate the behaviour of DCA and ICA under these regimes, with three main results: (1) DCA leads to a higher degree of separation than ICA; (2) DCA performs particularly well on positive-valued sources as long as they are at least moderately disjoint; and (3) the performance peak of ICA for zero-mean sources is reached when sources are disjoint (but not independent).

- Sparse Methods | Pp. 325-332

Estimator for Number of Sources Using Minimum Description Length Criterion for Blind Sparse Source Mixtures

Radu Balan

In this paper I present a minimum description length (MDL) estimator for the number of sources in an anechoic mixture of sparse signals. The criterion is roughly equal to the sum of the negative normalized maximum log-likelihood and the logarithm of the number of sources. Numerical evidence supports this approach, which compares favorably with both the Akaike (AIC) and Bayesian (BIC) information criteria.

- Sparse Methods | Pp. 333-340
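
As a self-contained illustration of MDL-style model-order selection, the classical eigenvalue-based estimator of Wax and Kailath is sketched below. Note this is the standard array-processing criterion (likelihood term plus a parameter-count penalty), not the paper's criterion for anechoic sparse mixtures, and the scene (3 Gaussian sources, 6 sensors) is a toy.

```python
import numpy as np

rng = np.random.default_rng(7)

# 3 sources observed at 6 sensors in white noise.
n_src, n_sen, N = 3, 6, 2000
A = rng.normal(size=(n_sen, n_src))
X = A @ rng.normal(size=(n_src, N)) + 0.1 * rng.normal(size=(n_sen, N))

# Eigenvalues of the sample covariance, in descending order.
lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]

def mdl(k):
    """Wax-Kailath MDL score for 'k sources present'."""
    p = len(lam)
    tail = lam[k:]                       # candidate noise eigenvalues
    geo = np.exp(np.mean(np.log(tail)))  # geometric mean
    return (-N * (p - k) * np.log(geo / np.mean(tail))
            + 0.5 * k * (2 * p - k) * np.log(N))

k_hat = min(range(n_sen), key=mdl)       # estimated number of sources
```

The likelihood term vanishes when the trailing eigenvalues are equal (pure noise), so the penalty term settles the order, mirroring the negative-log-likelihood-plus-complexity structure described in the abstract.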