Publications catalog - books



Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part II

Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic (eds.)

Conference: 17th International Conference on Artificial Neural Networks (ICANN). Porto, Portugal. September 9-13, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74693-5

Electronic ISBN

978-3-540-74695-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Face Recognition Using Parzenfaces

Zhirong Yang; Jorma Laaksonen

A novel discriminant analysis method is presented for the face recognition problem. It has recently been shown that predictive objectives based on Parzen estimation are advantageous for learning discriminative projections when the class distributions are complicated in the projected space. However, the existing algorithms based on Parzen estimators require expensive computation to obtain the gradient for optimization. We propose an accelerating technique that reformulates the gradient and implements its computation with matrix products. Furthermore, we point out that regularization is necessary for high-dimensional face recognition problems. The discriminative objective is therefore extended with a smoothness constraint on facial images. In experiments on two widely used face databases, our Parzen Discriminant Analysis method can be trained much faster and achieves higher recognition accuracies than the compared algorithms.

- Pattern Recognition | Pp. 200-209
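
The accelerating idea mentioned in the abstract above, namely obtaining the Parzen-based objective (and its gradient) through matrix products instead of explicit loops over sample pairs, can be sketched as follows. This is only an illustration of a matrix-product evaluation of a leave-one-out Parzen class posterior in a projected space, not the authors' full Parzen Discriminant Analysis; the kernel width sigma and the projection matrix W are placeholder assumptions.

```python
import numpy as np

def parzen_posterior_objective(X, y, W, sigma=1.0):
    """Mean leave-one-out log posterior of the correct class under a
    Parzen-window density in the projected space Z = X @ W.
    All pairwise terms come from dense matrix products (no Python loops
    over sample pairs), which is the accelerating idea mentioned above."""
    Z = X @ W                                        # project to low-dimensional space
    sq = (Z * Z).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T   # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                         # leave-one-out
    same = (y[:, None] == y[None, :]).astype(float)
    num = (K * same).sum(1)                          # kernel mass from same-class points
    den = K.sum(1) + 1e-12                           # total kernel mass
    return np.mean(np.log(num / den + 1e-12))

# tiny usage example with random data (2 classes, 50-dim "images", 5-dim projection)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))
y = rng.integers(0, 2, size=80)
W = rng.normal(size=(50, 5))
print(parzen_posterior_objective(X, y, W))
```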

A Comparison of Features in Parts-Based Object Recognition Hierarchies

Stephan Hasler; Heiko Wersing; Edgar Körner

Parts-based recognition has been suggested as a way to generalize from few training views in categorization scenarios. In this paper we present the results of a comparative investigation of different feature types with regard to their suitability for category discrimination. Specifically, patches of gray-scale images were compared with SIFT descriptors and with patches from the high-level output of a feedforward hierarchy related to the ventral visual pathway. We discuss the conceptual differences, the resulting performance, and the consequences for hierarchical models of visual recognition.

- Pattern Recognition | Pp. 210-219

An Application of Recurrent Neural Networks to Discriminative Keyword Spotting

Santiago Fernández; Alex Graves; Jürgen Schmidhuber

The goal of keyword spotting is to detect the presence of specific spoken words in unconstrained speech. The majority of keyword spotting systems are based on generative hidden Markov models and lack discriminative capabilities. Existing discriminative keyword spotting systems, however, are based on frame-level posterior probabilities of sub-word units. This paper presents a discriminative keyword spotting system based only on recurrent neural networks, which uses information from long time spans to estimate word-level posterior probabilities. On a keyword spotting task over a large database of unconstrained speech, the system achieved a keyword spotting accuracy of 84.5%.

- Pattern Recognition | Pp. 220-229
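
As a rough illustration of the word-level approach described above (not the authors' architecture or training scheme), the sketch below uses a bidirectional LSTM in PyTorch to map a whole sequence of acoustic feature frames to one log-posterior per keyword; the feature dimensionality, keyword count and pooling are assumptions.

```python
import torch
import torch.nn as nn

class KeywordSpotter(nn.Module):
    """Illustrative recurrent keyword spotter: a bidirectional LSTM reads the
    whole sequence of acoustic feature frames and a linear layer maps the
    pooled recurrent states to log-posteriors over keywords plus "no keyword"."""
    def __init__(self, n_features=39, n_keywords=10, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_keywords + 1)   # +1 for "no keyword"

    def forward(self, frames):                  # frames: (batch, time, n_features)
        states, _ = self.rnn(frames)            # (batch, time, 2*hidden)
        pooled = states.mean(dim=1)             # long-span context from both directions
        return torch.log_softmax(self.out(pooled), dim=-1)

# usage: 4 utterances of 200 frames with 39 MFCC-like features each
model = KeywordSpotter()
log_posteriors = model(torch.randn(4, 200, 39))
print(log_posteriors.shape)                     # torch.Size([4, 11])
```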

Spatiostructural Features for Recognition of Online Handwritten Characters in Devanagari and Tamil Scripts

H. Swethalakshmi; C. Chandra Sekhar; V. Srinivasa Chakravarthy

The spatiostructural features proposed for recognition of online handwritten characters are offline-like features that convey information about both the positional and the structural (shape) characteristics of the handwriting unit. This paper demonstrates the effectiveness of representing an online handwritten stroke with spatiostructural features, as indicated by their effect on the stroke classification accuracy of a Support Vector Machine (SVM) based classifier. The study has been carried out on two major Indian writing systems, Devanagari and Tamil. The importance of localization information for the structural features and the handling of translational variance are studied using appropriate approaches to zoning the handwritten character.

- Pattern Recognition | Pp. 230-239
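
A minimal sketch of the zoning idea mentioned in the abstract, assuming a simple occupancy grid over the stroke's bounding box; the grid size, the normalization and the use of point counts are illustrative choices, and the authors' spatiostructural features are richer than this.

```python
import numpy as np

def zoning_features(points, grid=4):
    """Zone-occupancy features for one online stroke.
    points: (N, 2) array of pen coordinates in writing order.
    Returns a grid*grid vector of normalized point counts per zone, which
    captures where (position) and roughly how (shape) the stroke was drawn."""
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(0), pts.max(0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)   # avoid division by zero
    norm = (pts - mins) / span                           # translate/scale to unit box
    cells = np.minimum((norm * grid).astype(int), grid - 1)
    hist = np.zeros((grid, grid))
    for cx, cy in cells:
        hist[cy, cx] += 1
    return (hist / len(pts)).ravel()

# usage: a synthetic diagonal stroke, giving a 16-dimensional feature vector
stroke = np.column_stack([np.linspace(0, 100, 50), np.linspace(0, 60, 50)])
print(zoning_features(stroke).shape)   # (16,)
```

Feature vectors of this kind could then be passed to an SVM classifier such as sklearn.svm.SVC.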

An Improved Version of the Wrapper Feature Selection Method Based on Functional Decomposition

Noelia Sánchez-Maroño; Amparo Alonso-Betanzos; Beatriz Pérez-Sánchez

This paper describes an improved version of a previously developed ANOVA and Functional Networks feature selection method. This wrapper feature selection method is based on a functional decomposition that grows exponentially as the number of features increases. Since exponential complexity limits the scope of application of the method, a new version is proposed that subdivides this functional decomposition and increases its complexity gradually. The improved version can be applied to a broader set of data. Its performance was tested on several real datasets, and the results obtained are comparable to, or better than, those obtained by other standard and innovative feature selection methods.

- Pattern Recognition | Pp. 240-249
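
The paper's ANOVA and functional networks machinery is not reproduced here. As a generic illustration of the wrapper principle it builds on (evaluating candidate feature subsets with the learner itself and growing the model gradually), the sketch below performs greedy forward selection with cross-validation in scikit-learn; the estimator, the dataset and the stopping rule are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer

def forward_wrapper_selection(X, y, estimator, max_features=5, cv=5):
    """Greedy wrapper: repeatedly add the single feature whose inclusion gives
    the best cross-validated score, stopping at max_features."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(estimator, X[:, selected + [j]], y, cv=cv).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)        # best (score, feature index) pair
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# usage on a standard binary classification dataset
X, y = load_breast_cancer(return_X_y=True)
print(forward_wrapper_selection(X, y, LogisticRegression(max_iter=5000)))
```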

Parallel-Series Perceptrons for the Simultaneous Determination of Odor Classes and Concentrations

Gao Daqi; Sun Jianli; Li Xiaoyan

The simultaneous determination of odor classes and concentrations is solved with a parallel-series perceptron model. Two groups of parallel single-output perceptrons are connected in series: the former is responsible for classification and the latter for location. The number of parallel perceptrons is equal to the number of odor classes. A multi-class learning problem is first decomposed into multiple two-class problems, which are then solved one by one by multiple parallel perceptrons. Each training subset is composed of the most necessary samples. Furthermore, virtual samples are added to the weaker side of each two-class learning subset in order to arrive at a virtual balance. The experimental results for four kinds of fragrant materials show that the proposed parallel-series perceptrons, combined with the electronic nose, are effective.

- Pattern Recognition | Pp. 250-260
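
A hedged sketch of the series arrangement described above, using scikit-learn stand-ins: one binary perceptron per odor class for the classification stage, followed in series by one linear regressor per class for the second stage. The data layout, the regressor choice and the omission of the virtual-sample balancing are simplifications, not the authors' exact models.

```python
import numpy as np
from sklearn.linear_model import Perceptron, LinearRegression

class ParallelSeriesModel:
    """Illustrative two-stage model: parallel one-vs-rest perceptrons pick the
    odor class, then (in series) one regressor per class estimates concentration."""
    def __init__(self, n_classes):
        self.classifiers = [Perceptron() for _ in range(n_classes)]
        self.regressors = [LinearRegression() for _ in range(n_classes)]

    def fit(self, X, labels, concentrations):
        for c, (clf, reg) in enumerate(zip(self.classifiers, self.regressors)):
            clf.fit(X, (labels == c).astype(int))        # class c vs. rest
            mask = labels == c
            reg.fit(X[mask], concentrations[mask])       # concentration within class c
        return self

    def predict(self, X):
        scores = np.column_stack([clf.decision_function(X) for clf in self.classifiers])
        classes = scores.argmax(1)
        conc = np.array([self.regressors[c].predict(x[None, :])[0]
                         for c, x in zip(classes, X)])
        return classes, conc

# usage with synthetic electronic-nose readings: 3 odor classes, 8 sensors
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))
labels = rng.integers(0, 3, 120)
conc = rng.uniform(0.1, 1.0, 120)
model = ParallelSeriesModel(3).fit(X, labels, conc)
print(model.predict(X[:5]))
```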

Probabilistic Video-Based Gesture Recognition Using Self-organizing Feature Maps

George Caridakis; Christos Pateritsas; Athanasios Drosopoulos; Andreas Stafylopatis; Stefanos Kollias

The present work introduces a probabilistic recognition scheme for hand gestures. Self-organizing feature maps are used to model spatiotemporal information extracted through image processing. Two models are built for each gesture category and, along with appropriate distance metrics, produce a validated classification mechanism that performs consistently in experiments on video sequences of acted gestures.

- Pattern Recognition | Pp. 261-270
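
A minimal sketch of the general idea of modeling a gesture category with a self-organizing feature map and classifying a sequence by its quantization error against each category's map. The tiny NumPy SOM, the map size and the synthetic feature vectors are assumptions; the paper's two models per category and its distance metrics are not reproduced here.

```python
import numpy as np

def train_som(data, rows=5, cols=5, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Very small SOM trainer: returns a (rows*cols, dim) codebook."""
    rng = np.random.default_rng(seed)
    codebook = rng.normal(size=(rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((codebook - x) ** 2).sum(1))      # best matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)       # pull neighborhood toward x
    return codebook

def quantization_error(codebook, data):
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(2)
    return np.sqrt(d.min(1)).mean()

# usage: one SOM per gesture class, classify by smallest quantization error
rng = np.random.default_rng(1)
gesture_a = rng.normal(0.0, 1.0, (300, 6))    # spatiotemporal feature vectors
gesture_b = rng.normal(3.0, 1.0, (300, 6))
soms = {"a": train_som(gesture_a), "b": train_som(gesture_b)}
test = rng.normal(3.0, 1.0, (40, 6))
print(min(soms, key=lambda k: quantization_error(soms[k], test)))   # expect "b"
```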

Unbiased SVM Density Estimation with Application to Graphical Pattern Recognition

Edmondo Trentin; Ernesto Di Iorio

Classification of structured data (i.e., data represented as graphs) is a topic of interest in the machine learning community. This paper presents a different, simple approach to the problem of structured pattern recognition, relying on the description of graphs in terms of algebraic binary relations. Maximum-a-posteriori decision rules over relations require the estimation of class-conditional probability density functions (pdfs) defined on graphs. A nonparametric technique for estimating these pdfs is introduced, based on a factorization of joint probabilities into individual densities that are modeled, in an unsupervised fashion, with Support Vector Machines (SVMs). The SVM training is accomplished by applying support vector regression to an unbiased variant of the Parzen Window. The behavior of the estimation algorithm is first demonstrated on a synthetic distribution. Finally, experiments on graph-structured image recognition with the Caltech Benchmark dataset are reported, showing a dramatic improvement over the results (available in the literature) yielded by state-of-the-art connectionist models for graph processing, namely recursive neural nets and graph neural nets.

- Pattern Recognition | Pp. 271-280
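
The recipe in the abstract, fitting a support vector regressor to Parzen-window values evaluated at the training points so that the regressor becomes a smooth pdf model, can be sketched for a one-dimensional case as follows. The bandwidth, the SVR hyperparameters and the leave-one-out construction used as the "unbiased" targets are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.svm import SVR

def svr_density_estimate(x_train, bandwidth=0.3):
    """Fit an SVR to leave-one-out Parzen-window values at the training points,
    yielding a smooth estimate of the underlying 1-D density."""
    x = np.asarray(x_train, float)
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * bandwidth ** 2)) / (bandwidth * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                     # leave-one-out ("unbiased") targets
    targets = K.sum(1) / (n - 1)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.001)
    model.fit(x[:, None], targets)               # regress density values on x
    return model

# usage: samples from a mixture of two Gaussians
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
pdf = svr_density_estimate(samples)
grid = np.linspace(-4, 4, 9)[:, None]
print(np.round(pdf.predict(grid), 3))            # higher values near -2 and +2
```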

Neural Mechanisms for Mid-Level Optical Flow Pattern Detection

Stefan Ringbauer; Pierre Bayerl; Heiko Neumann

This paper describes a new model for extracting large-field optical flow patterns to generate distributed representations of neural activation for controlling complex visual tasks such as 3D egomotion. The neural mechanisms draw upon experimental findings about the response properties and specificities of cells in areas V1, MT and MSTd along the dorsal pathway. Model V1 cells detect local motion estimates. Model MT cells in different pools, integrating input from V1, are suggested to be selective to motion patterns as well as to velocity gradients. The model MSTd cells considered here integrate MT gradient cells over a much larger spatial neighborhood to generate the observed pattern selectivity for expansion/contraction, rotation and spiral motion, providing the necessary input for spatial navigation mechanisms. Our model also incorporates feedback processing between areas V1-MT and MT-MSTd. We demonstrate that such a re-entry of context-related information helps to disambiguate and stabilize more localized processing along the primary motion pathway.

- Pattern Recognition | Pp. 281-290
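
As a toy analogy to the MSTd pattern selectivity described above (not the V1/MT/MSTd model with feedback), the sketch below scores an observed flow field against matched-filter templates for expansion, contraction, rotation and spiral motion; the grid size and the correlation score are assumptions.

```python
import numpy as np

def flow_templates(size=15):
    """Unit-normalized template flow fields on a size x size grid."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    expansion = np.stack([x, y])               # radial outward flow
    rotation = np.stack([-y, x])               # counter-clockwise flow
    templates = {"expansion": expansion, "contraction": -expansion,
                 "rotation": rotation, "spiral": expansion + rotation}
    return {k: v / (np.linalg.norm(v) + 1e-12) for k, v in templates.items()}

def classify_flow(flow):
    """Score a flow field of shape (2, H, W) against each template by correlation."""
    scores = {k: float((flow * t).sum())
              for k, t in flow_templates(flow.shape[1]).items()}
    return max(scores, key=scores.get), scores

# usage: a noisy expanding flow field should be labeled "expansion"
rng = np.random.default_rng(0)
y, x = np.mgrid[-1:1:15j, -1:1:15j]
observed = np.stack([x, y]) + 0.1 * rng.normal(size=(2, 15, 15))
print(classify_flow(observed)[0])
```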

Split–Merge Incremental LEarning (SMILE) of Mixture Models

Konstantinos Blekas; Isaac E. Lagaris

In this article we present an incremental method for building a mixture model. Given a desired number of clusters greater than or equal to two, we start with a two-component mixture and optimize the likelihood by repeatedly applying a split–merge operation. When an optimum is obtained, we add a new component to the model by splitting a properly chosen cluster in two. This continues until the number of components reaches a preset limiting value. We have performed numerical experiments on several data sets and report a performance comparison with other rival methods.

- Data Clustering | Pp. 291-300
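
A minimal sketch, assuming scikit-learn's GaussianMixture, of the incremental scheme described above: start from two components, fit, and repeatedly split a chosen component until the target number is reached. The split criterion (heaviest component) and the mean perturbation are simplifying assumptions, not the paper's split and merge rules.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def incremental_mixture(X, n_components, seed=0):
    """Grow a Gaussian mixture from 2 to n_components by repeatedly fitting,
    then splitting one chosen component into two slightly perturbed copies."""
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(X)
    while gmm.n_components < n_components:
        k = int(np.argmax(gmm.weights_))                 # split the heaviest component
        means = gmm.means_
        spread = np.sqrt(np.diag(gmm.covariances_[k]))
        new_means = np.vstack([np.delete(means, k, axis=0),
                               means[k] + 0.5 * spread,
                               means[k] - 0.5 * spread])
        gmm = GaussianMixture(n_components=len(new_means), means_init=new_means,
                              random_state=seed).fit(X)  # refit with warm-started means
    return gmm

# usage: data from 4 well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(150, 2))
               for c in ((0, 0), (4, 0), (0, 4), (4, 4))])
model = incremental_mixture(X, 4)
print(np.round(model.means_, 2))
```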