Publications catalogue - books

Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part II

Joaquim Marques de Sá ; Luís A. Alexandre ; Włodzisław Duch ; Danilo Mandic (eds.)

Conference: 17th International Conference on Artificial Neural Networks (ICANN), Porto, Portugal, September 9-13, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability

Institution detected: none. Publication year: 2007. Browse at: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-74693-5

Electronic ISBN

978-3-540-74695-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Least-Mean-Square Training of Cluster-Weighted Modeling

I-Chun Lin; Cheng-Yuan Liou

In addition to the Expectation-Maximization (EM) algorithm, Least-Mean-Square (LMS) training is devised as a complementary algorithm to further train the model parameters of Cluster-Weighted Modeling (CWM). Because EM and LMS optimize different objective functions, the LMS training result can be used to reinitialize CWM's model parameters, which provides an approach to mitigating local-minimum problems.

- Data Clustering | Pp. 301-310
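The gist of the EM-plus-LMS scheme can be sketched minimally: soft Gaussian cluster weights gate a set of local linear models, and LMS takes gradient steps on the squared prediction error of the gated mixture. This is an illustrative toy, not the authors' implementation; the data, cluster parameters, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

K = 4
# In CWM these would come from a previous EM fit; fixed/random here for brevity.
mu = np.linspace(-3, 3, K)        # cluster centers
sigma = np.full(K, 1.0)           # cluster widths
a = rng.normal(size=K)            # slopes of the local linear models
b = np.zeros(K)                   # intercepts of the local linear models

def weights(x):
    # Soft cluster responsibilities (Gaussian, normalized over clusters)
    w = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def predict(X):
    # Prediction = responsibility-weighted sum of local linear models
    return (weights(X[:, 0]) * (a * X + b)).sum(axis=1)

mse_before = np.mean((y - predict(X)) ** 2)

eta = 0.05
for _ in range(500):
    w = weights(X[:, 0])                        # (N, K)
    err = (w * (a * X + b)).sum(axis=1) - y     # prediction error, (N,)
    # LMS: gradient steps on the squared error w.r.t. local-model parameters
    a -= eta * (err[:, None] * w * X).mean(axis=0)
    b -= eta * (err[:, None] * w).mean(axis=0)

mse_after = np.mean((y - predict(X)) ** 2)
```

Since the error is linear in the local-model parameters for fixed responsibilities, these LMS steps descend a convex objective and the fit improves over the initialization.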

Identifying the Underlying Hierarchical Structure of Clusters in Cluster Analysis

Kazunori Iwata; Akira Hayashi

In this paper, we examine the analysis of clusters of labeled samples to identify their underlying hierarchical structure. The key to this identification is selecting a suitable measure of dissimilarity among clusters characterized by subpopulations of the samples. Accordingly, we introduce a dissimilarity measure suited to measuring the hierarchical structure of subpopulations that fit a mixture model. In the experiments, glass identification is used as a practical problem for hierarchical cluster analysis. The experimental results demonstrate the effectiveness of the introduced measure compared to several others.

- Data Clustering | Pp. 311-320

Clustering Evaluation in Feature Space

Alissar Nasser; Pierre-Alexandre Hébert; Denis Hamad

Many clustering algorithms require parameters that are often neither known a priori nor easy to estimate, such as the number of classes. Measures of clustering quality can consequently be used to estimate these values a posteriori. This paper proposes such a clustering-evaluation index for kernel methods like kernel-k-means. More precisely, it presents an extension of the well-known Davies & Bouldin's index. Kernel clustering methods are particularly relevant because of their ability to deal with clusters that are initially not linearly separable. The interest of the proposed evaluation is that it circumvents the fact that the data transformation of such kernel methods is not explicitly known. The kernel Davies & Bouldin's index is finally used to estimate a posteriori the parameters of the kernel-k-means method applied to toy datasets and Fisher's Iris dataset.

- Data Clustering | Pp. 321-330
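The key trick behind kernelizing Davies & Bouldin's index is that distances to (and between) cluster means in feature space can be written purely in terms of kernel evaluations, so the unknown transformation never has to be computed. A minimal sketch, assuming an RBF kernel and Euclidean-style scatter and separation terms (the paper's exact definitions may differ):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_davies_bouldin(X, labels, gamma=1.0):
    classes = np.unique(labels)
    groups = [X[labels == c] for c in classes]
    k = len(groups)
    scatter = np.zeros(k)            # mean feature-space spread of each cluster
    mdot = np.zeros((k, k))          # <mu_i, mu_j> between cluster means
    for i, Xi in enumerate(groups):
        Kii = rbf(Xi, Xi, gamma)
        mdot[i, i] = Kii.mean()
        # ||phi(x) - mu_i||^2 = k(x,x) - 2 mean_j k(x,x_j) + mean_jl k(x_j,x_l)
        scatter[i] = np.sqrt(np.mean(np.diag(Kii) - 2 * Kii.mean(axis=1) + Kii.mean()))
        for j in range(i + 1, k):
            mdot[i, j] = mdot[j, i] = rbf(Xi, groups[j], gamma).mean()
    db = 0.0
    for i in range(k):
        worst = 0.0
        for j in range(k):
            if j != i:
                sep = np.sqrt(mdot[i, i] + mdot[j, j] - 2 * mdot[i, j])
                worst = max(worst, (scatter[i] + scatter[j]) / sep)
        db += worst
    return db / k

# Two well-separated blobs vs. the same points with shuffled labels
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
db_separated = kernel_davies_bouldin(X, labels, gamma=0.5)
db_shuffled = kernel_davies_bouldin(X, rng.permutation(labels), gamma=0.5)
```

As with the classical index, lower is better: the correctly labelled blobs score lower than the same points with shuffled labels.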

A Topology-Independent Similarity Measure for High-Dimensional Feature Spaces

Jochen Kerdels; Gabriele Peters

In the field of computer vision, feature matching in high-dimensional feature spaces is a commonly used technique for object recognition. One major problem is finding an adequate similarity measure for the particular feature space, as there is usually little knowledge about the structure of that space. As a possible solution, this paper presents a method to obtain a similarity measure suitable for feature matching without requiring structural information about the feature space. Since the described similarity measure is based on the topology of the feature space, and that topology is generated by a growing neural gas, no knowledge of the particular structure of the feature space is needed. In addition, the neural gas quantizes the feature vectors and thus reduces the amount of data that has to be stored and retrieved for object recognition.

- Data Clustering | Pp. 331-340

Fuzzy Labeled Self-organizing Map with Kernel-Based Topographic Map Formation

Iván Machón González; Hilario López García

The Fuzzy Labeled Self-Organizing Map is a semisupervised learning algorithm that allows the prototype vectors to be updated while taking into account information about the clusters of the data set. In this paper, the algorithm is extended to update the kernel radii individually according to Van Hulle's approach. A significant reduction of the mean quantization error of the numerical prototype vectors is expected.

- Self-organization | Pp. 341-348

Self-organizing Maps of Spiking Neurons with Reduced Precision of Correlated Firing

Francisco J. Veredas; Luis A. Martínez; Héctor Mesa

Early studies on visual-pathway circuitry demonstrated that synapses self-organize to form cortical orientation-selectivity maps. How these maps are set up is still debated, with different studies pointing in different directions regarding the main role played by feed-forward versus intracortical recurrent connectivity. The way neurons communicate with each other to transmit the information needed to configure the circuits supporting the features of the central nervous system is also a subject of discussion. Some studies argue that precise spike timing is necessary for effective neural codes. In this article we simulate networks consisting of three layers of integrate-and-fire neurons with feed-forward, excitatory, modifiable synapses that self-organize into orientation-selectivity maps. Features of the receptive fields in these maps change when the precision of correlated firing decreases as an effect of increasing synaptic transmission jitter.

- Self-organization | Pp. 349-358

Visualising Class Distribution on Self-organising Maps

Rudolf Mayer; Taha Abdel Aziz; Andreas Rauber

The Self-Organising Map (SOM) is a popular unsupervised neural network model which has been used successfully in various contexts for clustering data. Even though labelled data is not required for the training process, in many applications class labelling of some sort is available. A visualisation uncovering the distribution and arrangement of the classes over the map can help the user gain a better understanding and analysis of the mapping created by the SOM, e.g. by comparing the results of manual labelling and automatic arrangement. In this paper, we present such a visualisation technique, which smoothly colours a SOM according to the distribution and location of the given class labels. It allows the user to more easily assess the quality of the manual labelling by highlighting outliers and border data close to different classes.

- Self-organization | Pp. 359-368
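A minimal version of the idea, colouring each map unit by mixing class colours in proportion to the class frequencies of the samples mapped to it, could look as follows (illustrative only; the paper's smoothing scheme is more elaborate, and all names here are assumptions):

```python
import numpy as np

def class_colour_map(bmu_of_sample, labels, grid_shape, palette):
    """Colour each SOM unit by the mix of class colours of its mapped samples.

    bmu_of_sample: flat unit index of each sample's best-matching unit.
    labels: integer class label of each sample.
    palette: one RGB row per class, values in [0, 1].
    """
    n_units = grid_shape[0] * grid_shape[1]
    n_classes = palette.shape[0]
    counts = np.zeros((n_units, n_classes))
    for u, c in zip(bmu_of_sample, labels):
        counts[u, c] += 1
    # Per-unit class frequencies; units with no samples keep zero weights
    totals = counts.sum(axis=1, keepdims=True)
    weights = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    rgb = weights @ palette          # weighted mix of class colours
    rgb[totals[:, 0] == 0] = 1.0     # paint empty units white
    return rgb.reshape(*grid_shape, 3)

# Tiny demo: 2x2 map, two classes (red and blue), five samples
palette = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rgb = class_colour_map([0, 0, 1, 1, 1], [0, 0, 0, 1, 1], (2, 2), palette)
```

Unit 0 holds only red-class samples and comes out pure red; unit 1 mixes one red and two blue samples into a purple shade; empty units stay white.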

Self-organizing Maps with Refractory Period

Antonio Neme; Victor Mireles

The self-organizing map (SOM) has been studied as a model of map formation in the brain cortex. Neurons in the cortex exhibit a refractory period during which they cannot be activated, a restriction that should be included in the SOM if a better description is to be achieved. Although several works have attempted to include this biological restriction in the SOM, they do not reflect biological plausibility. Here, we present a modification of the SOM that allows neurons to enter a refractory period (SOM-RP) if they are the best matching unit (BMU) or belong to its neighborhood. This refractory period is the same for all affected neurons, in contrast with previous models. By including this biological restriction, SOM dynamics resembles in more detail the behavior shown by the cortex, such as non-radial activity patterns and long-distance influence, besides the refractory period. As a side effect, two error measures are lower in maps formed by SOM-RP than in those formed by the SOM.

- Self-organization | Pp. 369-378
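A toy version of the SOM-RP idea, assuming the refractory rule works by excluding blocked units from the BMU competition for a fixed number of steps (parameter choices and the decay schedules are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(3)

side = 8                                        # 8x8 map
W = rng.uniform(0, 1, (side * side, 2))         # codebook vectors
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
refractory = np.zeros(side * side, dtype=int)   # remaining blocked steps per unit
R = 3                                           # refractory period length
X = rng.uniform(0, 1, (2000, 2))                # toy 2-D training data

for t, x in enumerate(X):
    lr = 0.5 * (1.0 - t / len(X))               # decaying learning rate
    radius = 3.0 * (1.0 - t / len(X)) + 0.5     # decaying neighbourhood radius
    refractory = np.maximum(refractory - 1, 0)
    # BMU competition restricted to non-refractory units
    d = ((W - x) ** 2).sum(axis=1)
    d[refractory > 0] = np.inf
    bmu = int(np.argmin(d))
    # Standard Gaussian neighbourhood update around the BMU
    g = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2.0 * radius ** 2))
    W += lr * g[:, None] * (x - W)
    # The BMU and its immediate grid neighbours enter the refractory period
    near = ((coords - coords[bmu]) ** 2).sum(axis=1) <= 1.0
    refractory[near] = R

# Quantization error of the trained map
qe = np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in X])
```

Blocking the previous winner forces activity to move across the map between consecutive inputs, which is what produces the non-radial activity patterns the abstract mentions; the map still converges to a low quantization error.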

Improving the Correlation Hunting in a Large Quantity of SOM Component Planes

Miguel A. Barreto S.; Andrés Pérez-Uribe

A technique called component planes is commonly used to visualize the behavior of variables with Self-Organizing Maps (SOMs). However, when there are too many component planes, visualization becomes difficult. A methodology has been developed to enhance the component-plane analysis process. It improves correlation hunting in the component planes through a tree-structured cluster representation based on the SOM distance matrix. The methodology presented here was used to classify similar agro-ecological variables and productivity in sugar cane cultivation. Analyzing the resulting groups made it possible to extract new knowledge about the variables most related to the highest productivities.

- Self-organization | Pp. 379-388
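The "correlation hunting" step can be illustrated by clustering component planes with a tree built over a correlation-based distance (a sketch under assumed definitions; the paper bases its tree on the SOM distance matrix rather than plain correlations, and the codebook below is a stand-in):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)

# Stand-in SOM codebook: 100 units x 6 variables.
# Variable 1 tracks variable 0; variable 4 is anti-correlated with variable 3.
planes = rng.normal(size=(100, 6))
planes[:, 1] = planes[:, 0] + 0.1 * rng.normal(size=100)
planes[:, 4] = -planes[:, 3] + 0.1 * rng.normal(size=100)

C = np.corrcoef(planes.T)          # 6x6 correlations between component planes
D = 1.0 - np.abs(C)                # correlated OR anti-correlated planes are close
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D, checks=False), method="average")
groups = fcluster(Z, t=0.5, criterion="distance")
```

Cutting the tree at distance 0.5 recovers the two correlated pairs as groups and leaves the independent planes on their own, so an analyst only needs to inspect one representative plane per group.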

A Dynamical Model for Receptive Field Self-organization in V1 Cortical Columns

Jörg Lücke

We present a dynamical model of processing and learning in the visual cortex, which reflects the anatomy of V1 cortical columns and properties of their neuronal receptive fields (RFs). The model is described by a set of coupled differential equations and learns by self-organizing the RFs of its computational units – sub-populations of excitatory neurons. If natural image patches are presented as input, self-organization results in Gabor-like RFs. In quantitative comparison with measurements, we find that these RFs capture statistical properties of V1 simple-cells that learning algorithms such as ICA and sparse coding fail to reproduce.

- Self-organization | Pp. 389-398