Publications catalog – books
Adaptive and Natural Computing Algorithms: 8th International Conference, ICANNGA 2007, Warsaw, Poland, April 11-14, 2007, Proceedings, Part II
Bartlomiej Beliczynski ; Andrzej Dzielinski ; Marcin Iwanowski ; Bernardete Ribeiro (eds.)
Conference: 8th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA). Warsaw, Poland. April 11, 2007 – April 14, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Programming Techniques; Computer Applications; Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Software Engineering
Availability

Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-71590-0
Electronic ISBN
978-3-540-71629-7
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Evolution of Multi-class Single Layer Perceptron
Sarunas Raudys
While training a single layer perceptron (SLP) in the two-class situation, one may obtain seven types of statistical classifiers, including the minimum empirical error and support vector (SV) classifiers. Unfortunately, neither classifier can be obtained automatically in the multi-category case. We suggest designing one pair-wise SLP per pair of classes (K(K−1)/2 networks for K classes) and combining them in a special way. Experiments using 24-class chromosome data and 10-class yeast infection data illustrate the effectiveness of the new multi-class network of single layer perceptrons.
- Neural Networks | Pp. 1-10
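The one-vs-one decomposition the abstract refers to can be sketched as follows. This is a minimal illustration, assuming a majority-vote combination and toy synthetic data; the paper's actual "special" combination rule and experimental data are not reproduced here.

```python
import itertools
import random

random.seed(0)

def train_slp(xs, ys, epochs=50, lr=0.1):
    """Perceptron training for a two-class problem with labels in {-1, +1}."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict_pairwise(models, x, n_classes):
    """Majority vote over all K(K-1)/2 pair-wise SLPs."""
    votes = [0] * n_classes
    for (a, c), (w, b) in models.items():
        s = sum(wi * xi for wi, xi in zip(w, x)) + b
        votes[a if s > 0 else c] += 1
    return max(range(n_classes), key=votes.__getitem__)

# Toy 3-class data: Gaussian clouds around three well-separated centres.
centres = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
data = [((cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)), k)
        for k, (cx, cy) in enumerate(centres) for _ in range(20)]

models = {}
for a, c in itertools.combinations(range(3), 2):  # K(K-1)/2 = 3 pairs for K = 3
    pair = [(x, 1 if k == a else -1) for x, k in data if k in (a, c)]
    models[(a, c)] = train_slp([x for x, _ in pair], [y for _, y in pair])

accuracy = sum(predict_pairwise(models, x, 3) == k for x, k in data) / len(data)
```

Each pair-wise SLP sees only the samples of its two classes, so K classes need K(K−1)/2 such networks rather than one multi-output perceptron.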
Estimates of Approximation Rates by Gaussian Radial-Basis Functions
Paul C. Kainen; Věra Kůrková; Marcello Sanguineti
Rates of approximation by networks with Gaussian RBFs with varying widths are investigated. For certain smooth functions, upper bounds are derived in terms of a Sobolev-equivalent norm. Coefficients involved are exponentially decreasing in the dimension. The estimates are proven using Bessel potentials as auxiliary approximating functions.
- Neural Networks | Pp. 11-18
Least Mean Square vs. Outer Bounding Ellipsoid Algorithm in Confidence Estimation of the GMDH Neural Networks
Marcin Mrugalski; Józef Korbicz
The paper deals with the problem of determination of the model uncertainty during the system identification with the application of the Group Method of Data Handling (GMDH) neural network. The main objective is to show how to employ the Least Mean Square (LMS) and the Outer Bounding Ellipsoid (OBE) algorithm to obtain the corresponding model uncertainty.
- Neural Networks | Pp. 19-26
On Feature Extraction Capabilities of Fast Orthogonal Neural Networks
Bartłomiej Stasiak; Mykhaylo Yatsymirskyy
The paper investigates the capabilities of fast orthogonal neural networks in a feature extraction task for classification problems. Neural networks with an architecture based on the fast cosine transform, types II and IV, are built and applied to extract features that serve as the classification basis for a multilayer perceptron. The test results show that adapting the neural network yields a better transform, in the feature extraction sense, than the fast cosine transform. The neural implementation of both the feature extractor and the classifier enables integration and joint learning of the two blocks.
- Neural Networks | Pp. 27-36
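The type-II cosine transform that seeds such a network can be written out directly. The sketch below uses a naive O(N²) formulation rather than the fast O(N log N) factorisation the paper builds on, and the Gaussian test signal is an illustrative assumption; it only shows why truncated DCT outputs make compact classification features (energy compaction).

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a real sequence (naive O(N^2) form)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (i + 0.5) * k / n) for i, v in enumerate(x))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A smooth test signal: a Gaussian bump sampled at 64 points.
signal = [math.exp(-(((i - 32) / 10.0) ** 2)) for i in range(64)]
coeffs = dct2(signal)

# Energy compaction: the first few coefficients carry almost all the energy.
low_energy = sum(c * c for c in coeffs[:8])
total_energy = sum(c * c for c in coeffs)
```

Because this DCT-II is orthonormal, total energy is preserved (Parseval), while nearly all of it concentrates in the leading coefficients for smooth inputs.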
Neural Computations by Asymmetric Networks with Nonlinearities
Naohiro Ishii; Toshinori Deguchi; Masashi Kawaguchi
Nonlinearity is an important factor in biological visual neural networks. Among the prominent functions of these networks, movement detection is carried out in the visual cortex. The part of the visual cortex responsible for movement detection consists of two layered networks: the primary visual cortex (V1), followed by the middle temporal area (MT), in which nonlinear functions play important roles. These networks are decomposed into asymmetric sub-networks with nonlinearities. In this paper, the fundamental characteristics of asymmetric neural networks with nonlinearities are discussed for the detection of a changing stimulus, i.e. movement detection. By optimization of the asymmetric networks, movement detection equations are derived. It is then shown that asymmetric networks combining even and odd nonlinearities can detect both a stimulus change and the direction of movement, while symmetric networks need a time memory to achieve the same ability. These facts are applied to the two layered networks V1 and MT.
- Neural Networks | Pp. 37-45
Properties of the Hermite Activation Functions in a Neural Approximation Scheme
Bartlomiej Beliczynski
The main advantage of using Hermite functions as activation functions is that they offer a way to control high-frequency components in the approximation scheme. We prove that each subsequent Hermite function extends the frequency bandwidth of the approximator within a limited range of well-concentrated energy. By introducing a scaling parameter we may control that bandwidth.
- Neural Networks | Pp. 46-54
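The orthonormal Hermite functions and the scaling (dilation) parameter mentioned in the abstract can be sketched numerically. The dilation convention h_n(ax)·√a used here is one standard choice that preserves orthonormality; it is an assumption, not necessarily the paper's exact parametrisation.

```python
import math

def hermite_function(n, x, a=1.0):
    """Orthonormal Hermite function h_n, dilated: sqrt(a) * h_n(a*x).
    Larger a stretches the function's frequency band."""
    t = a * x
    # Physicists' Hermite polynomials via the recurrence
    # H_0 = 1, H_1 = 2t, H_{n+1} = 2t*H_n - 2n*H_{n-1}.
    h_prev, h = 1.0, 2.0 * t
    if n == 0:
        poly = h_prev
    elif n == 1:
        poly = h
    else:
        for k in range(1, n):
            h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
        poly = h
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return math.sqrt(a) * poly * math.exp(-t * t / 2.0) / norm

# Orthonormality check on a crude grid over [-8, 8]:
xs = [i * 0.01 for i in range(-800, 801)]
dot = sum(hermite_function(2, x) * hermite_function(3, x) for x in xs) * 0.01
nrm = sum(hermite_function(2, x) ** 2 for x in xs) * 0.01
nrm_scaled = sum(hermite_function(2, x, a=2.0) ** 2 for x in xs) * 0.01
```

Distinct Hermite functions are orthogonal and each has unit norm, and the √a factor keeps the dilated family normalised as well, so the scaling parameter trades spatial support for bandwidth without losing orthonormality.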
Study of the Influence of Noise in the Values of a Median Associative Memory
Humberto Sossa; Ricardo Barrón; Roberto A. Vázquez
In this paper we study how the performance of a median associative memory is influenced when the values of its elements are altered by noise. To our knowledge, this kind of research has not been reported until now. We give formal conditions under which the memory is still able to correctly recall a pattern of the fundamental set of patterns, either from a non-altered or a noisy version of it. Experiments are also given to show the efficiency of the proposal.
- Neural Networks | Pp. 55-62
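A heavily simplified sketch of a median associative memory in the spirit of this line of work: the memory stores M[i][j] = med_k(y_i − x_j) over the fundamental pairs, and recall takes y_i = med_j(M[i][j] + x_j). The pattern values and the single-component noise model below are illustrative assumptions, not the paper's experimental setup.

```python
import statistics

def build_median_memory(pairs):
    """pairs: list of (x, y) fundamental pattern vectors.
    M[i][j] = median over patterns of (y[i] - x[j])."""
    nx, ny = len(pairs[0][0]), len(pairs[0][1])
    return [[statistics.median(y[i] - x[j] for x, y in pairs)
             for j in range(nx)] for i in range(ny)]

def recall(M, x):
    """y[i] = median over j of (M[i][j] + x[j])."""
    return [statistics.median(M[i][j] + x[j] for j in range(len(x)))
            for i in range(len(M))]

x = [1, 5, 3, 2, 4]
y = [7, 0, 2]
M = build_median_memory([(x, y)])

noisy = list(x)
noisy[2] += 9  # corrupt a single input component

clean_recall = recall(M, x)
noisy_recall = recall(M, noisy)
```

With a single stored pair, every term M[i][j] + x_j equals y_i, so recall is exact; corrupting a minority of input components leaves the component-wise median, and hence the recalled pattern, unchanged, which is the kind of noise robustness the paper analyses.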
Impact of Learning on the Structural Properties of Neural Networks
Branko Šter; Ivan Gabrijel; Andrej Dobnikar
We research the impact of the learning process of neural networks (NN) on the structural properties of the derived graphs. A type of recurrent neural network (GARNN) is used. A graph is derived from a NN by defining a connection between any pair of nodes whose weights in both directions are above a certain threshold. We measured structural properties of the graphs such as characteristic path lengths, clustering coefficients and degree distributions. We found that well trained networks differ from badly trained ones in both characteristic path length and clustering coefficient.
- Neural Networks | Pp. 63-70
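The two structural measures named in the abstract can be computed on any undirected graph; a minimal sketch on a toy graph follows (the thresholded-weight construction of the graph from a trained network is not reproduced here).

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph given as {node: set_of_neighbours}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path distance over all ordered node pairs (connected graph)."""
    total, pairs = 0, 0
    for u in adj:
        d = shortest_paths(adj, u)
        for v in adj:
            if v != u:
                total += d[v]
                pairs += 1
    return total / pairs

def clustering_coefficient(adj):
    """Mean local clustering: fraction of a node's neighbour pairs that are linked."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

# Toy graph: a triangle (0, 1, 2) with a tail node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
path_len = characteristic_path_length(adj)
clustering = clustering_coefficient(adj)
```

On this graph the characteristic path length is 4/3 and the mean clustering coefficient is 7/12: the triangle nodes are fully clustered while the tail node contributes zero.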
Learning Using a Self-building Associative Frequent Network
Jin-Guk Jung; Mohammed Nazim Uddin; Geun-Sik Jo
In this paper, we propose a novel framework, called a self-building associative frequent network, to discover frequent itemsets and potentially frequent patterns by logical inference. We also introduce some new terms and concepts to define this network, and we show the procedure for constructing it. We then describe a new method, Learning based on Associative Frequent Network, for mining frequent itemsets and potentially frequent patterns, which are treated as logically useful patterns over the network. Finally, we present a useful application, classification with the patterns discovered by the proposed framework, and report experimental results evaluating our classifier on several data sets.
- Neural Networks | Pp. 71-79
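The frequent-itemset discovery task the framework targets can be illustrated with a plain Apriori-style level-wise pass. The associative frequent network itself is not reproduced; the transactions and the support threshold below are illustrative assumptions.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support} for all itemsets whose support
    (fraction of transactions containing them) >= min_support."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    result = {}
    k, candidates = 1, [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        result.update(frequent)
        # Generate (k+1)-item candidates from unions of frequent k-itemsets.
        freq = list(frequent)
        candidates = list({a | b for a, b in combinations(freq, 2)
                           if len(a | b) == k + 1})
        k += 1
    return result

transactions = [frozenset(t) for t in
                [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]]
fi = frequent_itemsets(transactions, 0.6)
```

With this threshold, all singletons and all pairs are frequent (support ≥ 0.6), while {a, b, c} is pruned at support 0.4, giving six frequent itemsets in total.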
Proposal of a New Conception of an Elastic Neural Network and Its Application to the Solution of a Two-Dimensional Travelling Salesman Problem
Tomasz Szatkiewicz
In this publication, the author describes a new conception of a neural network belonging to the class of self-organizing networks. The theory, structure and learning principles of the proposed network are described. Later sections present the application of a system of two such neural networks to the solution of a two-dimensional Euclidean travelling salesman problem. A feature of the proposed neural network system is its ability to avoid unfavourable local minima. The article presents graphical results of the evolution of the neural network system from the initialization point to the solution of the selected TSP instance.
- Neural Networks | Pp. 80-87