Publications catalog - books



MICAI 2005: Advances in Artificial Intelligence: 4th Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, November 14-18, 2005, Proceedings

Alexander Gelbukh ; Álvaro de Albornoz ; Hugo Terashima-Marín (eds.)

Conference: 4th Mexican International Conference on Artificial Intelligence (MICAI), Monterrey, Mexico, November 14-18, 2005

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability

Institution: not detected · Year of publication: 2005 · Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29896-0

Electronic ISBN

978-3-540-31653-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

K-Dynamical Self Organizing Maps

Carolina Saavedra; Héctor Allende; Sebastián Moreno; Rodrigo Salas

Neural maps are a very popular class of unsupervised neural networks that project high-dimensional data of the input space onto a neuron position in a low-dimensional output space grid. It is desirable that the projection effectively preserves the structure of the data.

In this paper we present a hybrid model called K-Dynamical Self Organizing Maps (KDSOM), consisting of K Self Organizing Maps with the capability of growing and interacting with each other. The input space is softly partitioned by the lattice maps. The KDSOM automatically finds its structure and learns the topology of the input space clusters.

We apply our KDSOM model to three examples, two of which involve real world data obtained from a site containing benchmark data sets.

- Neural Networks | Pp. 702-711
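The KDSOM abstract above builds on the standard self-organizing map, its basic building block. The following minimal NumPy sketch (grid size, learning-rate schedule, and all names are illustrative, not from the paper) trains a single SOM and checks that two well-separated clusters land on different grid neurons; the full KDSOM would maintain K such maps that grow and interact.

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train one Self Organizing Map: project inputs onto a 2-D neuron grid."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)   # neuron grid positions
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5    # shrinking neighborhood
        for x in rng.permutation(data):
            # Best matching unit: the neuron closest to x in input space.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbors toward x (Gaussian kernel).
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

# Two well-separated 2-D clusters should map to different grid neurons.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
w = train_som(data)

def bmu_of(x):
    return np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), w.shape[:2])

print(bmu_of(data[0]) != bmu_of(data[-1]))
```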

Study of Application Model on BP Neural Network Optimized by Fuzzy Clustering

Yong He; Yun Zhang; Liguo Xiang

A back-propagation neural network is a large-scale dynamical system widely used for scientific prediction. Its application has been inhibited largely by a slow convergence rate and over-long training time, primarily the result of inappropriate sample preprocessing for a large initial sample domain. To address this, this paper introduces fuzzy clustering to reduce the learning sample set while keeping it representative of the whole sample domain. The established network was tested by analyzing the correlation coefficients between measured and predicted results under least-squares one-dimensional linear regression. In the application case, 50 subsamples were clustered out of the 250-sample domain of the S195 diesel engines to train a network of topological structure 8:9:2. The convergence rate was improved approximately 6.3 times. After validating the model with another 10 untrained samples, the correlation coefficients of the working power and the diesel consumption rate were 0.968 and 0.986 respectively, indicating that the optimized network is applicable for data mining from a large knowledge pool.

- Neural Networks | Pp. 712-720
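The sample-reduction idea above (clustering a large sample pool and training on representatives) can be sketched with a minimal fuzzy c-means implementation. The data, the reduced cluster count, and all function names below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # Standard FCM membership update: u_ki = 1 / sum_j (d_ki / d_kj)^p.
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, U

# Reduce a 250-sample pool: pick the sample nearest each cluster center.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(3.0 * i, 0.3, (50, 3)) for i in range(5)])
centers, U = fuzzy_c_means(X, c=5)
reps = np.array([np.argmin(np.linalg.norm(X - ctr, axis=1)) for ctr in centers])
print(len(set(reps.tolist())))  # one representative per cluster
```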

Application of Modified Neural Network Weights’ Matrices Explaining Determinants of Foreign Investment Patterns in the Emerging Markets

Darius Plikynas; Yusaf H. Akbar

Quantitatively examining the determinants of foreign direct investment (FDI) in Central and East Europe (CEE) is an important research area. Traditional linear regression approaches have had difficulty achieving conceptually and statistically reliable results. The key tasks addressed in this research are a neural network (NN) based (i) FDI forecasting model and (ii) nonlinear evaluation of the determinants of FDI. We explored various modified backprop NN weight matrices and distinguished some nontraditional NN topologies. In terms of MSE and R-squared criteria, we found and verified a relationship between modified NN input weights and the weights of FDI determinants. Results indicate that NN approaches are better able to explain the weights of FDI determinants than traditional regression methodologies. Our findings are preliminary but offer important and novel implications for future research in this area, including more detailed comparisons across sectors as well as countries over time.

- Neural Networks | Pp. 721-730
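The paper's exact procedure for reading determinant weights out of NN weight matrices is not given here; one standard way to derive input importances from trained weights is Garson-style partitioning, sketched below with hypothetical weight values.

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Garson-style partitioning of trained NN weights into relative
    input importances.

    W_ih: input->hidden weights, shape (n_inputs, n_hidden)
    W_ho: hidden->output weights, shape (n_hidden,) for a single output
    """
    # Contribution of input i through hidden unit j, scaled by |w_jo|.
    contrib = np.abs(W_ih) * np.abs(W_ho)[None, :]
    # Normalize within each hidden unit, then sum over hidden units.
    contrib /= contrib.sum(axis=0, keepdims=True)
    imp = contrib.sum(axis=1)
    return imp / imp.sum()

# Toy check: input 0 carries almost all of the signal, so it dominates.
W_ih = np.array([[2.0, 1.5], [0.1, 0.05], [0.05, 0.1]])
W_ho = np.array([1.0, 1.0])
imp = garson_importance(W_ih, W_ho)
print(imp.argmax())  # 0
```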

Neural Network and Trend Prediction for Technological Processes Monitoring

Luis Pastor Sanchez Fernandez; Oleksiy Pogrebnyak; Cornelio Yanez Marquez

The goal of this paper is to introduce an efficient predictive supervisory method for trending the variables of technological processes and devices, with low run time, suited to relatively high-frequency periodic analysis (periods shorter than a second). The method makes it possible to predict when a process variable will reach an abnormal or important value. The data obtained in real time for each variable are used to estimate the parameters of a mathematical model. This model is continuous and of first or second order (critically damped, overdamped, or underdamped). An optimization algorithm is used to estimate the parameters. Before performing the estimation, the most appropriate model is determined by means of a feed-forward neural network.

- Neural Networks | Pp. 731-740
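For the first-order case, the prediction step described above can be sketched as follows: estimate the gain and time constant from real-time samples by least squares, then invert the model to predict when the variable reaches a threshold. The simulated signal, noise level, and alarm threshold below are illustrative assumptions.

```python
import numpy as np

# Simulated high-frequency samples (dt = 10 ms) of a first-order step
# response y(t) = K * (1 - exp(-t/T)), plus measurement noise.
t = np.arange(0.0, 5.0, 0.01)
rng = np.random.default_rng(3)
y = 10.0 * (1.0 - np.exp(-t / 0.5)) + rng.normal(0.0, 0.02, t.size)

# Estimate the gain K from the settled tail of the response, then the time
# constant T by linearizing ln(1 - y/K) = -t/T and fitting the slope.
K = y[-50:].mean()
mask = y < 0.9 * K                       # rising part only (log stays finite)
slope, _ = np.polyfit(t[mask], np.log(1.0 - y[mask] / K), 1)
T = -1.0 / slope

# Predict when the variable will reach an alarm threshold below K.
y_alarm = 9.0
t_alarm = -T * np.log(1.0 - y_alarm / K)
print(round(float(t_alarm), 2))  # close to -0.5 * ln(0.1) ≈ 1.15 s
```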

Underspecified Semantics for Dependency Grammars

Alexander Dikovsky

We link generative dependency grammars meeting natural modularity requirements with an underspecified semantics. We complete this link with a natural compilation of the modular dependency grammars into strongly equivalent, efficiently analyzed categorial dependency grammars.

- Natural Language Processing | Pp. 741-751

Distributed English Text Chunking Using Multi-agent Based Architecture

Ying-Hong Liang; Tie-Jun Zhao

The traditional English text chunking approach identifies all phrases using a single model and the same types of features for every phrase. The limitations of using only one model are that the same types of features are not suitable for all phrases, and that data sparseness may result. In this paper, a Distributed Multi-Agent based architecture is proposed and applied to the identification of English phrases. This strategy assigns phrases to agents according to their sensitive features and identifies different phrases in parallel; its main advantages are, first, easy and quick communication between phrases and, second, avoidance of data sparseness. Applying and testing the approach on the public training and test corpus, the F score for arbitrary phrase identification using the Distributed Multi-Agent strategy reaches 95.70%, compared to the previous best F score of 94.17%.

- Natural Language Processing | Pp. 752-760
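The idea of per-phrase-type agents working in parallel can be sketched with toy rule-based agents; the tag patterns and agent logic below are illustrative stand-ins for the paper's trained models.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "agents": each identifies one phrase type from POS-tagged tokens
# using its own sensitive features (here, simple tag patterns).
def np_agent(tagged):
    # Noun-phrase agent: determiner/adjective run ending in a noun.
    chunks, cur = [], []
    for word, tag in tagged:
        if tag in ("DT", "JJ", "NN"):
            cur.append(word)
            if tag == "NN":
                chunks.append(("NP", " ".join(cur)))
                cur = []
        else:
            cur = []
    return chunks

def vp_agent(tagged):
    # Verb-phrase agent: any verb tag forms a chunk.
    return [("VP", word) for word, tag in tagged if tag.startswith("VB")]

tagged = [("the", "DT"), ("quick", "JJ"), ("fox", "NN"),
          ("jumps", "VBZ"), ("over", "IN"), ("the", "DT"), ("dog", "NN")]

# Each agent processes the sentence in parallel; results are merged after.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda agent: agent(tagged), [np_agent, vp_agent]))
chunks = [c for r in results for c in r]
print(chunks)
```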

A Similarity-Based Approach to Data Sparseness Problem of Chinese Language Modeling

Jinghui Xiao; Bingquan Liu; Xiaolong Wang; Bing Li

The data sparseness problem is inherent and severe in language modeling. Smoothing techniques are widely used to address it. However, traditional smoothing techniques are all based on statistical hypotheses, without incorporating linguistic knowledge. This paper introduces semantic information into the smoothing technique and proposes a similarity-based smoothing method that rests on both a statistical and a linguistic hypothesis. An iterative algorithm is presented to optimize the system parameters. Experimental results show that, compared with traditional smoothing techniques, our method can greatly improve the performance of the language model.

- Natural Language Processing | Pp. 761-769
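Similarity-based smoothing of bigram estimates, in the general spirit described above, can be sketched as follows; the toy corpus, the similarity function (cosine over successor distributions), and the neighbor set are this sketch's assumptions, not the paper's exact formulation.

```python
from collections import Counter
import math

corpus = "the cat eats . the cat sleeps . the dog eats".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = sorted(unigrams)

def p_mle(w, h):
    # Maximum-likelihood bigram estimate P(w | h).
    return bigrams[(h, w)] / unigrams[h]

def sim(h1, h2):
    # Cosine similarity of two histories over their successor distributions.
    v1 = [p_mle(w, h1) for w in vocab]
    v2 = [p_mle(w, h2) for w in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def p_sim(w, h, neighbors):
    # Similarity-weighted average of MLE estimates from similar histories.
    pairs = [(h2, sim(h, h2)) for h2 in neighbors if h2 != h]
    total = sum(s for _, s in pairs)
    return sum(s * p_mle(w, h2) for h2, s in pairs) / total if total else 0.0

# "dog sleeps" was never observed, so MLE gives zero; but "cat" is
# distributionally similar to "dog" (both precede "eats"), so the
# similarity-based estimate is non-zero.
print(p_mle("sleeps", "dog"), p_sim("sleeps", "dog", ["cat"]))  # 0.0 0.5
```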

Self-training and Co-training Applied to Spanish Named Entity Recognition

Zornitsa Kozareva; Boyan Bonev; Andres Montoyo

The paper discusses the use of unlabeled data for Spanish Named Entity Recognition. Two techniques have been used: self-training for detecting the entities in the text, and co-training for classifying the detected entities. We introduce a new co-training algorithm that applies voting techniques to decide which unlabeled example should be added to the training set at each iteration. A proposal for improving the performance of entity detection is also made, and a brief comparative study with existing co-training algorithms is presented.

- Natural Language Processing | Pp. 770-779
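The self-training half of the scheme can be sketched as a loop that repeatedly adds the classifier's most confident predictions on unlabeled data to the training set and retrains. The 1-D threshold "classifier" below is a toy stand-in for a real NER model; all data and names are illustrative.

```python
# Minimal self-training loop on a hypothetical 1-D task:
# classify numbers as small (0) or large (1).
def train(labeled):
    # "Model" = midpoint threshold between the two class means.
    m0 = sum(x for x, y in labeled if y == 0) / sum(1 for _, y in labeled if y == 0)
    m1 = sum(x for x, y in labeled if y == 1) / sum(1 for _, y in labeled if y == 1)
    return (m0 + m1) / 2

def predict_with_conf(threshold, x):
    label = int(x > threshold)
    return label, abs(x - threshold)        # distance from boundary = confidence

labeled = [(1.0, 0), (9.0, 1)]              # tiny labeled seed set
unlabeled = [2.0, 3.0, 7.5, 8.0, 4.9, 5.1]

for _ in range(3):                          # a few self-training iterations
    th = train(labeled)
    scored = sorted(((predict_with_conf(th, x), x) for x in unlabeled),
                    key=lambda p: -p[0][1])
    # Move the two most confident predictions into the training set.
    for (label, _), x in scored[:2]:
        labeled.append((x, label))
        unlabeled.remove(x)

print(train(labeled))  # final decision boundary
```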

Towards the Automatic Learning of Idiomatic Prepositional Phrases

Sofía N. Galicia-Haro; Alexander Gelbukh

The objective of this work is to determine automatically, in an unsupervised manner, Spanish prepositional phrases of the type preposition - nominal phrase - preposition (P-NP-P) that behave in a sentence as a lexical unit, whose semantic and syntactic properties cannot be deduced from the corresponding properties of each simple form, e.g., by means of, in order to, with respect to. We show that idiomatic P-NP-P combinations have some statistical properties distinct from those of usual idiomatic collocations. We also explore a way to differentiate P-NP-P combinations that could perform either as a regular prepositional phrase or as an idiomatic prepositional phrase.

- Natural Language Processing | Pp. 780-789
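A crude statistical signal for the idiomaticity of a P-NP-P combination can be sketched by measuring how often its noun occurs locked inside one exact frame. The mini-corpus and the ratio below are illustrative assumptions, not the paper's measures.

```python
from collections import Counter

# Toy POS-tagged Spanish-like fragments (P = preposition, N = noun,
# D = determiner, F = boundary). Words and tags are illustrative.
tagged = [("con", "P"), ("respecto", "N"), ("a", "P"), ("el", "D"), ("tema", "N"),
          (".", "F"),
          ("con", "P"), ("respecto", "N"), ("a", "P"), ("la", "D"), ("idea", "N"),
          (".", "F"),
          ("con", "P"), ("cuidado", "N"), ("en", "P"), ("casa", "N"),
          (".", "F"),
          ("con", "P"), ("cuidado", "N"), ("de", "P"), ("verdad", "N")]

# Count P-N-P trigrams; an idiomatic combination keeps recurring in one frame.
trigrams = Counter()
for (w1, t1), (w2, t2), (w3, t3) in zip(tagged, tagged[1:], tagged[2:]):
    if (t1, t2, t3) == ("P", "N", "P"):
        trigrams[(w1, w2, w3)] += 1

nouns = Counter(w for w, t in tagged if t == "N")
# Fraction of the noun's occurrences locked inside each exact P-N-P frame:
# near 1 suggests an idiomatic unit, lower values a free combination.
ratios = {frame: f / nouns[frame[1]] for frame, f in trigrams.items()}
print(ratios)
```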

Measurements of Lexico-Syntactic Cohesion by Means of Internet

Igor A. Bolshakov; Elena I. Bolshakova

Syntactic links between content words in meaningful texts are intuitively conceived as 'normal,' thus ensuring text cohesion. Nevertheless, we are not aware of a broadly accepted Internet-based measure of cohesion between words syntactically linked in terms of Dependency Grammars. We propose to measure lexico-syntactic cohesion between content words by means of the Internet with a specially introduced Stable Connection Index (SCI). SCI is similar to Mutual Information known in statistics, but does not require iterative evaluation of the total number of Web pages under a search engine's control, and is insensitive both to fluctuations and to the slow growth of raw Web statistics. Materials based on Russian, Spanish, and English show concentrated SCI distributions for various types of word combinations; hence lexico-syntactic cohesion acquires a simple numeric measure. It is shown that SCI evaluations can be successfully used for semantic error detection and correction, as well as for information retrieval.

- Natural Language Processing | Pp. 790-799
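Assuming SCI takes a mutual-information-like form over Web hit counts, shift + log2(N12 / sqrt(N1 * N2)), it can be computed as below. Both the functional form and the shift constant (16) are assumptions of this sketch, as are the hit counts.

```python
import math

def sci(n_pair, n1, n2, shift=16):
    """Stable Connection Index for a syntactically linked word pair,
    computed from hit counts: SCI = shift + log2(N12 / sqrt(N1 * N2)).
    The shift keeps typically cohesive pairs on the positive side of zero."""
    if n_pair == 0:
        return float("-inf")
    return shift + math.log2(n_pair / math.sqrt(n1 * n2))

# Hypothetical hit counts: a cohesive collocation vs. a chance co-occurrence.
cohesive = sci(30_000, 2_000_000, 500_000)
chance = sci(5, 2_000_000, 500_000)
print(cohesive > 0 > chance)  # True
```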