Publications catalog - books

Advances in Neural Networks: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007, Proceedings, Part I

Derong Liu; Shumin Fei; Zeng-Guang Hou; Huaguang Zhang; Changyin Sun (eds.)

Conference: 4th International Symposium on Neural Networks (ISNN), Nanjing, China, June 3-7, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Communication Networks; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Pattern Recognition

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-72382-0

Electronic ISBN

978-3-540-72383-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Investigation on Sparse Kernel Density Estimator Via Harmony Data Smoothing Learning

Xuelei Hu; Yingyu Yang

In this paper we apply harmony data smoothing learning to a weighted kernel density model to obtain a sparse density estimator. We empirically compare this method with the least squares cross-validation (LSCV) method for the classical kernel density estimator. The most remarkable result of our study is that the harmony data smoothing learning method outperforms the LSCV method in most cases, and that the support vectors selected by the harmony data smoothing learning method are located in the regions of locally highest density of the sample.

- Learning and Approximation | Pp. 1211-1220
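
The baseline named in the abstract above, least squares cross-validation (LSCV) bandwidth selection for a classical kernel density estimator, can be shown in a short sketch. This is a minimal 1-D illustration of LSCV only, not the authors' harmony data smoothing method; the data, grid and bandwidth range are assumptions made for illustration.

```python
import numpy as np

def gauss_kde(x_eval, data, h):
    """Classical Gaussian kernel density estimate at the points x_eval."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def lscv_score(data, h, grid):
    """LSCV(h) = integral of f_h^2 minus (2/n) * sum_i of the leave-one-out f_h(x_i)."""
    f = gauss_kde(grid, data, h)
    integral = np.sum(f**2) * (grid[1] - grid[0])          # Riemann-sum integral of f^2
    loo = np.array([gauss_kde(np.array([xi]), np.delete(data, i), h)[0]
                    for i, xi in enumerate(data)])
    return integral - 2.0 * loo.mean()

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
grid = np.linspace(-6, 6, 1000)
bandwidths = np.linspace(0.05, 1.0, 40)
scores = [lscv_score(data, h, grid) for h in bandwidths]
print("LSCV-selected bandwidth:", bandwidths[int(np.argmin(scores))])
```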

Informative Gene Set Selection Via Distance Sensitive Rival Penalized Competitive Learning and Redundancy Analysis

Liangliang Wang; Jinwen Ma

This paper presents an informative gene set selection approach to tumor diagnosis based on the Distance Sensitive Rival Penalized Competitive Learning (DSRPCL) algorithm and redundancy analysis. Since the DSRPCL algorithm can automatically allocate an appropriate number of clusters for an input dataset, we can utilize it to classify the genes (represented by their expression levels across all the samples) into certain basic clusters. Then, we apply the post-filtering algorithm to each basic gene cluster to get the typical and independent informative genes. In this way we obtain a compact set of informative genes. To test the effectiveness of the selected informative gene set, we utilize a support vector machine (SVM) to construct a tumor diagnosis system based on the expression profiles of its genes. The experiments show that the proposed method can achieve higher diagnosis accuracy with a smaller number of informative genes and less computational complexity in comparison with previous methods.

- Learning and Approximation | Pp. 1227-1236
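
A hedged sketch of the clustering idea behind the abstract above: rival penalized competitive learning (RPCL) attracts the winning prototype toward each sample and pushes the runner-up (the rival) slightly away, so superfluous prototypes are driven out and an appropriate cluster number emerges. This is a generic RPCL illustration on synthetic data, not the authors' distance-sensitive variant or their gene-selection pipeline; the learning rates, pruning threshold and data are assumptions.

```python
import numpy as np

def rpcl(X, k_max=6, lr_win=0.05, lr_rival=0.002, epochs=50, seed=0):
    """Generic rival penalized competitive learning (illustrative only)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k_max, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(centers - x, axis=1)
            winner, rival = np.argsort(d)[:2]
            centers[winner] += lr_win * (x - centers[winner])    # attract the winner
            centers[rival] -= lr_rival * (x - centers[rival])    # repel the rival
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ([0, 0], [3, 3], [0, 3])])
centers = rpcl(X)
# prototypes pushed far from the data can be pruned; the survivors estimate the cluster count
d_min = np.min(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=0)
print("surviving prototypes:", int(np.sum(d_min < 1.0)))
```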

Approximation Property of Weighted Wavelet Neural Networks

Shou-Song Hu; Xia Hou; Jun-Feng Zhang

A new weighted wavelet neural network is presented, and the approximation capability of this weighted wavelet neural network is studied based on the properties of the Lebesgue partition, operator theory and the topological structure of relatively compact sets in Hilbert space. The simulation results indicate that the weighted wavelet neural network is a uniform approximator that can approximate a nonlinear function on a compact set to arbitrary precision.

- Learning and Approximation | Pp. 1247-1254
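
A minimal sketch of a wavelet network of the kind discussed above: a weighted sum of dilated and translated mother wavelets fitted to a 1-D nonlinear target on a compact interval. Here only the output weights are fitted, by least squares over randomly placed units; the Mexican-hat wavelet, the target function and all sizes are assumptions, and this is not the authors' weighted construction or proof apparatus.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet."""
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

# target nonlinear function on a compact interval (illustrative choice)
x = np.linspace(-3, 3, 400)
y = np.sin(2 * x) + 0.3 * x**2

# hidden wavelet units with fixed random translations b_i and dilations a_i
rng = np.random.default_rng(0)
n_units = 40
b = rng.uniform(-3, 3, n_units)
a = rng.uniform(0.2, 1.5, n_units)

# design matrix Phi[j, i] = psi((x_j - b_i) / a_i); output weights by least squares
Phi = mexican_hat((x[:, None] - b[None, :]) / a[None, :])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

approx = Phi @ w
print("max approximation error on the grid:", np.max(np.abs(approx - y)))
```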

A New Adaptive Neural Network Model for Financial Data Mining

Shuxiang Xu; Ming Zhang

Data mining is an analytic process designed to explore data (usually large amounts of data, typically business or market related) in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. Artificial neural networks, one of the most commonly used techniques in data mining, provide non-linear predictive models that learn through training and resemble biological neural networks in structure. This paper deals with a new adaptive neural network model: a feed-forward higher order neural network with a new activation function called the neuron-adaptive activation function. Experiments with function approximation and stock market movement analysis have been conducted to validate the new adaptive neural network model. Experimental results reveal that the new adaptive neural network model offers several advantages over traditional feed-forward networks with fixed neurons, such as a much smaller network size, faster learning, and more promising financial analysis.

- Learning and Approximation | Pp. 1265-1273
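
The abstract above describes a feed-forward higher-order network whose activation function has trainable shape parameters. A hedged sketch of that general idea follows: second-order (product) input terms feed one hidden layer whose activation a*tanh(b*z) has trainable amplitude a and slope b, trained by plain gradient descent on a toy regression target. The activation form, the target, and all constants are assumptions, not the authors' exact neuron-adaptive activation or financial experiments.

```python
import numpy as np

def higher_order_features(X):
    """Second-order (product) terms of a 2-D input: [x1, x2, x1*x2, x1^2, x2^2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1, x2, x1 * x2, x1**2, x2**2], axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = X[:, 0] * X[:, 1] + np.sin(2 * X[:, 0])           # toy target function

phi = higher_order_features(X)                        # (n, 5)
n, d, H, lr = len(X), phi.shape[1], 8, 0.05

W = 0.5 * rng.normal(size=(d, H))                     # input weights
v = 0.5 * rng.normal(size=H)                          # output weights
a = np.ones(H)                                        # trainable activation amplitude
b = np.ones(H)                                        # trainable activation slope

for epoch in range(3000):
    z = phi @ W                                       # (n, H)
    t = np.tanh(b * z)
    s = a * t                                         # adaptive activation a*tanh(b*z)
    err = s @ v - y                                   # (n,)
    sech2 = 1.0 - t**2
    # gradients of 0.5*mean(err^2) with respect to each parameter group
    dv = s.T @ err / n
    da = (err[:, None] * v * t).mean(axis=0)
    db = (err[:, None] * v * a * z * sech2).mean(axis=0)
    dW = phi.T @ (err[:, None] * v * a * b * sech2) / n
    v -= lr * dv; a -= lr * da; b -= lr * db; W -= lr * dW

print("final training MSE:", np.mean((a * np.tanh(b * (phi @ W)) @ v - y) ** 2))
```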

A Comparison of Four Data Mining Models: Bayes, Neural Network, SVM and Decision Trees in Identifying Syndromes in Coronary Heart Disease

Jianxin Chen; Yanwei Xing; Guangcheng Xi; Jing Chen; Jianqiang Yi; Dongbin Zhao; Jie Wang

Coronary heart disease (CHD) is a serious disease causing increasing morbidity and mortality. Combining western medicine and Traditional Chinese Medicine (TCM) to treat CHD has become especially necessary for the medical community today: since western medicine faces some problems, such as high cost and side effects, TCM can be a complementary alternative to overcome these drawbacks. Identifying which syndrome a CHD patient has is a challenging issue for the medical community, because the syndrome is the core of TCM. In this paper, we carry out a large-scale clinical epidemiology study to collect data on 1069 cases, each of which is a confirmed CHD case but may be diagnosed as a different syndrome. We take blood stasis syndrome (frequency 69%) as an example and employ four distinct kinds of data mining algorithms, namely a Bayesian model, a neural network, a support vector machine and decision trees, to classify the data and compare their performance. The results indicate that the neural network is the best identifier, with 88.6% accuracy on the holdout samples. The next is the support vector machine with 82.5% accuracy, slightly higher than the 82.0% of the Bayesian model. The decision tree performs the worst, at only 80.4%. We conclude that, in identifying syndromes in CHD, neural networks provide the best insight for clinical application.

- Learning and Approximation | Pp. 1274-1279
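
A hedged sketch of the four-way comparison described above, using scikit-learn stand-ins (Gaussian naive Bayes, an MLP, an RBF-kernel SVM and a CART decision tree) on synthetic binary-labelled data in place of the clinical dataset, which is not available here. The model choices, hyperparameters and class balance are assumptions, not the authors' exact configurations, so the printed accuracies will not match the reported ones.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for the 1069-case dataset (binary label: blood stasis syndrome or not)
X, y = make_classification(n_samples=1069, n_features=30, n_informative=10,
                           weights=[0.31, 0.69], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Bayes": GaussianNB(),
    "NeuralNetwork": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "holdout accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```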

A Concept Lattice-Based Kernel Method for Mining Knowledge in an M-Commerce System

Qiudan Li; Chunheng Wang; Guanggang Geng; Ruwei Dai

With the vast amount of mobile user information available today, mining knowledge about mobile users is becoming more and more important for mobile commerce (M-commerce) systems. The vector space model (VSM) is one of the most popular methods for achieving this goal. Unfortunately, it cannot identify the latent information in the user feature space, which decreases the quality of personalized services. In this paper, we present a concept lattice-based kernel method for mining this hidden user knowledge. The main idea is to employ a concept lattice to construct an item proximity matrix, embed it into a kernel function that transforms the original user feature space into a user concept space, and finally perform personalized services in the user concept space. The experimental results demonstrate that our method yields more encouraging results than VSM.

- Learning and Approximation | Pp. 1280-1285
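
A hedged sketch of embedding an item proximity matrix into a kernel, as the abstract above outlines. Here the proximity of two items is simply the number of (formal) concepts they share, encoded by an item-by-concept incidence matrix, and the resulting user kernel K = U P Uᵀ is positive semidefinite because P = C Cᵀ. The toy incidence matrix, user profiles and shared-concept proximity are illustrative assumptions, not the paper's concept-lattice construction.

```python
import numpy as np

# C[i, c] = 1 if item i belongs to formal concept c (toy incidence matrix, assumed)
C = np.array([
    [1, 1, 0],   # item 0
    [1, 0, 1],   # item 1
    [0, 1, 1],   # item 2
    [0, 0, 1],   # item 3
], dtype=float)

# item proximity = shared-concept counts; positive semidefinite by construction (P = C C^T)
P = C @ C.T

# U[u, i] = rating/usage of item i by mobile user u (toy user-item matrix, assumed)
U = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

# kernel between users in the "concept space": k(u, v) = u^T P v
K = U @ P @ U.T
print("user-user kernel matrix:\n", K)

# equivalently, map each user into concept space explicitly: phi(u) = C^T u
Phi = U @ C
assert np.allclose(K, Phi @ Phi.T)
```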

A Novel Data Mining Method for Network Anomaly Detection Based on Transductive Scheme

Yang Li; Binxing Fang; Li Guo

Network anomaly detection has been a hot topic in the past years. However, a high false alarm rate, the difficulty of obtaining exactly clean data for modeling normal patterns, and the deterioration of the detection rate caused by an “unclean” training set often keep it from performing as well as expected. Therefore, we propose a novel data mining method for network anomaly detection in this paper. Experimental results on the well-known KDD Cup 1999 dataset demonstrate that it can effectively detect anomalies with a higher true positive rate, a lower false positive rate and higher confidence than state-of-the-art anomaly detection methods. Furthermore, even when provided with data that is not purely “clean” (unclean data), the proposed method remains robust and effective.

- Learning and Approximation | Pp. 1286-1292
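
The abstract above leaves the details of its transductive scheme to the paper; a common transductive construction, sketched here under that assumption, scores a test connection by a k-nearest-neighbour "strangeness" and turns it into a p-value against the training set, flagging low p-values as anomalies. The threshold, feature dimension and data are hypothetical, and this is not necessarily the authors' algorithm.

```python
import numpy as np

def knn_strangeness(point, X, k=5):
    """Strangeness = mean distance to the k nearest training points."""
    d = np.sort(np.linalg.norm(X - point, axis=1))
    return d[:k].mean()

def transductive_p_value(test_point, X_train, k=5):
    """Fraction of training points at least as strange as the test point."""
    alpha_test = knn_strangeness(test_point, X_train, k)
    alphas = np.array([knn_strangeness(X_train[i], np.delete(X_train, i, axis=0), k)
                       for i in range(len(X_train))])
    return (np.sum(alphas >= alpha_test) + 1) / (len(X_train) + 1)

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 4))   # stand-in for "normal" traffic features
normal_test = rng.normal(0, 1, size=4)
attack_test = rng.normal(6, 1, size=4)      # far from the normal cloud

for name, x in [("normal-looking", normal_test), ("anomalous-looking", attack_test)]:
    p = transductive_p_value(x, X_train)
    print(name, "p-value:", round(p, 3), "-> anomaly" if p < 0.05 else "-> normal")
```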

A Novel Feature Vector Using Complex HRRP for Radar Target Recognition

Lan Du; Hongwei Liu; Zheng Bao; Feng Chen

Radar high-resolution range profile (HRRP) has received intensive attention from the radar automatic target recognition (RATR) community. Since the initial phase of a complex HRRP is strongly sensitive to target position variation, which is referred to as initial phase sensitivity, only the amplitude information in the complex HRRP, the so-called real HRRP, is usually used for RATR. This paper proposes a novel feature extraction method for the complex HRRP. The extracted complex feature vector contains the difference phase information between range cells but no initial phase information from the complex HRRP. The recognition algorithms, frame-template-database establishment methods and preprocessing methods used in real HRRP-based RATR can also be applied to the proposed complex feature vector-based RATR. Recognition experiments based on measured data show that the proposed complex feature vector can achieve better recognition performance than the real HRRP, provided the cell interval parameters are chosen properly.

- Learning and Approximation | Pp. 1303-1309
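
A minimal sketch of why a phase-difference feature removes initial-phase sensitivity, as described above: multiplying adjacent range cells, one conjugated, cancels any common phase factor applied to the whole complex HRRP while keeping amplitudes and inter-cell phase differences. The synthetic profile is an assumption, and the authors' full feature vector and recognition pipeline are not reproduced.

```python
import numpy as np

def difference_phase_feature(hrrp_complex):
    """Adjacent-cell products x[k+1] * conj(x[k]): invariant to a common initial phase."""
    return hrrp_complex[1:] * np.conj(hrrp_complex[:-1])

rng = np.random.default_rng(0)
n_cells = 64
hrrp = rng.rayleigh(1.0, n_cells) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_cells))

# the same profile measured at a slightly different target position -> a global initial phase
shifted = hrrp * np.exp(1j * 1.234)

f1 = difference_phase_feature(hrrp)
f2 = difference_phase_feature(shifted)
print("feature change under an initial-phase shift:", np.max(np.abs(f1 - f2)))  # ~0
```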

Zero-Crossing-Based Feature Extraction for Voice Command Systems Using Neck-Microphones

Sang Kyoon Park; Rhee Man Kil; Young-Giu Jung; Mun-Sung Han

This paper presents zero-crossing-based feature extraction for speech recognition using neck-microphones. One solution for noise-robust speech recognition is to use neck-microphones, which are not affected by environmental noise. However, neck-microphones distort the original voice signals significantly since they only capture the vibrations of the vocal tract. In this context, we consider a new method of enhancing the speech features of neck-microphone signals using zero-crossings. Furthermore, to improve the zero-crossing features, we consider using the statistics of two adjacent zero-crossing intervals, that is, two-sample statistics referred to as second-order statistics. Through speech recognition simulations with the neck-microphone voice command system, we show that the suggested method provides better performance than other approaches using conventional speech features.

- Learning and Approximation | Pp. 1318-1326
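
A hedged sketch of zero-crossing interval statistics of the kind the abstract above describes: locate sign changes, take the intervals between consecutive crossings, and summarise pairs of adjacent intervals (a simple second-order statistic) as a feature histogram. The synthetic signal, bin edges and histogram form are illustrative assumptions, not the paper's exact feature.

```python
import numpy as np

def zero_crossing_intervals(signal):
    """Sample intervals between consecutive sign changes of the signal."""
    crossings = np.where(np.diff(np.sign(signal)) != 0)[0]
    return np.diff(crossings)

def second_order_feature(signal, bins=8, max_interval=40):
    """2-D histogram over pairs of adjacent zero-crossing intervals."""
    iv = zero_crossing_intervals(signal)
    pairs = np.stack([iv[:-1], iv[1:]], axis=1)          # adjacent-interval pairs
    hist, _, _ = np.histogram2d(pairs[:, 0], pairs[:, 1],
                                bins=bins, range=[[0, max_interval]] * 2)
    return (hist / hist.sum()).ravel()                   # normalised feature vector

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
print("feature vector length:", second_order_feature(signal).shape[0])
```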

Feature Bispectra and RBF Based FM Signal Recognition

Yuchun Huang; Zailu Huang; Benxiong Huang; Shuhua Xu

Automatic communication signal (e.g., FM signal) classification and identification focus on finding the fine features contained in nearly identical noisy communication signals in order to comprehensively identify the same or different transmitters in modern electronic warfare. Direct use of higher-order statistics (HOS) is impractical for on-line application because of its huge computation time and memory requirements, especially in the case of high-frequency FM signals. This paper presents a novel way to improve the efficiency of HOS analysis by sub-sampling while preserving the noise-contaminated fine features and eliminating random Gaussian noise. FM-signal-related feature bispectra are also introduced to translate the 2-D feature matching problem into a 1-D one applicable to an optimal adaptive k-means iterative RBF classifier. Computer simulations show that these novel feature bispectra outperform AIB and SB in terms of computation time and recognition rate for on-line steady FM signal recognition.

- Learning and Approximation | Pp. 1336-1345
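
A minimal sketch of a segment-averaged direct bispectrum estimate of the kind underlying the abstract above, B(k1, k2) = E[X(k1) X(k2) X*(k1+k2)], computed with the FFT over short segments, with sub-sampling shown only as a simple decimation step. The authors' AIB/SB feature bispectra and the adaptive k-means RBF classifier are not reproduced, and the toy FM signal and all sizes are assumptions.

```python
import numpy as np

def direct_bispectrum(signal, nfft=64, decimate=1):
    """Segment-averaged direct bispectrum estimate B[k1, k2] = E[X_k1 * X_k2 * conj(X_{k1+k2})]."""
    x = signal[::decimate]                         # optional sub-sampling
    n_seg = len(x) // nfft
    k = np.arange(nfft)
    k12 = (k[:, None] + k[None, :]) % nfft         # index of k1 + k2 (mod nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in range(n_seg):
        seg = x[s * nfft:(s + 1) * nfft]
        X = np.fft.fft(seg - seg.mean())
        B += np.outer(X, X) * np.conj(X[k12])
    return B / max(n_seg, 1)

rng = np.random.default_rng(0)
fs = 4000
t = np.arange(0, 1.0, 1 / fs)
fm = np.cos(2 * np.pi * 400 * t + 3.0 * np.sin(2 * np.pi * 15 * t))   # toy FM signal
noisy = fm + 0.3 * rng.normal(size=fm.shape)

B = direct_bispectrum(noisy, nfft=64, decimate=2)
print("bispectrum magnitude peak:", np.abs(B).max())
```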