Publications catalogue - books
Neural Information Processing: 13th International Conference, ICONIP 2006, Hong Kong, China, October 3-6, 2006, Proceedings, Part II
Irwin King ; Jun Wang ; Lai-Wan Chan ; DeLiang Wang (eds.)
In conference: 13th International Conference on Neural Information Processing (ICONIP), Hong Kong, China, October 3-6, 2006
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Appl. in Administrative Data Processing; Information Systems Applications (incl. Internet); Database Management; Image Processing and Computer Vision
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-46481-5
Electronic ISBN
978-3-540-46482-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11893257_81
Intelligent System for Feature Extraction of Oil Slick in SAR Images: Speckle Filter Analysis
Danilo L. de Souza; Adrião D. D. Neto; Wilson da Mata
The development of automatic techniques for identifying oil slicks on the sea surface from remote sensing images has a positive impact on comprehensive monitoring of the oceans and seas. C-band SAR (the ERS-1, ERS-2, Radarsat and Envisat projects) is well suited to detecting ocean pollution because oil slicks reduce the backscatter. This work proposes a system for segmentation and feature extraction of oil slick candidates based on digital image processing techniques (filters, gradients, mathematical morphology) and artificial neural networks (ANN). Different speckle filtering algorithms are tested and compared within the proposed system. The process is designed with a level of automation that minimizes the intervention of a human operator, making it possible to process larger amounts of data. The focus of the work is a detailed study of the proposed feature extraction block (the architecture used and the computational tools).
- Data Pre-processing | Pp. 729-736
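The abstract above centres on comparing speckle filters for SAR imagery. As a hedged illustration (not the authors' code), a minimal Lee filter, one of the classic adaptive speckle filters such a comparison typically includes, can be sketched as:

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.05):
    """Classic Lee speckle filter: adaptively blends each pixel with its
    local mean, so flat regions are smoothed while edges are preserved."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + win, j:j + win]
            mu, var = window.mean(), window.var()
            k = var / (var + noise_var)  # gain: ~0 in flat areas, ~1 near edges
            out[i, j] = mu + k * (img[i, j] - mu)
    return out
```

The `noise_var` parameter (an assumed, illustrative default here) controls how aggressively homogeneous regions are smoothed.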
doi: 10.1007/11893257_82
Feature Selection for Neural Networks Through Binomial Regression
Gecynalda Soares S. Gomes; Teresa B. Ludermir
Artificial neural networks are an interesting alternative to classic statistical techniques; however, they have some disadvantages: the training process is long, and the choice of topology and input variables (attributes) is difficult. This work uses three binomial regression models (each with a different link function) to select statistically significant variables to be used as input nodes of each neural network. In this paper, hybrid models were constructed in two steps.
- Data Pre-processing | Pp. 737-745
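As an illustrative sketch of the idea (only the logit link, fitted by Newton-Raphson with Wald z-tests; the paper itself compares three link functions, and no names or defaults below come from it), significance-based input selection might look like:

```python
import numpy as np

def logistic_wald_select(X, y, iters=25):
    """Fit a binomial regression with logit link by Newton-Raphson,
    then keep features whose Wald z-test is significant at the 5% level."""
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # intercept column
    beta = np.zeros(d + 1)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        W = p * (1 - p)                           # variance of Bernoulli
        H = Xb.T @ (Xb * W[:, None])              # Fisher information
        beta += np.linalg.solve(H, Xb.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))       # standard errors
    z = beta / se
    crit = 1.959963984540054                      # two-sided normal, alpha = 0.05
    return [j for j in range(d) if abs(z[j + 1]) > crit]
```

The selected indices would then become the input nodes of the network, mirroring the two-step hybrid construction the abstract describes.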
doi: 10.1007/11893257_83
Automated Parameter Selection for Support Vector Machine Decision Tree
Gyunghyun Choi; Suk Joo Bae
A support vector machine (SVM) provides an optimal separating hyperplane between two classes. However, like a neural network, the SVM gives only recognition results in a black-box structure. As an alternative, the support vector machine decision tree (SVDT) provides useful information on key attributes while retaining many of the advantages of the SVM. We propose an automated parameter selection scheme for the SVDT to improve efficiency and accuracy in classification problems. Two practical applications confirm that the proposed method has potential for improving generalization and classification error in the SVDT.
- Data Pre-processing | Pp. 746-753
doi: 10.1007/11893257_84
Message-Passing for Inference and Optimization of Real Variables on Sparse Graphs
K. Y. Michael Wong; C. H. Yeung; David Saad
Inference and optimization in sparse graphs with real variables are studied using methods of statistical mechanics. Efficient distributed algorithms for the resource allocation problem are devised. Numerical simulations show excellent performance and full agreement with the theoretical results.
- Data Pre-processing | Pp. 754-763
doi: 10.1007/11893257_85
Analysis and Insights into the Variable Selection Problem
Amir F. Atiya
In many large applications a large number of input variables is initially available, and a subset selection step is needed to select the best few to be used in the subsequent classification or regression step. The designer initially screens the inputs for those that have good predictive ability and are not too strongly correlated with the other selected inputs. In this paper, we study how the predictive ability of the inputs, viewed individually, reflects on the performance of the group (i.e., what the chances are that, as a group, they perform well). We also study the effect of “irrelevant” inputs, and develop a formula for the distribution of the change in error due to adding an irrelevant input, which can serve as a useful reference. We also study the role of correlations and their effect on group performance. To study these issues, we first perform a theoretical analysis for the case of linear regression problems, then follow with an empirical study for nonlinear regression models such as neural networks.
- Data Pre-processing | Pp. 764-775
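The phenomenon the abstract analyzes can be demonstrated in a few lines of linear regression (an illustrative sketch, not the paper's derivation): adding a pure-noise column never increases the training error, which is exactly why a reference distribution for that spurious improvement is useful.

```python
import numpy as np

def training_sse(X, y):
    """Training sum of squared errors of an ordinary least-squares fit
    (with an intercept term)."""
    Xb = np.hstack([np.ones((len(y), 1)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    return float(resid @ resid)

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)

base = training_sse(X, y)
# Append a purely irrelevant input: on the training set the fit can only
# improve, so the error change is a small but systematic decrease.
augmented = training_sse(np.hstack([X, rng.standard_normal((200, 1))]), y)
```

Without a reference distribution, this guaranteed decrease can be mistaken for evidence that the new input is predictive.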
doi: 10.1007/11893257_86
Dimensionality Reduction of Protein Mass Spectrometry Data Using Random Projection
Chen Change Loy; Weng Kin Lai; Chee Peng Lim
Protein mass spectrometry (MS) pattern recognition has recently emerged as a new method for cancer diagnosis. Unfortunately, classification performance may degrade owing to the enormously high dimensionality of the data. This paper investigates the use of random projection for dimensionality reduction of protein MS data. The effectiveness of Random Projection (RP) is analyzed and compared against Principal Component Analysis (PCA) using three classification algorithms, namely Support Vector Machine, Feed-forward Neural Networks and K-Nearest Neighbour. Three real-world cancer data sets are employed to evaluate the performance of RP and PCA. In these investigations, the RP method demonstrates better, or at least comparable, classification performance to PCA when the dimensionality of the projection matrix is sufficiently large. The paper also explores the use of RP as a pre-processing step prior to PCA. The results show that, without sacrificing classification accuracy, performing RP prior to PCA significantly improves the computational time.
- Data Pre-processing | Pp. 776-785
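A Gaussian random projection itself is a one-liner; the sketch below (illustrative, not the paper's code) shows the Johnson-Lindenstrauss scaling that keeps pairwise distances approximately intact when the target dimension k is large enough:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project n x d data to n x k with a Gaussian random matrix,
    scaled by 1/sqrt(k) so pairwise distances are preserved in
    expectation (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R
```

Unlike PCA, the projection matrix is data-independent, which is what makes RP cheap enough to use as a pre-processing step before PCA, as the abstract describes.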
doi: 10.1007/11893257_87
Fault Tolerant Training of Neural Networks for Learning Vector Quantization
Takashi Minohara
Learning vector quantization (LVQ) is a neural network model used for complex pattern classifications in which typical feedforward networks do not perform well. Fault tolerance is an important feature of neural networks when they are used in critical applications. Many methods for enhancing the fault tolerance of neural networks have been proposed, but most of them are for feedforward networks; there are scarcely any methods for the fault tolerance of LVQ neural networks. In this paper, I propose a dependability measure for LVQ neural networks, and then present two ideas, border emphasis and the encouragement of coupling, to improve the learning algorithm for increased dependability. The experimental results show that the proposed algorithm trains networks so that they achieve high dependability.
- Data Pre-processing | Pp. 786-795
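For context, a plain LVQ1 training loop, the textbook baseline that dependability-oriented modifications like those in the paper would build on (this is not the author's algorithm), looks like:

```python
import numpy as np

def lvq1_train(X, y, protos, proto_labels, lr=0.1, epochs=20):
    """LVQ1: for each sample, move the nearest prototype toward it if the
    class labels match, and away from it otherwise."""
    P = np.asarray(protos, dtype=float).copy()
    for _ in range(epochs):
        for x, c in zip(X, y):
            j = np.argmin(((P - x) ** 2).sum(axis=1))  # winning prototype
            step = lr if proto_labels[j] == c else -lr
            P[j] += step * (x - P[j])
    return P
```

Classification then assigns each point the label of its nearest prototype, so the placement of prototypes near class borders is what the paper's "border emphasis" idea would manipulate.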
doi: 10.1007/11893257_88
Clustering with a Semantic Criterion Based on Dimensionality Analysis
Wenye Li; Kin-Hong Lee; Kwong-Sak Leung
Considering data processing problems from a geometric point of view, previous work has shown that the intrinsic dimension of the data can carry semantics. In this paper, we start from this inherent topological property and propose using such a semantic criterion for clustering. The corresponding learning algorithms are provided, together with theoretical justification and analysis. Promising results are reported in experiments on which conventional clustering algorithms generally fail.
- Data Pre-processing | Pp. 796-805
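The notion that data has an intrinsic dimension below its ambient dimension can be made concrete with a simple PCA-based estimate (a common baseline offered here only as illustration; the paper's own semantic criterion is a different construction):

```python
import numpy as np

def pca_intrinsic_dim(X, var_kept=0.95):
    """Estimate intrinsic dimension as the number of principal components
    needed to retain `var_kept` of the total variance."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(Xc.T @ Xc / len(X))[::-1]  # descending
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_kept) + 1)
```

Data lying on a low-dimensional subspace of a high-dimensional space is recovered with its true dimension, which is the kind of inherent topological property the abstract refers to.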
doi: 10.1007/11893257_89
Improved Clustering and Anisotropic Gradient Descent Algorithm for Compact RBF Network
Delu Zeng; Shengli Xie; Zhiheng Zhou
In the formulation of a radial basis function (RBF) network, three factors are mainly considered, namely the centers, widths, and weights, all of which significantly affect the performance of the network. Among these three factors, the placement of the centers is shown, theoretically and practically, to be critical. In order to obtain a compact network, this paper presents an improved clustering (IC) scheme to locate the centers. Moreover, since the corresponding widths also affect the performance of the network, a learning algorithm referred to as the anisotropic gradient descent (AGD) method for designing the widths is presented as well. In this paper, the conventional gradient descent method for learning the weights of the network is combined with that for the widths to form an array of coupled recursive equations. The implementation of the proposed algorithm shows that it is as efficient and practical as GGAP-RBF.
- Data Pre-processing | Pp. 806-813
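A stripped-down version of the pipeline the abstract describes: clustering places the centers, a width shapes the basis functions, and the output weights are solved for. Plain k-means, a shared isotropic width, and a least-squares weight solve stand in here for the paper's IC and AGD methods, which are more elaborate:

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Plain k-means to place the RBF centers."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - C) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(axis=0)
    return C

def rbf_design(X, C, width):
    """Gaussian basis matrix Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
    return np.exp(-((X[:, None] - C) ** 2).sum(axis=-1) / (2 * width ** 2))

def rbf_fit(X, y, C, width=0.7):
    """Output weights by linear least squares on the basis matrix."""
    w, *_ = np.linalg.lstsq(rbf_design(X, C, width), y, rcond=None)
    return w
```

Replacing the shared scalar width with per-center, per-dimension widths learned by gradient descent is essentially what makes the paper's AGD method "anisotropic".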
doi: 10.1007/11893257_90
Clustering Massive High Dimensional Data with Dynamic Feature Maps
Rasika Amarasiri; Damminda Alahakoon; Kate Smith-Miles
This paper presents an algorithm based on the Growing Self Organizing Map (GSOM), called the High Dimensional Growing Self Organizing Map with Randomness (HDGSOMr), that can cluster massive high dimensional data efficiently. The original GSOM algorithm is altered to accommodate the issues related to massive high dimensional data. These modifications are presented in detail together with experimental results on a massive real-world dataset.
- Data Pre-processing | Pp. 814-823