Publication catalog - books
Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues: Third International Conference on Intelligent Computing, ICIC 2007, Qingdao, China, August 21-24, 2007, Proceedings
De-Shuang Huang ; Laurent Heutte ; Marco Loog (eds.)
Conference: 3rd International Conference on Intelligent Computing (ICIC), Qingdao, China, August 21-24, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Image Processing and Computer Vision; Pattern Recognition
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-74170-1
Electronic ISBN
978-3-540-74171-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Text Categorization Using Distributional Clustering and Concept Extraction
Yifan He; Minghu Jiang
Text categorization (TC) has become one of the most researched fields in NLP. In this paper, we try to solve the TC problem with a two-step feature selection approach. First, we cluster the words that appear in the texts according to their distribution over categories. Then we extract concepts from these clusters, namely DEF terms in HowNet; the extraction is based on the word clusters rather than on single words. This method maintains the generalization ability of concept-extraction-based TC while making full use of occurrences of new words that are not found in the concept thesaurus. We test the performance of our feature selection method on the Sogou corpus for TC with an SVM classifier. Results of our experiments show that our method can improve the performance of TC in all categories.
- Natural Language Processing and Expert Systems | Pp. 720-729
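The two-step feature selection described in the entry above lends itself to a small illustration. The following sketch is not the authors' pipeline: HowNet concept extraction and the Sogou corpus are replaced by a toy English corpus, a plain k-means step over per-category word distributions, and a linear SVM on cluster-count features.

```python
# Minimal sketch of distributional word clustering followed by SVM text
# categorization. Toy data; not the authors' HowNet/Sogou pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

docs = ["cheap stock market rally", "stock prices fall sharply",
        "team wins football match", "coach praises football team"]
labels = np.array([0, 0, 1, 1])  # 0 = finance, 1 = sports

vocab = sorted({w for d in docs for w in d.split()})
w2i = {w: i for i, w in enumerate(vocab)}

# P(category | word): each word's distribution over categories.
counts = np.zeros((len(vocab), 2))
for d, y in zip(docs, labels):
    for w in d.split():
        counts[w2i[w], y] += 1
word_dist = counts / counts.sum(axis=1, keepdims=True)

# Step 1: cluster words by their category distribution.
n_clusters = 2
clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(word_dist)

# Step 2: represent each document by counts over word clusters, train an SVM.
def doc_features(doc):
    feats = np.zeros(n_clusters)
    for w in doc.split():
        if w in w2i:
            feats[clusters[w2i[w]]] += 1
    return feats

X = np.array([doc_features(d) for d in docs])
clf = LinearSVC().fit(X, labels)
print(clf.predict([doc_features("football match tonight")]))  # should predict class 1
```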
Using Maximum Entropy Model to Extract Protein-Protein Interaction Information from Biomedical Literature
Chengjie Sun; Lei Lin; Xiaolong Wang; Yi Guan
Protein-protein interaction (PPI) information plays a vital role in biological research. This work proposes a two-step machine learning based method to extract PPI information from biomedical literature. Both steps use a Maximum Entropy (ME) model. The first step estimates whether a sentence in the literature contains PPI information. The second step judges whether each protein pair in a sentence interacts. The two steps are combined by adding the outputs of the first step to the model of the second step as features. Experiments show that the method achieves a total accuracy of 81.9% on the BC–PPI corpus and that the outputs of the first step effectively improve the performance of PPI information extraction.
- Natural Language Processing and Expert Systems | Pp. 730-737
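As a rough illustration of the two-step coupling described in the entry above, the sketch below trains two logistic-regression models (a common implementation of a maximum entropy classifier) and feeds the first model's probability into the second as an extra feature. Features and labels are synthetic placeholders, not the BC–PPI corpus.

```python
# Minimal sketch of the two-step maximum-entropy (logistic regression) idea:
# step 1 scores whether a sentence mentions an interaction; that score is
# appended as a feature for step 2, which classifies each protein pair.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1: sentence-level features -> does the sentence contain PPI info?
X_sent = rng.normal(size=(200, 5))
y_sent = (X_sent[:, 0] + X_sent[:, 1] > 0).astype(int)
sent_model = LogisticRegression().fit(X_sent, y_sent)
sent_score = sent_model.predict_proba(X_sent)[:, 1]      # P(sentence has PPI)

# Step 2: pair-level features plus the step-1 score as an extra feature.
X_pair = rng.normal(size=(200, 4))
y_pair = ((X_pair[:, 0] > 0) & (y_sent == 1)).astype(int)
X_pair_aug = np.column_stack([X_pair, sent_score])
pair_model = LogisticRegression().fit(X_pair_aug, y_pair)

print("pair-level training accuracy:", pair_model.score(X_pair_aug, y_pair))
```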
A Decentralized Resource Discovery Based on Keywords Combinations and Node Clusters in Knowledge Grid
Hong Li; Lu Liu
The organization and discovery of grid resources are foundational and key subjects in grid research. Many works in this field have presented solutions to the problem, but few of them focus on knowledge resources. This paper explores how to support users' knowledge requests submitted in the form of multiple keywords. We present a decentralized resource discovery method based on keyword combinations and node clusters. In this method, hot keyword combinations are formed from users' knowledge requests. Grid nodes are then clustered according to these keyword combinations, and knowledge requests are transmitted to the clusters that are most highly correlated with them.
- Intelligent Image/Document Retrievals | Pp. 738-747
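A toy, single-process illustration of the routing idea in the entry above follows; the actual method is decentralized over a knowledge grid, which this sketch does not attempt to reproduce. The node names, keyword sets, and request log are invented for the example.

```python
# Minimal in-memory sketch of routing multi-keyword requests to node clusters
# formed around hot keyword combinations.
from collections import Counter

request_log = [("protein", "interaction"), ("grid", "discovery"),
               ("protein", "interaction"), ("image", "denoising")]

# "Hot" keyword combinations = the most frequent pairs in past requests.
hot = [combo for combo, _ in Counter(request_log).most_common(2)]

# Each grid node advertises the keywords its resources cover.
nodes = {"n1": {"protein", "interaction", "gene"},
         "n2": {"grid", "discovery", "peer"},
         "n3": {"image", "denoising", "wavelet"}}

# Cluster nodes around the hot combinations they fully cover.
clusters = {combo: [n for n, kws in nodes.items() if set(combo) <= kws]
            for combo in hot}

def route(request):
    """Send the request to the clusters with the highest keyword overlap."""
    scored = sorted(clusters.items(),
                    key=lambda kv: len(set(kv[0]) & set(request)), reverse=True)
    return [members for combo, members in scored if set(combo) & set(request)]

print(route(("protein", "interaction")))   # -> [['n1']]
```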
Contourlet Image De-noising Based on Principal Component Analysis
Li Liu; Jianzheng Dun; Lingfeng Meng
This paper proposes a new method that uses noise energy, instead of noise variance, to perform image de-noising based on Principal Component Analysis in the Contourlet domain. The Contourlet transform is a recent extension of the wavelet transform that provides multi-resolution, multi-direction analysis of two-dimensional images. Most existing methods for image de-noising rely on an accurate estimate of the noise variance, which is difficult to obtain in the Contourlet domain. The novelty of this method is that it does not rely on estimating the noise variance, so it has great value in solving real-world problems. We compared this method with the wavelet hard-thresholding and soft-thresholding methods commonly used in image de-noising. The experimental results show that the proposed approach obtains better visual results and higher PSNR values, especially for images dominated by fine textures and contours.
- Intelligent Image/Document Retrievals | Pp. 748-756
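The energy-based PCA idea in the entry above can be illustrated without the Contourlet transform, for which no standard implementation is assumed here. The sketch below applies the same principle, keeping only principal components whose energy exceeds an assumed noise energy, directly to patches of a noisy 2-D array; the threshold value is illustrative, not the paper's.

```python
# Minimal numpy-only sketch of energy-based PCA shrinkage on image patches.
import numpy as np

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)

def pca_denoise(img, patch=8, noise_energy=0.03):
    """Project non-overlapping patches onto their principal components and
    discard components whose energy falls below the assumed noise energy
    (here roughly 3x the 0.01 per-pixel noise variance)."""
    h, w = img.shape
    patches = (img.reshape(h // patch, patch, w // patch, patch)
                  .swapaxes(1, 2).reshape(-1, patch * patch))
    mean = patches.mean(axis=0)
    U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    energy = s ** 2 / patches.shape[0]
    keep = energy > noise_energy                 # energy-based selection
    recon = (U[:, keep] * s[keep]) @ Vt[keep] + mean
    return (recon.reshape(h // patch, w // patch, patch, patch)
                 .swapaxes(1, 2).reshape(h, w))

denoised = pca_denoise(noisy)
print("MSE before:", np.mean((noisy - clean) ** 2),
      "after:", np.mean((denoised - clean) ** 2))
```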
Design of Advanced Block Matching Algorithm by Using RAVR
Hyo-Moon Cho; Jong-Hwa Lee; Myung-Kook Yang; Sang-Bock Cho
The block matching algorithm (BMA) is one of the most important processing steps in video compression. Since sub-pixel motion estimation and motion compensation are needed, the computational complexity of the BMA is high. The sum of absolute differences (SAD) calculation is widely used in BMAs, but it accounts for much of the total computation of video compression, so real-time video compression needs fast motion estimation and motion compensation algorithms based on SAD computation. The partial distortion elimination (PDE) scheme is one of the most advanced methods for reducing the complexity of the SAD computation: if the accumulated SAD exceeds a given reference value, which is a kind of average, the SAD computation is stopped. The main problem with PDE is that a division is required; moreover, PDE becomes more efficient when the initial accumulated SAD is large, so the scan order is also important in the SAD computation. In this paper, we introduce a new average computation method for PDE that requires no division, together with its mathematical model and architecture. The new method is named RAVR (Rough Average). We also propose an advanced scan order for an efficient PDE scheme based on the RAVR concept. Our proposed algorithm combines these two ideas and offers improved SAD performance and easy hardware implementation.
- Intelligent Image/Document Retrievals | Pp. 757-766
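The PDE early-termination idea mentioned in the entry above is easy to show in code. The sketch below accumulates the SAD row by row and abandons a candidate as soon as the partial sum exceeds the best SAD found so far; the paper's RAVR (division-free rough average) bound and its scan-order optimization are not reproduced.

```python
# Minimal sketch of SAD block matching with partial distortion elimination (PDE).
import numpy as np

def sad_pde(block, candidate, best_so_far):
    """Row-wise accumulated SAD with early termination against best_so_far."""
    acc = 0
    for row in range(block.shape[0]):
        acc += np.abs(block[row].astype(int) - candidate[row].astype(int)).sum()
        if acc >= best_so_far:          # partial distortion elimination
            return best_so_far          # this candidate cannot win
    return acc

def full_search(block, ref):
    """Exhaustive search over ref with PDE-accelerated SAD."""
    best, best_mv = np.inf, (0, 0)
    bh, bw = block.shape
    for dy in range(ref.shape[0] - bh + 1):
        for dx in range(ref.shape[1] - bw + 1):
            cost = sad_pde(block, ref[dy:dy + bh, dx:dx + bw], best)
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
block = ref[5:13, 3:11].copy()           # the true motion vector is (5, 3)
print(full_search(block, ref))           # expect ((5, 3), 0)
```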
Face Image Retrieval Method Based on Improved IGA and SVM
Shuo Shi; Jiu-Zhong Li; Li Lin
Face image retrieval is particular in that the target is unknown. If an image that matches certain features, or fits the one in the user's memory, is to be found in a human face image database, the retrieval must reach the global optimum without losing the optimal solution. In this paper, an interactive genetic algorithm (IGA) incorporating an adjustment function and a support vector machine (SVM) is put forward to keep the optimal solution from being lost, speed up convergence, alleviate user fatigue, and improve retrieval performance.
- Intelligent Image/Document Retrievals | Pp. 767-774
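One common way to combine an IGA with an SVM for fatigue reduction is to let the SVM act as a surrogate fitness trained on the few ratings the user provides. The sketch below follows that generic pattern with toy feature vectors and a simulated user; the paper's adjustment function and its exact IGA/SVM coupling are not detailed in the abstract and are not reproduced.

```python
# Minimal sketch of an interactive GA with an SVM surrogate fitness.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
target = rng.normal(size=8)                       # the face the user has in mind

def user_rating(x):
    """Simulated user feedback: higher when x resembles the target."""
    return -np.linalg.norm(x - target)

pop = rng.normal(size=(20, 8))
rated_x, rated_y = [], []

for gen in range(15):
    # The user rates only two individuals per generation (fatigue reduction);
    # an SVM regressor estimates fitness for the rest.
    shown = rng.choice(len(pop), size=2, replace=False)
    for i in shown:
        rated_x.append(pop[i]); rated_y.append(user_rating(pop[i]))
    svm = SVR().fit(rated_x, rated_y)
    fitness = svm.predict(pop)
    # Selection, arithmetic crossover, and mutation.
    parents = pop[np.argsort(fitness)[-10:]]
    children = (parents[rng.integers(0, 10, 10)] + parents[rng.integers(0, 10, 10)]) / 2
    pop = np.vstack([parents, children + 0.1 * rng.normal(size=children.shape)])

best = pop[np.argmax([user_rating(x) for x in pop])]
print("distance to target:", np.linalg.norm(best - target))
```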
A Two-Block Motif Discovery Method with Improved Accuracy
Bin Kuang; Nini Rao
The accuracy of existing methods for two-block motif discovery is usually below 50% and is difficult to increase. Based on a position weight matrix (PWM) model of two-block motifs, this paper proposes an improved Gibbs sampling algorithm that overcomes the local convergence of the original Gibbs sampling algorithm and increases predictive accuracy by introducing a motif base. The feasibility and effectiveness of the new algorithm are verified on real biological data through computer experiments, and the results are compared with other algorithms such as RSAT and AlignACE. The accuracy of the new algorithm exceeds 55% for two-block motifs, which is superior to that of existing methods.
- Intelligent Computing in Bioinformatics | Pp. 775-781
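A compact single-block Gibbs motif sampler over a PWM illustrates the sampling core of the entry above. The paper's method samples two blocks separated by a gap and uses a motif base to escape local convergence; neither extension is reproduced in this sketch, and the sequences are toy data with a planted motif.

```python
# Minimal sketch of a Gibbs motif sampler over a position weight matrix (PWM).
import numpy as np

ALPH = "ACGT"
rng = np.random.default_rng(0)

def pwm_from(sites):
    """Position weight matrix with add-one pseudocounts."""
    counts = np.ones((len(sites[0]), 4))
    for s in sites:
        for j, c in enumerate(s):
            counts[j, ALPH.index(c)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def gibbs_motif(seqs, w, iters=200):
    starts = [rng.integers(0, len(s) - w + 1) for s in seqs]
    for _ in range(iters):
        for i, seq in enumerate(seqs):
            # Build the PWM from all other sequences' current sites.
            others = [seqs[k][starts[k]:starts[k] + w]
                      for k in range(len(seqs)) if k != i]
            pwm = pwm_from(others)
            # Score every possible start in the held-out sequence, then resample.
            scores = np.array([np.prod([pwm[j, ALPH.index(seq[p + j])]
                                        for j in range(w)])
                               for p in range(len(seq) - w + 1)])
            starts[i] = rng.choice(len(scores), p=scores / scores.sum())
    return [s[p:p + w] for s, p in zip(seqs, starts)]

seqs = ["TTTTACGTGGTTT", "GGGACGTGGAAAA", "CCACGTGGTTTTT", "AAAAACGTGGTTT"]
print(gibbs_motif(seqs, w=6))   # should often recover the planted ACGTGG block
```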
Estimating Aging Pattern by Aging Increment Distribution for Re-rendering of Facial Age Effects
Jianyi Liu; Nanning Zheng; Bo Chen; Jishang Wei
Simulating facial aging effects is a challenging task because of the difficulty of understanding and modeling the aging pattern. In this paper, a novel aging model called the Aging Increment Distribution Function is proposed to model age progression in the statistical appearance model space. The trajectories of face samples are learned to build a distribution function of free shape, giving finer resolution for revealing the underlying aging pattern. By modeling the increments of the appearance parameters, an analytical framework is formulated to re-render a given face image at any other age within the age span of the training samples. In the experiments, the MORPH face database is used to train the aging model, which is then applied to re-rendering age effects. Both aging and rejuvenating simulations present effects similar to the real images, which verifies the effectiveness of the proposed method.
- Intelligent Computing in Bioinformatics | Pp. 782-791
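A crude way to picture increment-based age progression, as in the entry above, is to learn mean per-year increments of appearance parameters from training trajectories and accumulate them to reach a target age. The paper models a full distribution of increments rather than the mean and uses the MORPH database; the sketch below uses synthetic trajectories only.

```python
# Minimal sketch of age progression as accumulated increments of
# appearance-model parameters (toy data).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, ages, dim = 30, np.arange(20, 41), 5

# Toy training trajectories: appearance parameters drift with age plus noise.
drift = rng.normal(size=dim)
traj = (rng.normal(size=(n_subjects, 1, dim))
        + drift * (ages - ages[0])[None, :, None]
        + 0.05 * rng.normal(size=(n_subjects, len(ages), dim)))

# Aging increment per one-year step, averaged over subjects.
increments = np.diff(traj, axis=1).mean(axis=0)        # (len(ages)-1, dim)

def rerender(params, from_age, to_age):
    """Shift appearance parameters by the accumulated learned increments."""
    i, j = from_age - ages[0], to_age - ages[0]
    step = increments[min(i, j):max(i, j)].sum(axis=0)
    return params + step if to_age >= from_age else params - step

face_at_25 = traj[0, 5]
print(rerender(face_at_25, 25, 35))    # estimated parameters at age 35
```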
Molecular Cancer Class Discovery Using Non-negative Matrix Factorization with Sparseness Constraint
Xiangzhen Kong; Chunhou Zheng; Yuqiang Wu; Li Shang
In cancer diagnosis and treatment, clustering based on gene expression data has been shown to be a powerful method for cancer class discovery. In this paper, we discuss the use of non-negative matrix factorization with sparseness constraints (NMFSC), a method that can learn a parts-based representation of the data, to analyze gene expression data. We illustrate how to choose appropriate sparseness factors in the algorithm and demonstrate the improvement of NMFSC by direct comparison with non-negative matrix factorization (NMF). In addition, on two well-studied datasets, we obtain essentially the same results as sparse non-negative matrix factorization (SNMF).
- Intelligent Computing in Bioinformatics | Pp. 792-802
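The clustering use of sparse NMF described in the entry above can be sketched with simple multiplicative updates and an L1 penalty on the encoding matrix; note that NMFSC proper uses Hoyer's explicit sparseness projection, which is not reproduced here, and the data below are synthetic rather than the gene-expression benchmarks.

```python
# Minimal sketch of NMF with an L1 sparseness penalty via multiplicative
# updates, used to cluster samples by their dominant metagene.
import numpy as np

rng = np.random.default_rng(0)

# Toy "expression matrix": 100 genes x 20 samples with two underlying classes.
A = np.vstack([np.abs(rng.normal(5, 1, (50, 10))), np.abs(rng.normal(1, 1, (50, 10)))])
B = np.vstack([np.abs(rng.normal(1, 1, (50, 10))), np.abs(rng.normal(5, 1, (50, 10)))])
V = np.hstack([A, B])

def nmf_sparse(V, k=2, lam=0.1, iters=500):
    n, m = V.shape
    W, H = rng.random((n, k)) + 1e-3, rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)   # L1 penalty sparsifies H
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = nmf_sparse(V)
labels = H.argmax(axis=0)          # cluster = dominant metagene per sample
print("cluster labels:", labels)   # samples 0-9 and 10-19 should split apart
```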
The Computation of Atrial Fibrillation Chaos Characteristics Based on Wavelet Analysis
Jianrong Hou; Hui Zhao; Dan Huang
Atrial fibrillation data series show nonlinear and chaotic characteristics in their spatio-temporal dynamical evolution. When the fractal dimension of atrial fibrillation chaos is unknown, similarity queries over diagnostic curves are affected to a certain degree. In this paper, an evaluation formula for a time-varying index is established by wavelet analysis, and an algorithm for the time-varying index is presented and applied to extract the characteristics of atrial fibrillation. The diagnosis of the atrial fibrillation curve can then be performed at a given resolution level. The results show that the time-varying fractal dimension rises when atrial fibrillation begins and falls when it ends, so the onset and termination of atrial fibrillation can be detected from changes in the time-varying fractal dimension. The results also indicate that the complexity of heart rate variability (HRV) decreases at the onset of atrial fibrillation. The effectiveness of the method is validated on an HRV example.
- Intelligent Computing in Bioinformatics | Pp. 803-809
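A rough stand-in for the wavelet-based time-varying index in the entry above is to estimate, in a sliding window, the slope of log detail-coefficient energy across scales (using PyWavelets). The mapping from this scaling exponent to a fractal dimension and the atrial fibrillation data themselves are not included; the series below is synthetic.

```python
# Minimal sketch of a time-varying scaling exponent from wavelet detail
# energies in a sliding window.
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Toy HRV-like series: smoother first half, more irregular second half.
x = np.concatenate([np.cumsum(rng.normal(size=512)),    # smoother (random walk)
                    rng.normal(size=512)])               # rougher (white noise)

def scaling_exponent(window, wavelet="db4", levels=5):
    """Slope of log2 detail energy vs. scale over the window."""
    coeffs = pywt.wavedec(window, wavelet, level=levels)
    energies = [np.mean(c ** 2) for c in coeffs[1:]]     # details, coarse -> fine
    scales = np.arange(levels, 0, -1)                    # wavedec lists coarse first
    slope, _ = np.polyfit(scales, np.log2(energies), 1)
    return slope

win = 256
exponents = [scaling_exponent(x[i:i + win]) for i in range(0, len(x) - win, 64)]
print(np.round(exponents, 2))   # the exponent shifts where the dynamics change
```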