Publications catalog - books

Biological and Medical Data Analysis: 7th International Symposium, ISBMDA 2006, Thessaloniki, Greece, December 7-8, 2006. Proceedings

Nicos Maglaveras ; Ioanna Chouvarda ; Vassilis Koutkias ; Rüdiger Brause (eds.)

Conference: 7th International Symposium on Biological and Medical Data Analysis (ISBMDA), Thessaloniki, Greece, December 7-8, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Biomedicine general; Data Mining and Knowledge Discovery; Artificial Intelligence (incl. Robotics); Information Storage and Retrieval; Probability and Statistics in Computer Science; Computational Biology/Bioinformatics

Availability

Institution detected: not detected
Year of publication: 2006
Available at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-68063-5

Electronic ISBN

978-3-540-68065-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

A Novel Algorithm for Segmentation of Lung Images

Aamir Saeed Malik; Tae-Sun Choi

Several image segmentation techniques applied in the medical domain have been presented in the literature. However, there are few multiscale segmentation methods that can segment a medical image so that the various components within the image are separated at multiple scales. In this paper, we present a new segmentation method based on an optical transfer function implemented in the frequency domain. With this new segmentation technique, we demonstrate that High Resolution Computed Tomographic (HRCT) images can be segmented into their various components at multiple scales, thereby separating the information available in the HRCT image. We show that the HRCT image can be segmented such that we obtain separate images for bones, tissues, lungs, and anatomical structures within the lungs. The processing is done in the frequency domain using the Fast Fourier Transform and the Discrete Cosine Transform. Further, we propose an algorithm for extracting anatomical structures from the segmented image.

- Biomedical Image Analysis and Visualisation Techniques | Pp. 346-357
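
As a rough illustration of the multiscale idea in this abstract, the sketch below separates an image into coarse, medium, and fine components by masking frequency bands after an FFT. The binary band masks and cutoff values are our own assumptions; they are not the authors' optical transfer function, and the DCT step is omitted.

```python
# Illustrative only: multiscale separation via frequency-domain band-pass
# masks. Cutoffs (cycles/pixel) are arbitrary placeholder values.
import numpy as np

def bandpass_component(image, low_cut, high_cut):
    """Keep spatial frequencies whose normalized radius is in [low_cut, high_cut)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    mask = (radius >= low_cut) & (radius < high_cut)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Decompose a stand-in "HRCT slice" (random here) into three scales.
img = np.random.rand(256, 256)
coarse, medium, fine = (bandpass_component(img, lo, hi)
                        for lo, hi in [(0.0, 0.02), (0.02, 0.1), (0.1, 0.71)])
```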

An Evaluation of Image Compression Algorithms for Colour Retinal Images

Gerald Schaefer; Roman Starosolski

Diabetic retinopathy is the leading cause of blindness in the adult population. Mass-screening efforts, during which high-resolution images of the retina are captured, are therefore underway in order to detect the disease in its early stages. In this paper we evaluate the compression performance of several lossless image compression algorithms that could be employed in a retinal Picture Archiving and Communications System to lessen the demand on computing resources. The algorithms we analyse are TIFF PackBits, Lossless JPEG, JPEG-LS, and JPEG2000, all of which are incorporated in the current DICOM standard, together with the non-standard CALIC algorithm for benchmark comparison. Compression performance is evaluated in terms of compression ratio, compression speed, and decompression speed. Based on a large dataset of more than 800 colour retinal images, divided into groups according to retinal region (nasal, posterior, and temporal) and image size, JPEG-LS is found to be the most suitable compression algorithm, offering good compression ratios combined with high compression and decompression speed. As a second set of experiments shows, compression ratios can be further improved through the application of a reversible colour space transformation prior to compression.

- Biomedical Image Analysis and Visualisation Techniques | Pp. 358-367
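
The evaluation protocol itself (a lossless round trip measured by compression ratio, compression speed, and decompression speed) is easy to reproduce. The sketch below uses zlib purely as a stand-in codec, since the DICOM codecs named in the abstract require external libraries, and the payload is synthetic rather than a retinal image.

```python
# Benchmark harness for a lossless codec: ratio plus timings, with a
# round-trip check. zlib is a stand-in, not one of the evaluated codecs.
import time
import zlib

def benchmark(name, compress, decompress, raw):
    t0 = time.perf_counter(); packed = compress(raw)
    t1 = time.perf_counter(); restored = decompress(packed)
    t2 = time.perf_counter()
    assert restored == raw, "codec must be lossless"
    print(f"{name}: ratio={len(raw) / len(packed):.2f}  "
          f"compress={t1 - t0:.3f}s  decompress={t2 - t1:.3f}s")

# Synthetic stand-in payload for a colour retinal image.
raw = bytes(range(256)) * 4096
benchmark("zlib (stand-in codec)", zlib.compress, zlib.decompress, raw)
```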

An Automated Model for Rapid and Reliable Segmentation of Intravascular Ultrasound Images

Eirini Parissi; Yiannis Kompatsiaris; Yiannis S. Chatzizisis; Vassilis Koutkias; Nicos Maglaveras; M. G. Strintzis; George D. Giannoglou

The detection of lumen and media-adventitia borders in intravascular ultrasound (IVUS) images constitutes a necessary step for accurate morphometric analyses of coronary plaques and, accordingly, for the assessment of atherosclerotic lesion length. To tackle this issue, an automated model for lumen and media-adventitia border detection is presented, based on active contour models. The proposed approach enables extraction of the corresponding boundaries in sequential IVUS frames by applying an iterative procedure, in which the initialization of the two contours in each frame is performed automatically, based on the segmentation of the previous frame. This procedure is implemented through a user-friendly interface, permitting the interaction of the user when needed. The in vivo application and evaluation of our model on sequential IVUS images indicated that the proposed approach is capable of accurately and rapidly segmenting hundreds of IVUS images.

- Biomedical Image Analysis and Visualisation Techniques | Pp. 368-377
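
A minimal sketch of the frame-to-frame propagation scheme, assuming scikit-image's generic active contour in place of the authors' model; the Gaussian smoothing and snake parameters are placeholders, and the frames here are random stand-ins for IVUS data.

```python
# Each frame's snake is initialized from the contour found in the
# previous frame, mirroring the iterative procedure described above.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def initial_circle(center, radius, n=200):
    t = np.linspace(0, 2 * np.pi, n)
    return np.column_stack([center[0] + radius * np.sin(t),
                            center[1] + radius * np.cos(t)])

def segment_sequence(frames, init_contour):
    contours, snake = [], init_contour
    for frame in frames:                       # frame: 2-D grayscale image
        snake = active_contour(gaussian(frame, sigma=3), snake,
                               alpha=0.015, beta=10, gamma=0.001)
        contours.append(snake)                 # result seeds the next frame
    return contours

frames = [np.random.rand(256, 256) for _ in range(5)]
lumen_contours = segment_sequence(frames, initial_circle((128, 128), 60))
```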

Supervised Neuro-fuzzy Clustering for Life Science Applications

Jürgen Paetz

Classification, clustering, and rule generation are important tasks in multidimensional data analysis. Combining clustering or classification with rule generation provides an explanation for the achieved results. Especially in life science applications, experts are interested in explanations in order to understand the underlying data. Supervised neuro-fuzzy systems are a suitable approach for this combined task. However, classification labels are not always available for the data when considering new problem areas in life science. Since we had already used a supervised neuro-fuzzy system for some applications, our aim in the case studies was to use the same neuro-fuzzy classifier for clustering, generating understandable rules for clusters as well. To do so, we added Monte-Carlo random data to the original data and performed the clustering task with the existing classifier in the medical, chemical, and biological domains.

- Biomedical Data Analysis and Interpretation | Pp. 378-389
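
The Monte-Carlo device mentioned at the end of the abstract, turning clustering into classification by adding uniform random background data, can be sketched as follows. A decision tree is substituted here for the neuro-fuzzy system, with its paths standing in for the generated rules; data and parameters are invented.

```python
# Real points get label 1; uniform Monte-Carlo background gets label 0.
# A rule-producing classifier trained on this mix delineates the clusters.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
real = np.vstack([rng.normal([0, 0], 0.5, size=(100, 2)),
                  rng.normal([5, 5], 0.5, size=(100, 2))])
lo, hi = real.min(axis=0), real.max(axis=0)
noise = rng.uniform(lo, hi, size=(200, 2))            # Monte-Carlo background

X = np.vstack([real, noise])
y = np.r_[np.ones(len(real)), np.zeros(len(noise))]
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2"]))  # readable cluster rules
```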

Study on Preprocessing and Classifying Mass Spectral Raw Data Concerning Human Normal and Disease Cases

Xenofon E. Floros; George M. Spyrou; Konstantinos N. Vougas; George T. Tsangaris; Konstantina S. Nikita

Mass spectrometry is becoming an important tool in the biological sciences. Tissue samples or easily obtained biological fluids (serum, plasma, urine) are analysed by a variety of mass spectrometry methods, producing spectra characterized by very high dimensionality and a high level of noise. Here we present a feature extraction method for mass spectra which consists of two main steps. In the first step, an algorithm for low-level preprocessing of mass spectra is applied, including denoising with the Shift-Invariant Discrete Wavelet Transform (SIDWT), smoothing, baseline correction, peak detection, and normalization of the resulting peak lists. After this step, we claim to have reduced the dimensionality and redundancy of the initial mass spectra representation while keeping all the meaningful features (potential biomarkers) required for disease-related proteomic patterns to be identified. In the second step, the peak lists are aligned and fed to a Support Vector Machine (SVM), which classifies the mass spectra. This procedure was applied to SELDI-QqTOF spectral data collected from normal and ovarian cancer serum samples. The classification performance was assessed for different values of the parameters involved in the feature extraction pipeline. The method described here for low-level preprocessing of mass spectra results in 98.3% sensitivity, 98.3% specificity, and an AUC (Area Under Curve) of 0.981 in spectra classification.

- Biomedical Data Analysis and Interpretation | Pp. 390-401
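
A condensed sketch of the two-step pipeline: stationary (shift-invariant) wavelet denoising via PyWavelets, a rolling-minimum baseline, peak detection, binning onto a fixed grid as a crude substitute for peak-list alignment, and a linear SVM. Every threshold and parameter below is an illustrative guess, not the paper's setting, and the spectra are random stand-ins for SELDI-QqTOF data.

```python
import numpy as np
import pywt
from scipy.ndimage import minimum_filter1d
from scipy.signal import find_peaks
from sklearn.svm import SVC

def peak_features(spectrum, n_bins=64):
    # Step 1a: SIDWT-style denoising (soft-threshold the detail coefficients).
    coeffs = pywt.swt(spectrum, "db4", level=3)
    thr = np.median(np.abs(coeffs[-1][1])) / 0.6745     # noise scale estimate
    coeffs = [(a, pywt.threshold(d, thr, "soft")) for a, d in coeffs]
    smooth = pywt.iswt(coeffs, "db4")
    # Step 1b: baseline correction, peak detection, normalization.
    corrected = smooth - minimum_filter1d(smooth, size=101)
    peaks, _ = find_peaks(corrected, prominence=corrected.std())
    vec = np.zeros(n_bins)                              # fixed grid ~ "alignment"
    np.add.at(vec, peaks * n_bins // len(spectrum), corrected[peaks])
    return vec / (vec.sum() + 1e-12)

# Step 2: SVM on the peak vectors (random spectra and labels here).
rng = np.random.default_rng(1)
X = np.array([peak_features(rng.random(1024)) for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="linear").fit(X, y)
```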

Non-repetitive DNA Sequence Compression Using Memoization

K. G. Srinivasa; M. Jagadish; K. R. Venugopal; L. M. Patnaik

With the increasing number of DNA sequences being discovered, the problem of storing and using genomic databases has become vital. Since DNA sequences consist of only four letters, two bits are sufficient to store each base. Many algorithms have been proposed in the recent past that push the bits/base limit further. The subtle patterns in DNA, along with statistical inferences, have been exploited to increase the compression ratio. From the compression perspective, DNA sequences can be considered to be made of two types of subsequences: repetitive and non-repetitive. The repetitive parts are compressed using dictionary-based schemes, and the non-repetitive sequences are usually compressed using general text compression schemes. In this paper, we present a memoization-based encoding scheme for non-repetitive DNA sequences. This scheme is incorporated into a DNA-specific compression algorithm used for the compression of DNA sequences. The results show that our method performs noticeably better than other techniques of its kind.

- Biomedical Data Analysis and Interpretation | Pp. 402-412
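
The abstract leaves out the details of the scheme, so the toy below only illustrates the general idea of memoization on top of 2-bit base packing: fragments already encoded once are served from a cache. It is our own construction, not the paper's method.

```python
# 2 bits/base packing with a memoized fragment encoder (functools cache).
from functools import lru_cache

BASE2BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

@lru_cache(maxsize=None)
def encode_fragment(fragment: str) -> bytes:
    """Pack one fragment at 2 bits/base; repeated fragments are cache hits."""
    bits = 0
    for base in fragment:
        bits = (bits << 2) | BASE2BITS[base]
    return bits.to_bytes((2 * len(fragment) + 7) // 8, "big")

def encode(sequence: str, k: int = 8) -> bytes:
    return b"".join(encode_fragment(sequence[i:i + k])
                    for i in range(0, len(sequence), k))

print(encode("ACGTACGTACGTACGT").hex())   # second fragment served from cache
print(encode_fragment.cache_info())
```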

Application of Rough Sets Theory to the Sequential Diagnosis

Andrzej Zolnierek

The sequential classification task is typical in medical diagnosis, where investigations of the patient's state are repeated several times. Such a situation takes place in monitoring the efficacy of drug therapy. In this paper, methods of sequential classification using rough sets theory are developed and evaluated. The proposed algorithms use the set of learning sequences to calculate the lower and upper approximations of the set of proper decision formulas and then use them to make the final decision. Depending on the input data, different algorithms are derived. All presented algorithms were then applied in computer-aided recognition of the human acid-base balance state, and the results of a comparative experimental analysis with respect to classification accuracy are presented and discussed.

- Biomedical Data Analysis and Interpretation | Pp. 413-422
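
The rough-set notions the paper relies on, lower and upper approximations of a concept under the indiscernibility relation induced by a set of attributes, fit in a few lines; the acid-base attribute values below are invented for illustration.

```python
from collections import defaultdict

def approximations(records, attrs, concept):
    """Lower/upper approximation of `concept`, a set of record indices."""
    blocks = defaultdict(set)              # indiscernibility classes
    for i, rec in enumerate(records):
        blocks[tuple(rec[a] for a in attrs)].add(i)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= concept:
            lower |= block                 # certainly in the concept
        if block & concept:
            upper |= block                 # possibly in the concept
    return lower, upper

patients = [{"pH": "low", "pCO2": "high", "dx": "acidosis"},
            {"pH": "low", "pCO2": "high", "dx": "normal"},
            {"pH": "high", "pCO2": "low", "dx": "alkalosis"}]
concept = {i for i, p in enumerate(patients) if p["dx"] == "acidosis"}
print(approximations(patients, ("pH", "pCO2"), concept))
# rows 0 and 1 are indiscernible -> lower = set(), upper = {0, 1}
```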

Data Integration in Multi-dimensional Data Sets: Informational Asymmetry in the Valid Correlation of Subdivided Samples

Qing T. Zeng; Juan Pablo Pratt; Jane Pak; Eun-Young Kim; Dino Ravnic; Harold Huss; Steven J. Mentzer

Background: Flow cytometry is the only currently available high-throughput technology that can measure multiple physical and molecular characteristics of individual cells. It is common in flow cytometry to measure a relatively large number of characteristics or features by performing separate experiments on subdivided samples. Correlating data from multiple experiments using certain shared features (e.g. cell size) could provide useful information on the combination pattern of the features that are not shared. Such correlations, however, are not always reliable. Methods: We developed a method to assess correlation reliability by estimating the percentage of cells that can be unambiguously correlated between two samples. This method was evaluated using 81 pairs of subdivided samples of microspheres (artificial cells) with known molecular characteristics. Results: A strong correlation (R=0.85) was found between the estimated and the actual percentage of unambiguous correlation. Conclusion: The correlation reliability measure we developed can be used to support data integration across experiments on subdivided samples.

- Biomedical Data Analysis and Interpretation | Pp. 423-432
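
One plausible reading of the reliability estimate, assuming a single shared scalar feature and a matching tolerance (both our assumptions): an event in sample A is unambiguously correlated if exactly one event in sample B falls within the tolerance.

```python
import numpy as np

def unambiguous_fraction(shared_a, shared_b, tol):
    """Fraction of events in A with exactly one candidate match in B."""
    shared_b = np.sort(shared_b)
    lo = np.searchsorted(shared_b, shared_a - tol, side="left")
    hi = np.searchsorted(shared_b, shared_a + tol, side="right")
    return np.mean((hi - lo) == 1)

rng = np.random.default_rng(2)
a, b = rng.normal(10, 2, 500), rng.normal(10, 2, 500)   # e.g. cell size
print(f"{unambiguous_fraction(a, b, tol=0.05):.1%} unambiguously correlated")
```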

Two-Stage Classifier for Diagnosis of Hypertension Type

Michal Wozniak

The inductive learning approach can be immensely useful as a method for generating effective classifiers. This paper presents the idea of constructing a two-stage classifier for diagnosing the type of hypertension (essential hypertension and five types of secondary hypertension: fibroplastic renal artery stenosis, atheromatous renal artery stenosis, Conn’s syndrome, renal cystic disease, and pheochromocytoma). The first stage decides whether the patient suffers from essential or secondary hypertension. This decision is based on the output of a classifier obtained by a boosted version of an additive tree algorithm. The second stage decides which type of secondary hypertension the patient is suffering from, making its decision using human expert rules. The decisions of these classifiers are made only on the basis of blood pressure, general information, and basic biochemical data.

- Decision Support Systems and Diagnostic Tools | Pp. 433-440
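
A skeleton of the two-stage design. AdaBoost over decision trees stands in for the boosted additive-tree classifier of the first stage, and the expert rule shown is a made-up placeholder, not medical knowledge taken from the paper; features and training data are synthetic.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Stage 1: boosted tree ensemble, essential (0) vs. secondary (1).
rng = np.random.default_rng(3)
X, y = rng.random((100, 5)), rng.integers(0, 2, size=100)   # toy data
stage1 = AdaBoostClassifier(n_estimators=50).fit(X, y)

# Stage 2: hand-written expert rules pick the secondary subtype.
def expert_rules(labs):
    if labs["K"] < 3.5 and labs["aldosterone"] > 15:        # placeholder rule
        return "Conn's syndrome"
    return "other secondary hypertension"

def diagnose(features, labs):
    if stage1.predict(features.reshape(1, -1))[0] == 0:
        return "essential hypertension"
    return expert_rules(labs)

print(diagnose(rng.random(5), {"K": 3.1, "aldosterone": 22.0}))
```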

Handwriting Analysis for Diagnosis and Prognosis of Parkinson’s Disease

Atilla Ünlü; Rüdiger Brause; Karsten Krakow

At present, there are no quantitative, objective methods for diagnosing Parkinson's disease. Existing methods of quantitative analysis based on myograms suffer from inaccuracy and strain on the patient; electronic tablet analysis is limited to the visible drawing and does not capture writing forces and hand movements. In this paper we show how handwriting analysis can be carried out with a new electronic pen and new features derived from the recorded signals. This yields good results for diagnostics.

- Decision Support Systems and Diagnostic Tools | Pp. 441-450
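
The abstract does not name the new signal features, so the extraction below is hypothetical: basic force statistics plus a dominant tremor frequency from a writing-pressure recording, with the sampling rate and feature set assumed by us.

```python
import numpy as np

def pen_features(force, fs=200.0):
    """force: 1-D writing-pressure signal sampled at fs Hz (assumed)."""
    spectrum = np.abs(np.fft.rfft(force - force.mean()))
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    return {
        "mean_force": float(force.mean()),
        "force_cv": float(force.std() / (force.mean() + 1e-12)),
        "tremor_hz": float(freqs[spectrum.argmax()]),   # dominant oscillation
    }

# Synthetic 5 Hz oscillation on a constant grip force.
t = np.arange(0, 2, 1 / 200.0)
print(pen_features(1.5 + 0.2 * np.sin(2 * np.pi * 5.0 * t)))
```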