Publication catalogue - books



Open Access Title

Cloud-Based Benchmarking of Medical Image Analysis

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Medical imaging

Availability

Detected institution | Publication year | Source | Access
Not required | 2017 | Directory of Open Access Books | open access
Not required | 2017 | SpringerLink | open access

Information

Resource type:

books

Print ISBN

978-3-319-49642-9

Electronic ISBN

978-3-319-49644-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Subject coverage

Table of contents

Automatic Multiorgan Segmentation Using Hierarchically Registered Probabilistic Atlases

Razmig Kéchichian; Sébastien Valette; Michel Desvignes

We propose a generic method for the automatic multiple-organ segmentation of 3D images based on a multilabel graph cut optimization approach which uses location likelihood of organs and prior information of spatial relationships between them. The latter is derived from shortest-path constraints defined on the adjacency graph of structures and the former is defined by probabilistic atlases learned from a training dataset. Organ atlases are mapped to the image by a fast (2+1)D hierarchical registration method based on SURF keypoints. Registered atlases are also used to derive organ intensity likelihoods. Prior and likelihood models are then introduced in a joint centroidal Voronoi image clustering and graph cut multiobject segmentation framework. Qualitative and quantitative evaluation has been performed on contrast-enhanced CT and MR images from the VISCERAL dataset.

Part IV - VISCERAL Anatomy Participant Reports | Pp. 185-201
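The energy formulation described in the abstract — atlas-derived location likelihoods combined with a spatial smoothness prior in a multilabel graph cut — can be illustrated on a toy 1D problem. This is a minimal sketch under stated assumptions, not the authors' implementation: the toy atlas probabilities, the Potts weight, and the brute-force minimizer (standing in for the graph cut solver) are all illustrative.

```python
import itertools
import math

def mrf_energy(labels, unary, potts_weight):
    """Multilabel MRF energy on a 1D chain of voxels: atlas-derived
    unary costs plus a Potts penalty for each change of label."""
    data = sum(unary[i][l] for i, l in enumerate(labels))
    smooth = potts_weight * sum(a != b for a, b in zip(labels, labels[1:]))
    return data + smooth

def minimize(unary, potts_weight):
    """Brute-force minimization over all labelings (a stand-in for graph
    cut optimization; feasible only for tiny toy problems)."""
    n_sites, n_labels = len(unary), len(unary[0])
    return list(min(itertools.product(range(n_labels), repeat=n_sites),
                    key=lambda lab: mrf_energy(lab, unary, potts_weight)))

# Toy "probabilistic atlas": p(organ | voxel) for 5 voxels and 2 labels,
# converted to negative log-likelihood unary costs.
probs = [[0.9, 0.1], [0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.1, 0.9]]
unary = [[-math.log(p) for p in row] for row in probs]
print(minimize(unary, potts_weight=0.5))  # -> [0, 0, 0, 1, 1]
```

The Potts term discourages extra organ boundaries, so the ambiguous middle voxel is resolved by its neighbours rather than by its (uninformative) atlas probability.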

Multiatlas Segmentation Using Robust Feature-Based Registration

Frida Fejne; Matilda Landgren; Jennifer Alvén; Johannes Ulén; Johan Fredriksson; Viktor Larsson; Olof Enqvist; Fredrik Kahl

This paper presents a pipeline that uses a multiatlas approach for multiorgan segmentation in whole-body CT images. In order to obtain accurate registrations between the target and the atlas images, we develop an adapted feature-based method that uses organ-specific features. These features are learnt during an offline preprocessing step, and thus the algorithm still benefits from the speed of feature-based registration methods. The feature sets are then used to obtain pairwise non-rigid transformations, followed by a thin-plate spline refinement. The fusion of the transferred atlas labels is performed using a random forest classifier, and finally the segmentation is obtained using graph cuts with a Potts model as the interaction term. Our pipeline was evaluated on 20 organs in 10 whole-body CT images at the VISCERAL Anatomy Challenge, held in conjunction with the International Symposium on Biomedical Imaging, Brooklyn, New York, in April 2015. It performed best on the majority of the organs with respect to the Dice index.

Part IV - VISCERAL Anatomy Participant Reports | Pp. 203-218
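The label-fusion step of a multiatlas pipeline — combining the labels transferred from several registered atlases into one segmentation — can be sketched on toy data. The paper fuses labels with a random forest classifier; the simple per-voxel majority vote below is an illustrative stand-in for that step, and the voxel data are invented.

```python
from collections import Counter

def fuse_labels(transferred):
    """Per-voxel fusion of labels transferred from several registered
    atlases. A plain majority vote stands in here for the random forest
    classifier used in the paper."""
    n_voxels = len(transferred[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(atlas_labels[v] for atlas_labels in transferred)
        fused.append(votes.most_common(1)[0][0])  # most frequent label wins
    return fused

# Labels for five voxels transferred from three registered atlases
# (0 = background, 1 = organ); disagreements are settled by the vote.
atlases = [
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
]
print(fuse_labels(atlases))  # -> [0, 1, 1, 1, 0]
```

A learned classifier improves on this by weighting atlases and local image evidence instead of counting each transferred label equally.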

Combining Radiology Images and Clinical Metadata for Multimodal Medical Case-Based Retrieval

Oscar Jimenez-del-Toro; Pol Cirujeda; Henning Müller

As part of their daily workload, clinicians examine patient cases in the process of formulating a diagnosis. The large multimodal patient datasets stored in hospitals could help in retrieving relevant information for a differential diagnosis, but they are currently not fully exploited. The VISCERAL Retrieval Benchmark organized a medical case-based retrieval algorithm evaluation using multimodal (text and visual) data from radiology reports. The common dataset contained patient CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) scans and RadLex anatomy–pathology term lists from the radiology reports. A content-based retrieval method for medical cases that uses both textual and visual features is presented. It defines a weighting scheme that combines the anatomical and clinical correlations of the RadLex terms with local texture features obtained from the region of interest in the query cases. The visual features are computed using a 3D Riesz wavelet texture analysis performed on a common spatial domain to compare the images in the analogous anatomical regions of interest in the dataset images. The proposed method obtained the best mean average precision in 6 out of 10 topics and the highest number of relevant cases retrieved in the benchmark. Given its robust results across various pathologies, it could be developed further to perform medical case-based retrieval on large multimodal clinical datasets.

Part V - VISCERAL Retrieval Participant Reports | Pp. 221-236
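The core idea of the chapter — scoring each candidate case by combining RadLex term correlations with visual texture similarity — can be sketched as a simple late fusion. This is a toy illustration, not the authors' weighting scheme: the Jaccard term overlap, the fixed weight `w`, the case names, and the precomputed visual similarities are all assumptions made for the example.

```python
def radlex_similarity(terms_a, terms_b):
    """Jaccard overlap of RadLex anatomy-pathology term sets (a simple
    stand-in for the paper's anatomical/clinical term weighting)."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_score(text_sim, visual_sim, w=0.5):
    """Late fusion: weighted sum of the textual and visual similarities."""
    return w * text_sim + (1 - w) * visual_sim

# Hypothetical query and candidate cases: each case carries its RadLex
# terms and a precomputed visual (texture) similarity to the query.
query_terms = ["liver", "lesion", "cyst"]
cases = {
    "case1": (["liver", "lesion"], 0.8),
    "case2": (["lung", "nodule"], 0.9),
}
ranked = sorted(
    cases,
    key=lambda c: combined_score(
        radlex_similarity(query_terms, cases[c][0]), cases[c][1]),
    reverse=True)
print(ranked)  # -> ['case1', 'case2']
```

Even though case2 has the higher visual similarity, the shared anatomy–pathology terms pull case1 to the top — the behaviour the multimodal weighting is designed to produce.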

Text- and Content-Based Medical Image Retrieval in the VISCERAL Retrieval Benchmark

Fan Zhang; Yang Song; Weidong Cai; Adrien Depeursinge; Henning Müller

Text- and content-based retrieval are the most widely used approaches for medical image retrieval. They capture the similarity between images from different perspectives: text-based methods rely on manual textual annotations or captions associated with the images; content-based approaches are based on the visual content of the images themselves, such as colours and textures. Text-based retrieval can better meet the high-level expectations of humans but is limited by time-consuming annotation. Content-based retrieval can automatically extract visual features for high-throughput processing; however, its performance is less favourable than that of text-based approaches due to the gap between low-level visual features and high-level human expectations. In this chapter, we present the participation of our joint USYD/HES-SO research team in the VISCERAL Retrieval task. Five different methods are introduced: two are based on the anatomy–pathology terms, two are based on the visual image content, and the last one is based on the fusion of the aforementioned methods. The comparison of the different methods indicated that the text-based methods outperformed both the content-based retrieval and the fusion of text and visual content, and generated the best performance overall.

Part V - VISCERAL Retrieval Participant Reports | Pp. 237-249
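The chapter's fifth method fuses the text-based and content-based rankings. The exact fusion scheme is not specified in this abstract; reciprocal rank fusion (RRF), shown below, is one standard late-fusion technique used here purely as an illustration, with invented case IDs.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of case IDs with reciprocal rank fusion:
    each list contributes 1 / (k + rank) to a case's score. This is an
    illustrative fusion scheme, not necessarily the chapter's method."""
    scores = {}
    for ranking in rankings:
        for rank, case_id in enumerate(ranking, start=1):
            scores[case_id] = scores.get(case_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical output of a text-based and a content-based retrieval run.
text_ranking = ["caseA", "caseB", "caseC"]
visual_ranking = ["caseB", "caseD", "caseA"]
print(reciprocal_rank_fusion([text_ranking, visual_ranking]))
# -> ['caseB', 'caseA', 'caseD', 'caseC']
```

Cases ranked well by both modalities (here caseB) rise to the top, which is the intended effect of any such late fusion; the chapter's finding that pure text retrieval still won shows fusion is not guaranteed to help.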