Publications catalog - books



Document Analysis Systems VII: 7th International Workshop, DAS 2006, Nelson, New Zealand, February 13-15, 2006, Proceedings

Horst Bunke; A. Lawrence Spitz (eds.)

In conference: 7th International Workshop on Document Analysis Systems (DAS). Nelson, New Zealand. February 13-15, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Database Management; Pattern Recognition; Information Storage and Retrieval; Image Processing and Computer Vision; Simulation and Modeling; Computer Appl. in Administrative Data Processing

Availability

Detected institution: Not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-32140-8

Electronic ISBN

978-3-540-32157-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Retrieval from Document Image Collections

A. Balasubramanian; Million Meshesha; C. V. Jawahar

This paper presents a system for retrieving relevant documents from large document image collections. We achieve effective search and retrieval from a large collection of printed document images by matching image features at the word level. For representing the words, profile-based and shape-based features are employed. A novel DTW-based partial matching scheme handles morphologically variant words, which is useful for grouping similar words together during the indexing process. The system supports cross-lingual search using OM-Trans transliteration and a dictionary-based approach. System-level issues for retrieval (e.g., scalability and effective delivery) are also addressed.

- Session 1: Digital Libraries | Pp. 1-12
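
The core of the matching step is dynamic time warping over word-level feature sequences. The sketch below is a minimal illustration of that idea, assuming simple 1-D projection profiles as features and a crude partial-match rule (letting the target word run past the matched stem); the paper's actual features and partial-matching scheme are richer.

```python
import numpy as np

def dtw_partial(query, target):
    """DTW between two 1-D profile sequences.

    Returns the standard DTW cost plus a 'partial' cost that lets the
    target word continue past the matched stem (min over the last row),
    a rough stand-in for the paper's partial-matching scheme.
    """
    q, t = np.asarray(query, float), np.asarray(target, float)
    n, m = len(q), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    full = D[n, m] / (n + m)        # length-normalised full-word match
    partial = D[n, 1:].min() / n    # query consumed; target may continue
    return full, partial

# toy example: vertical projection profiles of two word images,
# the second being the same stem plus a suffix
profile_a = [0, 3, 8, 8, 2, 0, 5, 9, 4, 0]
profile_b = [0, 3, 8, 8, 2, 0, 5, 9, 4, 0, 6, 6, 1]
print(dtw_partial(profile_a, profile_b))
```

On such morphological variants the partial cost stays near zero while the full cost grows with the unmatched suffix, which is what makes partial matching useful for grouping word forms at indexing time.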

A Semi-automatic Adaptive OCR for Digital Libraries

Sachin Rawat; K. S. Sesh Kumar; Million Meshesha; Indraneel Deb Sikdar; A. Balasubramanian; C. V. Jawahar

This paper presents a novel approach to designing a semi-automatic adaptive OCR for large document image collections in digital libraries. We describe an interactive system for continuously improving the results of the OCR, and we demonstrate the applicability of our design to the recognition of Indian languages. Recognition errors are used to retrain the OCR so that it adapts and learns to improve its accuracy. Limited human intervention is allowed for evaluating the output of the system and taking corrective actions during the recognition process.

- Session 1: Digital Libraries | Pp. 13-24
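
The feedback loop the abstract describes (recognise, let a human correct, retrain on the errors) can be sketched generically. The nearest-mean classifier below is a toy stand-in for the OCR engine, and the `oracle` callback stands in for the human reviewer; both are assumptions for illustration only.

```python
import numpy as np

class NearestMeanOCR:
    """Toy character classifier standing in for the OCR engine."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means[None, :, :], axis=2)
        return self.classes[d.argmin(axis=1)]

def adaptive_loop(ocr, X_seed, y_seed, batches, oracle):
    """Recognise each batch, let the oracle verify/correct the labels,
    and fold the misrecognised samples back into the training set."""
    X_train, y_train = X_seed, y_seed
    for X_batch in batches:
        ocr.fit(X_train, y_train)
        y_pred = ocr.predict(X_batch)
        y_true = oracle(X_batch, y_pred)          # human checks the output
        wrong = y_pred != y_true
        X_train = np.vstack([X_train, X_batch[wrong]])
        y_train = np.concatenate([y_train, y_true[wrong]])
    return ocr.fit(X_train, y_train)

rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(20, 4)), rng.integers(0, 2, 20)
oracle = lambda X, y_pred: rng.integers(0, 2, len(X))  # stand-in for a human
adaptive_loop(NearestMeanOCR(), X0, y0, [rng.normal(size=(10, 4))] * 3, oracle)
```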

Contribution to the Discrimination of the Medieval Manuscript Texts: Application in the Palaeography

Ikram Moalla; Frank LeBourgeois; Hubert Emptoz; Adel M. Alimi

This work presents our first contribution to the discrimination of medieval manuscript texts, intended to assist palaeographers in dating ancient manuscripts. Our method is based on the Spatial Grey-Level Dependence (SGLD) matrix, which measures the joint probability of the grey-level values of pixel pairs for each displacement. We use Haralick features to characterise the 15 medieval text styles. The achieved discrimination rates are between 50% and 81%, which is encouraging.

Keywords: Personal Style; Co-occurrence Matrix; Digitized Document; Discrimination Rate; Writing Style.

- Session 2: Image Processing | Pp. 25-37
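
The SGLD matrix is what the texture literature calls a grey-level co-occurrence matrix: for a displacement (dx, dy) it estimates the joint probability of the grey levels at a pixel and its displaced neighbour. The sketch below builds one such matrix and computes two classic Haralick features (contrast and energy); the 16-level quantisation and the choice of features are assumptions, not the paper's exact setup.

```python
import numpy as np

def sgld(img, dx, dy, levels=16):
    """Spatial grey-level dependence matrix for one displacement (dx, dy)."""
    g = (img.astype(float) / 256 * levels).astype(int)  # quantise grey levels
    P = np.zeros((levels, levels))
    h, w = g.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            P[g[y, x], g[y + dy, x + dx]] += 1
    return P / P.sum()                                   # joint probabilities

def haralick_contrast_energy(P):
    """Two of the Haralick texture features used to characterise a style."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    return contrast, energy

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
print(haralick_contrast_energy(sgld(img, dx=1, dy=0)))
```

In practice one such feature vector is computed per displacement and the vectors are concatenated, so the classifier sees texture statistics at several scales and orientations.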

Restoring Ink Bleed-Through Degraded Document Images Using a Recursive Unsupervised Classification Technique

Drira Fadoua; Frank Le Bourgeois; Hubert Emptoz

This paper presents a new method for restoring a particular type of degradation found in ancient document images. This degradation, referred to as “bleed-through”, is due to paper porosity, the chemical quality of the ink, or the conditions of digitization. It appears as marks that degrade the readability of the document image. Our purpose is then to remove these marks to improve readability. The proposed method is based on a recursive unsupervised segmentation approach applied to the data space decorrelated by principal component analysis. It generates a binary tree in which only the leaf images satisfying a certain condition on their logarithmic histogram are processed further. Experiments on real ancient document images provided by the archives of “Chatillon-Chalaronne” illustrate the effectiveness of the suggested method.

Keywords: Document Image; Thresholding Technique; Restoration Method; Recursive Approach; Handwritten Document.

- Session 2: Image Processing | Pp. 38-49
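
A minimal version of the pipeline (PCA decorrelation followed by a recursive two-class split) can be sketched as follows. The paper's leaf-acceptance test on the logarithmic histogram is replaced here by a fixed recursion depth, and the 1-D 2-means split is an assumed choice of unsupervised classifier.

```python
import numpy as np

def pca_project(pixels):
    """Decorrelate RGB pixel data with PCA; return the first component."""
    X = pixels - pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[0]

def two_means(v, iters=20):
    """1-D 2-means split; returns a boolean mask for one cluster."""
    c0, c1 = v.min(), v.max()
    mask = np.abs(v - c0) < np.abs(v - c1)
    for _ in range(iters):
        if mask.all() or (~mask).all():
            break
        c0, c1 = v[mask].mean(), v[~mask].mean()
        mask = np.abs(v - c0) < np.abs(v - c1)
    return mask

def recursive_segment(pixels, depth=2):
    """Binary segmentation tree over pixel index sets; a fixed depth
    stands in for the paper's logarithmic-histogram leaf condition."""
    leaves, stack = [], [(np.arange(len(pixels)), depth)]
    while stack:
        ids, d = stack.pop()
        mask = two_means(pca_project(pixels[ids])) if d and len(ids) > 1 else None
        if mask is None or mask.all() or (~mask).all():
            leaves.append(ids)
        else:
            stack += [(ids[mask], d - 1), (ids[~mask], d - 1)]
    return leaves

rgb = np.random.default_rng(2).random((100, 3))   # stand-in for page pixels
print([len(l) for l in recursive_segment(rgb)])
```

On a real bleed-through page the leaves would correspond to background, foreground ink, and interfering verso ink, and only the interfering leaf would be repainted with the background colour.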

Networked Document Imaging with Normalization and Optimization

Hirobumi Nishida

A system architecture is presented for document imaging in an open, distributed environment over networks, where various kinds of imaging devices can be interconnected remotely. The key components are two sets of image processing operations that transform input images into (1) canonical image representations, which absorb differences in visual appearance due to the characteristics of imaging devices or image acquisition conditions (normalization), and (2) optimal image representations according to the tasks and preferences of individual users (optimization). Images captured through a diversity of input devices can be delivered to remote sites over networks and then used for a variety of tasks, such as printing on paper, browsing on displays, and editing. These diversities can be resolved systematically by placing the normalizations at the upper end (routing servers) and the optimizations at the lower end (clients) of the data flow over the network. Within this architecture, we describe instances of normalization and optimization associated with the particular task of highly legible printing of scanned document images. Three essential algorithms for optimizing document images are described: adaptive tone mapping with background cleaning, text super-resolution, and text color clustering. The optimization process for highly legible printing is presented, along with some other potential applications and tasks.

Keywords: Document Image; Background Color; Input Device; Remote Site; Canonical Representation.

- Session 2: Image Processing | Pp. 50-61
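
Of the three optimization algorithms listed, adaptive tone mapping with background cleaning is the easiest to sketch. The version below estimates the local paper background with a smoothed maximum filter and rescales each pixel against it so the background maps to near-white; the window size and filter choices are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

def clean_background(gray, win=31, white=250):
    """Adaptive tone mapping with background cleaning (a rough sketch):
    estimate the local paper background, then rescale pixels against it."""
    g = gray.astype(float)
    bg = gaussian_filter(maximum_filter(g, size=win), sigma=win / 4)
    out = g / np.maximum(bg, 1) * white       # background -> near-white
    return np.clip(out, 0, 255).astype(np.uint8)

page = np.full((200, 200), 180, np.uint8)     # dull grey scanner background
page[90:110, 40:160] = 40                     # dark text stroke
cleaned = clean_background(page)
print(cleaned.max(), cleaned.min())           # background lifted, ink kept dark
```

Because the mapping is local, it removes show-through and uneven illumination without flattening the ink strokes, which is what "highly legible printing" requires.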

Gray-Scale Thinning Algorithm Using Local Min/Max Operations

Kyoung Min Kim; Buhm Lee; Nam Sup Choi; Gwan Hee Kang; Joong Jo Park; Ching Y. Suen

A new gray-scale thinning algorithm based on local min/max operations is proposed. The erosion and dilation properties of local min/max operations create new ridges in the given image, so gray-scale skeletons can be obtained effectively by accumulating these ridges. The proposed method is notable in that it can also be applied to an unsegmented image in which objects are not specified.

Keywords: Grayscale Image; Grey Image; Connectivity Problem; Large Pixel; Dilation Property.

- Session 2: Image Processing | Pp. 62-70
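
The idea that ridges created by local min/max filtering can be accumulated into a skeleton can be loosely illustrated with grey-scale morphology. The sketch below records a top-hat style ridge at each erosion step; it is a rough interpretation of the idea, not the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def minmax_skeleton(gray, steps=5):
    """Accumulate ridges produced by repeated local-min (erosion) and
    local-max (dilation) filtering."""
    g = gray.astype(float)
    skeleton = np.zeros_like(g)
    for _ in range(steps):
        eroded = grey_erosion(g, size=(3, 3))      # local min
        opened = grey_dilation(eroded, size=(3, 3))  # local max of the min
        ridge = g - opened            # grey-scale top-hat: thin crests
        skeleton = np.maximum(skeleton, ridge)
        g = eroded                    # peel one layer and repeat
    return skeleton

profile = np.array([60, 120, 200, 120, 60], float)[:, None]
stroke = np.zeros((40, 40))
stroke[18:23, 5:35] = profile         # a soft horizontal stroke
ridges = minmax_skeleton(stroke)
print(ridges.max(), np.argwhere(ridges == ridges.max())[0])
```

The ridge response peaks along the stroke's centre line, which is why no prior binarisation or segmentation of the objects is needed.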

Automated Scoring of Handwritten Essays Based on Latent Semantic Analysis

Sargur Srihari; Jim Collins; Rohini Srihari; Pavithra Babu; Harish Srinivasan

Handwritten essays are widely used in educational assessments, particularly in classroom instruction. This paper concerns the design of an automated system that takes as input scanned images of handwritten student essays from reading comprehension tests and produces as output scores for the answers, analogous to those provided by human scorers. The system integrates two technologies: optical handwriting recognition (OHR) and automated essay scoring (AES). The OHR system performs several pre-processing steps, such as forms removal, rule-line removal, and segmentation of text lines and words. The final recognition step, which is tuned to the task of reading comprehension evaluation in a primary education setting, is performed using a lexicon derived from the passage to be read. The AES system is based on latent semantic analysis, where a set of human-scored answers is used to determine the scoring system's parameters with a machine learning approach. System performance is compared to scoring done by human raters. Testing on a small set of handwritten answers indicates that system performance is comparable to that of automatic scoring based on manual transcription.

Keywords: Word Recognition; Singular Value Decomposition; Latent Semantic Analysis; Text Line; Training Corpus.

- Session 3: Handwriting 1 | Pp. 71-83
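
The AES side can be sketched as a small latent semantic analysis pipeline: build a term-document matrix from the human-scored answers, reduce it with a truncated SVD, and score a new answer by its similarity to the scored ones. The rank, the similarity weighting, and the toy data below are all assumptions for illustration.

```python
import numpy as np

def lsa_scorer(train_texts, train_scores, k=2):
    """Minimal LSA scorer: rank-k SVD of the term-document matrix, then
    similarity-weighted averaging of the human-assigned scores."""
    vocab = sorted({w for t in train_texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    def vec(text):
        v = np.zeros(len(vocab))
        for w in text.lower().split():
            if w in index:
                v[index[w]] += 1
        return v
    A = np.stack([vec(t) for t in train_texts])   # documents x terms
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    P = Vt[:k].T                                  # terms -> latent space
    docs, scores = A @ P, np.asarray(train_scores, float)
    def score(text):
        d = vec(text) @ P
        sim = docs @ d / (np.linalg.norm(docs, axis=1) * np.linalg.norm(d) + 1e-9)
        w = np.clip(sim, 0, None)                 # ignore dissimilar answers
        return float((w * scores).sum() / (w.sum() + 1e-9))
    return score

score = lsa_scorer(["the fox runs fast", "the fox is brown", "dogs bark loudly"],
                   [4, 3, 1])
print(score("a brown fox runs"))
```

In the full system the input text would come from the OHR transcription rather than clean ASCII, which is exactly why the paper compares against scoring on manual transcripts.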

Aligning Transcripts to Automatically Segmented Handwritten Manuscripts

Jamie Rothfeder; R. Manmatha; Toni M. Rath

Training and evaluating techniques for handwriting recognition and retrieval is a challenge, given that it is difficult to create large ground-truthed datasets. This is especially true for historical handwritten datasets. In many instances the ground truth has to be created by manually transcribing each word, a very labor-intensive process. Sometimes transcriptions are available for some manuscripts, but because they were created for other purposes, correspondence at the word, line, or sentence level may not be available. To be useful for training and evaluation, a word-level correspondence must exist between the segmented handwritten word images and the ASCII transcriptions. Creating this correspondence, or alignment, is challenging because the segmentation is often errorful and the ASCII transcription may also contain errors. Very little work has been done on aligning handwritten data to transcripts. Here, a novel Hidden Markov Model based automatic alignment algorithm is described and tested. The algorithm produces an average alignment accuracy of about 72.8% when aligning whole pages at a time on a set of 70 pages from the George Washington collection. This outperforms, by about 12%, a dynamic time warping alignment algorithm previously reported in the literature and tested on the same collection.

Keywords: Hidden Markov Model; Machine Translation; Dynamic Time Warping; Alignment Algorithm; Observation Model.

- Session 3: Handwriting 1 | Pp. 84-95
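
Both the HMM approach and the DTW baseline reduce to a monotonic dynamic-programming alignment between segmented word images and transcript words. The sketch below uses a hypothetical observation cost (how far an image's width is from the width its candidate word's length would suggest) plus skip penalties to absorb segmentation and transcription errors; the paper's actual observation model is learned, not hand-set like this.

```python
import numpy as np

def align_cost(word_widths, transcript, px_per_char=12.0, skip=20.0):
    """Monotonic DP alignment of word-image widths to transcript words;
    a simplified stand-in for the paper's models."""
    n, m = len(word_widths), len(transcript)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * skip     # unmatched leading words
    D[:, 0] = np.arange(n + 1) * skip     # unmatched leading images
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(word_widths[i - 1] - px_per_char * len(transcript[j - 1]))
            D[i, j] = min(D[i - 1, j - 1] + c,   # match image i to word j
                          D[i - 1, j] + skip,    # spurious image segment
                          D[i, j - 1] + skip)    # word with no image
    return D[n, m]

# widths that match "four" (4 chars), "the" (3), "seventy" (7) exactly
print(align_cost([48, 36, 84], ["four", "the", "seventy"]))  # -> 0.0
```

Backtracking through the same table recovers which image goes with which word, which is the correspondence needed for training data.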

Virtual Example Synthesis Based on PCA for Off-Line Handwritten Character Recognition

Hidetoshi Miyao; Minoru Maruyama

This paper proposes a method to improve off-line character classifiers learned from examples by using virtual examples synthesized from an on-line character database. To obtain good classifiers, a database containing a sufficiently large number of variations of handwritten characters is usually required; in practice, however, collecting enough data is time-consuming and costly. We therefore propose a method to train an SVM for off-line character recognition on examples artificially augmented with on-line characters. In our method, virtual examples are synthesized from on-line characters in two steps: (1) applying an affine transformation to each stroke of “real” characters, and (2) applying an affine transformation to each stroke of artificial characters, which are synthesized on the basis of PCA. SVM classifiers are trained on samples containing both artificially generated patterns and real characters. We examine the effectiveness of the proposed method, with respect to recognition rates and the number of support vectors of the SVM, through experiments on handwritten Japanese Hiragana character classification.

- Session 3: Handwriting 1 | Pp. 96-105
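
The two synthesis steps, affine perturbation of strokes and PCA-based shape generation, can be sketched on point-sequence characters. Everything below (point counts, noise scales, how the PCA coefficients are sampled) is an assumption for illustration, not the paper's parameterisation.

```python
import numpy as np

rng = np.random.default_rng(3)

def jitter_stroke(stroke, scale=0.1):
    """Apply a small random affine transform to one stroke's (N, 2) points."""
    A = np.eye(2) + rng.normal(scale=scale, size=(2, 2))
    t = rng.normal(scale=scale, size=2)
    return stroke @ A.T + t

def synthesize(chars, n_new=5, k=3):
    """PCA over flattened on-line characters (all with the same point
    count); virtual examples are the mean shape plus random combinations
    of the top-k principal deformations."""
    X = np.stack([c.reshape(-1) for c in chars])
    mu = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    coeff = rng.normal(size=(n_new, k)) * (S[:k] / np.sqrt(len(chars)))
    return (mu + coeff @ Vt[:k]).reshape(n_new, -1, 2)

chars = [np.cumsum(rng.normal(size=(16, 2)), axis=0) for _ in range(10)]
virtual = synthesize(chars)
print(virtual.shape, jitter_stroke(chars[0]).shape)
```

The synthesized trajectories would then be rendered to off-line images before being added to the SVM training set.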

Extraction of Handwritten Text from Carbon Copy Medical Form Images

Robert Milewski; Venu Govindaraju

This paper presents a methodology for separating handwritten foreground pixels from background pixels in carbon-copied medical forms, and compares prior and proposed techniques. The study involves the analysis of the New York State (NYS) Department of Health (DoH) Pre-Hospital Care Report (PCR) [1], a standard form used throughout New York by all Basic and Advanced Life Support pre-hospital healthcare professionals to document patient status in the emergency environment. The forms suffer from extreme carbon-mesh noise, varying handwriting pressure, and smudging, all further complicated by the writing environment. Extracting the handwriting from these medical forms is a vital step in automating emergency medical health surveillance systems.

Keywords: Carbon Paper; Foreground Pixel; Handwriting Recognition; Lexicon Size; Stroke Width.

- Session 3: Handwriting 1 | Pp. 106-116
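
The paper's extraction method is tailored to carbon-mesh noise, but a common generic baseline for this kind of foreground/background separation is local adaptive thresholding. The sketch below uses Sauvola's rule, offered only as a point of comparison, not as the authors' technique.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_mask(gray, win=25, k=0.2, R=128.0):
    """Sauvola local thresholding: pixels darker than the local adaptive
    threshold are kept as foreground. Works per-window, so a textured
    background raises the local mean and is mostly rejected."""
    g = gray.astype(float)
    mean = uniform_filter(g, win)
    sq_mean = uniform_filter(g * g, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    thresh = mean * (1 + k * (std / R - 1))
    return g < thresh

form = np.random.default_rng(4).integers(150, 220, (100, 100)).astype(float)
form[40:45, 10:90] = 60                       # a dark handwritten stroke
print(sauvola_mask(form).sum(), "foreground pixels")
```

On heavily smudged carbon forms a purely local rule like this breaks down, which is the gap the paper's specialised technique addresses.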