Publications catalog - books
Accessing Multilingual Information Repositories: 6th Workshop of the Cross-Language Evaluation Forum, CLEF 2005, Vienna, Austria, 21-23 September, 2005, Revised Selected Papers
Carol Peters ; Fredric C. Gey ; Julio Gonzalo ; Henning Müller ; Gareth J. F. Jones ; Michael Kluck ; Bernardo Magnini ; Maarten de Rijke (eds.)
In conference: 6th Workshop of the Cross-Language Evaluation Forum for European Languages (CLEF). Vienna, Austria. September 21, 2005 - September 23, 2005
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Information Storage and Retrieval; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Language Translation and Linguistics
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-45697-1
Electronic ISBN
978-3-540-45700-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11878773_51
AliQAn, Spanish QA System at CLEF-2005
S. Roger; S. Ferrández; A. Ferrández; J. Peral; F. Llopis; A. Aguilar; D. Tomás
Question Answering is a major research topic at the University of Alicante. For this reason, two groups participated in the QA@CLEF track this year using different approaches; this paper describes the work of one of them. It presents AliQAn, a monolingual open-domain Question Answering (QA) system developed in the Department of Language Processing and Information Systems at the University of Alicante for the CLEF-2005 Spanish monolingual QA evaluation task. Our approach is based fundamentally on the use of syntactic pattern recognition to identify possible answers. In addition, Word Sense Disambiguation (WSD) is applied to improve the system. The results achieved (overall accuracy of 33%) are shown and discussed in the paper.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 457-466
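The pattern-based answer extraction mentioned in the abstract above can be sketched in miniature. This is a hypothetical illustration only: AliQAn's real patterns operate over full syntactic analyses of Spanish text, whereas this sketch uses invented surface regular expressions (the pattern names, regexes and example sentences are all assumptions, not taken from the paper):

```python
import re

# Each question type maps to a pattern whose named group "answer"
# marks the candidate answer span. Patterns here are illustrative.
ANSWER_PATTERNS = {
    # "X was born in Y" -> Y answers a "where was X born?" question
    "birthplace": re.compile(
        r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) was born in (?P<answer>[A-Z][a-z]+)"
    ),
    # "X, the capital of Y" -> X answers "what is the capital of Y?"
    "capital": re.compile(
        r"(?P<answer>[A-Z][a-z]+), the capital of (?P<country>[A-Z][a-z]+)"
    ),
}

def extract_answer(question_type: str, passage: str):
    """Return the first candidate answer the pattern for this question type finds."""
    pattern = ANSWER_PATTERNS.get(question_type)
    if pattern is None:
        return None
    match = pattern.search(passage)
    return match.group("answer") if match else None
```

For example, `extract_answer("capital", "He flew to Madrid, the capital of Spain, in May.")` yields `"Madrid"`. Real systems add many patterns per question type and score competing candidates rather than returning the first match.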
doi: 10.1007/11878773_52
20th Century Esfinge (Sphinx) Solving the Riddles at CLEF 2005
Luís Costa
Esfinge is a general-domain Portuguese question answering system. It tries to take advantage of the steadily growing and constantly updated information freely available on the World Wide Web in its question answering tasks. The system participated for the first time last year in the monolingual QA track; however, the results were compromised by several basic errors, which were corrected shortly afterwards. This year, Esfinge's participation was expected to yield better results and to allow experimentation with a named entity recognition system, as well as a first attempt at a multilingual QA track. This paper describes how the system works, presents the results of the official runs in considerable detail, and reports experiments measuring the impact of different parts of the system through the decrease in performance when the system is executed without some of its components or features.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 467-476
doi: 10.1007/11878773_53
Question Answering Experiments for Finnish and French
Lili Aunimo; Reeta Kuuskoski
This paper presents a question answering (QA) system whose approach is based on question classification, semantic annotation and answer extraction pattern matching. Its performance is evaluated by conducting experiments in the following tasks: monolingual Finnish and French QA and bilingual Finnish-English QA. It is the first system reported to perform monolingual textual QA in the Finnish language. This is also the task in which its performance is best: 23% of all questions are answered correctly. Its performance in the monolingual French task is slightly inferior to its performance in the monolingual Finnish task and, compared with the other systems evaluated on the same data in the same task, near the average. In the bilingual Finnish-English task it was the only participating system and, as expected, its performance was inferior to that attained in the monolingual tasks.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 477-487
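The first stage named in the abstract above, question classification, is commonly done with ordered surface rules. The sketch below is a hypothetical stand-in, not the paper's classifier: the system described works on Finnish and French questions, while these English keyword rules and class labels are invented to show the general idea of rule-based classification feeding downstream extraction patterns:

```python
import re

# Ordered (label, trigger) rules; the first rule that fires wins.
# Labels and triggers are illustrative assumptions.
CLASS_RULES = [
    ("person",   re.compile(r"^(who|whom)\b", re.I)),
    ("location", re.compile(r"^where\b", re.I)),
    ("time",     re.compile(r"^when\b", re.I)),
    ("measure",  re.compile(r"^how (many|much|long|far)\b", re.I)),
]

def classify(question: str) -> str:
    """Assign a coarse answer-type label to a question, defaulting to 'other'."""
    for label, rule in CLASS_RULES:
        if rule.search(question):
            return label
    return "other"
```

The label then selects which extraction patterns and semantic annotations to apply to retrieved passages; imperative questions ("Name the president...") typically fall through to a default class.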
doi: 10.1007/11878773_54
MIRACLE’s Cross-Lingual Question Answering Experiments with Spanish as a Target Language
César de Pablo-Sánchez; Ana González-Ledesma; José Luis Martínez-Fernández; José María Guirao; Paloma Martínez; Antonio Moreno
Our second participation in CLEF-QA consisted of six runs with Spanish as the target language. The source languages were Spanish, English and Italian. miraQA uses a simple representation of the question that is enriched with semantic information such as typed Named Entities. The runs used different strategies for answer extraction and selection, achieving at best 25.5% accuracy. The analysis of the errors suggests that improvements in answer selection are the most critical.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 488-491
doi: 10.1007/11878773_55
The Role of Lexical Features in Question Answering for Spanish
Manuel Pérez-Coutiño; Manuel Montes-y-Gómez; Aurelio López-López; Luis Villaseñor-Pineda
This paper describes the prototype developed at the Language Technologies Laboratory of INAOE for the Spanish monolingual QA evaluation task at CLEF 2005. The proposed approach handles the QA task according to the type of question to be solved (factoid or definition). To identify possible answers to factoid questions, the system applies a methodology centered on the use of lexical features. For definition questions, the system instead relies on a pattern recognition method. The paper presents the methods applied at the different stages of the system, with special emphasis on those used for answering factoid questions, and then discusses the results achieved with this approach.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 492-501
doi: 10.1007/11878773_56
Cross-Language French-English Question Answering Using the DLT System at CLEF 2005
Richard F. E. Sutcliffe; Michael Mulcahy; Igal Gabbay; Aoife O’Gorman; Darina Slattery
This paper describes the main components of the system built by the DLT Group at Limerick for participation in the QA task at CLEF. Document indexing was again sentence-by-sentence, but this year the Lucene engine was adopted. We also experimented with retrieval query expansion using Local Context Analysis. Results were broadly similar to last year's.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 502-509
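The sentence-by-sentence indexing mentioned above treats each sentence, rather than each document, as the retrieval unit. The DLT group used Lucene for this; as a minimal sketch of the same idea (not their implementation), one can hand-roll a tiny inverted index mapping terms to sentence ids:

```python
import re
from collections import defaultdict

def split_sentences(text):
    """Naive sentence splitter on terminal punctuation (illustrative only)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

class SentenceIndex:
    def __init__(self):
        self.sentences = []                 # sentence id -> sentence text
        self.postings = defaultdict(set)    # term -> set of sentence ids

    def add_document(self, text):
        for sentence in split_sentences(text):
            sid = len(self.sentences)
            self.sentences.append(sentence)
            for term in re.findall(r"\w+", sentence.lower()):
                self.postings[term].add(sid)

    def search(self, query):
        """Return sentences containing every query term (boolean AND)."""
        terms = re.findall(r"\w+", query.lower())
        if not terms:
            return []
        hits = set.intersection(*(self.postings.get(t, set()) for t in terms))
        return [self.sentences[i] for i in sorted(hits)]
```

Retrieving whole sentences rather than documents hands the answer-extraction stage a short candidate span directly; Lucene adds ranking, tokenization and query expansion on top of this basic scheme.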
doi: 10.1007/11878773_57
Finding Answers to Indonesian Questions from English Documents
Mirna Adriani; Rinawati
We present a report on our participation in the Indonesian-English question-answering task of the 2005 Cross-Language Evaluation Forum (CLEF). In this work we translated an Indonesian query set into English using a commercial machine translation tool. We then used linguistic tools to find the answer to each question. The answer is extracted from a relevant passage and identified by tagging that matches the query.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 510-516
doi: 10.1007/11878773_58
BulQA: Bulgarian–Bulgarian Question Answering at CLEF 2005
Kiril Simov; Petya Osenova
This paper describes the architecture of BulQA, a Bulgarian-Bulgarian question answering system. The system relies on a partially parsed corpus for answer extraction. The questions are also analyzed partially; on the basis of this analysis, queries to the corpus are created. After the retrieval of the documents that potentially contain the answer, each of them is further processed with one of several additional grammars. The choice of grammar depends on the question analysis and the type of the question. At present these grammars can be viewed as patterns for each type of question, but our goal is to develop them further into a deeper parsing system for Bulgarian.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 517-526
doi: 10.1007/11878773_59
The Query Answering System PRODICOS
Laura Monceaux; Christine Jacquin; Emmanuel Desmontils
In this paper, we present the PRODICOS query answering system, which was developed by the TALN team at the LINA institute. We present the various modules constituting our system and, for each of them, the evaluation carried out to justify the results obtained. We then present the main improvements, based on the use of semantic data.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 527-534
doi: 10.1007/11878773_60
The CLEF 2005 Cross-Language Image Retrieval Track
Paul Clough; Henning Müller; Thomas Deselaers; Michael Grubinger; Thomas M. Lehmann; Jeffery Jensen; William Hersh
This paper outlines efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of the track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in ImageCLEF: ad-hoc retrieval from a historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task. 24 research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. This paper presents the ImageCLEF tasks, the submissions from participating groups and a summary of the main findings.
- Part V. Cross-Language Retrieval In Image Collections (ImageCLEF) | Pp. 535-557