Publication catalog - books
Accessing Multilingual Information Repositories: 6th Workshop of the Cross-Language Evaluation Forum, CLEF 2005, Vienna, Austria, 21-23 September, 2005, Revised Selected Papers
Carol Peters; Fredric C. Gey; Julio Gonzalo; Henning Müller; Gareth J. F. Jones; Michael Kluck; Bernardo Magnini; Maarten de Rijke (eds.)
Conference: 6th Workshop of the Cross-Language Evaluation Forum for European Languages (CLEF). Vienna, Austria. September 21-23, 2005
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Information Storage and Retrieval; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Language Translation and Linguistics
Availability
Detected institution | Year of publication | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-45697-1
Electronic ISBN
978-3-540-45700-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11878773_31
Use of Free On-Line Machine Translation for Interactive Cross-Language Question Answering
Angel Zazo; Carlos G. Figuerola; José Luis A. Berrocal; Viviana Fernández Marcial
Free on-line machine translation systems are employed more and more by Internet users. In this paper we explore the use of these systems for Cross-Language Question Answering in two respects: the formulation of queries and the presentation of information. Two topic-document language pairs were used, Spanish-English and Spanish-French. For each of these, two groups of users were created, depending on their level of reading skill in the document language. When the machine translation of the queries was used directly in the search, the number of correct answers was quite high; users corrected only 8% of the proposed translations. As regards using machine translation to render into Spanish the text passages shown to the user, we expected the searches of users with little knowledge of the target language to improve notably, but we found that this option was of little help in finding the correct answers to the questions posed in the experiment.
- Part III. Interactive Cross-Language Information Retrieval (iCLEF) | Pp. 263-272
doi: 10.1007/11878773_32
“How Much Context Do You Need?”: An Experiment About the Context Size in Interactive Cross-Language Question Answering
Borja Navarro; Lorenza Moreno-Monteagudo; Elisa Noguera; Sonia Vázquez; Fernando Llopis; Andrés Montoyo
The main topic of this paper is the context size needed for an efficient Interactive Cross-Language Question Answering system. We compare two approaches: the first one (the baseline system) shows the user whole passages (maximum context: 10 sentences); the second one (the experimental system) shows only a clause (minimum context). As this is a cross-language system, the main problem is that the language of the question (Spanish) and the language of the answer context (English) are different. The results show that a larger context is better. However, there are specific relations between the context size and knowledge of the language of the answer: users with a poor level of English prefer contexts with fewer words.
- Part III. Interactive Cross-Language Information Retrieval (iCLEF) | Pp. 273-282
doi: 10.1007/11878773_33
UNED at iCLEF 2005: Automatic Highlighting of Potential Answers
Víctor Peinado; Fernando López-Ostenero; Julio Gonzalo; Felisa Verdejo
In this paper, we describe UNED’s participation in the iCLEF 2005 track. We have compared two strategies for finding an answer with an interactive question answering system: i) a search system over full documents and ii) a search system over passages (document paragraphs). We have added an interesting feature to both systems in order to facilitate reading: the possibility to enable/disable the highlighting of named entities such as proper nouns, temporal references and numbers likely to contain the right answer.
Our Document Searcher obtained better overall accuracy (0.53 vs. 0.45), but our subjects found browsing passages simpler and faster. However, most of them showed similar search behavior (regarding time consumption, confidence in their answers and query refinements) with both systems. All our users considered the highlighting of named entities helpful, and they all made extensive use of this feature as a quick way of discriminating between relevant and non-relevant documents and finding a valid answer.
- Part III. Interactive Cross-Language Information Retrieval (iCLEF) | Pp. 283-292
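The named-entity highlighting described in the abstract above can be sketched in a few lines. This is a hedged illustration only: the regular expressions, CSS class names and the `highlight` function are assumptions made for the example, not the actual UNED interface, which used a proper named-entity recognizer.

```python
import re

# Crude patterns standing in for a real named-entity recognizer (illustrative only).
NE_REGEX = re.compile(
    r"(?P<temporal>\b(?:1[0-9]{3}|20[0-9]{2})\b)"        # years as temporal references
    r"|(?P<number>\b\d+(?:[.,]\d+)?\b)"                  # plain numbers
    r"|(?P<proper>\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b)"   # capitalized sequences as proper-noun candidates
)

def highlight(passage: str, enabled: bool = True) -> str:
    """Wrap candidate answer strings in <em> tags so the user can spot them at a glance."""
    if not enabled:  # the interface let users toggle highlighting on and off
        return passage
    return NE_REGEX.sub(lambda m: f"<em class='{m.lastgroup}'>{m.group(0)}</em>", passage)

print(highlight("Gustave Eiffel finished the tower in 1889, at 312 metres."))
```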
doi: 10.1007/11878773_34
Effect of Connective Functions in Interactive Image Retrieval
Julio Villena-Román; Raquel M. Crespo-García; José Carlos González Cristóbal
This paper presents the participation of the MIRACLE team in the ImageCLEF 2005 interactive search task, in which we compare the efficiency of AND monolingual queries (which have to be precise and use the exact vocabulary, something that may be difficult in a specialised search task) versus relevance-guided OR bilingual queries (a fuzzier and noisier search, but one that does not require precise vocabulary and exact translations). User preferences and strategies in the context of cross-lingual interactive image retrieval are also analysed.
- Part III. Interactive Cross-Language Information Retrieval (iCLEF) | Pp. 293-296
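The contrast between the two query modes compared above can be made concrete with a toy sketch. The inverted index, its contents and the overlap scoring are assumptions for illustration; they are not the MIRACLE system.

```python
from collections import Counter

# Toy inverted index: image id -> terms from its caption (illustrative data only).
INDEX = {
    "img1": {"red", "vintage", "car", "street"},
    "img2": {"red", "bicycle", "street"},
    "img3": {"vintage", "car", "museum"},
}

def and_query(terms):
    """AND semantics: every query term must appear, so the vocabulary has to be exact."""
    return [img for img, words in INDEX.items() if set(terms) <= words]

def or_query(terms):
    """OR semantics: rank images by how many query terms they share (noisier but forgiving)."""
    scores = Counter({img: len(set(terms) & words) for img, words in INDEX.items()})
    return [img for img, score in scores.most_common() if score > 0]

print(and_query(["red", "car"]))   # -> ['img1']
print(or_query(["red", "car"]))    # -> ['img1', 'img2', 'img3'], ranked by term overlap
```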
doi: 10.1007/11878773_35
Using Concept Hierarchies in Text-Based Image Retrieval: A User Evaluation
Daniela Petrelli; Paul Clough
This paper describes our results from the image retrieval task of iCLEF 2005, based on a comparative user evaluation of two interfaces: one displaying search results as a list; the other organising retrieved images into a hierarchy of concepts displayed on the interface as an interactive menu. Based on a known-item retrieval task, the data were analysed with respect to effectiveness, efficiency and user satisfaction. Effectiveness and efficiency were calculated both at the set cut-off time of 5 minutes and at the time the target image was found (final time). Results showed that the list was marginally more effective than the menu at 5 minutes, but the two were equal at final time, indicating that the menu requires more time to be used effectively. The list was more efficient at both 5 minutes and final time (the difference was not statistically significant), and users preferred the menu, indicating that this could be a potentially interesting and engaging feature for image retrieval.
- Part III. Interactive Cross-Language Information Retrieval (iCLEF) | Pp. 297-306
doi: 10.1007/11878773_36
Overview of the CLEF 2005 Multilingual Question Answering Track
Alessandro Vallin; Bernardo Magnini; Danilo Giampiccolo; Lili Aunimo; Christelle Ayache; Petya Osenova; Anselmo Peñas; Maarten de Rijke; Bogdan Sacaleanu; Diana Santos; Richard Sutcliffe
The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance in comparison to last year. The best performing monolingual system irrespective of target language answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language step instead entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system’s stated confidence in it, showing that the best systems did not always provide the most reliable confidence score. We provide an overview of the 2005 QA track, detail the procedure followed to build the test sets and present a general analysis of the results.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 307-331
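The relation between answer correctness and a system's stated confidence, mentioned at the end of the overview above, can be illustrated with a small sketch. The confidence-weighted score below follows the general TREC-style form; the exact formula used by the CLEF 2005 organisers and the sample judgements are assumptions, not the official scoring script.

```python
def accuracy(judgements):
    """Fraction of questions judged right ('R')."""
    return sum(j == "R" for _, j in judgements) / len(judgements)

def confidence_weighted_score(judgements):
    """Rank answers by self-reported confidence and reward systems that put their
    correct answers first (a TREC-style CWS; the official formula may differ)."""
    ranked = sorted(judgements, key=lambda pair: pair[0], reverse=True)
    correct_so_far, total = 0, 0.0
    for i, (_, judgement) in enumerate(ranked, start=1):
        if judgement == "R":
            correct_so_far += 1
        total += correct_so_far / i
    return total / len(ranked)

# (confidence, judgement) pairs -- illustrative data only.
run = [(0.9, "R"), (0.8, "W"), (0.7, "R"), (0.2, "R"), (0.1, "W")]
print(accuracy(run), confidence_weighted_score(run))   # 0.6 and roughly 0.70
```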
doi: 10.1007/11878773_37
A Fast Forward Approach to Cross-Lingual Question Answering for English and German
Robert Strötgen; Thomas Mandl; René Schneider
This paper describes the development of a question answering system for monolingual and cross-lingual tasks for English and German. We developed the question answering system from a document and retrieval perspective. The system consists of question and answer taxonomies, named entity recognition, term expansion modules, a multilingual search engine based on Lucene, and a passage extraction and ranking component. The overall architecture and the heuristics applied during development are described. We discuss the results at CLEF 2005 and outline potential future work.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 332-336
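One of the components listed above, term expansion, can be sketched briefly. The synonym table and the expansion strategy are assumptions made for this example; the actual system used its own expansion modules in front of a Lucene-based index, which is not reproduced here.

```python
# Hand-made synonym table standing in for a real term-expansion resource (illustrative only).
SYNONYMS = {
    "car":   ["automobile", "vehicle"],
    "buy":   ["purchase", "acquire"],
    "movie": ["film"],
}

def expand_query(terms):
    """Return the original query terms plus any known synonyms, deduplicated in order."""
    expanded = []
    for term in terms:
        for candidate in [term, *SYNONYMS.get(term, [])]:
            if candidate not in expanded:
                expanded.append(candidate)
    return expanded

print(expand_query(["buy", "car"]))
# -> ['buy', 'purchase', 'acquire', 'car', 'automobile', 'vehicle']
```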
doi: 10.1007/11878773_38
The ŒDipe System at CLEF-QA 2005
Romaric Besançon; Mehdi Embarek; Olivier Ferret
This article presents Œdipe, the question answering system used by the LIC2M for its participation in the CLEF-QA 2005 evaluation. More precisely, the LIC2M participated in the monolingual track dedicated to the French language. The main characteristic of Œdipe is its simplicity: it mainly relies on combining a linguistic pre-processor, which normalizes words and recognizes named entities, with the principles of the Vector Space model.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 337-346
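The Vector Space principle that Œdipe relies on can be sketched as follows. This is a bag-of-words illustration under simplifying assumptions; the word normalization and named-entity recognition performed by the actual pre-processor are omitted.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_passage(question: str, passages: list) -> str:
    """Return the passage whose word vector is closest to the question vector."""
    q = Counter(question.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))

passages = [
    "the eiffel tower was completed in 1889 in paris",
    "the louvre museum houses the mona lisa",
]
print(best_passage("when was the eiffel tower completed", passages))
```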
doi: 10.1007/11878773_39
An XML-Based System for Spanish Question Answering
David Tomás; José L. Vicedo; Maximiliano Saiz; Rubén Izquierdo
As Question Answering is a major research topic at the University of Alicante, this year two separate groups participated in the QA@CLEF track using different approaches. This paper describes the work of our group. With future developments in mind, we have designed a modular framework based on XML that will easily let us integrate, combine and test system components based on different approaches. In this context, several modifications have been introduced, such as a new machine-learning-based question classification module. We took part in the monolingual Spanish task.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 347-350
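The machine-learning-based question classification module mentioned above can be illustrated with a minimal sketch. The toy training set, the label set and the scikit-learn pipeline are assumptions for the example; the features and classifier of the Alicante system are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: (question, expected answer type) -- illustrative data only.
QUESTIONS = [
    ("Who wrote Don Quixote?",              "PERSON"),
    ("Who painted Guernica?",               "PERSON"),
    ("Where is the Prado Museum?",          "LOCATION"),
    ("Where was Picasso born?",             "LOCATION"),
    ("When did the Spanish Civil War end?", "DATE"),
    ("When was the euro introduced?",       "DATE"),
]
texts, labels = zip(*QUESTIONS)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

print(classifier.predict(["Who directed Volver?"]))   # expected 'PERSON' with this toy data
```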
doi: 10.1007/11878773_40
A Logic Programming Based Approach to QA@CLEF05 Track
Paulo Quaresma; Irene Rodrigues
In this paper the methodology followed to build a question answering system for the Portuguese language is described. The system modules are built using computational linguistic tools such as: a Portuguese parser based on constraint grammars for the syntactic analysis of the document sentences and the user questions; a semantic interpreter that rewrites the syntactic analyses into discourse representation structures in order to obtain the semantic representation of the corpus documents and user questions; and, finally, a semantic/pragmatic interpreter that builds a knowledge base of facts extracted from the documents using ontologies (general and domain-specific) and logic inference. This article includes the system evaluation in the CLEF’05 question answering track.
- Part IV. Multiple Language Question Answering (QA@CLEF) | Pp. 351-360
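The fact-extraction and inference idea described above can be sketched with a toy knowledge base. The facts, the single rule and the question format are assumptions made for this illustration; the actual system derives its facts from parsed Portuguese text and reasons with ontologies inside a logic-programming framework.

```python
# Toy knowledge base of (predicate, subject, object) facts -- illustrative data only.
FACTS = {
    ("capital_of", "Lisbon", "Portugal"),
    ("located_in", "Belem Tower", "Lisbon"),
}

def infer(facts):
    """Forward chaining with one rule: located_in(X, C) and capital_of(C, Y) => located_in(X, Y)."""
    derived = set(facts)
    changed = True
    while changed:
        new = {("located_in", x, country)
               for rel1, x, city in derived if rel1 == "located_in"
               for rel2, c, country in derived if rel2 == "capital_of" and c == city}
        changed = not new <= derived
        derived |= new
    return derived

def answer(predicate, subject, kb):
    """Answer questions of the form '<predicate>(<subject>, ?)' by matching stored facts."""
    return [obj for pred, subj, obj in kb if pred == predicate and subj == subject]

kb = infer(FACTS)
print(answer("located_in", "Belem Tower", kb))   # -> ['Lisbon', 'Portugal'] (order may vary)
```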