Publications catalogue - books

Words and Intelligence I: Selected Papers by Yorick Wilks

Khurshid Ahmad; Christopher Brewster; Mark Stevenson (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Language Translation and Linguistics; Computational Linguistics; Artificial Intelligence (incl. Robotics); Semantics; Philosophy of Language

Availability

Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-1-4020-5284-2

Electronic ISBN

978-1-4020-5285-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer 2007

Table of contents

Text Searching with Templates

Yorick Wilks


Pp. 1-7

Decidability and Natural Language

Yorick Wilks


Pp. 9-27

The Stanford Machine Translation Project

Yorick Wilks

This paper describes a system of semantic analysis and generation, programmed in LISP 1.5 and designed to pass from paragraph-length input in English to French via an interlingual representation. A wide class of English input forms is covered, with a vocabulary initially restricted to a few hundred words. The distinguishing features of the translation system are as follows. It translates phrase by phrase, with facilities for reordering phrases and establishing essential semantic connectivities between them; these constitute the interlingual representation to be translated. This matching is done without the explicit use of conventional syntax analysis. The French output strings are generated without the explicit use of a generative grammar. This is done by means of stereotypes: strings of French words, and functions evaluating to French words, which are attached to English word senses in the dictionary and built into the interlingual representation by the analysis routines.
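To make the stereotype mechanism concrete, here is a minimal sketch in Python (the original system was written in LISP 1.5, and none of the names or data below come from the paper; the word senses, the de_or_du helper, and the context keys are invented for illustration). It shows word senses carrying French output fragments, some literal strings and some functions evaluated against the interlingual context at generation time.

def de_or_du(context):
    # Choose a French form from a (hypothetical) gender mark carried
    # by the interlingual representation.
    return "du" if context.get("gender") == "masc" else "de la"

STEREOTYPES = {
    # English word sense -> stereotype: literal French words plus
    # functions evaluating to French words.
    ("out_of", "motion"): [de_or_du],
    ("drink", "action"): ["boire"],
    ("water", "liquid"): ["eau"],
}

def generate(sense_sequence, context):
    """Assemble a French string by evaluating each sense's stereotype."""
    out = []
    for sense in sense_sequence:
        for item in STEREOTYPES[sense]:
            out.append(item(context) if callable(item) else item)
    return " ".join(out)

print(generate([("drink", "action"), ("water", "liquid")], {"gender": "fem"}))
# -> "boire eau" (article insertion would need richer stereotypes)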

Pp. 29-59

An Intelligent Analyzer and Understander of English

Yorick Wilks

Pp. 61-82

A Preferential, Pattern-Seeking, Semantics for Natural Language Inference

Yorick Wilks

The paper describes the way in which a Preference Semantics system for natural language analysis and generation tackles a difficult class of anaphoric inference problems: those requiring either analytic (conceptual) knowledge of a complex sort or weak inductive knowledge of the course of events in the real world. The method employed converts all available knowledge to a canonical template form and endeavors to create chains of non-deductive inferences from the unknowns to the possible referents. Its method for this is consistent with the overall principle of ‘‘semantic preference’’ used to set up the original meaning representation.
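As a rough illustration of the chain-building idea (not the paper's actual machinery), the toy Python sketch below reduces knowledge to simple triples and searches for a short chain of plausible, non-deductive inferences linking an unknown to a candidate referent; the rules are invented, and "shortest chain wins" stands in crudely for the semantic-preference principle.

RULES = {
    # triple pattern -> plausible (not deductive) consequence
    ("crowd", "throw", "rocks"): ("crowd", "hurt", "target"),
    ("X", "hurt", "target"): ("target", "dislike", "X"),
}

def chains(start, goal, depth=3):
    """Enumerate inference chains from a start triple to a goal subject."""
    frontier = [[start]]
    for _ in range(depth):
        next_frontier = []
        for chain in frontier:
            last = chain[-1]
            for lhs, rhs in RULES.items():
                if lhs == last or lhs[1:] == last[1:]:  # crude matching
                    new = chain + [rhs]
                    if rhs[0] == goal:
                        yield new
                    next_frontier.append(new)
        frontier = next_frontier

# The shortest chain found is preferred, echoing semantic preference.
print(min(chains(("crowd", "throw", "rocks"), "target"), key=len))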

Pp. 83-102

Good and Bad Arguments About Semantic Primitives

Yorick Wilks

The paper surveys arguments from linguistics, artificial intelligence and philosophy about semantic primitives. It concentrates discussion on arguments of Charniak, Hayes, Putnam, and Bobrow and Winograd; and suggests that many of the arguments against semantic primitives are based on no clear view of what their defenders are arguing for. The proponents of semantic primitives must share the blame for this, as well as for supporting these entities with a range of highly specious arguments. However, the paper claims that, provided primitives are supported only by weak and commonsensical hypotheses, they can continue to play a valuable role in the analysis and processing of meaning.

Pp. 103-139

Making Preferences More Active

Yorick Wilks

The paper discusses the incorporation of richer semantic structures into the Preference Semantics system: they are called pseudo-texts and capture something of the information expressed in one type of frame proposed by Minsky (q.v.). However, they are in a format, and subject to rules of inference, consistent with earlier accounts of this system of language analysis and understanding. Their use is discussed in connection with the phenomenon of extended use: sentences in which semantic preferences are broken. It is argued that such situations are the norm, not the exception, in normal language use, and that a language understanding system must give some general treatment of them. A notion of sense projection is proposed, leading on to an alteration of semantic formulas (word sense representations) in the face of unexpected context by drawing information from the pseudo-texts. A possible implementation is described, based on a new semantic parser for the Preference Semantics system, which would cope with extended use by the methods suggested and answer questions about the process of analysis itself. It is argued that this would be a good context in which to place a language understander (rather than that of question-answering about a limited area of the real world, as is normal) and, moreover, that the sense projection mechanisms suggested would provide a test-bed on which the usefulness of frames for language understanding could be realistically assessed.
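The projection idea can be sketched schematically as follows, using the well-known broken-preference sentence ‘‘my car drinks gasoline’’; the class labels, formulas, and pseudo-text entries below are simplified inventions, not the paper's actual notation.

FORMULAS = {"drink": {"agent": "ANIMATE", "object": "LIQUID"}}
CLASSES = {"man": "ANIMATE", "car": "MACHINE", "gasoline": "LIQUID"}

# Pseudo-texts: stored facts about a word, in the same template format.
PSEUDO_TEXTS = {"car": [("car", "use", "gasoline")]}

def analyze(agent, verb, obj):
    if CLASSES[agent] == FORMULAS[verb]["agent"]:
        return "preferences satisfied"
    # Preference broken: search the agent's pseudo-text for a fact whose
    # object fits the verb's preferred object class, and project an
    # extended sense of the verb from it.
    for subj, rel, val in PSEUDO_TEXTS.get(agent, ()):
        if CLASSES.get(val) == FORMULAS[verb]["object"]:
            return f"projected sense: {verb} ~ {rel} ({subj} {rel} {val})"
    return "no projection found"

print(analyze("car", "drink", "gasoline"))
# -> projected sense: drink ~ use (car use gasoline)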

Pp. 141-166

Providing Machine Tractable Dictionary Tools

Yorick Wilks; Dan Fass; Cheng-ming Guo; James E. McDonald; Tony Plate; Brian M. Slator

Machine-readable dictionaries (MRDs) contain knowledge about language and the world essential for tasks in natural language processing (NLP). However, this knowledge, collected and recorded by lexicographers for human readers, is not presented in a manner suitable for MRDs to be used directly in NLP tasks. What is badly needed are machine-tractable dictionaries (MTDs): MRDs transformed into a format usable for NLP. This paper discusses three different but related large-scale computational methods for transforming MRDs into MTDs. The MRD used is the Longman Dictionary of Contemporary English (LDOCE). The three methods differ in the amount of knowledge they start with and the kinds of knowledge they provide. All require some handcoding of initial information but are largely automatic. Method I, a statistical approach, uses the least handcoding. It generates ‘‘relatedness’’ networks for words in LDOCE and presents a method for doing partial word sense disambiguation. Method II employs the most handcoding because it develops and builds lexical entries for a very carefully controlled defining vocabulary of 2,000 word senses (1,000 words). The payoff is that the method will provide an MTD containing highly structured semantic information. Method III requires the handcoding of a grammar and the semantic patterns used by its parser, but not the handcoding of any lexical material, because the method builds up lexical material from sources wholly within LDOCE. The information extracted is a set of sources of information, individually weak, but which can be combined to give a strong and determinate linguistic database.
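In the spirit of Method I, though far simpler than the actual implementation, the Python sketch below builds co-occurrence counts over a few invented definition texts and uses them to prefer one sense of an ambiguous word (LDOCE itself is not reproduced here; the mini-definitions are illustrative only).

from collections import Counter
from itertools import combinations

definitions = {
    "bank.1": "land along the side of a river",
    "bank.2": "a place where money is kept and paid out",
    "river": "a wide natural stream of water",
}

# Count how often word pairs co-occur within a definition text.
cooc = Counter()
for text in definitions.values():
    for a, b in combinations(sorted(set(text.split())), 2):
        cooc[(a, b)] += 1

def relatedness(u, v):
    return cooc[tuple(sorted((u, v)))]

# Partial disambiguation: which sense of "bank" is more related to "river"?
for sense, text in definitions.items():
    if sense.startswith("bank"):
        print(sense, sum(relatedness("river", w) for w in text.split()))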

Pp. 167-216

Belief Ascription, Metaphor, and Intensional Identification

Afzal Ballim; Yorick Wilks; John Barnden

This article discusses the extension of ViewGen, an algorithm derived for belief ascription, to the areas of intensional object identification and metaphor. ViewGen represents the beliefs of agents as explicit, partitioned proposition sets known as environments. Environments are convenient, even essential, for addressing important pragmatic issues of reasoning. The article concentrates on showing that the transfer of information in metaphors, intensional object identification, and ordinary, nonmetaphorical belief ascription can all be seen as different manifestations of a single environment-amalgamation process. The article also briefly discusses the extension of ViewGen to speech-act processing and the addition of a heuristic-based, relevance-determination procedure, and justifies the partitioning approach to belief ascription.
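A compact sketch of the amalgamation idea follows (these are not ViewGen's actual data structures; the propositions and the competing-belief test are invented for illustration): environments are modeled as plain proposition sets, and default ascription copies the system's propositions into an agent's environment unless the agent holds a competing belief.

system_beliefs = {("earth", "shape", "round"), ("paris", "capital_of", "france")}
agent_beliefs = {("earth", "shape", "flat")}  # explicit contrary belief

def ascribe(system, agent):
    """Default ascription by amalgamation: copy each system proposition
    into the agent's environment unless the agent already holds a
    competing value for the same (subject, attribute) pair."""
    held = {(s, a) for s, a, _ in agent}
    view = set(agent)
    for s, a, v in system:
        if (s, a) not in held:
            view.add((s, a, v))
    return view

print(sorted(ascribe(system_beliefs, agent_beliefs)))
# The agent keeps "flat"; the capital_of fact is ascribed by default.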

Pp. 217-253

Stone Soup and the French Room

Yorick Wilks

The paper argues that the IBM statistical approach to machine translation has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest, nor are its claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equaled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that IBM initially claimed would not be needed for MT. Hence, one might infer that its own proponents do not believe ‘‘pure’’ statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by its methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service by pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and of the need for large-scale gathering of data.

Pp. 255-265