Publications catalogue - books



Term Rewriting and Applications: 16th International Conference, RTA 2005, Nara, Japan, April 19-21, 2005, Proceedings

Jürgen Giesl (ed.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25596-3

Electronic ISBN

978-3-540-32033-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Rights information

© Springer-Verlag Berlin/Heidelberg 2005

Table of contents

Universal Algebra for Termination of Higher-Order Rewriting

Makoto Hamana

We show that the structures of binding algebras and Σ-monoids by Fiore, Plotkin and Turi are sound and complete models of Klop’s Combinatory Reduction Systems (CRSs). These algebraic structures play the same role for CRSs that universal algebra plays for term rewriting systems. By restricting the algebraic structures to those equipped with well-founded relations, we obtain a complete characterisation of terminating CRSs. The characterisation also extends naturally to rewriting on meta-terms, using the notion of Σ-monoids.
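For ordinary first-order rewriting, the algebraic idea behind this kind of termination model can be sketched concretely: interpret each symbol in a well-founded algebra (here, the naturals with >) and check that every rule strictly decreases. The TRS and polynomial interpretation below are illustrative choices of mine, not taken from the paper.

```python
# Hypothetical example: prove termination of a small TRS by interpreting
# every function symbol as a monotone polynomial over the naturals.
# Terms are tuples: ("var", name) or (symbol, arg1, arg2, ...).

# TRS:  add(0, y) -> y   and   add(s(x), y) -> s(add(x, y))
rules = [
    (("add", ("0",), ("var", "y")), ("var", "y")),
    (("add", ("s", ("var", "x")), ("var", "y")),
     ("s", ("add", ("var", "x"), ("var", "y")))),
]

# Interpretation into (N, >): [0] = 1, [s](a) = a + 1, [add](a, b) = 2a + b + 1
interp = {
    "0":   lambda: 1,
    "s":   lambda a: a + 1,
    "add": lambda a, b: 2 * a + b + 1,
}

def value(term, env):
    if term[0] == "var":
        return env[term[1]]
    return interp[term[0]](*(value(arg, env) for arg in term[1:]))

def decreases(lhs, rhs):
    # Heuristic check on sample assignments; a real proof reasons over all
    # naturals (here the chosen polynomials make the strict decrease obvious).
    samples = [{"x": x, "y": y} for x in range(6) for y in range(6)]
    return all(value(lhs, e) > value(rhs, e) for e in samples)

print(all(decreases(l, r) for l, r in rules))  # True
```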

Pp. 135-149

Quasi-interpretations and Small Space Bounds

Guillaume Bonfante; Jean-Yves Marion; Jean-Yves Moyen

Quasi-interpretations are a useful tool for controlling the resource usage of term rewriting systems, in either time or space. They not only combine well with path orderings and provide characterizations of standard complexity classes, but also give hints for optimizing the program. Moreover, the existence of a quasi-interpretation is decidable.

In this paper, we present further characterizations of complexity classes using quasi-interpretations. We mainly focus on small space-bounded complexity classes. On the one hand, by restricting quasi-interpretations to sums (that is, allowing only affine quasi-interpretations), we obtain a characterization of . On the other hand, a strong tiering discipline on programs together with quasi-interpretations yields a characterization of .

Lastly, we give two new characterizations of : in the first, the quasi-interpretation has to be strictly decreasing on each rule; in the second, some linearity constraints are added to the system, but no assumption about the termination proof is made.
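The flavor of a quasi-interpretation can be illustrated on a tiny TRS: a sum-only (affine) assignment only needs a non-strict decrease on each rule, plus the subterm property. The system and the assignment below are hypothetical illustrations of mine, not from the paper.

```python
# Hypothetical sum-only (affine) quasi-interpretation for addition.
# A quasi-interpretation needs [f](..., a, ...) >= a (subterm property)
# and a non-strict decrease [lhs] >= [rhs] on every rule.

rules = [
    (("add", ("0",), ("var", "y")), ("var", "y")),
    (("add", ("s", ("var", "x")), ("var", "y")),
     ("s", ("add", ("var", "x"), ("var", "y")))),
]

qi = {
    "0":   lambda: 0,
    "s":   lambda a: a + 1,
    "add": lambda a, b: a + b,   # a pure sum, i.e. affine
}

def value(term, env):
    if term[0] == "var":
        return env[term[1]]
    return qi[term[0]](*(value(arg, env) for arg in term[1:]))

def quasi_decreases(lhs, rhs):
    samples = [{"x": x, "y": y} for x in range(6) for y in range(6)]
    return all(value(lhs, e) >= value(rhs, e) for e in samples)

def subterm_property():
    ns = range(6)
    return (all(qi["s"](a) >= a for a in ns)
            and all(qi["add"](a, b) >= max(a, b) for a in ns for b in ns))

print(all(quasi_decreases(l, r) for l, r in rules) and subterm_property())  # True
```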

Pp. 150-164

A Sufficient Completeness Reasoning Tool for Partial Specifications

Joe Hendrix; Manuel Clavel; José Meseguer

We present the Maude sufficient completeness tool, which explicitly supports sufficient completeness reasoning for partial conditional specifications with sorts and subsorts, where the domains of functions are defined by conditional memberships. Our tool consists of two main components: (i) a sufficient completeness analyzer that generates a set of proof obligations which, if discharged, ensure sufficient completeness; and (ii) Maude’s inductive theorem prover (ITP), used as a backend to try to discharge those proof obligations automatically.
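Stripped of sorts, subsorts and conditions, the underlying question — does every constructor instance of a defined function match some rule? — can be approximated by brute force. This sketch is in no way the Maude tool's mechanism, just an illustration on a hypothetical specification.

```python
# Brute-force sketch of sufficient-completeness checking: every call of the
# defined symbol "half" on a constructor term must match some left-hand side.
# Terms are tuples: ("var", name) or (symbol, arg1, ...).

def nats(depth):
    """All constructor terms 0, s(0), s(s(0)), ... up to the given depth."""
    terms = [("0",)]
    for _ in range(depth):
        terms.append(("s", terms[-1]))
    return terms

# half(0) -> 0, half(s(0)) -> 0, half(s(s(x))) -> s(half(x))
lhss = [
    ("half", ("0",)),
    ("half", ("s", ("0",))),
    ("half", ("s", ("s", ("var", "x")))),
]

def matches(pattern, term):
    if pattern[0] == "var":
        return True
    return (pattern[0] == term[0]
            and all(matches(p, t) for p, t in zip(pattern[1:], term[1:])))

def sufficiently_complete(depth=8):
    # A real check reasons symbolically; here we just enumerate up to a bound.
    return all(any(matches(l, ("half", t)) for l in lhss) for t in nats(depth))

print(sufficiently_complete())  # True
```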

Pp. 165-174

Tyrolean Termination Tool

Nao Hirokawa; Aart Middeldorp

This paper describes the Tyrolean Termination Tool (TTT in the sequel), the successor of the Tsukuba Termination Tool [12]. We describe the differences between the two and explain in some detail the new features, some of which are not (yet) available in any other termination tool. TTT is a tool for automatically proving termination of rewrite systems, based on the dependency pair method of Arts and Giesl [3]. It produces high-quality output and has a convenient web interface. The tool is available at

TTT incorporates several new improvements to the dependency pair method. In addition, it is now possible to run the tool on a collection of rewrite systems. Moreover, besides ordinary (first-order) rewrite systems, the tool accepts simply-typed applicative rewrite systems, which are transformed into ordinary rewrite systems by the recent method of Aoto and Yamada [2].

In the next section we describe the differences between the semi-automatic mode and the Tsukuba Termination Tool. Section 3 describes the fully automatic mode. In Section 4 we show a termination proof of a simply-typed applicative system obtained with the tool. In Section 5 we describe how to input a collection of rewrite systems and how to interpret the resulting output. Some implementation details are given in Section 6. The final section contains a short comparison with other tools for automatically proving termination.
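The starting point of the dependency pair method can be sketched in a few lines: mark the defined root symbols and collect, for each rule, the calls to defined symbols in its right-hand side. The TRS below is my own example, not one from the paper.

```python
# Sketch of dependency pair extraction from a TRS.
# Terms are tuples: ("var", name) or (symbol, arg1, arg2, ...).

# add(0, y) -> y ; add(s(x), y) -> s(add(x, y))
rules = [
    (("add", ("0",), ("var", "y")), ("var", "y")),
    (("add", ("s", ("var", "x")), ("var", "y")),
     ("s", ("add", ("var", "x"), ("var", "y")))),
]

defined = {l[0] for l, _ in rules}        # root symbols of left-hand sides

def subterms(t):
    yield t
    if t[0] != "var":
        for child in t[1:]:
            yield from subterms(child)

def mark(t):                               # "add" becomes the marked "add#"
    return (t[0] + "#",) + t[1:]

def dependency_pairs(rules):
    return [(mark(l), mark(s))
            for l, r in rules
            for s in subterms(r)
            if s[0] in defined]

for lhs, rhs in dependency_pairs(rules):
    print(lhs, "->", rhs)                  # the single pair add#(s(x),y) -> add#(x,y)
```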

Pp. 175-184

Call-by-Value Is Dual to Call-by-Name – Reloaded

Philip Wadler

We consider the relation of the dual calculus of Wadler (2003) to the λμ-calculus of Parigot (1992). We give translations from the λμ-calculus into the dual calculus and back again. The translations form an equational correspondence as defined by Sabry and Felleisen (1993). In particular, translating from λμ to dual and then ‘reloading’ from dual back into λμ yields a term equal to the original term. Composing the translations with duality on the dual calculus yields an involutive notion of duality on the λμ-calculus. A previous notion of duality on the λμ-calculus was suggested by Selinger (2001), but it is not involutive.

This paper uses color to clarify the relation of types and terms, and of source and target calculi. If the URL below is not in blue, please download the color version or google ‘wadler dual reloaded’.

Pp. 185-203

λμ-Calculus and Duality: Call-by-Name and Call-by-Value

Jérôme Rocheteau

Under the extension of the Curry-Howard correspondence to classical logic, Gentzen’s NK and LK systems can be seen as syntax-directed systems of simple types, for Parigot’s λμ-calculus and Curien-Herbelin’s λ̄μμ̃-calculus respectively. We aim at showing their computational equivalence. We define translations between these calculi. We prove simulation theorems for an undirected evaluation as well as for call-by-name and call-by-value evaluations.

Pp. 204-218

Reduction in a Linear Lambda-Calculus with Applications to Operational Semantics

Alex Simpson

We study beta-reduction in a linear lambda-calculus derived from Abramsky’s linear combinatory algebras. Reductions are classified according to whether the redex lies in the computationally active part of a term (“surface” reductions) or is suspended within the body of a thunk (“internal” reductions). If surface reduction is considered on its own, then any normalizing term is strongly normalizing. More generally, if a term can be reduced to surface normal form by a combined sequence of surface and internal reductions, then every combined reduction sequence from the term contains only finitely many surface reductions. We apply these results to the operational semantics of Lily, a second-order linear lambda-calculus with recursion introduced by Bierman, Pitts and Russo, for which we give simple proofs that call-by-value, call-by-name and call-by-need contextual equivalences coincide.
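The surface/internal distinction can be made concrete with a toy term syntax in which thunks are explicit nodes: a redex is internal exactly when it sits inside a thunk. The encoding below is mine, not the paper's.

```python
# Toy classifier of beta-redexes as "surface" or "internal".
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a) | ("thunk", t)

def redexes(t, inside_thunk=False, path=()):
    """Yield (path, kind) for each beta-redex in t."""
    if t[0] == "app":
        if t[1][0] == "lam":
            yield path, "internal" if inside_thunk else "surface"
        yield from redexes(t[1], inside_thunk, path + (0,))
        yield from redexes(t[2], inside_thunk, path + (1,))
    elif t[0] == "lam":
        yield from redexes(t[2], inside_thunk, path + (0,))
    elif t[0] == "thunk":
        # everything under a thunk is suspended, hence internal
        yield from redexes(t[1], True, path + (0,))

I = ("lam", "x", ("var", "x"))
# (\y. y) applied at the surface, with a suspended redex (I z) inside a thunk
term = ("app", ("lam", "y", ("var", "y")),
               ("thunk", ("app", I, ("var", "z"))))
print([kind for _, kind in redexes(term)])  # ['surface', 'internal']
```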

Pp. 219-234

Higher-Order Matching in the Linear Lambda Calculus in the Absence of Constants Is NP-Complete

Ryo Yoshinaka

A lambda term is linear if every bound variable occurs exactly once. The same constant may occur more than once in a linear term. It is known that higher-order matching in the linear lambda calculus is NP-complete (de Groote 2000), even if each unknown occurs exactly once (Salvati and de Groote 2003). Salvati and de Groote (2003) also claim that the interpolation problem, a more restricted kind of matching problem which has just one occurrence of just one unknown, is NP-complete in the linear lambda calculus. In this paper, we correct a flaw in Salvati and de Groote’s (2003) proof of this claim, and prove that NP-hardness still holds if we exclude constants from problem instances. Thus, multiple occurrences of constants do not play an essential role for NP-hardness of higher-order matching in the linear lambda calculus.
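The linearity condition itself is easy to check mechanically: each lambda must bind a variable occurring exactly once in its body, while constants may repeat freely. A small sketch with an ad hoc term encoding, assuming bound variable names are pairwise distinct (no shadowing):

```python
from collections import Counter

# Terms: ("var", x) | ("const", c) | ("lam", x, body) | ("app", f, a)

def occurrences(t, counts):
    """Count free-variable occurrences in t (constants are ignored)."""
    if t[0] == "var":
        counts[t[1]] += 1
    elif t[0] == "lam":
        occurrences(t[2], counts)
    elif t[0] == "app":
        occurrences(t[1], counts)
        occurrences(t[2], counts)

def is_linear(t):
    """Every lambda binds a variable used exactly once in its body."""
    if t[0] == "lam":
        counts = Counter()
        occurrences(t[2], counts)
        return counts[t[1]] == 1 and is_linear(t[2])
    if t[0] == "app":
        return is_linear(t[1]) and is_linear(t[2])
    return True

# \x. (x c) c   -- linear: x occurs once; the constant c occurs twice
t1 = ("lam", "x", ("app", ("app", ("var", "x"), ("const", "c")),
                   ("const", "c")))
# \x. x x       -- not linear: x occurs twice
t2 = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
print(is_linear(t1), is_linear(t2))  # True False
```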

Pp. 235-249

Localized Fairness: A Rewriting Semantics

José Meseguer

Fairness is a rich phenomenon: we have weak and strong fairness, and many different variants of those concepts (transition fairness, object/process fairness, actor fairness, position fairness, and so on), associated with specific models or languages but lacking a common theoretical framework. This work uses rewriting semantics as a common theoretical framework for fairness. A common thread tying together the different fairness variants is the notion of localization: fairness must often be localized to specific entities in a system. For systems specified as rewrite theories, localization can be formalized by making explicit the variables in a rule corresponding to the items that must be localized. In this way, localized fairness becomes a parametric notion that can be easily specialized to model a very wide range of fairness phenomena. After formalizing these concepts and proving basic results, the paper studies in detail both a relative and an absolute LTL semantics for rewrite theories with localized fairness requirements, and shows that it is always possible to pass from the relative to the absolute semantics by means of a theory transformation. This allows using a standard LTL model checker to check properties under fairness assumptions.

Pp. 250-263

Partial Inversion of Constructor Term Rewriting Systems

Naoki Nishida; Masahiko Sakai; Toshiki Sakabe

Partial-inversion compilers generate programs that compute some unknown inputs of a given program from a given output and the remaining inputs, whose values are already given. In this paper, we propose a partial-inversion compiler for constructor term rewriting systems. The compiler automatically generates a conditional term rewriting system, and then unravels it into an unconditional system. To improve the efficiency of inverse computation, we show that the innermost strategy can be used to obtain all solutions whenever the generated system is right-linear.
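As a toy instance of partial inversion, take addition defined by add(0, y) -> y and add(s(x), y) -> s(add(x, y)): fixing the output and the first input, a hand-derived inverse system recovers the second input. The code below is my own illustration, not the compiler from the paper.

```python
# Hand-derived partial inverse of addition on Peano numerals.
# Inverting add w.r.t. its first argument gives the system
#   add_inv(0, z)       -> z
#   add_inv(s(x), s(z)) -> add_inv(x, z)
# so add_inv(x, z) computes the y with add(x, y) = z, when it exists.

ZERO = ("0",)

def s(t):
    return ("s", t)

def add_inv(x, z):
    if x == ZERO:                      # add(0, y) = y, hence y = z
        return z
    if x[0] == "s" and z[0] == "s":    # peel one s from both sides
        return add_inv(x[1], z[1])
    raise ValueError("no solution: x exceeds z")

def peano(n):
    """n as s(s(...s(0)))."""
    return ZERO if n == 0 else s(peano(n - 1))

def count(t):
    """Back from a Peano numeral to an int."""
    return 0 if t == ZERO else 1 + count(t[1])

# Given output 5 and first input 2, recover the second input 3.
print(count(add_inv(peano(2), peano(5))))  # 3
```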

Pp. 264-278