Publications catalog - books

Theoretical Aspects of Computing: ICTAC 2007: 4th International Colloquium, Macau, China, September 26-28, 2007. Proceedings

Cliff B. Jones; Zhiming Liu; Jim Woodcock (eds.)

Conference: 4th International Colloquium on Theoretical Aspects of Computing (ICTAC). Macau, China, September 26-28, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-75290-5

Electronic ISBN

978-3-540-75292-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Algebraic Semantics for Compensable Transactions

Jing Li; Huibiao Zhu; Jifeng He

This paper presents the algebraic semantics of a novel transactional language, the t-calculus. The language focuses on modeling long-running transactions in terms of compensable transactions, showing how compensations can be orchestrated to ensure atomicity. The typical operators of sequential and parallel composition are redefined so that the corresponding compensations are activated in a suitable order whenever a failure occurs at a later stage. In addition, we investigate further transactional operators, such as speculative choice, exception handling, alternative forwarding and programmable compensation. Judicious use of these constructs helps in setting up flexible and effective business processes. We present a clear algebraic semantics for the t-calculus and derive its operational semantics mechanically from a given derivation strategy. This work provides a foundation for the optimization and implementation of the language.
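The compensation-ordering idea is easy to see in code. Below is a minimal Python sketch of the generic saga-style pattern, assuming nothing about the t-calculus itself: each step pairs a forward action with a compensation, and a failure at a later step triggers the compensations of the committed steps in reverse order. The booking example is hypothetical.

```python
# A minimal sketch of compensation ordering under sequential composition;
# this is the generic saga-style pattern, not the t-calculus itself.

class TransactionFailed(Exception):
    pass

def run_sequence(steps):
    """steps: list of (action, compensation) pairs.
    On failure, compensate committed steps in reverse order."""
    committed = []
    for action, compensate in steps:
        try:
            action()
            committed.append(compensate)
        except Exception:
            for comp in reversed(committed):   # most recent step undone first
                comp()
            raise TransactionFailed("sequence aborted and compensated")

# Hypothetical booking example: payment fails, so the flight is cancelled.
log = []
def pay():
    raise RuntimeError("payment declined")

try:
    run_sequence([
        (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
        (pay, lambda: log.append("refund payment")),
    ])
except TransactionFailed:
    pass
print(log)   # ['book flight', 'cancel flight']
```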

Pp. 306-321

Axiomatizing Extended Temporal Logic Fragments Via Instantiation

Wanwei Liu; Ji Wang; Wei Dong; Huowang Chen

ETLs are temporal logics employing ω-automata as temporal connectives. This paper presents sound and complete axiom systems for ETL_l, ETL_f, and ETL_r, respectively. Axioms and rules reflecting the temporal behaviors of looping, finite and repeating automaton connectives are provided. Moreover, by encoding temporal operators into automaton connectives and instantiating the axioms and rules relating to automaton connectives, one may derive axiom systems for given ETL fragments.
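As a concrete illustration of the encoding mentioned in the last sentence, here is how the standard "eventually" and "always" operators become automaton connectives in Wolper-style ETL (standard material; the paper's exact notation may differ).

```latex
% "Eventually" as a finite-acceptance (ETL_f) connective: the automaton
% A_F loops on a_1 and may move to an accepting state on a_2, so
% A_F(true, phi) holds iff phi holds at some position.
\[
  \mathcal{A}_F:\quad q_0 \xrightarrow{\,a_1\,} q_0,\qquad
  q_0 \xrightarrow{\,a_2\,} q_1 \ \text{(accepting)},\qquad
  \mathbf{F}\varphi \equiv \mathcal{A}_F(\mathit{true},\varphi).
\]
% "Always" as a looping (ETL_l) connective: a single state looping on a_1,
% so an infinite run exists iff phi holds at every position.
\[
  \mathcal{A}_G:\quad q_0 \xrightarrow{\,a_1\,} q_0,\qquad
  \mathbf{G}\varphi \equiv \mathcal{A}_G(\varphi).
\]
```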

Pp. 322-336

Deciding Weak Bisimilarity of Normed Context-Free Processes Using Tableau

Xinxin Liu; Haiyan Chen

Deciding strong and weak bisimilarity of context-free processes is challenging because of the infinite state spaces of such processes. Deciding weak bisimilarity is harder, since the usual decomposition property that holds for strong bisimilarity fails. Hirshfeld proposed the notion of a bisimulation tree to prove that weak bisimulation is decidable for totally normed BPA and BPP processes. Inspired by his idea of decomposition, in this paper we present a tableau method for deciding weak bisimilarity of totally normed context-free processes. Compared with Hirshfeld's bisimulation tree method, our method is more intuitive and more direct.
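The overall shape of such a tableau decision method can be sketched generically; the expansion rules for weak bisimilarity, and the finiteness argument that makes the search terminate, are the paper's contribution and are not reproduced here.

```python
# A generic tableau-search skeleton. For bisimulation-style tableaux, a goal
# that repeats an earlier node on the current branch is a successful leaf.
# Termination relies on a finiteness argument (e.g. the one for totally
# normed processes) that this sketch does not supply.

def tableau(goal, expand, branch=frozenset()):
    """goal: a hashable obligation, e.g. a pair of process terms.
    expand(goal): alternative lists of subgoals (a disjunction of
    conjunctions). Returns True iff a successful tableau exists."""
    if goal in branch:                      # repeated node: successful leaf
        return True
    extended = branch | {goal}
    return any(all(tableau(g, expand, extended) for g in subgoals)
               for subgoals in expand(goal))
```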

Pp. 337-350

Linear Context Free Languages

Roussanka Loukanova

In this paper, I present the class of linear context-free languages (LCFLs) via a class of non-deterministic one-way two-head (read-only) automata, called non-deterministic linear automata (NLA). At the beginning of the work of an NLA, the reading heads are positioned at the opposite ends of the given input string. Each head can then move in one direction only: the left head from left to right, and the right head from right to left. I give formal definitions of two non-deterministic versions of the linear automata, without and with ε-transitions, and of their corresponding computations. Deterministic linear automata model unambiguous structural analysers, although the class of languages they recognize does not coincide with the class of deterministic linear languages recognized by deterministic push-down machines. I compare the linear automata with other models of LCFLs. In particular, I consider a subclass of unambiguous linear context-free languages and define corresponding linear automata, in deterministic and non-deterministic variants, serving as efficient parsing tools for them.
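To make the head discipline concrete, here is a small deterministic sketch on a classic linear context-free language, the even-length palindromes {ww^R}; it illustrates the two opposite-moving heads only and is not the paper's formal NLA definition.

```python
# Two read-only heads start at opposite ends of the input; the left head
# moves right only, the right head moves left only, and the machine accepts
# if the heads always read equal symbols until they cross.

def accepts_even_palindrome(s: str) -> bool:
    left, right = 0, len(s) - 1          # heads start at opposite ends
    while left < right:
        if s[left] != s[right]:          # heads disagree: reject
            return False
        left += 1                        # left head moves right only
        right -= 1                       # right head moves left only
    return left == right + 1             # heads crossed exactly: even length

assert accepts_even_palindrome("abba")
assert not accepts_even_palindrome("abab")
assert not accepts_even_palindrome("aba")   # odd length is rejected here
```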

Pp. 351-365

FM for FMS: Lessons Learned While Applying Formal Methods to the Study of Flexible Manufacturing Systems

Andrea Matta; Matteo Rossi; Paola Spoletini; Dino Mandrioli; Quirico Semeraro; Tullio Tolio

In the past few years, two research groups of Politecnico di Milano, whose activities center on formal methods for the production of critical software and on industrial manufacturing systems, respectively, carried out a joint research project. Among other things, the project aimed at introducing formal methods into the design and analysis phases of industrial production systems (especially so-called Flexible Manufacturing Systems, FMSs) as a complement to the tools used in current practice in the field. This paper reports on the challenges the research groups faced during the project and on the lessons learned in the process.

Pp. 366-380

On Equality Predicates in Algebraic Specification Languages

Masaki Nakamura; Kokichi Futatsugi

The execution of OBJ algebraic specification languages is based on term rewriting systems (TRSs), an efficient mechanism for equational reasoning. We focus on the equality predicate implemented in OBJ languages, which is used to test the equality of given terms by TRS. Unfortunately, it is well known that the current execution engine of OBJ languages with the equality predicate is not sound. To solve this problem, we define a modular term rewriting system (MTRS), which is suitable for the module system of OBJ languages, and propose a new equality predicate based on MTRS.
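One well-known way an equality-by-normalization check can go wrong is sketched below (an illustrative failure mode, not necessarily the precise soundness problem the paper analyzes): if the rules the engine rewrites with do not decide the whole equational theory, two provably equal terms can have distinct normal forms.

```python
# Illustrative only: deciding equality by comparing normal forms is unsound
# when the rules used for rewriting do not decide the equational theory.

RULES = {"a": "b"}   # the engine rewrites with a -> b only, while the
                     # specification's theory contains both a = b and a = c,
                     # so b and c are provably equal (b = a = c).

def normal_form(t):
    while t in RULES:
        t = RULES[t]
    return t

def eq_predicate(s, t):
    """Naive built-in equality: reduce both sides and compare."""
    return normal_form(s) == normal_form(t)

print(eq_predicate("b", "c"))   # False, although b = c holds in the theory
```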

Pp. 381-395

Data-Distributions in PowerList Theory

Virginia Niculescu

PowerList theory is well suited to expressing recursive, data-parallel algorithms. Its high level of abstraction ensures simple and correct design of parallel programs. We try to reconcile this high level of abstraction with performance by introducing data-distributions into the theory. One advantage of formally introducing distributions is that it allows us to evaluate costs, with the number of available processors treated as a parameter. The analysis of the possible distributions for a given function may also lead to better design decisions. Another important advantage is that, after the introduction of data-distributions, mappings onto real parallel architectures with a limited number of processing elements can be analyzed. Case studies for the Fast Fourier Transform and rank-sorting are given.
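The connection between PowerList constructors and data-distributions can be sketched briefly: a PowerList has length 2^n and can be deconstructed either by "tie" (into halves) or by "zip" (into even- and odd-indexed elements). The names and framing below are illustrative, not the paper's formal notation.

```python
# Splitting with "tie" corresponds to a block distribution of data over
# processors; splitting with "zip" corresponds to a cyclic distribution.

def untie(p):
    """Inverse of tie: split into left and right halves (block split)."""
    mid = len(p) // 2
    return p[:mid], p[mid:]

def unzip(p):
    """Inverse of zip: split into even- and odd-indexed elements (cyclic split)."""
    return p[0::2], p[1::2]

p = [0, 1, 2, 3, 4, 5, 6, 7]
print(untie(p))   # ([0, 1, 2, 3], [4, 5, 6, 7])  -> block distribution
print(unzip(p))   # ([0, 2, 4, 6], [1, 3, 5, 7])  -> cyclic distribution
# FFT-style algorithms recurse on the unzip split, so a cyclic distribution
# avoids data movement at the top levels of the recursion.
```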

Pp. 396-409

Quasi-interpretation Synthesis by Decomposition

Guillaume Bonfante; Jean-Yves Marion; Romain Péchoux

Quasi-interpretation analysis belongs to the field of implicit computational complexity and has proved useful for the resource analysis of first-order functional programs, whether terminating or not. In this paper, we tackle the issue of program decomposition with respect to quasi-interpretation analysis. For that purpose, we use the notion of modularity. Firstly, modularity decreases the complexity of the quasi-interpretation search algorithms. Secondly, modularity increases the intensionality of the quasi-interpretation method, that is, the number of programs it captures. Finally, we take advantage of modularity conditions to smoothly extend quasi-interpretations to higher-order programs.

We study the modularity of quasi-interpretations through the notions of constructor-sharing and hierarchical unions of programs. We show that, in both cases, the existence of quasi-interpretations is no longer a modular property. However, we can still certify the complexity of programs by showing, under some restrictions, that the size of the values computed by a program remains polynomially bounded in the size of the inputs.
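For readers unfamiliar with the technique, here is a standard one-rule quasi-interpretation example from the general QI literature (not taken from this paper).

```latex
% Assign each symbol a monotone function over the nonnegative reals; every
% rule l -> r must satisfy [l] >= [r] (plus a subterm condition).
%
% Rule: append(cons(x, xs), ys) -> cons(x, append(xs, ys))
\[
  \llbracket \mathit{cons} \rrbracket(x,y) = x + y + 1, \qquad
  \llbracket \mathit{append} \rrbracket(x,y) = x + y
\]
\[
  \llbracket \mathit{append}(\mathit{cons}(x,xs), ys) \rrbracket
    = x + xs + 1 + ys
    = \llbracket \mathit{cons}(x, \mathit{append}(xs,ys)) \rrbracket,
\]
% so the rule is (weakly) decreasing, and the size of any value computed by
% append is polynomially (here additively) bounded in the size of the inputs.
```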

Pp. 410-424

Composing Transformations to Optimize Linear Code

Thomas Noll; Stefan Rieger

We study the effect of an optimizing algorithm for straight-line code which first constructs a directed acyclic graph (DAG) representing the given program and then generates code from it. We show that this algorithm produces optimal code with respect to the classical transformations known as Constant Folding, Common Subexpression Elimination, and Dead Code Elimination. In contrast to the DAG-based algorithm, these classical transformations are also applicable to iterative code containing loops. We show that the graph-based algorithm essentially corresponds to a combination of the three classical optimizations in conjunction with Copy Propagation. Thus, apart from its theoretical importance, this result is relevant for practical compiler design, as it potentially allows the optimization power of the graph-based algorithm to be exploited for non-linear code as well.
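The following compact Python sketch shows the kind of DAG construction the abstract refers to, in the spirit of classic local value numbering (illustrative; not the paper's exact algorithm): hashing expressions on (operator, operand nodes) yields common-subexpression sharing, constant operands are folded, and nodes unreachable from the live outputs are dead code.

```python
def build_dag(block, outputs):
    """block: list of (dest, op, arg1, arg2) with args being variable names
    or integer constants; op in {'+', '*'}. Returns the DAG nodes reachable
    from the variables named in `outputs`."""
    nodes = {}   # structural key -> node id (gives CSE for free)
    value = {}   # node id -> known constant (for folding)
    var = {}     # variable name -> node id of its current definition
    defs = {}    # node id -> key

    def intern(key):
        if key not in nodes:
            nid = len(defs)
            nodes[key] = nid
            defs[nid] = key
            if key[0] == 'const':
                value[nid] = key[1]
        return nodes[key]

    def node_of(arg):
        if isinstance(arg, int):
            return intern(('const', arg))
        if arg not in var:                       # an input-variable leaf
            var[arg] = intern(('var', arg))
        return var[arg]

    for dest, op, a, b in block:
        i, j = node_of(a), node_of(b)
        if i in value and j in value:            # constant folding
            c = value[i] + value[j] if op == '+' else value[i] * value[j]
            var[dest] = intern(('const', c))
        else:
            var[dest] = intern((op, i, j))       # hashing yields CSE

    live, stack = set(), [node_of(v) for v in outputs]
    while stack:                                 # DCE: keep reachable nodes
        n = stack.pop()
        if n not in live:
            live.add(n)
            if defs[n][0] in ('+', '*'):
                stack.extend(defs[n][1:])
    return {n: defs[n] for n in sorted(live)}

block = [('t1', '+', 2, 3),        # folded to the constant 5
         ('t2', '+', 'x', 'y'),
         ('t3', '+', 'x', 'y'),    # CSE: same node as t2
         ('t4', '*', 't2', 't1'),
         ('dead', '+', 't3', 1)]   # eliminated: not live at the end
print(build_dag(block, outputs=['t4']))
```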

Pp. 425-439

Building Extended Canonizers by Graph-Based Deduction

Silvio Ranise; Christelle Scharff

We consider the problem of efficiently building extended canonizers, which are capable of solving the uniform word problem for some first-order theories. These reasoning artifacts were introduced in previous work to overcome the lack of modularity of Shostak's combination schema while retaining its efficiency. It is known that extended canonizers can be modularly combined to solve the uniform word problem in unions of theories. Unfortunately, little is known about efficiently implementing such canonizers for component theories, especially those of interest for verification, such as the theories of uninterpreted function symbols or lists. In this paper, we investigate this problem by adapting and combining work on rewriting-based decision procedures for satisfiability in first-order theories with SER graphs, a graph-based method defined for abstract congruence closure. Our goal is to build graph-based extended canonizers for theories that are relevant to verification. Our graph-based approach addresses implementation issues that were left open by previous rewriting-based decision procedures and that are important for arguing the viability of extended canonizers.
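As background, the following is a minimal congruence-closure sketch for ground equations over uninterpreted function symbols, the textbook core that abstract congruence closure (and hence the SER-graph method) refines; it is not the paper's data structure.

```python
# Union-find plus congruence propagation: merge f(a) with f(b) whenever the
# representatives of a and b coincide, until no new merges occur.

def congruence_closure(terms, equations):
    """terms: list of ground terms, each a symbol string or a tuple
    (f, t1, ..., tn) whose arguments also appear in `terms`.
    equations: pairs of terms asserted equal.
    Returns a function deciding s = t for the listed terms."""
    index = {t: i for i, t in enumerate(terms)}
    parent = list(range(len(terms)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for s, t in equations:
        union(index[s], index[t])

    changed = True
    while changed:                               # propagate congruences
        changed = False
        sig = {}                                 # (f, arg reps) -> witness
        for t in terms:
            if isinstance(t, tuple):
                key = (t[0],) + tuple(find(index[a]) for a in t[1:])
                if key in sig and find(sig[key]) != find(index[t]):
                    union(sig[key], index[t])
                    changed = True
                else:
                    sig.setdefault(key, index[t])
    return lambda s, t: find(index[s]) == find(index[t])

# Example: from a = b, derive f(f(a)) = f(f(b)).
terms = ['a', 'b', ('f', 'a'), ('f', 'b'), ('f', ('f', 'a')), ('f', ('f', 'b'))]
eq = congruence_closure(terms, [('a', 'b')])
print(eq(('f', ('f', 'a')), ('f', ('f', 'b'))))  # True
```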

Pp. 440-454