
Variations on Constants: Flow Analysis of Sequential and Parallel Programs

Markus Müller-Olm

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none
Year of publication: 2006
Available via: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-45385-7

Electronic ISBN

978-3-540-45386-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

1. Introduction

Markus Müller-Olm

Constant propagation is one of the most widely used optimizations in practice (cf. [2, 30, 56]). Its goal is to replace expressions that always yield a unique constant value at run-time by this value. This transformation can both speed up execution and reduce code size by replacing a computation or memory access by a load-constant instruction. Often constant propagation enables powerful further program transformations. An example is branch elimination: if the condition guarding a branch of a conditional can be identified as being constantly false, the whole code in this branch is dynamically unreachable and can be removed.

Pp. 1-11
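The transformation sketched in the abstract above can be illustrated on a toy three-address IR. The representation (tuples for statements and expressions, a dictionary as constant environment) is a hypothetical illustration, not the book's formal model:

```python
# Toy constant propagation with branch elimination (hypothetical IR).
# Statements: ('assign', var, expr) or ('if', guard, then_block, else_block);
# expressions: int literals, variable names, or (op, lhs, rhs) tuples.

def eval_const(expr, env):
    """Return the constant value of expr under env, or None if unknown."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env.get(expr)                   # None: variable not constant
    op, a, b = expr
    va, vb = eval_const(a, env), eval_const(b, env)
    if va is None or vb is None:
        return None
    return {'+': va + vb, '*': va * vb}[op]

def propagate(stmts, env=None):
    """Fold constant right-hand sides; drop branches whose guard folds to a
    constant (value 0 counts as false)."""
    env = {} if env is None else env
    out = []
    for st in stmts:
        if st[0] == 'assign':
            _, var, expr = st
            c = eval_const(expr, env)
            env[var] = c
            out.append(('assign', var, c if c is not None else expr))
        else:                                  # ('if', guard, then_b, else_b)
            _, guard, then_b, else_b = st
            g = eval_const(guard, env)
            if g is not None:                  # branch elimination
                out.extend(propagate(then_b if g else else_b, env))
            else:
                out.append(st)
    return out

prog = [('assign', 'x', 2),
        ('assign', 'y', ('+', 'x', 3)),        # folds to 5
        ('if', ('+', 'y', -5),                 # guard folds to 0: dead branch
         [('assign', 'z', 1)],
         [('assign', 'z', ('*', 'y', 2))])]
print(propagate(prog))
```

Here the conditional disappears entirely: the guard is constantly false, so only the (folded) else-branch survives.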

2. A Hierarchy of Constants

Markus Müller-Olm

Constant propagation aims at detecting expressions in programs that always yield a unique constant value at run-time. Replacing constant expressions by their value is one of the most widely used optimizations in practice (cf. [2, 30, 56]). Unfortunately, the constant propagation problem is undecidable even if the interpretation of branches is completely ignored, like in the common model of non-deterministic flow graphs where every program path is considered executable.

Pp. 13-29
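The non-deterministic flow-graph model mentioned above can be made concrete in a few lines: every path counts as executable, and an expression is a (must-)constant if it evaluates to the same value on all paths. The path encoding below is a hypothetical illustration:

```python
# In the non-deterministic model an expression is a must-constant iff it
# yields one single value over all program paths.  Here x + y is a
# must-constant although neither x nor y is constant on its own.

paths = [
    [('x', 2), ('y', 3)],   # run through one branch of a conditional
    [('x', 3), ('y', 2)],   # run through the other branch
]

def values_of(expr, paths):
    """Set of values expr can take at the end of any path."""
    return {expr(dict(path)) for path in paths}

print(values_of(lambda env: env['x'] + env['y'], paths))  # must-constant
print(values_of(lambda env: env['x'], paths))             # not constant
```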

3. Deciding Constants by Effective Weakest Preconditions

Markus Müller-Olm

One goal of classifying the complexity of weakened versions of program-analysis problems is to uncover potential for more precise analysis algorithms. As witnessed by the white space in Table 2.2, three questions remained open in the complexity classification of the previous chapter: there is no result for Presburger must-constants and there are no upper bounds for polynomial must-constants and Presburger may-constants. In this chapter we provide answers for two of these questions that uncover algorithmic potential. We show that Presburger must-constants can be detected in polynomial time and that polynomial must-constants are decidable by developing corresponding algorithms.

Pp. 31-52

4. Limits of Parallel Flow Analysis

Markus Müller-Olm

Automatic analysis of parallel programs is known as a notoriously hard problem. A well-known obstacle is the so-called state-explosion problem: the number of (control) states of a parallel program grows exponentially with the number of parallel components. Therefore, most practical flow analysis algorithms for concurrent programs conservatively approximate the effects arising from interference of different threads in order to achieve efficiency.

Pp. 53-79
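The exponential growth described above is easy to make tangible: the global control states of a parallel composition are (up to synchronization constraints) the product of the components' local control states. A back-of-the-envelope sketch:

```python
# State explosion: global control states of a parallel composition are
# the cartesian product of the per-thread control points.
from itertools import product

def global_states(components):
    """components: one iterable of local control points per thread."""
    return list(product(*components))

# Two threads with 4 control points each: 16 global states; ten such
# threads already yield 4 ** 10 = 1048576 global states.
print(len(global_states([range(4)] * 2)))
print(len(global_states([range(4)] * 10)))
```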

5. Parallel Flow Graphs

Markus Müller-Olm

In Chapter 4 we have seen that copy-constant detection is undecidable for parallel programs with procedures if base statements are assumed to execute atomically, a quite common idealization. However, in many execution scenarios for concurrent programs this assumption is hardly realistic (see Chapter 6). Thus, it is interesting to investigate whether these results still hold without the assumption of atomic execution.

Pp. 81-99

6. Non-atomic Execution

Markus Müller-Olm

The idealization that assignments execute atomically is quite common in the literature on program verification as well as in the theoretical literature on flow analysis of parallel programs. However, in a multi-processor environment where a number of concurrently executing processors share a common memory this assumption is hardly realistic. In such an environment two threads of control may well interfere while each of them is in the process of executing an assignment. The reason is that assignments are broken into smaller instructions before execution.

Pp. 101-109
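The interference described above can be demonstrated by splitting an assignment into its constituent load and store, as the abstract suggests, and enumerating the legal interleavings of two threads. The two-step machine model below is a deliberately simplified sketch:

```python
# Non-atomic execution: x = x + 1 splits into a load and a store.
# Enumerating all interleavings of two incrementing threads shows that
# one update can be lost.
from itertools import permutations

STEPS = [(0, 'load'), (0, 'store'), (1, 'load'), (1, 'store')]

def run(schedule):
    mem, reg = {'x': 0}, {}
    for tid, step in schedule:
        if step == 'load':
            reg[tid] = mem['x']
        else:                        # store: write back register + 1
            mem['x'] = reg[tid] + 1
    return mem['x']

def interleavings():
    """Every ordering in which each thread loads before it stores."""
    for perm in set(permutations(STEPS)):
        if all(perm.index((t, 'load')) < perm.index((t, 'store'))
               for t in (0, 1)):
            yield perm

# Atomic increments would always give 2; non-atomic ones can give 1.
print(sorted({run(s) for s in interleavings()}))
```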

7. Dependence Traces

Markus Müller-Olm

We can indirectly detect copy constants and eliminate faint code on the basis of the following information: given a program point and a variable of interest, which variables may, when control is at another program point, influence the value of the variable of interest? Clearly, this information can be derived from the set of bridging runs between the two program points, and we have a constraint system characterizing this set (cf. Section 5.5). We would like to compute this information by means of a precise and effective abstract interpretation. Before we start with the technical development we give a brief overview.

Pp. 111-143
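For a single straight-line run, the influence question above reduces to a backward dependence pass. The encoding below (each statement as its assigned variable plus the set of variables read) is a hypothetical simplification of the book's bridging-run machinery:

```python
# Which variables at the start of a straight-line run may influence a
# target variable at its end?  Walk the run backwards, replacing the
# target's current dependencies whenever an assignment defines one.

def influencers(run, target):
    """run: list of (assigned_var, set_of_rhs_vars) in execution order."""
    deps = {target}
    for lhs, rhs_vars in reversed(run):
        if lhs in deps:              # this assignment feeds the target
            deps.discard(lhs)
            deps.update(rhs_vars)
    return deps

# x at the end depends on a and c at the start, but not on b.
print(influencers([('t', {'a'}), ('x', {'t', 'c'}), ('b', {'x'})], 'x'))
```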

8. Detecting Copy Constants and Eliminating Faint Code

Markus Müller-Olm

In this chapter we show that we can detect copy constants and eliminate faint code in parallel flow graphs completely, relative to the non-atomic semantics of Chapter 6. The basic idea is to evaluate the constraint system for bridging runs from Chapter 5 over the complete lattice AD from the previous chapter and to exploit this information.

Pp. 145-152
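The notion of copy constant at stake here can be sketched for a single straight-line run: only assignments of the form x = c (constant) and x = y (copy) are interpreted, and every other right-hand side is treated as producing an unknown value. The encoding is a hypothetical illustration:

```python
# Copy-constant detection on a straight-line run: constants flow only
# through literal assignments and copies; operator expressions kill
# constancy.

def copy_constants(stmts):
    """stmts: list of (var, rhs); rhs is an int, a variable name, or an
    arbitrary tuple standing for an uninterpreted operator expression."""
    env = {}
    for var, rhs in stmts:
        if isinstance(rhs, int):      # x = c
            env[var] = rhs
        elif isinstance(rhs, str):    # x = y: propagate y's value, if any
            env[var] = env.get(rhs)
        else:                         # operator expression: unknown
            env[var] = None
    return env

print(copy_constants([('a', 5), ('b', 'a'), ('c', ('+', 'b', 1))]))
```

Note that c would be a constant (6) under full constant propagation; under the weaker copy-constant abstraction it is reported as unknown.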

9. Complexity in the Non-atomic Scenario

Markus Müller-Olm

In the previous chapter, we have seen that we can detect all copy constants and eliminate faint code completely in parallel programs, if we abandon the assumption that base statements execute atomically. The presented algorithms run in exponential time, which raises the question whether there are also efficient algorithms for these problems. In this chapter we show that the answer is ‘no’, unless P=NP. In the conclusions of this monograph, Chapter 10, we sketch possible remedies and discuss directions of future research that may lead to algorithms of practical interest.

Pp. 153-160

10. Conclusion

Markus Müller-Olm

For fundamental recursion-theoretic reasons, program analyzers are doomed to give only approximate answers. By applying abstractions to programs, we can come to precisely defined, weaker analysis problems that can be solved exactly. By classifying such problems with the means provided by the theory of computational complexity, we hope to shed light on the trade-off between efficiency and precision for approximate analyzers and to uncover potential for more precise analysis algorithms.

Pp. 161-164