Publications catalog - books
Compiler Construction: 14th International Conference, CC 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005 Proceedings
Rastislav Bodik (ed.)
Conference: 14th International Conference on Compiler Construction (CC), Edinburgh, UK, April 4-8, 2005
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-25411-9
Electronic ISBN
978-3-540-31985-6
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin Heidelberg 2005
Table of contents
When Abstraction Fails
Andreas Zeller
Reasoning about programs is mostly deduction: the reasoning from the abstract model to the concrete run. Deduction is useful because it allows us to predict properties of future runs—up to the point that a program will never fail its specification. However, even such a 100% correct program may still show a problem: the specification itself may be problematic, or deduction required us to abstract away some relevant property. To handle such problems, deduction is not the right answer—especially in a world where programs reach a complexity that makes them indistinguishable from natural phenomena. Instead, we should enrich our portfolio by methods proven in natural sciences, such as observation, induction, and in particular experimentation. In my talk, I will show how systematic experimentation automatically reveals the causes of program failures—in the input, in the program state, or in the program code.
- Invited Talk | Pp. 1-9
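The "systematic experimentation" the talk above refers to is in the spirit of delta debugging: repeatedly run the program on simplified variants of a failing input and keep whatever still fails. The sketch below is a simplified illustration of that idea, not the speaker's actual algorithm; the `fails` predicate is a hypothetical hook that runs the program under test on a candidate input.

```cpp
// A simplified input-minimization sketch in the spirit of the systematic
// experimentation described above (delta debugging). `fails` is a hypothetical
// hook: it should run the program under test on the candidate input and report
// whether the failure still shows up.
#include <cstddef>
#include <functional>
#include <string>

std::string minimize_failing_input(std::string input,
                                   const std::function<bool(const std::string&)>& fails) {
    std::size_t chunk = input.size() / 2;
    if (chunk == 0) chunk = input.size();    // handle very short inputs
    while (chunk >= 1) {
        bool removed = false;
        for (std::size_t pos = 0; pos + chunk <= input.size();) {
            // Experiment: does the failure survive without this chunk?
            std::string candidate = input.substr(0, pos) + input.substr(pos + chunk);
            if (fails(candidate)) {
                input = candidate;   // keep the smaller failing input, retry same position
                removed = true;
            } else {
                pos += chunk;        // this chunk is needed; move past it
            }
        }
        if (!removed) chunk /= 2;    // nothing removable at this granularity: refine
    }
    return input;                    // no single character can be removed any more
}
```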
Source-Level Debugging for Multiple Languages with Modest Programming Effort
Sukyoung Ryu; Norman Ramsey
We present techniques that enable source-level debugging for multiple languages at the cost of only modest programming effort. The key idea is to avoid letting debugging requirements constrain the internal structure of the compiler. Constraints are minimized primarily by hiding the source-language type system and target-machine representations from the debugger. This approach enables us to support a new language and compiler while reusing existing elements: a multi-language, multi-platform debugger; the compiler’s implementation of source-language types and expressions; information already present in the compiler’s private data structures; and a component of ours that helps the compiler meet its obligations to the debugger without exposing language-dependent details. We evaluate our approach using two case studies: a production compiler and an instructional compiler for MiniJava.
- Compilation | Pp. 10-26
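One way to picture the separation the abstract above describes is an interface in which the debugger handles only opaque values and delegates anything type-specific back to the compiler. The sketch below is a hypothetical illustration of that split; the names and signatures are not taken from the paper.

```cpp
// A hypothetical sketch of the split described above: the debugger sees only
// raw bits plus an opaque type token, and defers all source-language knowledge
// to a plug-in supplied by the compiler. Names and signatures are illustrative,
// not taken from the paper.
#include <cstddef>
#include <cstdint>
#include <string>

// What the debugger itself manipulates: untyped bytes read from the debuggee.
struct OpaqueValue {
    const std::uint8_t* bytes;    // raw target-machine representation
    std::size_t         size;
    std::uint32_t       type_id;  // meaningful only to the compiler's plug-in
};

// What each compiler front end provides, keeping its type system private.
class SourceLanguagePlugin {
public:
    virtual ~SourceLanguagePlugin() = default;
    // Render a value for the user according to the source language's types.
    virtual std::string format(const OpaqueValue& v) const = 0;
    // Evaluate a source-language expression in the context of a stack frame.
    virtual OpaqueValue evaluate(const std::string& expr,
                                 std::uint64_t frame_pointer) const = 0;
};
```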
Compilation of Generic Regular Path Expressions Using C++ Class Templates
Luca Padovani
Various techniques for the navigation and matching of data structures using path expressions have been the subject of extensive investigation. Whether such techniques are based on type information, indexing, or automata, it is desirable to synthesize implementations automatically, starting from a high-level description of the path expressions to be traversed.
In this paper we present a library of C++ templates for the representation of regular path expressions and their compilation into efficient backtracking algorithms. The resulting code can be used to implement visitors, pattern matchers, and node collectors on regular paths over possibly heterogeneous, linked data structures.
The focus of the paper is on the path compilation technique, which was inspired by a continuation-passing, functional semantics of the path expressions. We rely on some peculiar aspects of C++ templates to create a compilation framework that closely follows the given semantics.
- Compilation | Pp. 27-42
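To make the combination of class templates and continuation-passing matching concrete, here is a minimal sketch in that style over a singly linked list. It illustrates the general technique, not the paper's library; the combinator names (`Next`, `Seq`, `Star`) are made up.

```cpp
// A minimal sketch (not the paper's library) of path expressions encoded as
// class templates, matched in continuation-passing style over a linked list.
// Backtracking falls out of combinators returning false when the continuation
// rejects every candidate node.
struct Node { int value; Node* next; };

// Atomic step: follow the `next` pointer once.
struct Next {
    template <typename K>
    static bool match(const Node* n, K k) {
        return n && n->next && k(n->next);
    }
};

// Sequencing: run P, then Q on whatever node P reaches.
template <typename P, typename Q>
struct Seq {
    template <typename K>
    static bool match(const Node* n, K k) {
        return P::match(n, [&](const Node* m) { return Q::match(m, k); });
    }
};

// Kleene star: try the empty path first, then one more P step.
template <typename P>
struct Star {
    template <typename K>
    static bool match(const Node* n, K k) {
        if (k(n)) return true;
        return P::match(n, [&](const Node* m) { return Star<P>::match(m, k); });
    }
};

// Usage: does some node reachable via next* hold the value 42?
using Path = Star<Next>;
bool contains42(const Node* head) {
    return Path::match(head, [](const Node* n) { return n && n->value == 42; });
}
```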
XML Goes Native: Run-Time Representations for Xtatic
Vladimir Gapeyev; Michael Y. Levin; Benjamin C. Pierce; Alan Schmitt
Xtatic is a lightweight extension of C# offering native support for statically typed XML processing. XML trees are built-in values in Xtatic, and static analysis of the trees manipulated by programs is part of the ordinary job of the typechecker. “Tree grep” pattern matching is used to investigate and transform XML trees. Xtatic’s surface syntax and type system are tightly integrated with those of C#. Beneath the hood, however, an implementation of Xtatic must address a number of issues common to any language supporting a declarative style of XML processing. In particular, it must provide representations for XML tags, trees, and textual data that use memory efficiently, support efficient pattern matching, allow maximal sharing of common substructures, and permit separate compilation. We analyze these representation choices in detail and describe the solutions used by the Xtatic compiler.
- Compilation | Pp. 43-58
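Two of the representation concerns listed in the abstract, compact tags and shared substructure, can be pictured as interned tag identifiers plus immutable, reference-counted subtrees. The following sketch is illustrative only and does not reflect the Xtatic compiler's actual data structures.

```cpp
// A rough sketch of two representation ideas: interning XML tags as small
// integers (so matching compares ints, not strings) and sharing immutable
// subtrees between documents. Illustrative only, not the Xtatic compiler.
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Tags are interned once; later comparisons are integer comparisons.
class TagTable {
public:
    int intern(const std::string& tag) {
        auto it = ids_.find(tag);
        if (it != ids_.end()) return it->second;
        int id = static_cast<int>(names_.size());
        ids_.emplace(tag, id);
        names_.push_back(tag);
        return id;
    }
    const std::string& name(int id) const { return names_.at(id); }
private:
    std::unordered_map<std::string, int> ids_;
    std::vector<std::string> names_;
};

// Immutable tree nodes; shared_ptr lets distinct trees share common
// substructure instead of copying it.
struct XmlNode {
    int tag;                                           // interned tag id
    std::string text;                                  // textual content, if any
    std::vector<std::shared_ptr<const XmlNode>> kids;  // shared children
};

using XmlRef = std::shared_ptr<const XmlNode>;

XmlRef element(int tag, std::vector<XmlRef> kids) {
    return std::make_shared<const XmlNode>(XmlNode{tag, "", std::move(kids)});
}
```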
Boosting the Performance of Multimedia Applications Using SIMD Instructions
Weihua Jiang; Chao Mei; Bo Huang; Jianhui Li; Jiahua Zhu; Binyu Zang; Chuanqi Zhu
Modern processors’ multimedia extensions (MME) provide SIMD ISAs to boost the performance of typical operations in multimedia applications. However, automatic vectorization support for them is not very mature. The key difficulty is how to vectorize the SIMD-ISA-supported idioms in source code in an efficient and general way. In this paper, we introduce a powerful and extendable recognition engine to solve this problem, which needs only a small set of rules to recognize many such idioms and generate efficient SIMD instructions. We integrated this engine into the classic vectorization framework and obtained very good performance speedup for some real-life applications.
- Parallelism | Pp. 59-75
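As an illustration of the kind of SIMD-ISA-supported idiom such a recognition engine targets, consider saturating addition of unsigned bytes. The example below is illustrative and not taken from the paper: the scalar loop shows the source idiom, and the SSE2 version shows roughly what a vectorizer could emit for it.

```cpp
// The sort of source idiom idiom recognition targets: a saturating
// unsigned-byte add written as scalar C++, and an SSE2 version in which the
// whole compare-and-clamp pattern collapses to one instruction per 16 lanes.
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstddef>
#include <cstdint>

// Scalar idiom: add with saturation at 255.
void sat_add_scalar(const std::uint8_t* a, const std::uint8_t* b,
                    std::uint8_t* dst, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        unsigned s = unsigned(a[i]) + unsigned(b[i]);
        dst[i] = std::uint8_t(s > 255 ? 255 : s);   // the "saturate" pattern
    }
}

// Vectorized form: _mm_adds_epu8 is the SIMD instruction for this idiom.
void sat_add_sse2(const std::uint8_t* a, const std::uint8_t* b,
                  std::uint8_t* dst, std::size_t n) {
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + i),
                         _mm_adds_epu8(va, vb));    // saturating add, 16 lanes
    }
    for (; i < n; ++i) {                            // scalar tail
        unsigned s = unsigned(a[i]) + unsigned(b[i]);
        dst[i] = std::uint8_t(s > 255 ? 255 : s);
    }
}
```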
Task Partitioning for Multi-core Network Processors
Robert Ennals; Richard Sharp; Alan Mycroft
Network processors (NPs) typically contain multiple concurrent processing cores. State-of-the-art programming techniques for NPs are invariably low-level, requiring programmers to partition code into concurrent tasks early in the design process. This results in programs that are hard to maintain and hard to port to alternative architectures. This paper presents a new approach in which a high-level program is separated from its partitioning into concurrent tasks. Designers write their programs in a high-level, domain-specific, architecturally-neutral language, but also provide a separate Architecture Mapping Script (AMS). An AMS specifies semantics-preserving transformations that are applied to the program to re-arrange it into a set of tasks appropriate for execution on a particular target architecture. We (i) describe three such transformations: pipeline introduction, pipeline elimination and queue multiplexing; and (ii) specify when each can be safely applied.
As a case study we describe an IP packet-forwarder and present an AMS script that partitions it into a form capable of running at 3Gb/s on an Intel IXP2400 Network Processor.
- Parallelism | Pp. 76-90
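The flavor of the pipeline-introduction transformation can be conveyed with a small, hypothetical example: a single per-packet routine rewritten as two stages connected by a queue, so each stage can occupy its own core. The paper applies such rewrites to programs in its domain-specific language under an AMS, not to C++; the code below only sketches the shape of the result.

```cpp
// An illustrative take on "pipeline introduction": one per-packet function is
// split into two stages connected by a queue. The Packet type, stage bodies,
// and Channel are made up for the sketch.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct Packet { int ttl = 64; int out_port = -1; /* headers, payload, ... */ };

// A minimal thread-safe FIFO standing in for an inter-core queue.
template <typename T>
class Channel {
public:
    void push(T v) {
        { std::lock_guard<std::mutex> g(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

// Original (unsplit) per-packet logic: validate, then look up the output port.
void validate(Packet& p) { if (p.ttl > 0) --p.ttl; }
void route(Packet& p)    { p.out_port = p.ttl > 0 ? 0 : -1; }

// After pipeline introduction: stage 1 validates, stage 2 routes; the queue
// `mid` carries packets between the two cores.
void run_pipeline(Channel<Packet>& in, Channel<Packet>& mid, Channel<Packet>& out) {
    std::thread stage1([&] { for (;;) { Packet p = in.pop();  validate(p); mid.push(p); } });
    std::thread stage2([&] { for (;;) { Packet p = mid.pop(); route(p);    out.push(p); } });
    stage1.detach();
    stage2.detach();
}
```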
Experiences with Enumeration of Integer Projections of Parametric Polytopes
Sven Verdoolaege; Kristof Beyls; Maurice Bruynooghe; Francky Catthoor
Many compiler optimization techniques depend on the ability to calculate the number of integer values that satisfy a given set of linear constraints. This count (the enumerator of a parametric polytope) is a function of the symbolic parameters that may appear in the constraints. In an extended problem (the “integer projection” of a parametric polytope), some of the variables that appear in the constraints may be existentially quantified and then the enumerated set corresponds to the projection of the integer points in a parametric polytope.
This paper shows how to reduce the enumeration of the integer projection of parametric polytopes to the enumeration of parametric polytopes. Two approaches are described and experimentally compared. Both can solve problems that were considered very difficult to solve analytically.
- Parallelism | Pp. 91-105
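A toy instance helps fix the terminology: for the constraints 0 ≤ i ≤ j ≤ N, the number of integer solutions is (N+1)(N+2)/2, a polynomial in the parameter N; that polynomial is the enumerator of the parametric polytope. The brute-force check below is purely illustrative; the paper computes such enumerators symbolically and extends them to integer projections, where some variables are existentially quantified.

```cpp
// Toy illustration of "the enumerator of a parametric polytope": counting the
// integer points of 0 <= i <= j <= N yields the polynomial (N+1)(N+2)/2 in the
// parameter N. The brute-force check is only here to make the notion concrete.
#include <cassert>

long long count_brute_force(long long N) {
    long long count = 0;
    for (long long i = 0; i <= N; ++i)
        for (long long j = i; j <= N; ++j)
            ++count;                          // one integer point (i, j)
    return count;
}

long long enumerator(long long N) {           // closed-form enumerator
    return (N + 1) * (N + 2) / 2;
}

int main() {
    for (long long N = 0; N <= 50; ++N)
        assert(count_brute_force(N) == enumerator(N));
    return 0;
}
```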
Generalized Index-Set Splitting
Christopher Barton; Arie Tal; Bob Blainey; José Nelson Amaral
This paper introduces Index-Set Splitting (ISS), a technique that splits a loop containing several conditional statements into several loops with less complex control flow. Contrary to the classic technique, ISS splits loops when the conditional is loop variant. ISS uses an (IST) to identify the structure of the conditionals in the loop and to select which conditionals should be eliminated. This decision is based on an estimate of the code growth for each split: a greedy algorithm spends a pre-determined code-growth budget. ISTs separate the decision about which splits to perform from the actual code generation for the split loops. The use of ISS to improve a loop fusion framework is then discussed. The identification of ISS opportunities in the SPEC2000 benchmark suite and three other suites demonstrates that ISS is a general technique that may benefit other compilers.
- Parallelism | Pp. 106-120
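A hand-worked illustration of the transformation (not code from the paper): the conditional below depends on the loop index, so it is loop variant, and splitting the iteration space at the point where its outcome flips removes the branch from both resulting loops.

```cpp
// Index-set splitting by hand: the loop-variant test i < m partitions the
// iteration space into [0, min(m,n)) and [min(m,n), n), so each sub-range
// can run branch-free.
#include <algorithm>
#include <cstddef>

// Before: one loop, the conditional tested on every iteration.
void scale_before(float* a, std::size_t n, std::size_t m, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        if (i < m) a[i] *= lo;   // outcome depends on i, not on loop-invariant data
        else       a[i] *= hi;
    }
}

// After index-set splitting: two simpler loops over the split index sets.
void scale_after(float* a, std::size_t n, std::size_t m, float lo, float hi) {
    std::size_t split = std::min(m, n);
    for (std::size_t i = 0; i < split; ++i) a[i] *= lo;
    for (std::size_t i = split; i < n; ++i) a[i] *= hi;
}
```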
Age-Oriented Concurrent Garbage Collection
Harel Paz; Erez Petrank; Stephen M. Blackburn
Generational collectors are well known as a tool for shortening the pause times incurred by garbage collection and for improving garbage collection efficiency. In this paper, we investigate how best to use generations with on-the-fly collectors. On-the-fly collectors run concurrently with the program threads and induce very short program pauses. Thus, the motivation for incorporating generations is focused on improving throughput; pauses do not matter, since they are already very short. We propose a new collection approach, denoted age-oriented collection, for exploiting the generational hypothesis to obtain better efficiency. This approach is particularly useful when reference counting is used to collect the old generation, yielding a highly efficient and non-obtrusive on-the-fly collector. Finally, an implementation is provided, demonstrating that the age-oriented collector outperforms both the non-generational and the generational collectors in efficiency.
- Memory Management | Pp. 121-136
Optimizing C Multithreaded Memory Management Using Thread-Local Storage
Yair Sade; Mooly Sagiv; Ran Shaham
Dynamic memory management in C programs can be rather costly. Multithreading introduces additional synchronization overhead on the C memory management functions (malloc, free). In order to reduce this overhead, we extended Hoard, a state-of-the-art memory allocator, with the ability to allocate thread-local storage. Experimental results using the tool show runtime savings of up to 44% for a set of memory management benchmarks.
To allow transparent usage of thread-local storage, we develop a compile-time algorithm that conservatively detects allocation sites which can be replaced by thread-local allocations. Our static analysis is sound, i.e., every allocation detected as thread-local is indeed thread-local, although we may fail to identify some opportunities for allocating thread-local storage. Technically, we reduce the problem of estimating thread-local storage to the problem of escape analysis and provide an efficient escape analysis for C, built on existing points-to analysis algorithms. Our solution is parameterized by the points-to information, and we empirically evaluated it with two different methods for computing that information. The use of scalable points-to analysis algorithms, together with the efficiency of our reduction, guarantees that our static analysis technique is scalable.
- Memory Management | Pp. 137-155
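The payoff of proving an allocation site thread-local can be sketched with a thread-local free list: once escape analysis guarantees a block never leaves its allocating thread, the allocation fast path needs no lock or atomic operation. The code below is an illustration of that idea in C++, not the authors' extension of Hoard.

```cpp
// A sketch of the idea behind the optimization (not the authors' Hoard
// extension): blocks proven thread-local are served from a per-thread free
// list, so the fast path involves no synchronization at all. A single fixed
// block size keeps the sketch short.
#include <cstddef>
#include <cstdlib>

namespace tls_alloc {

constexpr std::size_t kBlockSize = 64;

struct FreeBlock { FreeBlock* next; };
thread_local FreeBlock* free_list = nullptr;   // private to each thread

void* allocate() {
    if (free_list) {                           // fast path: no lock, no atomic
        FreeBlock* b = free_list;
        free_list = b->next;
        return b;
    }
    return std::malloc(kBlockSize);            // slow path: fall back to malloc
}

void deallocate(void* p) {                     // only legal from the owning thread,
    auto* b = static_cast<FreeBlock*>(p);      // which is exactly what the escape
    b->next = free_list;                       // analysis is there to guarantee
    free_list = b;
}

}  // namespace tls_alloc
```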