Publications catalog - books



Compiler Construction: 14th International Conference, CC 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005 Proceedings

Rastislav Bodik (ed.)

Conference: 14th International Conference on Compiler Construction (CC), Edinburgh, UK, April 4-8, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none detected
Year of publication: 2005
Available via: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25411-9

Electronic ISBN

978-3-540-31985-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

An Efficient On-the-Fly Cycle Collection

Harel Paz; Erez Petrank; David F. Bacon; Elliot K. Kolodner; V. T. Rajan

A reference-counting garbage collector cannot reclaim unreachable cyclic structures of objects. Therefore, reference-counting collectors either use a backup tracing collector infrequently, or employ a cycle collector to reclaim cyclic structures. We propose a new concurrent cycle collector, i.e., one that runs concurrently with the program threads, imposing negligible pauses (of around 1ms) on a multiprocessor.

Our new collector combines the state-of-the-art cycle collector [5] with the sliding-views collectors [20, 2]. The use of sliding views for cycle collection yields two advantages. First, it drastically reduces the number of cycle candidates, which, in turn, drastically reduces the work required to record and trace these candidates. Therefore, a large improvement in cycle collection efficiency is obtained. Second, it eliminates the theoretical termination problem that appeared in the previous concurrent cycle collector. There, a rare race may delay the reclamation of an unreachable cyclic structure forever. The sliding-views cycle collector guarantees reclamation of all unreachable cyclic structures.

The proposed collector was implemented on the Jikes RVM and we provide measurements including a comparison between the use of backup tracing and the use of cycle collection with reference counting. To the best of our knowledge, such a comparison has not been reported before.

- Memory Management | Pp. 156-171
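
The opening claim of the abstract above is easy to make concrete. The toy C program below is an editorial illustration (not the paper's sliding-views algorithm; all names are invented): two objects reference each other, and after the external references are dropped, plain reference counting leaves both objects alive forever.

```c
/* Editorial illustration (not the paper's algorithm): why plain reference
 * counting leaks cycles.  Two nodes reference each other; after the external
 * references are dropped, both counts stay at 1 and nothing ever frees them. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int refcount;
    struct Node *next;   /* a single pointer field is enough to form a cycle */
} Node;

static Node *node_new(void) {
    Node *n = malloc(sizeof *n);
    n->refcount = 1;     /* counts the caller's reference */
    n->next = NULL;
    return n;
}

static void node_release(Node *n) {
    if (n != NULL && --n->refcount == 0) {
        node_release(n->next);   /* drop our reference to the successor */
        free(n);
    }
}

int main(void) {
    Node *a = node_new();
    Node *b = node_new();
    a->next = b; b->refcount++;   /* a -> b */
    b->next = a; a->refcount++;   /* b -> a: the structure is now cyclic */

    node_release(a);              /* drop both external references ...   */
    node_release(b);

    /* ... yet both objects survive with refcount == 1.  Only a cycle
     * collector or a backup tracing collector would reclaim them. */
    printf("a->refcount = %d, b->refcount = %d\n", a->refcount, b->refcount);
    return 0;
}
```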

Data Slicing: Separating the Heap into Independent Regions

Jeremy Condit; George C. Necula

In this paper, we present a formal description of data slicing, which is a type-directed program transformation technique that separates a program’s heap into several independent regions. Pointers within each region mirror the structure of pointers in the original heap; however, each field whose type is a base type (e.g., the integer type) appears in only one of these regions. In addition, we discuss several applications of data slicing. First, data slicing can be used to add extra fields to existing data structures without compromising backward compatibility; the CCured project uses data slicing to preserve library compatibility in instrumented programs at a reasonable performance cost. Data slicing can also be used to improve locality by separating “hot” and “cold” fields in an array of data structures, and it can be used to protect sensitive data by separating “public” and “private” fields. Finally, data slicing can serve as a refactoring tool, allowing the programmer to split data structures while automatically updating the code that manipulates them.

- Program Transformations | Pp. 172-187
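
As a rough illustration of the transformation described above, the hypothetical C sketch below splits a list node with one "hot" and one "cold" base-type field into two regions whose pointer links mirror each other. All type and function names are invented; this is not CCured's implementation.

```c
/* Hypothetical before/after view of data slicing: every base-type field of
 * the original node lands in exactly one region, while the pointer structure
 * (the `next` links) is mirrored in each region.  All names are invented. */
#include <stdlib.h>

/* Original structure: one heap, one list. */
struct node {
    int    hot;          /* frequently accessed base-type field */
    double cold;         /* rarely accessed base-type field     */
    struct node *next;
};

/* After slicing into two regions: two parallel lists with the same shape. */
struct node_r1 { int hot;     struct node_r1 *next; };  /* region 1 */
struct node_r2 { double cold; struct node_r2 *next; };  /* region 2 */

/* A sliced pointer becomes a pair of pointers, one per region; code that
 * followed n->next now follows r1->next and r2->next in lockstep. */
struct sliced { struct node_r1 *r1; struct node_r2 *r2; };

static struct sliced sliced_alloc(void) {
    struct sliced s;
    s.r1 = calloc(1, sizeof *s.r1);
    s.r2 = calloc(1, sizeof *s.r2);
    return s;
}

int main(void) {
    struct sliced s = sliced_alloc();
    s.r1->hot  = 1;      /* a "hot" access touches only region 1 */
    s.r2->cold = 2.5;    /* a "cold" access touches only region 2 */
    free(s.r1);
    free(s.r2);
    return 0;
}
```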

A Compiler-Based Approach to Data Security

F. Li; G. Chen; M. Kandemir; R. Brooks

With the proliferation of personal electronic devices and embedded systems, personal and financial data is more easily accessible. As a consequence, we also observe a proliferation of techniques that attempt to illegally access sensitive data without proper authorization. Due to the severe financial and social ramifications of such data leakage, the need for secure memory has become critical. However, working with secure memories can have performance, power, and code size overheads since accessing a secure memory involves additional overheads for encryption/decryption and/or password checks. In addition, application code may need to be restructured to work under such a memory system. In this paper, we propose a compiler-directed strategy to generate code for a secure-memory-based embedded architecture. The idea is to let the programmer mark certain data elements, called the seed elements, as secure (i.e., needing to be stored in secure memory), and let the compiler determine the remaining secure elements automatically. We also address the problem of code size increase due to our strategy. The experimental results obtained through simulations clearly show that the proposed approach is effective in reducing the total secure memory size. The results also indicate that it is possible to reduce the resulting code size increase by clustering accesses to secure memory.

- Program Transformations | Pp. 188-203
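
The seed-and-propagate idea in the abstract can be pictured as a small fixed-point computation: mark the seed variables secure and propagate the tag along assignment edges until nothing changes. The C toy below is an editorial illustration with invented variable names and an invented assignment graph, not the paper's compiler pass.

```c
/* Editorial sketch of seed-based propagation: the programmer marks a few
 * seed variables as secure, and a pass tags every variable computed (directly
 * or transitively) from secure data until a fixed point is reached. */
#include <stdio.h>

#define NVARS 5
static const char *name[NVARS] = { "pin", "key", "tmp", "buf", "log" };

/* assigns[i][j] != 0 means variable i is computed (in part) from variable j. */
static const int assigns[NVARS][NVARS] = {
    /* pin key tmp buf log */
    {   0,  0,  0,  0,  0 },   /* pin = <input>        */
    {   0,  0,  0,  0,  0 },   /* key = <input>        */
    {   1,  1,  0,  0,  0 },   /* tmp = hash(pin, key) */
    {   0,  0,  1,  0,  0 },   /* buf = encode(tmp)    */
    {   0,  0,  0,  0,  0 },   /* log = <constant>     */
};

int main(void) {
    int secure[NVARS] = { 1, 1, 0, 0, 0 };   /* seeds: pin and key */
    int changed = 1;
    while (changed) {                        /* simple fixed-point iteration */
        changed = 0;
        for (int i = 0; i < NVARS; i++)
            for (int j = 0; j < NVARS; j++)
                if (!secure[i] && assigns[i][j] && secure[j]) {
                    secure[i] = 1;           /* i depends on secure data */
                    changed = 1;
                }
    }
    for (int i = 0; i < NVARS; i++)          /* placement decision per variable */
        printf("%-3s -> %s memory\n", name[i], secure[i] ? "secure" : "normal");
    return 0;
}
```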

Composing Source-to-Source Data-Flow Transformations with Rewriting Strategies and Dependent Dynamic Rewrite Rules

Karina Olmos; Eelco Visser

Data-flow transformations used in optimizing compilers are also useful in other programming tools such as code generators, aspect weavers, domain-specific optimizers, and refactoring tools. These applications require source-to-source transformations rather than transformations on a low-level intermediate representation. In this paper we describe the composition of source-to-source data-flow transformations in the program transformation language Stratego. The language supports the high-level specification of transformations by means of rewriting strategy combinators that allow a natural modeling of data- and control-flow without committing to a specific source language. Data-flow facts are propagated using dynamic rewriting rules. In particular, we introduce the concept of dependent dynamic rewrite rules for modeling the dependencies of data-flow facts on program entities such as variables. The approach supports the combination of analysis and transformation, the combination of multiple transformations, the combination with other types of transformations, and the correct treatment of variable binding constructs and lexical scope to avoid free variable capture.

- Program Transformations | Pp. 204-220
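
A dependent dynamic rewrite rule can be read as a data-flow fact that is registered for a particular program variable and retracted as soon as that variable is redefined. The C sketch below mimics that behaviour for straight-line constant propagation; it is an illustration of the concept only, not Stratego code, and all names are invented.

```c
/* Editorial sketch of the idea behind dependent dynamic rules: the fact
 * "x is the constant c" is registered for a specific variable and must be
 * retracted when that variable is redefined. */
#include <stdio.h>
#include <string.h>

#define MAXFACTS 16
static struct { char var[8]; int value; int live; } facts[MAXFACTS];

static void define_const(const char *v, int c) {   /* x := c  registers a rule */
    for (int i = 0; i < MAXFACTS; i++)
        if (!facts[i].live) {
            strncpy(facts[i].var, v, sizeof facts[i].var - 1);
            facts[i].value = c;
            facts[i].live  = 1;
            return;
        }
}

static void undefine(const char *v) {              /* x := <unknown>  retracts */
    for (int i = 0; i < MAXFACTS; i++)
        if (facts[i].live && strcmp(facts[i].var, v) == 0)
            facts[i].live = 0;
}

static int lookup(const char *v, int *out) {       /* try to rewrite a use of x */
    for (int i = 0; i < MAXFACTS; i++)
        if (facts[i].live && strcmp(facts[i].var, v) == 0) {
            *out = facts[i].value;
            return 1;
        }
    return 0;
}

int main(void) {
    int c;
    define_const("x", 3);                          /* x := 3                  */
    if (lookup("x", &c)) printf("y := %d\n", c);   /* y := x  rewrites to 3   */
    undefine("x");                                 /* x := input()            */
    if (!lookup("x", &c)) printf("y2 := x (no rule applies)\n");
    define_const("x", 7);                          /* x := 7                  */
    if (lookup("x", &c)) printf("z := %d\n", c);   /* z := x  rewrites to 7   */
    return 0;
}
```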

Verification of Source Code Transformations by Program Equivalence Checking

K. C. Shashidhar; Maurice Bruynooghe; Francky Catthoor; Gerda Janssens

Typically, a combination of manual and automated transformations is applied when algorithms for digital signal processing are adapted for energy and performance-efficient embedded systems. This poses severe verification problems. Verification becomes easier after converting the code into dynamic single-assignment form (DSA). This paper describes a method to prove equivalence between two programs in DSA where subscripts to array variables and loop bounds are (piecewise) affine expressions. For such programs, geometric modeling can be used and it can be shown, for groups of elements at once, that the outputs in both programs are the same function of the inputs.

- Program Transformations | Pp. 221-236
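
The following hypothetical C fragment shows the kind of program pair such a checker targets: both versions are in single-assignment style with affine subscripts and loop bounds, and for every index i the output is the same function of the inputs even though the second version iterates in reverse. The example is editorial and not taken from the paper.

```c
/* Two program versions with affine subscripts and bounds; each output
 * element is written exactly once, and out[i] = in[i] + in[N-1-i] holds in
 * both, which is the property a geometric equivalence checker would prove
 * for all 0 <= i < N at once. */
#include <stdio.h>
#define N 8

static void version_a(const int in[N], int out[N]) {
    for (int i = 0; i < N; i++)            /* forward iteration */
        out[i] = in[i] + in[N - 1 - i];
}

static void version_b(const int in[N], int out[N]) {
    for (int j = N - 1; j >= 0; j--)       /* reversed iteration: j plays the role of i */
        out[j] = in[N - 1 - j] + in[j];    /* operands commuted                          */
}

int main(void) {
    int in[N], a[N], b[N];
    for (int i = 0; i < N; i++) in[i] = i * i;
    version_a(in, a);
    version_b(in, b);
    for (int i = 0; i < N; i++)
        if (a[i] != b[i]) { printf("mismatch at %d\n", i); return 1; }
    printf("outputs agree for all %d elements\n", N);
    return 0;
}
```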

Hob: A Tool for Verifying Data Structure Consistency

Patrick Lam; Viktor Kuncak; Martin Rinard

This tool demonstration presents Hob, a system for verifying data structure consistency for programs written in a general-purpose programming language. Our tool enables the focused application of multiple communicating static analyses to different modules in the same program. Using our tool throughout the program development process, we have successfully identified several bugs in both specifications and implementations of programs.

- Tool Demonstrations | Pp. 237-241

Jazz: A Tool for Demand-Driven Structural Testing

Jonathan Misurda; Jim Clause; Juliya Reed; Bruce R. Childers; Mary Lou Soffa

Software testing to produce reliable and robust software has become vitally important. Testing is a process by which quality can be assured through the collection of information about software. While testing can improve software quality, current tools typically are inflexible and have high overheads, making it a challenge to test large projects. We describe a new scalable and flexible tool, called Jazz, that uses a demand-driven structural testing approach. Jazz has a low overhead of only 17.6% for branch testing.

- Tool Demonstrations | Pp. 242-245
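
One way to picture demand-driven structural testing is with probes that record a branch edge the first time it executes and then disable themselves, so already-covered edges cost almost nothing afterwards. The C toy below illustrates only that general idea and is not Jazz's dynamic instrumentation machinery; all names are invented.

```c
/* Editorial sketch of demand-driven branch coverage: each branch edge has a
 * probe that records the first execution of the edge and then disables itself. */
#include <stdio.h>

#define NEDGES 2
static int covered[NEDGES];                 /* branch-coverage table          */
static int armed[NEDGES] = { 1, 1 };        /* probe is removed once it fires */

static void probe(int edge) {
    if (armed[edge]) {                      /* only the first execution pays  */
        covered[edge] = 1;
        armed[edge]   = 0;                  /* "remove" the probe on demand   */
    }
}

static int classify(int x) {
    if (x > 0) { probe(0); return  1; }     /* edge 0: taken branch           */
    else       { probe(1); return -1; }     /* edge 1: fall-through branch    */
}

int main(void) {
    classify(5); classify(-3); classify(7); /* later calls skip the probes    */
    for (int e = 0; e < NEDGES; e++)
        printf("edge %d: %s\n", e, covered[e] ? "covered" : "not covered");
    return 0;
}
```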

Tiger – An Interpreter Generation Tool

Kevin Casey; David Gregg; M. Anton Ertl

Tiger (Trinity Interpreter GEneratoR) is a new interpreter generator tool along the lines of vmgen, but with significant improvements in flexibility and feedback. Support for important new features such as instruction specialisation, replication, and improved analysis of code at runtime is presented. A simple ‘C’ virtual machine imported into Tiger is used for demonstration purposes. Various realistic benchmarks (such as sorting and Davis-Putnam backtracking algorithms) are used to show the utility of these new features in Tiger.

- Tool Demonstrations | Pp. 246-249
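
Instruction specialisation, one of the features mentioned above, can be illustrated with a tiny switch-based virtual machine that has both a generic PUSH <imm> instruction and a specialised PUSH1 opcode with the operand baked in. The C sketch below is an editorial example, not code generated by Tiger.

```c
/* Editorial sketch of instruction specialisation: PUSH1 is a specialised
 * form of PUSH with the common operand 1 baked in, saving an operand fetch. */
#include <stdio.h>

enum { OP_PUSH, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[32], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];       break; /* generic: fetch imm */
        case OP_PUSH1: stack[sp++] = 1;                break; /* specialised form   */
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* compute 41 + 1 using the generic and the specialised push */
    const int program[] = { OP_PUSH, 41, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```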

CodeSurfer/x86—A Platform for Analyzing x86 Executables

Gogul Balakrishnan; Radu Gruian; Thomas Reps; Tim Teitelbaum

CodeSurfer/x86 is a prototype system for analyzing x86 executables. It uses a static-analysis algorithm called value-set analysis (VSA) to recover intermediate representations that are similar to those that a compiler creates for a program written in a high-level language. A major challenge in building an analysis tool for executables is in providing useful information about operations involving memory. This is difficult when symbol-table and debugging information is absent or untrusted. CodeSurfer/x86 overcomes these challenges to provide an analyst with a powerful and flexible platform for investigating the properties and behaviors of potentially malicious code (such as COTS components, plugins, mobile code, worms, Trojans, and virus-infected code) using (i) CodeSurfer/x86’s GUI, (ii) CodeSurfer/x86’s scripting language, which provides access to all of the intermediate representations that CodeSurfer/x86 builds for the executable, and (iii) GrammaTech’s Path Inspector, which is a tool that uses a sophisticated pattern-matching engine to answer questions about the flow of execution in a program.

- Tool Demonstrations | Pp. 250-254
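
Descriptions of value-set analysis in the related literature typically represent the possible values of a register or memory location with strided intervals of the form stride[lb, ub]. The C toy below sketches such a representation and an over-approximating join; it is an editorial illustration under that assumption, not CodeSurfer/x86's implementation.

```c
/* Editorial sketch: a strided interval s[lb, ub] stands for {lb, lb+s, ..., ub},
 * so a scaled access like base + 4*i abstracts to a value set with stride 4.
 * The join over-approximates the union of two strided intervals. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { long stride, lb, ub; } StridedInterval;   /* stride[lb, ub] */

static long gcd(long a, long b) {
    while (b != 0) { long t = a % b; a = b; b = t; }
    return labs(a);
}

/* A simple over-approximating join of two strided intervals. */
static StridedInterval si_join(StridedInterval x, StridedInterval y) {
    StridedInterval r;
    r.lb = x.lb < y.lb ? x.lb : y.lb;
    r.ub = x.ub > y.ub ? x.ub : y.ub;
    /* the joined stride must divide both strides and the distance between
     * the two lower bounds, so the result still covers every element */
    r.stride = gcd(gcd(x.stride, y.stride), labs(x.lb - y.lb));
    return r;
}

int main(void) {
    StridedInterval a = { 4, 1000, 1012 };   /* {1000, 1004, 1008, 1012} */
    StridedInterval b = { 4, 1016, 1024 };   /* {1016, 1020, 1024}       */
    StridedInterval j = si_join(a, b);
    printf("join = %ld[%ld, %ld]\n", j.stride, j.lb, j.ub);   /* 4[1000, 1024] */
    return 0;
}
```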

A Study of Type Analysis for Speculative Method Inlining in a JIT Environment

Feng Qian; Laurie Hendren

Method inlining is one of the most important optimizations for JIT compilers in Java virtual machines. In order to increase the number of inlining opportunities, a type analysis can be used to identify monomorphic virtual calls. In a JIT environment, the compiler and type analysis must also handle dynamic class loading properly, because class loading can invalidate previous analysis results and, with them, some speculative inlining decisions. To date, a very simple type analysis, class hierarchy analysis (CHA), has been used successfully in JIT compilers for speculative inlining with invalidation techniques as backup.

This paper seeks to determine if more powerful dynamic type analyses could further improve inlining opportunities in a JIT compiler. To achieve this goal we developed a general type analysis framework which we have used for designing and implementing dynamic versions of several well-known static type analyses, including CHA, RTA, XTA and VTA. Surprisingly, the simple dynamic CHA is nearly as good as an ideal type analysis for inlining calls. There is little room for further improvement. On the other hand, only a reachability-based interprocedural type analysis (VTA) is able to capture the majority of monomorphic calls.

- Pointer Analysis | Pp. 255-270
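
The CHA check the paper builds on can be pictured as follows: a virtual call is treated as monomorphic if, among the classes loaded so far, only one implementation of the target method is visible below the receiver's static type, and loading a new overriding subclass invalidates that decision. The C toy below models the test over a hard-coded class table; it is an editorial sketch, not the Jikes RVM infrastructure used in the paper.

```c
/* Editorial sketch of the CHA question a JIT asks before speculative
 * inlining of x.m(): among the loaded subclasses of x's static type, is
 * there exactly one implementation of m()?  Class loading can change the
 * answer and force invalidation of the inlining decision. */
#include <stdio.h>

#define NCLASSES 4
typedef struct { const char *name; int parent; int overrides_m; int loaded; } Class;

static Class classes[NCLASSES] = {
    { "A", -1, 1, 1 },   /* A declares m()                             */
    { "B",  0, 0, 1 },   /* B extends A, inherits m()                  */
    { "C",  0, 1, 0 },   /* C extends A, overrides m(); not yet loaded */
    { "D",  1, 0, 1 },   /* D extends B, inherits m()                  */
};

static int is_subclass(int c, int base) {
    for (; c != -1; c = classes[c].parent)
        if (c == base) return 1;
    return 0;
}

/* A call x.m() with static receiver type `base` is monomorphic iff no loaded
 * class strictly below `base` provides its own overriding implementation. */
static int monomorphic(int base) {
    int impls = 1;   /* the implementation visible in `base` itself */
    for (int c = 0; c < NCLASSES; c++)
        if (c != base && classes[c].loaded && classes[c].overrides_m &&
            is_subclass(c, base))
            impls++;
    return impls == 1;
}

int main(void) {
    printf("x.m() with x : A monomorphic? %s\n", monomorphic(0) ? "yes" : "no");
    classes[2].loaded = 1;   /* class loading: C (which overrides m) arrives */
    printf("after loading C:              %s\n", monomorphic(0) ? "yes" : "no");
    return 0;
}
```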