Publications catalog - books



Automated Technology for Verification and Analysis: 5th International Symposium, ATVA 2007, Tokyo, Japan, October 22-25, 2007, Proceedings

Kedar S. Namjoshi; Tomohiro Yoneda; Teruo Higashino; Yoshio Okamura (eds.)

In conference: 5th International Symposium on Automated Technology for Verification and Analysis (ATVA). Tokyo, Japan. October 22, 2007 - October 25, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Not available.


Information

Resource type:

books

Print ISBN

978-3-540-75595-1

Electronic ISBN

978-3-540-75596-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Policies and Proofs for Code Auditing

Nathan Whitehead; Jordan Johnson; Martín Abadi

Both proofs and trust relations play a role in security decisions, in particular in determining whether to execute a piece of code. We have developed a language, called BCIC, for policies that combine proofs and trusted assertions about code. In this paper, using BCIC, we suggest an approach to code auditing that bases auditing decisions on logical policies and tools.

- Invited Talks | Pp. 1-14

Recent Trend in Industry and Expectation to DA Research

Atsushi Hasegawa

In the 1970s and early 1980s, functional verification in semiconductor design meant gate-level logic simulation. Equivalence checking between logic circuits and layout designs was done with human eyes and colored pencils. Layout rule checkers helped to point out layout errors, e.g., spacing-rule violations.

Progress in design tools and methodology has helped increase the number of transistors on silicon chips, steadily following Moore’s law. This improvement in productivity allowed us to tackle larger design problems. Verification and test tools and methodology played an important role in that improvement.

In industry, the progress of design methodology was slower than academic research on DA technology, but the daily work of engineers is now totally different from that of the 1970s and 1980s.

In the late 1980s, schematic editors and workstations freed engineers from drawing schematics with pencils and erasers and removed the operator job of typing netlists from hand-written schematics. Eliminating the human transcription from hand-written schematics to text-based netlists reduced bugs dramatically. In both cases, design teams applied functional verification to the netlists; schematic editors reduced the number of typos found during verification.

In the late 80s and early 90s, there were many tools and notation forms for describing the functionality of design blocks. They aimed to reduce the effort of writing the same functionality, such as the number of lines needed to define a block's behavior. The notations also aimed at easy understanding of a design: Boolean formulas, function tables, and state-machine notations in several different formats.

Finally, two major description languages, Verilog and VHDL, became popular in the semiconductor industry. Great progress in synthesis technology made designs written in those languages faster, smaller, and more reliable than designs using ordinary hand-crafted gate-level schematics. Early versions of the synthesis tools had poor performance: the generated gate netlists were slower and had more gates. Several years later, the synthesis tools became more mature. Our company shifted from a proprietary synthesis tool using a restricted C language to commercial Verilog synthesis tools in the mid 90s.

For functional validation, we ran RTL-level simulation and gate-level simulation. Gate-level simulation also checked timing, using gate netlists and parasitic parameters extracted from layout data just before tape-out.

- Invited Talks | Pp. 15-16

Toward Property-Driven Abstraction for Heap Manipulating Programs

K. L. McMillan

Automated abstraction refinement methods have shown significant promise in analyzing low-level software, such as operating system device drivers and other control-oriented code. For example, the SLAM toolkit from Microsoft Research [1] has proved effective in finding control errors (such as illegal use of kernel API functions) in real-world device driver code. SLAM is based on predicate abstraction, using a counterexample-based abstraction refinement heuristic. This gives it the ability to focus the abstraction on state predicates that are relevant to the proof (or falsification) of a given property. This ability allows SLAM and similar tools to scale to real code of moderate size, albeit only when the property is fairly shallow, in the sense that it requires only a small amount of information about the program’s state to prove it.
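
To make the mechanism concrete, here is a minimal toy sketch in Python of a predicate-abstraction CEGAR loop of the kind the abstract describes. It is an illustration, not SLAM itself: the concrete system, the fixed predicate pool, and the exhaustive concrete reachability check standing in for symbolic counterexample analysis are all simplifying assumptions.

from collections import deque

# Toy concrete system: integer states 0..10, transition x -> x + 2,
# initial state 0, error state x == 5 (unreachable: x stays even).
DOMAIN = range(11)
INIT = 0

def step(x):
    return x + 2 if x + 2 in DOMAIN else None

def is_error(x):
    return x == 5

# Fixed pool of candidate predicates; refinement adds them one at a time.
# (Real tools derive new predicates from the spurious counterexample.)
PREDICATE_POOL = [lambda x: x >= 5, lambda x: x % 2 == 0]

def alpha(x, preds):
    """Predicate abstraction: map a concrete state to a bit-vector."""
    return tuple(p(x) for p in preds)

def abstract_reachable(preds):
    """Existential abstraction plus BFS over the abstract graph."""
    edges = {}
    for x in DOMAIN:
        if step(x) is not None:
            edges.setdefault(alpha(x, preds), set()).add(alpha(step(x), preds))
    seen, queue = {alpha(INIT, preds)}, deque([alpha(INIT, preds)])
    while queue:
        a = queue.popleft()
        for b in edges.get(a, ()):
            if b not in seen:
                seen.add(b)
                queue.append(b)
    return seen

def concrete_error_reachable():
    """Ground-truth check; a stand-in for the symbolic analysis of the
    abstract counterexample path used in real CEGAR implementations."""
    seen, queue = {INIT}, deque([INIT])
    while queue:
        x = queue.popleft()
        if is_error(x):
            return True
        nxt = step(x)
        if nxt is not None and nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

preds = PREDICATE_POOL[:1]
while True:
    reach = abstract_reachable(preds)
    abstract_errors = {alpha(x, preds) for x in DOMAIN if is_error(x)}
    if not (reach & abstract_errors):
        print(f"proved safe with {len(preds)} predicate(s)")
        break
    if concrete_error_reachable():
        print("real counterexample: property violated")
        break
    preds.append(PREDICATE_POOL[len(preds)])  # refine: add one predicate
    print(f"spurious counterexample; refining to {len(preds)} predicates")

On this toy system, the single predicate x >= 5 admits a spurious abstract path to the error; adding the parity predicate x % 2 == 0 refines the abstraction enough to prove the error unreachable, illustrating how refinement focuses on predicates relevant to the property.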

- Invited Talks | Pp. 17-18

Branching vs. Linear Time: Semantical Perspective

Sumit Nain; Moshe Y. Vardi

The discussion in the computer-science literature of the relative merits of linear- versus branching-time frameworks goes back to the early 1980s. One of the beliefs dominating this discussion has been that the linear-time framework is not expressive enough semantically, leaving linear-time logics lacking in expressiveness. In this work we examine the branching-linear issue from the perspective of process equivalence, one of the most fundamental notions in concurrency theory, since defining a notion of process equivalence essentially amounts to defining a semantics for processes. Over the last three decades numerous notions of process equivalence have been proposed. Researchers in this area no longer try to identify the “right” notion of equivalence; rather, the focus has shifted to providing taxonomic frameworks, such as “the linear-branching spectrum”, for the many proposed notions, and to determining their suitability for different applications.

We revisit this issue here from a fresh perspective. We postulate three principles that we view as fundamental to any discussion of process equivalence. First, we borrow from research in denotational semantics and take observational equivalence as the primary notion of equivalence; this eliminates many testing scenarios as either too strong or too weak. Second, we require the description of a process to fully specify all relevant behavioral aspects of the process. Finally, we require observable process behavior to be reflected in its input/output behavior. Under these postulates the distinctions between the linear and branching semantics tend to evaporate. As an example, we apply these principles to the framework of transducers, a classical notion of state-based processes that dates back to the 1950s and is well suited to hardware modeling. We show that our postulates result in a unique notion of process equivalence, which is trace based rather than tree based.
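
In standard notation (textbook definitions of deterministic transducers, not necessarily the paper's exact formalization), a transducer is a tuple

\[
T = (Q, q_0, \Sigma_I, \Sigma_O, \delta, \lambda), \qquad
\delta : Q \times \Sigma_I \to Q, \qquad
\lambda : Q \times \Sigma_I \to \Sigma_O ,
\]

and running T on an input word yields an output word, giving a word function \( f_T : \Sigma_I^* \to \Sigma_O^* \). Trace-based equivalence, the notion the postulates single out, then reads

\[
T_1 \sim T_2 \iff f_{T_1}(w) = f_{T_2}(w) \ \text{ for all } w \in \Sigma_I^* ,
\]

i.e., two processes are equivalent exactly when no input experiment can distinguish their input/output behavior.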

- Invited Talks | Pp. 19-34

Mind the Shapes: Abstraction Refinement Via Topology Invariants

Jörg Bauer; Tobe Toben; Bernd Westphal

Dynamic Communication Systems (DCS) are infinite-state systems in which an unbounded number of processes operate in an evolving communication topology. For automated verification of DCS properties, finitary abstractions that exploit symmetry can be employed. However, these abstractions give rise to spurious behaviour that often prevents relevant properties from being proved.

In this paper, we propose to combine a particular finitary abstraction with global system invariants obtained by abstract interpretation. These system invariants establish an over-approximation of the communication topologies that can occur at runtime, which can be used to identify and exclude spurious behaviour introduced by the finitary abstraction, thereby refining it. Using a running example of car platooning, we demonstrate that our approach can verify temporal DCS properties that no technique in isolation is able to prove.

- Regular Papers | Pp. 35-50

Complete SAT-Based Model Checking for Context-Free Processes

Geng-Dian Huang; Bow-Yaw Wang

A complete SAT-based model checking algorithm for context-free processes is presented. We reduce proof search in local model checking to Boolean satisfiability, so that bounded proof search can be performed by SAT solvers. Moreover, the completion of proof search is reduced to Boolean unsatisfiability and hence can also be checked by SAT solvers. By encoding the local model checking algorithm in [13], SAT solvers are able to verify properties in the universal fragment of the alternation-free μ-calculus on context-free processes.
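
As a generic illustration of the reduction's flavor (not the paper's encoding, which targets proof search for context-free processes), here is a Python sketch of a bounded search rendered as Boolean satisfiability. It assumes the python-sat package and its Glucose3 solver; the three-state graph is made up for the example.

# Bounded reachability as SAT, in the spirit of "bounded proof search
# can be performed by SAT solvers". Requires: pip install python-sat
from pysat.solvers import Glucose3

STATES = ["s0", "s1", "s2"]
EDGES = {"s0": ["s1"], "s1": ["s2"], "s2": ["s2"]}  # every state has a successor
INIT, TARGET, K = "s0", "s2", 3  # unrolling depth K

def var(state, t):
    """One Boolean variable per (state, time) pair, numbered from 1."""
    return t * len(STATES) + STATES.index(state) + 1

solver = Glucose3()
for t in range(K + 1):
    # exactly one state holds at each time step
    solver.add_clause([var(s, t) for s in STATES])
    for i, a in enumerate(STATES):
        for b in STATES[i + 1:]:
            solver.add_clause([-var(a, t), -var(b, t)])
for t in range(K):
    # being in state s at time t forces one of its successors at t + 1
    for s in STATES:
        solver.add_clause([-var(s, t)] + [var(n, t + 1) for n in EDGES[s]])
solver.add_clause([var(INIT, 0)])                          # initial state
solver.add_clause([var(TARGET, t) for t in range(K + 1)])  # target within K steps

if solver.solve():
    model = set(solver.get_model())
    path = [s for t in range(K + 1) for s in STATES if var(s, t) in model]
    print("witness path:", " -> ".join(path))
else:
    print(f"target not reachable within {K} steps")
solver.delete()

A satisfying assignment is itself the witness; unsatisfiability of the bounded formula shows no witness of that depth exists, mirroring how the paper reduces completion of proof search to Boolean unsatisfiability.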

- Regular Papers | Pp. 51-65

Bounded Model Checking of Analog and Mixed-Signal Circuits Using an SMT Solver

David Walter; Scott Little; Chris Myers

This paper presents a bounded model checking algorithm for the verification of analog/mixed-signal (AMS) circuits using a satisfiability modulo theories (SMT) solver. The systems are modeled in VHDL-AMS, a hardware description language for AMS circuits. In this model, system safety properties are specified as assertion statements. The VHDL-AMS description is compiled into labeled hybrid Petri nets (LHPNs), in which analog values are modeled as continuous variables that can change at rates in a bounded range and digital values are modeled using Boolean signals. The verification method begins by transforming the LHPN model into an SMT formula composed of the initial state, the transition relation unrolled for a specified number of iterations, and the complement of the assertion in each set of state variables. When this formula evaluates to true, a violation of the assertion is indicated and an error trace is reported. This method has been implemented, and preliminary results are promising.
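
Written out, the SMT formula the abstract describes has the standard bounded model checking shape (symbols assumed from the description: I the initial condition, T the transition relation over the Boolean signals and continuous variables of the LHPN, P the asserted safety property, k the unrolling depth):

\[
\mathrm{BMC}_k \;=\; I(\vec{s}_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(\vec{s}_i, \vec{s}_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(\vec{s}_i).
\]

A satisfying assignment fixes values for every copy \( \vec{s}_0, \ldots, \vec{s}_k \) of the state variables and is therefore itself the reported error trace; because the state holds real-valued variables changing at bounded rates, the formula goes to an SMT solver over arithmetic rather than a plain SAT solver.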

- Regular Papers | Pp. 66-81

Model Checking Contracts – A Case Study

Gordon Pace; Cristian Prisacariu; Gerardo Schneider

Contracts are agreements between distinct parties that determine rights and obligations for their signatories; they have been introduced in order to reduce risks and to regulate inter-business relationships. In this paper we show how a conventional contract can be written in the contract language CL, and how to model the contract and verify properties of the model using the NuSMV model checking tool.

- Regular Papers | Pp. 82-97

On the Efficient Computation of the Minimal Coverability Set for Petri Nets

Gilles Geeraerts; Jean-François Raskin; Laurent Van Begin

The minimal coverability set (MCS) of a Petri net is a finite representation of the downward closure of its reachable markings. The minimal coverability set allows one to decide several important problems such as coverability, semi-liveness, place boundedness, etc. The classical algorithm to compute the MCS constructs the Karp&Miller tree [8]. Unfortunately, the K&M tree is often huge, even for small nets. An improvement on the K&M algorithm is the Minimal Coverability Tree (MCT) algorithm [1], which was introduced 15 years ago and has since been implemented in several tools such as Pep [7]. Unfortunately, we show in this paper that the MCT is flawed: it might compute an under-approximation of the reachable markings. We propose a new solution for the efficient computation of the MCS of Petri nets. Our experimental results show that this new algorithm behaves much better in practice than the K&M algorithm.
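
For reference, a compact Python sketch of the classical Karp&Miller construction that the paper improves on (a minimal illustration: ω is represented by float("inf"), transitions by (pre, post) vectors, and the function returns the set of ω-markings labelling the tree, of which the MCS is the set of maximal elements):

OMEGA = float("inf")  # stands for ω in ω-markings

def fire(m, pre, post):
    """Return the successor ω-marking if the transition is enabled, else None."""
    if all(x >= p for x, p in zip(m, pre)):
        return tuple(x - p + q for x, p, q in zip(m, pre, post))  # inf stays inf
    return None

def accelerate(m, ancestors):
    """Karp&Miller acceleration: any place strictly pumped above some
    ancestor marking on the same branch is set to ω."""
    m = list(m)
    for anc in ancestors:
        if all(a <= x for a, x in zip(anc, m)) and tuple(m) != anc:
            m = [OMEGA if a < x else x for a, x in zip(anc, m)]
    return tuple(m)

def karp_miller(initial, transitions):
    """Return the set of ω-markings labelling the Karp&Miller tree."""
    markings = set()
    stack = [(initial, ())]  # (marking, ancestors along this branch)
    while stack:
        m, anc = stack.pop()
        m = accelerate(m, anc)
        markings.add(m)
        if m in anc:  # marking repeats on this branch: leaf, stop expanding
            continue
        for pre, post in transitions:
            succ = fire(m, pre, post)
            if succ is not None:
                stack.append((succ, anc + (m,)))
    return markings

# Example net: place p0 keeps one token; firing t pumps tokens into p1 forever.
net = [((1, 0), (1, 1))]  # t: pre = (1, 0), post = (1, 1)
print(karp_miller((1, 0), net))  # {(1, 0), (1, inf)}

On the example, the token count of p1 grows without bound, and acceleration summarizes this as the ω-marking (1, ω); the blow-up the paper addresses comes from the tree enumerating many such branches before accelerations apply.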

- Regular Papers | Pp. 98-113

Analog/Mixed-Signal Circuit Verification Using Models Generated from Simulation Traces

Scott Little; David Walter; Kevin Jones; Chris Myers

Formal and semi-formal verification of analog/mixed-signal circuits is complicated by the difficulty of obtaining circuit models suitable for analysis. We propose a method to generate a formal model from simulation traces. The resulting model is conservative in that it includes all of the original simulation traces used to generate it plus additional behavior. Information obtained during the model generation process can also be used to refine the simulation and verification process.
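
The flavor of the construction can be illustrated with a toy Python sketch (an assumed rate-interval abstraction chosen for illustration, not the authors' actual model generation procedure): partition a signal's range into bins and record the least and greatest slopes observed in each bin. Every trace used to build the table satisfies the resulting bounds by construction, and the slack between the bounds admits additional behavior, which is the conservatism the abstract describes.

# Toy sketch of conservative model generation from simulation traces:
# per bin of the signal's value range, record the min/max rate of change
# ever observed. Assumes samples within a trace have increasing times.
def rate_intervals(traces, lo, hi, nbins):
    """traces: lists of (time, value) samples; returns per-bin (min_rate, max_rate)."""
    bins = [None] * nbins
    width = (hi - lo) / nbins
    for trace in traces:
        for (t0, v0), (t1, v1) in zip(trace, trace[1:]):
            rate = (v1 - v0) / (t1 - t0)
            b = min(int((v0 - lo) / width), nbins - 1)
            bins[b] = (rate, rate) if bins[b] is None else (
                min(bins[b][0], rate), max(bins[b][1], rate))
    return bins

# Two simulated charging curves of a node voltage rising from 0 V toward 1 V.
trace_a = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.8), (3.0, 0.95)]
trace_b = [(0.0, 0.0), (1.0, 0.4), (2.0, 0.7), (3.0, 0.9)]
for i, iv in enumerate(rate_intervals([trace_a, trace_b], 0.0, 1.0, 4)):
    print(f"bin {i}: rate in {iv}")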

- Regular Papers | Pp. 114-128