Publications catalog - books



Open Access title

Tools and Algorithms for the Construction and Analysis of Systems

Part of: Theoretical Computer Science and General Issues

Conference: 24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS). Thessaloniki, Greece. April 14-20, 2018

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

authentication; computer science; computer software selection and evaluation; cryptography; data privacy; formal logic; formal methods; formal specification; internet; privacy; program compilers; programming languages; security systems; semantics; separation logic; software engineering; verification; world wide web

Availability

Detected institution | Publication year | Browse / Download / Request
None required | 2018 | Directory of Open Access Books (open access)
None required | 2018 | SpringerLink (open access)

Information

Resource type:

books

Print ISBN

978-3-319-89959-6

Electronic ISBN

978-3-319-89960-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2018

Table of contents

Automating Deductive Verification for Weak-Memory Programs

Alexander J. Summers; Peter Müller

Writing correct programs for weak memory models such as the C11 memory model is challenging because of the weak consistency guarantees these models provide. The first program logics for the verification of such programs have recently been proposed, but their usage has been limited thus far to manual proofs. Automating proofs in these logics via first-order solvers is non-trivial, due to features such as higher-order assertions, modalities and rich permission resources.

In this paper, we provide the first encoding of a weak memory program logic using existing deductive verification tools. Our work enables, for the first time, the (unbounded) verification of C11 programs at the level of abstraction provided by the program logics; the only necessary user interaction is in the form of specifications written in the program logic.

We tackle three recent program logics: Relaxed Separation Logic and two forms of Fenced Separation Logic, and show how these can be encoded using the Viper verification infrastructure. In doing so, we illustrate several novel encoding techniques which could be employed for other logics. Our work is implemented, and has been evaluated on examples from existing papers as well as the Facebook open-source Folly library.
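To give a sense of the setting, the following is a minimal sketch (not taken from the paper) of the kind of C11 release/acquire idiom that logics such as RSL and FSL reason about; the program and its variable names are illustrative only.

    #include <stdatomic.h>
    #include <threads.h>
    #include <assert.h>

    /* Message passing: the writer publishes `data` via a release store to
       `flag`; the reader's acquire load of `flag`, if it reads 1, guarantees
       that the write to `data` is visible. Under weaker orderings (e.g.
       relaxed) this guarantee is lost, which is what makes such programs
       challenging to verify. */
    int data = 0;
    atomic_int flag = 0;

    int writer(void *arg) {
        (void)arg;
        data = 42;                                              /* non-atomic write */
        atomic_store_explicit(&flag, 1, memory_order_release);  /* publish */
        return 0;
    }

    int reader(void *arg) {
        (void)arg;
        if (atomic_load_explicit(&flag, memory_order_acquire) == 1)
            assert(data == 42);  /* holds thanks to release/acquire synchronization */
        return 0;
    }

    int main(void) {
        thrd_t w, r;
        thrd_create(&w, writer, NULL);
        thrd_create(&r, reader, NULL);
        thrd_join(w, NULL);
        thrd_join(r, NULL);
        return 0;
    }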

- Deductive Verification | Pp. 190-209

Property Checking Array Programs Using Loop Shrinking

Shrawan Kumar; Amitabha Sanyal; R. Venkatesh; Punit Shah

Most verification tools find it difficult to prove properties of programs containing loops that process arrays of large or unknown size. These methods either fail to abstract the array at the right granularity and are therefore limited in precision or scalability, or they attempt to synthesize an appropriate invariant that is quantified over the elements of the array, a task known to be difficult. In this paper, we present a different approach based on a notion called loop shrinking, in which an array-processing loop is transformed into a loop with a much smaller bound that processes only a few non-deterministically chosen elements of the array. The result is a finite-state program with a drastically reduced state space that can be analyzed by bounded model checkers. We show that the proposed transformation is an over-approximation, i.e., if the transformed program is correct, so is the original. In addition, when applicable, the method is insensitive to the size of the array or even to whether its size is bounded. As an assessment of usefulness, we tested a tool based on our method on a category of the SV-COMP 2017 benchmarks. After excluding programs with features not handled by our tool, we could successfully verify 87 of the 93 remaining programs.
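As a rough illustration (not from the paper) of the flavor of such a transformation, the sketch below contrasts a loop over a large array with a shrunk version that tracks only a few non-deterministically chosen indices; nondet_index() is a hypothetical stand-in for the verifier's unconstrained choice, in the style of SV-COMP's __VERIFIER_nondet_* functions.

    #include <assert.h>
    #include <stdlib.h>

    #define N 100000   /* stands in for a large or unknown array size */
    #define K 2        /* number of non-deterministically chosen elements kept */

    /* Stand-in for a verifier's unconstrained choice; rand() is used only so
       that the file also runs as an ordinary program. */
    static unsigned nondet_index(void) { return (unsigned)rand() % N; }

    /* Original program: the property is checked for every element, a hard
       target for bounded model checkers when N is large or unknown. */
    static void original_version(void) {
        static int a[N];
        for (unsigned i = 0; i < N; i++) a[i] = 2 * i;
        for (unsigned i = 0; i < N; i++) assert(a[i] % 2 == 0);
    }

    /* Shrunk version: only the iterations touching K chosen elements are kept,
       giving a small state space. If the assertion holds for every possible
       choice of indices, it holds for every element, so proving the shrunk
       program correct over-approximates correctness of the original. */
    static void shrunk_version(void) {
        static int a[N];
        unsigned idx[K] = { nondet_index(), nondet_index() };
        for (unsigned k = 0; k < K; k++) a[idx[k]] = 2 * idx[k];
        for (unsigned k = 0; k < K; k++) assert(a[idx[k]] % 2 == 0);
    }

    int main(void) {
        original_version();
        shrunk_version();
        return 0;
    }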

- Software Verification and Optimisation | Pp. 213-231

Invariant Synthesis for Incomplete Verification Engines

Daniel Neider; Pranav Garg; P. Madhusudan; Shambwaditya Saha; Daejun Park

We propose a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories. Our framework is based on the counterexample-guided inductive synthesis (CEGIS) principle and allows verification engines to communicate non-provability information to guide invariant synthesis. We show precisely how the verification engine can compute such non-provability information and how to build effective learning algorithms when invariants are expressed as Boolean combinations of a fixed set of predicates. Moreover, we evaluate our framework in two verification settings, one in which verification engines need to handle quantified formulas and one in which verification engines have to reason about heap properties expressed in an expressive but undecidable separation logic. Our experiments show that our invariant synthesis framework based on non-provability information can both effectively synthesize inductive invariants and adequately strengthen contracts across a large suite of programs.
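To make the search space concrete, here is a hypothetical toy example (not from the paper): a loop whose correctness argument needs an inductive invariant, together with a small fixed set of predicates from which a learner of this kind could assemble the invariant as a Boolean combination.

    #include <assert.h>

    /* Hypothetical program: the assertion after the loop needs an inductive
       invariant. Given the fixed predicate set {x <= n, y == 2 * x, x >= 0},
       a learner can find the Boolean combination (x <= n) && (y == 2 * x),
       which is inductive and, together with the negated guard x >= n,
       implies the assertion on exit. */
    int main(void) {
        int n = 1000;
        int x = 0, y = 0;
        while (x < n) {
            x = x + 1;
            y = y + 2;
        }
        assert(y == 2 * n);
        return 0;
    }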

- Software Verification and Optimisation | Pp. 232-250

Accelerating Syntax-Guided Invariant Synthesis

Grigory Fedyukovich; Rastislav Bodík

We present a fast algorithm for syntax-guided synthesis of inductive invariants which combines enumerative learning with inductive-subset extraction and leverages counterexamples-to-induction and interpolation-based bounded proofs. It is a variant of a recently proposed probabilistic method, but is less dependent on heuristics than its predecessor. We present an evaluation of the new algorithm on a large set of benchmarks and show that it exhibits a more predictable behavior than its predecessor and is competitive with state-of-the-art invariant synthesizers based on Property Directed Reachability.
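As a reminder of one of the key notions used above (an illustrative example, not from the paper): a counterexample-to-induction is a state that satisfies a candidate invariant but has a successor that violates it.

    #include <assert.h>

    /* Toy loop for illustrating counterexamples-to-induction (CTIs).
       Candidate invariant I1: x < 5   -- NOT inductive: the state x == 4
       satisfies I1 and the guard x < 5, but its successor x == 5 violates
       I1, so x == 4 is a CTI for I1.
       Candidate invariant I2: x <= 5  -- inductive: from x <= 5 and the
       guard x < 5 we get x + 1 <= 5, and on exit x <= 5 together with
       !(x < 5) gives x == 5, which implies the assertion. */
    int main(void) {
        int x = 0;
        while (x < 5) {
            x = x + 1;
        }
        assert(x == 5);
        return 0;
    }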

- Software Verification and Optimisation | Pp. 251-269

Daisy - Framework for Analysis and Optimization of Numerical Programs (Tool Paper)

Eva Darulova; Anastasiia Izycheva; Fariha Nasir; Fabian Ritter; Heiko Becker; Robert Bastian

Automated techniques for analysis and optimization of finite-precision computations have recently garnered significant interest. Most of these were, however, developed independently. As a consequence, reuse and combination of the techniques is challenging, and many of the underlying building blocks have been re-implemented several times, including in our own tools. This paper presents a new framework, called Daisy, which provides in a single tool the main building blocks for accuracy analysis of floating-point and fixed-point computations that have emerged from recent related work. Together with its modular structure and optimization methods, Daisy allows developers to easily recombine, explore and develop new techniques. Daisy's input language, a subset of Scala, and its limited dependencies furthermore make it user-friendly and portable.
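For context on what such an accuracy analysis bounds, here is a small standalone example (written in C for illustration; it is not Daisy input, whose language is a subset of Scala): naive single-precision summation drifts visibly from a double-precision reference because each addition rounds, and tools in this space compute sound bounds on errors of this kind.

    #include <stdio.h>

    /* Summing 0.1 one million times in single precision drifts from the
       double-precision reference, because 0.1 has no exact binary
       representation and every addition rounds. Accuracy analyses compute
       guaranteed bounds on this kind of error for a given input range. */
    int main(void) {
        float  sf = 0.0f;
        double sd = 0.0;
        for (int i = 0; i < 1000000; i++) {
            sf += 0.1f;
            sd += 0.1;
        }
        printf("float sum  = %.6f\n", sf);   /* noticeably off from 100000 */
        printf("double sum = %.6f\n", sd);   /* much closer to 100000 */
        printf("difference = %.6f\n", sd - (double)sf);
        return 0;
    }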

- Software Verification and Optimisation | Pp. 270-287

Oink: An Implementation and Evaluation of Modern Parity Game Solvers

Tom van Dijk

Parity games have important practical applications in formal verification and synthesis, especially for solving the model-checking problem of the modal mu-calculus. They are also interesting from a theoretical perspective, as they are widely believed to admit a polynomial solution, but so far no such algorithm is known. In recent years, a number of new algorithms and improvements to existing algorithms have been proposed. We implement Oink, a new, easy-to-extend, high-performance implementation of modern parity game algorithms. We further present a comprehensive empirical evaluation of modern parity game algorithms and solvers, both on real-world benchmarks and on randomly generated games. Our experiments show that our new tool Oink outperforms the current state of the art.
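For readers who have not seen parity games before, here is a minimal sketch of one possible representation (an assumption for illustration, not Oink's actual data structures): a directed graph whose vertices carry a priority and an owner, with the winner of an infinite play determined by the parity of the highest priority seen infinitely often.

    #include <stddef.h>

    /* A parity game: each vertex has a priority and an owner (player 0 or 1),
       and edges give the possible moves. A play is an infinite path; player 0
       wins it iff the highest priority occurring infinitely often is even.
       Solvers compute, for every vertex, which player has a winning strategy. */
    typedef struct {
        int     priority;   /* vertex priority */
        int     owner;      /* 0 or 1: which player chooses the next move */
        size_t  num_succ;   /* number of outgoing edges */
        size_t *succ;       /* indices of successor vertices */
    } pg_vertex;

    typedef struct {
        size_t     num_vertices;
        pg_vertex *vertices;
    } parity_game;

    int main(void) {
        /* Tiny two-vertex game: v0 (priority 2, owner 0) -> v1,
           v1 (priority 1, owner 1) -> v0. The only infinite play alternates
           v0,v1; the highest priority seen infinitely often is 2 (even),
           so player 0 wins from both vertices. */
        size_t succ0[] = {1}, succ1[] = {0};
        pg_vertex v[2] = {
            {2, 0, 1, succ0},
            {1, 1, 1, succ1},
        };
        parity_game g = {2, v};
        (void)g;
        return 0;
    }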

- Model Checking | Pp. 291-308

More Scalable LTL Model Checking via Discovering Design-Space Dependencies

Rohit Dureja; Kristin Yvonne Rozier

Modern system design often requires comparing several models over a large design space. Different models arise out of a need to weigh different design choices, to check core capabilities of versions with varying features, or to analyze a future version against previous ones. Model checking can compare different models; however, applying model checking off-the-shelf may not scale due to the large size of the design space for today's complex systems. We exploit relationships between different models of the same (or related) systems to optimize the model-checking search. Our algorithm preprocesses the design space and checks fewer model-checking instances. It automatically prunes the search space by reducing both the number of models to check and the number of LTL properties that need to be checked for each model, while still providing the complete model-checking verdict for every individual model-property pair. We formalize heuristics that improve its performance. We demonstrate its scalability through an extensive experimental evaluation, e.g., by checking 1,620 real-life models for NASA's NextGen air traffic control system. Compared to checking each model-property pair individually, our approach is up to 9.4× faster.

- Model Checking | Pp. 309-327

Generation of Minimum Tree-Like Witnesses for Existential CTL

Chuan Jiang; Gianfranco Ciardo

An advantage of model checking is its ability to generate witnesses or counterexamples. Approaches exist to generate small or minimum witnesses for simple unnested formulas, but no existing method guarantees minimality for general nested ones. Here, we give a definition of witness size, use edge-valued decision diagrams to recursively compute the minimum witness size for each subformula, and describe a general approach to build minimum tree-like witnesses for existential CTL. Experimental results show that for some models, our approach is able to generate minimum witnesses while the traditional approach is not.

- Model Checking | Pp. 328-343

From Natural Projection to Partial Model Checking and Back

Gabriele Costa; David Basin; Chiara Bodei; Pierpaolo Degano; Letterio Galletta

Specification decomposition is a theoretically interesting and practically relevant problem for which two approaches were independently developed by the control theory and verification communities: natural projection and partial model checking. In this paper we show that, under reasonable assumptions, natural projection reduces to partial model checking and, when cast in a common setting, the two are equivalent. Aside from their theoretical interest, our results build a bridge whereby the control theory community can reuse algorithms and results developed by the verification community. In addition, we present an algorithm and a tool for the partial model checking of finite-state automata that can be used as an alternative to natural projection.

- Model Checking | Pp. 344-361

ICE-Based Refinement Type Discovery for Higher-Order Functional Programs

Adrien Champion; Tomoya Chiba; Naoki Kobayashi; Ryosuke Sato

We propose a method for automatically finding refinement types of higher-order functional programs. Our method is an extension of the ICE framework of Garg et al. for finding invariants. In addition to the usual positive and negative samples in machine learning, their framework uses implication constraints, which consist of pairs of samples such that if the first sample satisfies an invariant, so does the second. From these constraints, the framework infers invariants effectively. We observe that the implication constraints in the original framework are not suitable for finding invariants of recursive functions with multiple function calls. We thus generalize the implication constraints to ones with several premises, meaning that if all of the premise samples satisfy an invariant, so does the conclusion sample. We extend their algorithms for inferring likely invariants from samples, verifying the inferred invariants, and generating new samples. We have implemented our method and confirmed its effectiveness through experiments.
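As a concrete, hypothetical illustration of why constraints with several premises arise: a function with two recursive calls gives rise to exactly such a constraint. The sketch below uses C for brevity even though the paper targets higher-order functional programs; P is an assumed name for the candidate refinement predicate relating argument and result.

    /* Sketch only: fib makes two recursive calls, so checking a candidate
       refinement P(n, result) yields a constraint with two premises: if
       P(n - 1, r1) and P(n - 2, r2) both hold, then P(n, r1 + r2) must hold
       as well -- an implication constraint generalized to several premises,
       as described in the abstract above. */
    int fib(int n) {
        if (n <= 1)
            return n;            /* base case: P(n, n) must hold for n <= 1 */
        int r1 = fib(n - 1);     /* premise: P(n - 1, r1) */
        int r2 = fib(n - 2);     /* premise: P(n - 2, r2) */
        return r1 + r2;          /* conclusion: P(n, r1 + r2) */
    }

    int main(void) { return fib(10) == 55 ? 0 : 1; }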

- Machine Learning | Pp. 365-384