Publications catalog - books
Open Access Title
Programming Languages and Systems
Part of: Theoretical Computer Science and General Issues series
Conference: 27th European Symposium on Programming (ESOP), Thessaloniki, Greece, April 16-19, 2018
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
artificial intelligence; computer architecture; computer software selection and evaluation; distributed computer systems; formal logic; formal model; java; model checking; processors; program compilers; program verification; programming language; semantics; separation logic; software engineering specifications; theorem proving; type systems; verification
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not required | 2018 | Directory of Open Access Books | |
Not required | 2018 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-319-89883-4
Electronic ISBN
978-3-319-89884-1
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2018
Table of contents
Program Verification by Coinduction
Brandon Moore; Lucas Peña; Grigore Rosu
We present a novel program verification approach based on coinduction, which takes as input an operational semantics. No intermediates like program logics or verification condition generators are needed. Specifications can be written using any state predicates. We implement our approach in Coq, giving a certifying language-independent verification framework. Our proof system is implemented as a single module imported unchanged into language-specific proofs. Automation is reached by instantiating a generic heuristic with language-specific tactics. Manual assistance is also smoothly allowed at points the automation cannot handle. We demonstrate the power and versatility of our approach by verifying algorithms as complicated as Schorr-Waite graph marking and instantiating our framework for object languages in several styles of semantics. Finally, we show that our coinductive approach subsumes reachability logic, a recent language-independent sound and (relatively) complete logic for program verification that has been instantiated with operational semantics of languages as complex as C, Java and JavaScript.
- Program Verification | Pp. 589-618
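The coinductive proofs described above are carried out in Coq over an operational semantics. As a loose, language-agnostic illustration of the flavour of the idea, here is a minimal Haskell sketch: a toy small-step semantics, its infinite execution trace built corecursively (laziness plays the role of the coinductive definition), and an "always" claim that is merely tested on a finite prefix rather than proved. The toy language and all names are assumptions of this sketch, not the paper's framework.

```haskell
-- A minimal sketch, not the paper's Coq development: a toy small-step
-- operational semantics and a coinductively generated execution trace.
-- The paper proves "always P" claims by coinduction; here the claim is
-- only tested on a finite prefix of the infinite trace.

-- State of a toy counter program: (program counter, accumulator).
data State = State { pc :: Int, acc :: Int } deriving Show

-- One small step of the operational semantics: a loop that adds 2 forever.
step :: State -> State
step (State 0 n) = State 1 (n + 2)
step (State 1 n) = State 0 n
step s           = s

-- The (infinite) trace is the greatest fixpoint of "unfold one step";
-- Haskell's laziness stands in for the coinductive definition.
trace :: State -> [State]
trace s = s : trace (step s)

-- Invariant we would establish coinductively: the accumulator stays even.
invariant :: State -> Bool
invariant = even . acc

main :: IO ()
main = print (all invariant (take 1000 (trace (State 0 0))))
```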
Velisarios: Byzantine Fault-Tolerant Protocols Powered by Coq
Vincent Rahli; Ivana Vukotic; Marcus Völp; Paulo Esteves-Verissimo
Our increasing dependence on complex and critical information infrastructures and the emerging threat of sophisticated attacks ask for extended efforts to ensure the correctness and security of these systems. Byzantine fault-tolerant state-machine replication (BFT-SMR) provides a way to harden such systems. It ensures that they maintain correctness and availability in an application-agnostic way, provided that the replication protocol is correct and at least n - f out of n replicas survive arbitrary faults. This paper presents Velisarios, a logic-of-events based framework implemented in Coq, which we developed to implement and reason about BFT-SMR protocols. As a case study, we present the first machine-checked proof of a crucial safety property of an implementation of the area’s reference protocol: PBFT.
- Program Verification | Pp. 619-650
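The condition that at least n - f out of n replicas survive rests on standard BFT quorum arithmetic: for PBFT-style protocols, n = 3f + 1 replicas with quorums of size 2f + 1, so any two quorums overlap in at least one correct replica. The sketch below only spells out that arithmetic; it is not code from Velisarios.

```haskell
-- Standard PBFT-style quorum arithmetic (not code from Velisarios): with
-- n = 3f + 1 replicas and quorums of size 2f + 1, any two quorums overlap
-- in at least f + 1 replicas, hence in at least one correct replica even
-- when f replicas are Byzantine.

replicas :: Int -> Int          -- n as a function of the fault bound f
replicas f = 3 * f + 1

quorum :: Int -> Int            -- quorum size
quorum f = 2 * f + 1

-- Minimum overlap of two quorums drawn from n replicas.
minOverlap :: Int -> Int
minOverlap f = 2 * quorum f - replicas f   -- = f + 1

main :: IO ()
main = mapM_ check [1 .. 5]
  where
    check f = putStrLn $ "f = " ++ show f
                      ++ ": n = " ++ show (replicas f)
                      ++ ", quorum = " ++ show (quorum f)
                      ++ ", overlap >= " ++ show (minOverlap f)
```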
Evaluating Design Tradeoffs in Numeric Static Analysis for Java
Shiyi Wei; Piotr Mardziel; Andrew Ruef; Jeffrey S. Foster; Michael Hicks
Numeric static analysis for Java has a broad range of potentially useful applications, including array bounds checking and resource usage estimation. However, designing a scalable numeric static analysis for real-world Java programs presents a multitude of design choices, each of which may interact with others. For example, an analysis could handle method calls via either a top-down or bottom-up interprocedural analysis. Moreover, this choice could interact with how we choose to represent aliasing in the heap and/or whether we use a relational numeric domain, e.g., convex polyhedra. In this paper, we present a family of abstract interpretation-based numeric static analyses for Java and systematically evaluate the impact of 162 analysis configurations on the DaCapo benchmark suite. Our experiment considered the precision and performance of the analyses for discharging array bounds checks. We found that top-down analysis is generally a better choice than bottom-up analysis, and that using access paths to describe heap objects is better than using summary objects corresponding to points-to analysis locations. Moreover, these two choices are the most significant, while choices about the numeric domain, representation of abstract objects, and context-sensitivity make much less difference to the precision/performance tradeoff.
- Program Analysis and Automated Verification | Pp. 653-682
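For contrast with the rich configuration space the paper evaluates, here is the simplest possible ingredient of such an analysis: a non-relational interval domain discharging one array-bounds check. The program, bounds and names below are illustrative only, and widening/narrowing is omitted.

```haskell
-- A minimal non-relational numeric domain (intervals) used to discharge an
-- array-bounds check, in the spirit of, but far simpler than, the analyses
-- the paper evaluates.

data Interval = Interval Integer Integer deriving Show    -- [lo, hi]

join :: Interval -> Interval -> Interval                  -- least upper bound
join (Interval a b) (Interval c d) = Interval (min a c) (max b d)

-- Abstract value of the loop index i in: for (i = 0; i < 10; i++) a[i] = ...
-- (here simply the join of the singleton intervals of every value i takes).
indexRange :: Interval
indexRange = foldr1 join [ Interval k k | k <- [0 .. 9] ]

-- The bounds check for an array of length `len` is discharged if the index
-- interval is contained in [0, len - 1].
inBounds :: Interval -> Integer -> Bool
inBounds (Interval lo hi) len = lo >= 0 && hi <= len - 1

main :: IO ()
main = do
  print indexRange                 -- Interval 0 9
  print (inBounds indexRange 10)   -- True: the access a[i] is safe
```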
An Abstract Interpretation Framework for Input Data Usage
Caterina Urban; Peter Müller
Data science software plays an increasingly important role in critical decision making in fields ranging from economy and finance to biology and medicine. As a result, errors in data science applications can have severe consequences, especially when they lead to results that look plausible, but are incorrect. A common cause of such errors is when applications erroneously ignore some of their input data, for instance due to bugs in the code that reads, filters, or clusters it.
In this paper, we propose an abstract interpretation framework to automatically detect unused input data. We derive a program semantics that precisely captures data usage by abstraction of the program’s operational trace semantics and express it in a constructive fixpoint form. Based on this semantics, we systematically derive static analyses that automatically detect unused input data by fixpoint approximation.
This clear design principle provides a framework that subsumes existing analyses; we show that secure information flow analyses and a form of live variables analysis can be used for data usage, with varying degrees of precision. Additionally, we derive a static analysis to detect single unused data inputs, which is similar to dependency analyses used in the context of backward program slicing. Finally, we demonstrate the value of expressing such analyses as abstract interpretation by combining them with an existing abstraction of compound data structures such as arrays and lists to detect unused chunks of the data.
- Program Analysis and Automated Verification | Pp. 683-710
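To give a rough feel for one of the instantiations mentioned above, the sketch below runs a live-variables-style backward dependency analysis over a three-statement straight-line program and flags an input that never flows to the output. The example program and names are invented for this sketch; the paper's systematic derivation by abstract interpretation is far more general.

```haskell
import Data.List (nub, (\\))

-- A toy backward dependency analysis in the spirit of the live-variables
-- instantiation discussed in the paper (program and names are illustrative).

data Stmt = Assign String [String]   -- x := f(uses)
          | Output [String]          -- the result depends on these variables

-- Variables the output (transitively) depends on, computed backwards.
used :: [Stmt] -> [String]
used = foldr step []
  where
    step (Output vs)   live = nub (vs ++ live)
    step (Assign x vs) live
      | x `elem` live       = nub (vs ++ (live \\ [x]))
      | otherwise           = live

-- The program reads inputs "age" and "income", but "income" never reaches
-- the output: the analysis reports it as unused input data.
program :: [Stmt]
program =
  [ Assign "bucket" ["age"]
  , Assign "junk"   ["income"]     -- dead: never used afterwards
  , Output ["bucket"]
  ]

main :: IO ()
main = do
  let inputs = ["age", "income"]
      live   = used program
  putStrLn ("Used inputs:   " ++ show (filter (`elem` live) inputs))
  putStrLn ("Unused inputs: " ++ show (filter (`notElem` live) inputs))
```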
Higher-Order Program Verification via HFL Model Checking
Naoki Kobayashi; Takeshi Tsukada; Keiichi Watanabe
There are two kinds of higher-order extensions of model checking: HORS model checking and HFL model checking. Whilst the former has been applied to automated verification of higher-order functional programs, applications of the latter have not been well studied. In the present paper, we show that various verification problems for functional programs, including may/must-reachability, trace properties, and linear-time temporal properties (and their negations), can be naturally reduced to (extended) HFL model checking. The reductions yield a sound and complete logical characterization of those program properties. Compared with the previous approaches based on HORS model checking, our approach provides a more uniform, streamlined method for higher-order program verification.
- Program Analysis and Automated Verification | Pp. 711-738
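The common thread in HFL model checking is that program properties become fixpoint formulas. The sketch below shows only the first-order shadow of that idea: may-reachability computed as a least fixpoint over a small finite transition system. HFL itself additionally allows higher-order fixpoints (predicates taking predicates as arguments), and the reductions from functional programs are the paper's contribution; the transition system here is invented for the example.

```haskell
import qualified Data.Set as Set

-- May-reachability as a least fixpoint over a finite transition system:
-- the least R with  R = Bad ∪ pre(R).

type State = Int

next :: State -> [State]      -- a small example transition relation
next 0 = [1, 2]
next 1 = [1]
next 2 = [3]
next _ = []

mayReach :: Set.Set State -> Set.Set State
mayReach bad = lfp bad
  where
    states = [0 .. 3]
    lfp r =
      let r' = Set.union r (Set.fromList [ s | s <- states
                                             , any (`Set.member` r) (next s) ])
      in if r' == r then r else lfp r'

main :: IO ()
main = print (0 `Set.member` mayReach (Set.fromList [3]))   -- True
```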
Quantitative Analysis of Smart Contracts
Krishnendu Chatterjee; Amir Kafshdar Goharshady; Yaron Velner
Smart contracts are computer programs that are executed by a network of mutually distrusting agents, without the need of an external trusted authority. Smart contracts handle and transfer assets of considerable value (in the form of crypto-currency like Bitcoin). Hence, it is crucial that their implementation is bug-free. We identify the utility (or expected payoff) of interacting with such smart contracts as the basic and canonical quantitative property for such contracts. We present a framework for such quantitative analysis of smart contracts. Such a formal framework poses new and novel research challenges in programming languages, as it requires modeling of game-theoretic aspects to analyze incentives for deviation from honest behavior and modeling utilities which are not specified as standard temporal properties such as safety and termination. While game-theoretic incentives have been analyzed in the security community, their analysis has been restricted to the very special case of stateless games. However, to analyze smart contracts, stateful analysis is required as it must account for the different program states of the protocol. Our main contributions are as follows: we present (i) a simplified programming language for smart contracts; (ii) an automatic translation of the programs to state-based games; (iii) an abstraction-refinement approach to solve such games; and (iv) experimental results on real-world-inspired smart contracts.
- Program Analysis and Automated Verification | Pp. 739-767
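A drastically simplified stand-in for the state-based games mentioned above: a tiny turn-based game tree whose worst-case payoff for the contract owner is computed by backward induction. Probabilities, concurrency and the abstraction-refinement loop are all omitted, and the example "contract" is invented for this sketch.

```haskell
-- A toy state-based game and its worst-case value for the contract owner.

data Player = Owner | Adversary

data Game = Leaf Int              -- terminal state with the owner's payoff
          | Node Player [Game]    -- the player to move chooses a branch

-- Worst-case value: the owner maximises, the adversary minimises.
value :: Game -> Int
value (Leaf v)            = v
value (Node Owner gs)     = maximum (map value gs)
value (Node Adversary gs) = minimum (map value gs)

-- Example: the owner decides whether to lock a deposit, then the adversary
-- decides whether to cooperate.
example :: Game
example =
  Node Owner
    [ Node Adversary [Leaf 10, Leaf 8]    -- with deposit: payoff 10 or 8
    , Node Adversary [Leaf 12, Leaf 0]    -- without deposit: 12 or nothing
    ]

main :: IO ()
main = print (value example)              -- 8: locking the deposit is optimal
```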
Session-Typed Concurrent Contracts
Hannah Gommerstadt; Limin Jia; Frank Pfenning
In sequential languages, dynamic contracts are usually expressed as boolean functions without externally observable effects, written within the language. We propose an analogous notion of concurrent contracts for languages with session-typed message-passing concurrency. Concurrent contracts are partial identity processes that monitor the bidirectional communication along channels and raise an alarm if a contract is violated. Concurrent contracts are session-typed in the usual way and must also satisfy a transparency requirement, which guarantees that terminating compliant programs with and without the contracts are observationally equivalent. We illustrate concurrent contracts with several examples. We also show how to generate contracts from a refinement session-type system and show that the resulting monitors are redundant for programs that are well-typed.
- Session Types and Concurrency | Pp. 771-798
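To illustrate the idea of a contract as a partial identity, here is a sketch in which a monitor forwards a stream of messages unchanged and raises an alarm when a simple balance contract is violated. It models one direction of a channel as a lazy list; the paper's monitors are session-typed, message-passing processes over bidirectional channels, which this sketch does not attempt to capture.

```haskell
-- A contract monitor as a partial identity over a message stream: it forwards
-- every message unchanged and raises an alarm if the contract is violated.

data Msg = Deposit Int | Withdraw Int deriving Show

-- Contract: the running balance may never go negative.
monitor :: Int -> [Msg] -> [Msg]
monitor _   []       = []
monitor bal (m : ms)
  | bal' < 0  = error ("contract violated at " ++ show m)
  | otherwise = m : monitor bal' ms          -- forward the message unchanged
  where
    bal' = case m of
      Deposit n  -> bal + n
      Withdraw n -> bal - n

main :: IO ()
main = do
  print (monitor 0 [Deposit 5, Withdraw 3])              -- passes through
  print (monitor 0 [Deposit 5, Withdraw 3, Withdraw 4])  -- raises an alarm
```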
A Typing Discipline for Statically Verified Crash Failure Handling in Distributed Systems
Malte Viering; Tzu-Chun Chen; Patrick Eugster; Raymond Hu; Lukasz Ziarek
A key requirement for many distributed systems is to be resilient toward partial failures, allowing a system to progress despite the failure of some components. This makes programming of such systems daunting, particularly in regards to avoiding inconsistencies due to failures and asynchrony. This work introduces a formal model for crash failure handling in asynchronous distributed systems featuring a lightweight coordinator, modeled in the image of widely used systems such as ZooKeeper and Chubby. We develop a typing discipline based on multiparty session types for this model that supports the specification and static verification of multiparty protocols with explicit failure handling. We show that our type system ensures subject reduction and progress in the presence of failures. In other words, in a well-typed system even if some participants crash during execution, the system is guaranteed to progress in a consistent manner with the remaining participants.
- Session Types and Concurrency | Pp. 799-826
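As a hint of what a multiparty protocol with explicit failure handling can look like, the sketch below writes one such global protocol down as a plain data structure with a crash-handler branch. The static typing discipline that is the paper's contribution is not modelled here; the roles, labels and the Try/handler constructor are illustrative assumptions of this sketch.

```haskell
import Data.List (nub)

-- A global protocol with an explicit failure-handling branch as a plain value.

type Role = String

data Global
  = Msg Role Role String Global     -- sender, receiver, message label, rest
  | Try Global Global               -- body, and handler run if a peer crashes
  | End

-- A worker computes for a client; if the worker crashes, the coordinator lets
-- a backup take over.
protocol :: Global
protocol =
  Try (Msg "client" "worker" "task"
        (Msg "worker" "client" "result" End))
      (Msg "coordinator" "backup" "takeOver"
        (Msg "backup" "client" "result" End))

-- Roles occurring in a protocol (e.g. to check who must know the handler).
roles :: Global -> [Role]
roles End           = []
roles (Msg p q _ g) = p : q : roles g
roles (Try g h)     = roles g ++ roles h

main :: IO ()
main = print (nub (roles protocol))
```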
On Polymorphic Sessions and Functions
Bernardo Toninho; Nobuko Yoshida
This work exploits the logical foundation of session types to determine what kind of type discipline for the π-calculus can exactly capture, and is captured by, λ-calculus behaviours. Leveraging the proof theoretic content of the soundness and completeness of sequent calculus and natural deduction presentations of linear logic, we develop the first mutually inverse and fully abstract processes-as-functions and functions-as-processes encodings between a polymorphic session π-calculus and a linear formulation of System F. We are then able to derive results of the session calculus from the theory of the λ-calculus: (1) we obtain a characterisation of inductive and coinductive session types via their algebraic representations in System F; and (2) we extend our results to account for value and process passing, entailing strong normalisation.
- Session Types and Concurrency | Pp. 827-855
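Result (1) builds on the classical System F (Church) representations of inductive and coinductive types. As a reminder of what such an algebraic representation looks like on the functional side, here is the encoding of the natural numbers in Haskell with RankNTypes; the session-calculus side, which is the paper's contribution, is not shown.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The System F algebraic (Church) representation of an inductive type, here
-- the naturals:  Nat = forall X. (X -> X) -> X -> X.

newtype Nat = Nat { fold :: forall x. (x -> x) -> x -> x }

zero :: Nat
zero = Nat (\_ z -> z)

suc :: Nat -> Nat
suc n = Nat (\s z -> s (fold n s z))

toInt :: Nat -> Int
toInt n = fold n (+ 1) 0

main :: IO ()
main = print (toInt (suc (suc (suc zero))))   -- 3
```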
Concurrent Kleene Algebra: Free Model and Completeness
Tobias Kappé; Paul Brunet; Alexandra Silva; Fabio Zanasi
Concurrent Kleene Algebra (CKA) was introduced by Hoare, Moeller, Struth and Wehrman in 2009 as a framework to reason about concurrent programs. We prove that the axioms for CKA with bounded parallelism are complete for the semantics proposed in the original paper; consequently, these semantics are the free model for this fragment. This result settles a conjecture of Hoare and collaborators. Moreover, the technique developed to this end allows us to establish a Kleene Theorem for CKA, extending an earlier Kleene Theorem for a fragment of CKA.
- Session Types and Concurrency | Pp. 856-882
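One of the simplest models of the CKA axioms interprets programs as languages of words, with concatenation for sequential and shuffle for parallel composition; the sketch below checks the exchange law (a ∥ b) ; (c ∥ d) ≤ (a ; c) ∥ (b ; d) there on a small example. This interleaving model is only an illustration; the paper's completeness result concerns the richer pomset semantics, which is the free model.

```haskell
import Data.List (nub)

-- Finite languages of words as a model of the CKA operations, used only to
-- illustrate the exchange law on a concrete example.

type Lang = [String]

seqL :: Lang -> Lang -> Lang                 -- sequential composition
seqL xs ys = nub [ x ++ y | x <- xs, y <- ys ]

shuffle :: String -> String -> [String]      -- all interleavings of two words
shuffle [] ys = [ys]
shuffle xs [] = [xs]
shuffle (x:xs) (y:ys) =
  map (x :) (shuffle xs (y:ys)) ++ map (y :) (shuffle (x:xs) ys)

parL :: Lang -> Lang -> Lang                 -- parallel composition
parL xs ys = nub (concat [ shuffle x y | x <- xs, y <- ys ])

leq :: Lang -> Lang -> Bool                  -- language inclusion
leq xs ys = all (`elem` ys) xs

main :: IO ()
main = do
  let a = ["a"]; b = ["b"]; c = ["c"]; d = ["d"]
      lhs = (a `parL` b) `seqL` (c `parL` d)
      rhs = (a `seqL` c) `parL` (b `seqL` d)
  print (lhs `leq` rhs)                      -- True: exchange law holds here
  print (rhs `leq` lhs)                      -- False: the inequality is strict
```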