Publications catalog – books



Testing of Software and Communicating Systems: 19th IFIP TC6/WG6.1 International Conference, TestCom 2007, 7th International Workshop, FATES 2007, Tallinn, Estonia, June 26-29, 2007. Proceedings

Alexandre Petrenko; Margus Veanes; Jan Tretmans; Wolfgang Grieskamp (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-73065-1

Electronic ISBN

978-3-540-73066-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© IFIP International Federation for Information Processing 2007

Table of contents

TPLan – A Notation for Expressing Test Purposes

Stephan Schulz; Anthony Wiles; Steve Randall

To this day, test purposes are in practice predominantly defined using natural language. This paper describes a more formal approach based on TPLan, a notation recently developed and standardized at the European Telecommunications Standards Institute (ETSI). We introduce the motivation and main concepts behind this new notation, and share our experiences gathered from its application in the development of standardized test specifications. We also discuss how TPLan can help make test development as a whole more efficient, especially in the context of suite-based test specification.

Pp. 292-304

Testing Nondeterministic Finite State Machines with Respect to the Separability Relation

Natalia Shabaldina; Khaled El-Fakih; Nina Yevtushenko

In this paper, we propose a fault model and a method for deriving complete test suites for nondeterministic FSMs with respect to the separability relation. Two FSMs are separable if there exists an input sequence such that the sets of output responses of these FSMs to the sequence do not intersect. In contrast to the well-known reduction and equivalence relations, the separability relation can be checked when the «all weather conditions» assumption does not hold for a nondeterministic Implementation Under Test (IUT). A (complete) test suite derived from the given (nondeterministic) FSM specification using the separability relation can detect every IUT that is separable from the given specification after applying each test case only once. Two algorithms are proposed for complete test derivation without the explicit enumeration of all possible implementations. The first algorithm can be applied when the set of possible implementations is the set of all complete nondeterministic submachines of a given mutation machine. The second algorithm is applied when the upper bound on the number of states of an IUT is known.

Pp. 305-318

Learning and Integration of Parameterized Components Through Testing

Muzammil Shahbaz; Keqin Li; Roland Groz

We investigate the use of parameterized state machine models to drive integration testing in the case where models of the components are not available beforehand. Observations from tests are therefore used to learn partial models of the components, from which further tests can be derived for integration. We have extended previous algorithms to the case of finite state models with predicates on input parameters and observable non-determinism. We also propose a new strategy where integration tests can be derived from the data collected during the learning process. Our work typically addresses the problem of assembling telecommunication services from black-box COTS components.
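The learn-then-test loop the abstract describes, build a partial model from test observations, then derive further tests from what the model does not yet cover, can be sketched in a drastically simplified form. This is not the authors' algorithm (which handles parameter predicates and non-determinism); the state-naming scheme and all identifiers are assumptions.

```python
# Minimal sketch: each observed test run (a list of (input, output)
# pairs) contributes transitions to a partial Mealy-style model, and
# input letters never tried in a reached state become candidate
# integration tests for the next round.
def build_partial_model(runs, alphabet):
    model, states = {}, {'q0'}
    for run in runs:
        state = 'q0'
        for inp, out in run:
            nxt = state + '/' + inp        # name states by access path
            model[(state, inp)] = (out, nxt)
            states.add(nxt)
            state = nxt
    untested = [(s, a) for s in states for a in alphabet
                if (s, a) not in model]
    return model, untested

# One observed run over alphabet {a, b} leaves four untried
# state/input pairs, each a candidate for the next test.
model, todo = build_partial_model([[('a', 0), ('b', 1)]], ['a', 'b'])
```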

Pp. 319-334

An EFSM-Based Passive Fault Detection Approach

Hasan Ural; Zhi Xu

Extended Finite State Machine (EFSM)-based passive fault detection involves modeling the system under test (SUT) as an EFSM, monitoring the input/output behaviors of the SUT, and determining whether these behaviors indicate faults within the SUT. We propose a new approach for EFSM-based passive fault detection which randomly selects a state in the EFSM model and checks whether there is a trace in the model starting from this state that is compatible with the observed behaviors. If a compatible trace is found, we determine that the observed behaviors are not sufficient to declare the SUT faulty; otherwise, we check another unchecked state. If all states have been checked and no compatible trace is found, we declare the SUT faulty. Our approach uses a hybrid method that combines the Interval Refinement and Simplex methods to improve the performance of passive fault detection.
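The state-by-state compatibility check at the core of this approach can be sketched for a plain FSM (dropping the extended part, i.e. the variables and predicates that the paper's Interval Refinement and Simplex machinery handles). The encoding and names below are illustrative assumptions, not the authors' implementation.

```python
# A state is compatible with an observed I/O trace if the model,
# started in that state, can reproduce every (input, output) pair
# in order along at least one path.
def compatible(transitions, state, trace):
    states = {state}
    for inp, out in trace:
        states = {s2 for s in states
                  for o, s2 in transitions.get((s, inp), set())
                  if o == out}
        if not states:
            return False
    return True

def passive_check(transitions, states, trace):
    """Declare the SUT faulty only if no state admits the observed trace."""
    return any(compatible(transitions, s, trace) for s in states)

# Toy model: a two-state machine that alternates outputs 0 and 1 on 'a'.
M = {('s0', 'a'): {(0, 's1')},
     ('s1', 'a'): {(1, 's0')}}
```

Returning True here only means the observations are insufficient to declare a fault, mirroring the paper's reasoning: passive detection can convict the SUT, but never acquit it.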

Pp. 335-350

Test Data Variance as a Test Quality Measure: Exemplified for TTCN-3

Diana Vega; Ina Schieferdecker; George Din

Test effectiveness is a central quality aspect of a test specification, reflecting its ability to demonstrate system quality levels and to discover system faults. A well-known approach to its estimation is to determine coverage metrics for the system code or system model. However, these are often not available as such; only the system interfaces are, which basically define the structural aspects of the stimuli and responses to the system.

Therefore, this paper focuses on the idea of using test data variance analysis as another analytical approach to determining test quality. It presents a method for the quantitative evaluation of the structural and semantic variance of test data. Test variance is defined as the distribution of the test data over the system interface data domain. It is expected that the more the test data varies, the better a given test suite tests the system. The paper instantiates this method for black-box test specifications written in TTCN-3 and the structural analysis of send templates. Distance metrics and similarity relations are used to determine the data variance.
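The intuition, the wider the test data is spread over the interface's data domain, the higher the measured variance, can be illustrated with one of the simplest possible distance metrics. The metric choice and names are assumptions for illustration; the paper works on TTCN-3 send templates with richer structural and semantic distances.

```python
# Illustrative variance measure: mean pairwise distance between the
# values a test suite sends to one numeric interface parameter.
def mean_pairwise_distance(values):
    pairs = [(a, b) for i, a in enumerate(values)
                    for b in values[i + 1:]]
    if not pairs:
        return 0.0
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

narrow = [5, 5, 6]      # test data clustered in one corner of the domain
wide   = [0, 50, 100]   # test data spread across the domain
```

Under this measure the `wide` suite scores far higher than the `narrow` one, matching the paper's expectation that higher-variance test data exercises the system more thoroughly.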

Pp. 351-364

Model-Based Testing of Optimizing Compilers

Sergey Zelenov; Sophia Zelenova

We describe a test development method, named OTK, aimed at testing optimizing compilers. The OTK method is based on constructing a model of the optimizer's input data, and it allows developing tests targeted at a chosen optimizer. A formal data model is constructed on the basis of an abstract informal description of the algorithm of the optimizer under test. In the paper, we consider in detail the process of analyzing an optimization algorithm and building a formal model. We also outline the other part of the method, test selection and test running. The OTK method has been successfully applied in several case studies, including test development for several different optimizing compilers for modern architectures.
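The core idea, generate test programs from a formal model of the optimizer's input data, can be sketched with a toy grammar of arithmetic expressions, the kind of input a constant-folding pass would consume. The grammar, the targeted optimization, and all names are assumptions for illustration, not the OTK formalism.

```python
import itertools

# Toy data model: arithmetic expressions over the leaves 0, 1 and x,
# built with + and *, nested to exactly the given depth. Enumerating
# them yields a systematic pool of inputs for an optimizer under test.
def expressions(depth):
    if depth == 0:
        yield from ('0', '1', 'x')
        return
    for op in ('+', '*'):
        for l, r in itertools.product(expressions(depth - 1), repeat=2):
            yield f'({l} {op} {r})'
```

Each generated expression can then be compiled with and without the optimization and the results compared, which is the test-running half of the method the abstract only outlines.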

Pp. 365-377