Publication catalog - books
Testing Commercial-off-the-Shelf Components and Systems
Sami Beydeda ; Volker Gruhn (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Software Engineering/Programming and Operating Systems; Software Engineering; Management of Computing and Information Systems
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-21871-5
Electronic ISBN
978-3-540-27071-3
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin Heidelberg 2005
Subject coverage
Table of contents
Basic Concepts and Terms
Sami Beydeda; Volker Gruhn (eds.)
Detecting programming errors in component-based systems can be difficult and very expensive. Due to the trend toward outsourcing software development, companies are deploying more third-party software in mission-critical systems. Hence, there is an increased need to assess the robustness and security of software. The traditional way of supporting component-oriented programming is to provide components in the form of shared libraries. In this chapter we describe a tool called HEALERS that can detect programming errors and security vulnerabilities in third-party libraries without source code access. A major advantage of our approach is that it is highly automated and can be used by component users who have little knowledge of the internals of a library.
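The core idea described above, checking calls to a closed library routine from an automatically generated wrapper, can be sketched as follows. This is a minimal illustration of the wrapper concept only, not the chapter's actual tool; the function names and the stand-in library routine are hypothetical.

```python
def third_party_copy(dst: bytearray, src: bytes) -> None:
    """Stand-in for a closed-source library routine with no bounds checking."""
    dst[:len(src)] = src

def checked_copy(dst: bytearray, src: bytes) -> None:
    """Illustrative 'healer' wrapper: reject calls that would overflow dst,
    then delegate to the unmodified third-party routine."""
    if len(src) > len(dst):
        raise ValueError(f"buffer overflow: {len(src)} bytes into {len(dst)}")
    third_party_copy(dst, src)

buf = bytearray(8)
checked_copy(buf, b"ok")          # a valid call passes through unchanged
try:
    checked_copy(buf, b"x" * 16)  # an overflowing call is caught by the wrapper
    caught = False
except ValueError:
    caught = True
```

Because the check lives entirely in the wrapper, no source code access to the wrapped routine is needed.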
Pp. 1-14
Context of the Book
Sami Beydeda; Volker Gruhn (eds.)
In this chapter, we first survey current component technologies and discuss the features they share. Quality assurance (QA) characteristics of component systems and the life cycle of component-based software development (CBSD) are also addressed. Based on the characteristics of this life cycle, we propose a QA model for CBSD. The model covers the eight main processes in the development of component-based software systems (CBS). A Component-based Program Analysis and Reliability Evaluation (ComPARE) environment is established for evaluating and predicting the quality of components. ComPARE provides a systematic procedure for predicting the quality of software components and assessing the reliability of the final system developed using CBSD. Using different quality prediction techniques, ComPARE has been applied to a number of component-based programs. This chapter outlines the prediction results and the effectiveness of the quality prediction models for CBSD.
Pp. 15-30
Testing Polymorphic Behavior of Framework Components
Benjamin Tyler; Neelam Soundarajan
An object-oriented framework is often the key component in building products for a given application area. Given such a framework, an application developer needs only to provide definitions for the hook methods suited to the needs of his or her product. With appropriate initializations, the calls to the hook methods made by the template methods defined in the framework will then be dispatched to the definitions provided by the developer, thus customizing the behavior of the template methods. Specifying and testing such a framework, in particular the polymorphic behavior that enables such customization, presents some challenges. We discuss these challenges and develop ways to address them.
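The template/hook interplay the abstract refers to can be sketched in a few lines. This is a generic illustration of the pattern, not taken from the chapter; class and method names are invented.

```python
class ReportFramework:
    """Framework class: render() is the fixed template method."""

    def render(self) -> str:
        # Calls to the hooks are dynamically dispatched, so they reach
        # whatever definitions the application developer supplies.
        return f"{self.header()}|{self.body()}"

    def header(self) -> str:
        return "HEADER"          # hook with a framework-provided default

    def body(self) -> str:
        raise NotImplementedError  # hook the developer must define

class SalesReport(ReportFramework):
    """Application developer's subclass, customizing one hook."""

    def body(self) -> str:
        return "sales=42"

out = SalesReport().render()
```

Testing the framework means checking that `render()` really does reach the subclass definitions, which is exactly the polymorphic behavior the chapter sets out to specify and test.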
Part I - Testing Components Context-Independently | Pp. 33-54
COTS Component Testing through Built-In Test
Franck Barbier
Commercial software components have a typical characteristic: they admit no variation. As a result, a component user has to cope with closed, even rigid, software units. For interfaces, this is a sound characteristic, because client/server coupling is well defined, stabilized, and as low as possible. For implementations, it raises the problem of trustworthiness regarding a component's actual behavior, its quality of service, and its integration capability. In the spirit of a component marketplace, the key actions are choosing and replacing components, upgrading individual components and the resulting COTS-based systems, and some inevitable customization. Given the competition, vendors may gain much credit by offering technical support for building slightly amended versions that permit evaluation, tuning, and more. The ability to endow Commercial Off-The-Shelf (COTS) components with Built-In Test (BIT) material to increase component testability and configurability is the major theme of this chapter. At deployment time, we may thus have new releases called BIT components. Moreover, error correction, functional enhancement, and Quality of Service (QoS) improvement are also considered in this chapter, since BIT components are suitable software artifacts for organizing and instrumenting a software quality circle between reusers and providers: users' feedback, in the form of BIT outputs, helps and guides vendors in the construction of new versions.
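The Built-In Test idea, a component shipping with its own test material that a deployer can invoke without source code access, can be sketched minimally as follows. The component and its self-check are illustrative assumptions, not the chapter's implementation.

```python
class Stack:
    """A toy component whose release includes BIT material."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    # --- built-in test material, invocable by the component user ---
    def built_in_test(self) -> bool:
        """Exercise the component's observable contract on a scratch
        instance, so the check never disturbs deployed state."""
        probe = Stack()
        probe.push(1)
        probe.push(2)
        return probe.pop() == 2 and probe.pop() == 1

ok = Stack().built_in_test()
```

A deployer can run `built_in_test()` in the target environment to gain some of the trust that the closed, no-variation nature of COTS components otherwise withholds.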
Part I - Testing Components Context-Independently | Pp. 55-70
COTS Component Testing through Aspect-Based Metadata
Alejandra Cechich; Macario Polo
Documentation is one of the key issues in Component-Based System Development, emphasizing the creation of models of testing information. Hence, testing based on these models might reveal missing entities — such as methods — or inconsistencies in how entities are treated for testing. Documentation influences component specification as well as specific information for testing. For example, Hybertson, in the chapter ‘Using a Specification Approach to Facilitate Component Testing’, argues that more precise and complete specifications will enable and simplify component testing, and Memon, in the chapter ‘A Process and Role-Based Taxonomy of Techniques to Make Testable COTS Components’, presents some examples of the use of meta-information applied to regression testing. There are many other examples of techniques that enrich documentation by structuring and classifying information. Along those lines, aspect information has been used to help implement better component interfaces and to encode knowledge of a component’s capabilities.
In this chapter, we introduce an aspect-based categorization of information for testing. This information is added as a component’s metadata to generate aspect-dependent test cases. We will illustrate the proposed technique and its supporting tool, which allows us to add testing metadata to Enterprise JavaBeans (EJB) components.
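The general idea of attaching testing metadata to a component and deriving test cases from it might be sketched as below. This is a loose illustration in the spirit of the chapter, not its actual EJB tooling; the decorator, metadata keys, and component are invented.

```python
def test_metadata(**meta):
    """Attach testing metadata (e.g., an aspect category and an input
    domain) to a component method."""
    def attach(fn):
        fn._test_metadata = meta
        return fn
    return attach

class Account:
    """A toy component whose method carries testing metadata."""

    def __init__(self, balance=0):
        self.balance = balance

    @test_metadata(aspect="persistence", domain=[1, 50, 100])
    def deposit(self, amount):
        self.balance += amount
        return self.balance

def generate_cases(component_cls, method_name):
    """Derive argument tuples for test cases from the method's declared
    input domain, without inspecting the method's source."""
    meta = getattr(component_cls, method_name)._test_metadata
    return [(x,) for x in meta["domain"]]

cases = generate_cases(Account, "deposit")
```

A test driver could then run each generated case and group the results by the declared aspect, mirroring the aspect-dependent test cases described above.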
Part I - Testing Components Context-Independently | Pp. 71-88
Automatic Testing of Exception Handling Code
Christof Fetzer; Karin Högstedt; Pascal Felber
Robust applications have to gracefully handle runtime errors that occur during program execution. Recent studies have shown, however, that code related to error handling is the most likely to contain software bugs: first, because it is almost never exercised under normal operation; and second, because errors modify the control flow of the program and render error handling code inherently complex. It is thus of utmost importance for software robustness to carefully and systematically test error handling code.
In this chapter, we describe automated tools to exercise and validate error handling code in modern programming languages that support exceptions, such as C++ and Java. We first introduce the notion of “failure atomicity,” which informally states that a software component must remain in a consistent state after an exception has been thrown for the error to be recoverable. This property is essential because it constitutes a prerequisite for using the component in a robust application. We then propose automated mechanisms, based on code instrumentation, for injecting exceptions into a program and verifying the consistent operation of the error handling code. Experiments with real-world applications illustrate the usefulness of our techniques.
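A failure-atomicity check of the kind described can be sketched as: snapshot the component's state, inject an exception mid-operation, and verify the state is unchanged afterwards. The instrumented component and checker below are illustrative, not the chapter's tools.

```python
import copy

class Counter:
    """A toy component instrumented with an exception-injection switch."""

    def __init__(self):
        self.value = 0

    def add(self, n, fail_after_update=False):
        self.value += n                     # state is mutated first...
        if fail_after_update:
            raise RuntimeError("injected")  # ...then the injected fault fires

def is_failure_atomic(obj, method, *args, **kwargs) -> bool:
    """Return True if the call either succeeds or leaves obj's state
    exactly as it was before the exception."""
    before = copy.deepcopy(obj.__dict__)
    try:
        getattr(obj, method)(*args, **kwargs)
    except Exception:
        return obj.__dict__ == before
    return True

# Counter.add mutates state before the injected exception fires, so it
# fails this check:
atomic = is_failure_atomic(Counter(), "add", 5, fail_after_update=True)
```

Real tools instrument the code to inject exceptions at many program points automatically; the snapshot-and-compare step is the essence of verifying failure atomicity.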
Part I - Testing Components Context-Independently | Pp. 89-105
A Process and Role-Based Taxonomy of Techniques to Make Testable COTS Components
Atif M. Memon
An important activity in component-based software development is the testing of each component in its context of use. Since a component developer cannot anticipate all possible applications of the component, this task is usually left to the component user. What makes testing of component-based software especially challenging for the user is the lack of source code and partial knowledge of the working of the component. To ensure high quality of the overall software, the component user must perform testing tasks, i.e., generate test cases, create test oracles, evaluate coverage, and perform regression testing, all without access to the component’s source code or complete semantic knowledge. The developer of the component can facilitate the execution of these testing tasks by augmenting the component appropriately with information that improves its testability. This chapter describes techniques that can be employed by developers to build testable COTS components. A taxonomy of these techniques based on the phases of software testing and the role of the component developer/user is presented, followed by a description of an important subset of the following techniques: (1) State Machine Models, (2) Built-In Tests, (3) Built-In Contract Testing, (4) Interface Probing, (5) Traceable Components, (6) Metacontent, (7) Binary Reverse Engineering, (8) Fault Injection Support, (9) Testable Beans, (10) Retro Components, (11) Component Test Bench, and (12) Self-Testing Components. Finally, a discussion summing up all the techniques is presented.
Part II - Testing Components in the Context of a System | Pp. 109-140
Evaluating the Integrability of COTS Components — Software Product Family Viewpoint
Anne Immonen; Eila Niemelä; Mari Matinlassi
COTS (Commercial-Off-The-Shelf) components are increasingly used in product family-based software engineering. Within product families, components are assembled using a disciplined process and a common product family architecture. However, the black-box nature of COTS components and insufficient component documentation make the integration of components difficult. Successful component integration requires that the component match the functional, quality, and system requirements and interoperate with the other components of the system family. Ensuring component integrability is an important task, especially within product families, where the ineffective use of COTS components can cause extensive and long-term problems. This chapter discusses the characteristics of architecture, components, and product families that affect the integrability of COTS components, as well as the evaluation techniques that an integrator can use in assessing the integration capability of COTS components.
Part II - Testing Components in the Context of a System | Pp. 141-167
A User-Oriented Framework for Component Deployment Testing
Andrea Polini; Antonia Bertolino
The capability to test an externally acquired software component deployed within the target application environment can help establish the compliance of a candidate component to the customer’s expectations, and helps “increase trust.” To this end, in this chapter we introduce the CDT framework that is conceived to ease the planning and the execution of test suites by the component user. We discuss the main challenges raised by the new component-based software production methodology and how the proposed framework can facilitate the evaluation of candidate components. Notably, CDT permits codifying a component deployment test suite in the early stages of development without reference to any (yet unknown) real implementation, and, later, during the integration stage, adapting and executing the test suite on a found candidate component.
Part II - Testing Components in the Context of a System | Pp. 169-194
Modeling and Implementation of Built-In Contract Tests
Hans-Gerhard Gross; Ina Schieferdecker; George Din
Built-in contract testing is based on the idea of building tests directly into components, so that each component can assess whether it will be integrated into a suitable environment, and the environment can assess whether a newly integrated component will be acceptable.
Component technologies such as CCM, .NET, or EJB are increasingly supported by model-based approaches such as the OMG’s Model Driven Architecture (MDA). The idea is to gain considerable momentum in the development of component-based architectures through high-level modeling and automatic code generation. However, the emphasis of these approaches is currently more on system design and development than on system validation and testing, so the expected reductions in time and effort can only be realized in component and application construction, not during test development. Lengthy and costly in-situ verification and acceptance testing, which is still performed mainly by hand, undermines the benefits of these modern development approaches.
This chapter demonstrates how built-in contract testing can be integrated with and made to supplement automatic approaches to derive application testing from system models, represent them on the model level, and generate executable tests from these models. This model-based testing approach increases the degree of automation in generating and realizing built-in contract tests; therefore, it also increases the quality of the built-in tests and reduces the resources required for developing them.
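The mutual check at the heart of built-in contract testing, a client component testing a newly bound server before accepting it, can be sketched as follows. The interfaces and the contract condition are invented for illustration; the chapter's approach additionally generates such tests from system models.

```python
class GoodClock:
    """A server component that honors the expected contract."""
    def now(self) -> int:
        return 1_700_000_000

class BrokenClock:
    """A server component that violates the contract (wrong return type)."""
    def now(self):
        return "noon"

class Logger:
    """A client component with a built-in test of its server's contract."""

    def bind_clock(self, clock) -> bool:
        # Built-in contract test, executed at integration time: probe the
        # candidate server before accepting the binding.
        t = clock.now()
        if not isinstance(t, int):
            return False        # environment rejected
        self.clock = clock
        return True             # environment accepted

ok_good = Logger().bind_clock(GoodClock())
ok_bad = Logger().bind_clock(BrokenClock())
```

Running such checks automatically at deployment replaces part of the manual in-situ acceptance testing that the chapter argues undermines model-driven development.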
Part II - Testing Components in the Context of a System | Pp. 195-211