Developing a Testing Maturity Model: Part I

Ilene Burnstein, Taratip Suwannasart, and C.R. Carlson,

Illinois Institute of Technology


Testing is a critical component of the software development process, yet many organizations have not fully realized its potential for supporting the development of high-quality software products. To address this issue, we are building a Testing Maturity Model (TMM) to serve as a guide for organizations focusing on test process assessment and improvement.

Testing is a critical component of a mature software development process. It is one of the most challenging and costly process activities, and in its fullest definition it provides strong support for the production of quality software. In spite of its vital role in software development, existing maturity models have not adequately addressed testing issues, nor has the nature of a mature testing process been well defined. We are developing a Testing Maturity Model to address deficiencies in these areas. The TMM will complement the Software Engineering Institute's Capability Maturity Model (CMM) by specifically addressing issues important to test managers, test specialists, and software quality assurance staff. The TMM will contain a set of maturity levels through which an organization can progress toward testing process maturity, a set of recommended practices at each level of maturity, and an assessment model that will allow organizations to evaluate and improve their testing processes.

Approach to Model Development

The TMM is intended to be used by software development organizations to assess and improve their testing processes.

There are several existing process evaluation and assessment models, including the CMM [1-5], ISO 9001 [6], Bootstrap [7], and SPICE [8]. However, none of these models focuses primarily on the testing process, and even the CMM, which is in widespread use, does not adequately address testing issues.

Because of the important role of testing in software process and product quality, and the limitations of existing process assessment models, we have focused our research efforts on the development of a TMM. The components we propose to support the objectives of TMM development are described in the remainder of this article.

The general requirements for model development are as follows: the model must be acceptable to the software development community and be based on agreed-upon software engineering principles and practices; it must allow testing process maturity to develop in structured, stepwise phases that follow natural process evolution; and there must be a support mechanism for test process assessment and improvement. Currently, four sources serve as the principal inputs to TMM development: the Capability Maturity Model [1-5], Gelperin and Hetzel's Evolutionary Testing Model [9], current industrial testing practices [10], and Beizer's progressive phases of a tester's mental model [11].

The Capability Maturity Model

The CMM is a comprehensive process evaluation and improvement model developed by the Software Engineering Institute (SEI) that has been widely accepted and applied by the software industry [1-5]. Research at the SEI and its industrial partners has resulted in refinement of the fundamental concepts of process maturity. Support for process maturity growth is achieved through the hierarchy of levels in the CMM. A process maturity level is identified as one of the five levels of the CMM. An individual level is composed of a set of process goals, each of which stabilizes an important component of the software process, and each level provides the foundation for progress to the next. Except for Level 1, every level in the CMM has an internal structure that consists of key process areas organized into common features. The key process areas indicate where the organization must focus to achieve maturity at the given level. The common features specify key practices that, when accomplished, support the achievements required of the key process areas.

We have borrowed several features from the CMM and applied them to TMM development. Like the CMM, the TMM uses the concept of maturity levels as a script for process evaluation and improvement. The TMM levels have a structural framework, as do the levels in the CMM. We have added a component called the "critical views" to our framework so that the testing process's key participants are included in process maturity growth. Both models require that all of the capabilities at each lower level be included in succeeding levels. To support the self-assessment process, the TMM will also use the questionnaire and interview evaluation approach of the CMM. Not only is the TMM structurally similar to the CMM, but it must also be viewed and used as a complement to the CMM, since mature testing processes depend on general process maturity. We have established relationships between the two models that reflect this dependency. Subsequent sections of this article show how the levels and goals of the two models overlap.

The Evolutionary Stages of Testing

A TMM should reflect the evolutionary pattern of testing process maturity growth documented over the last several decades. We have used the historical model provided in a key paper by Gelperin and Hetzel as the foundation for historical level differentiation in the TMM [9]. Their model describes phases and test goals for the periods of the 1950s through the 1990s.

The initial period in their model is described as "Debugging-Oriented." During that period, most software development organizations had not clearly differentiated between testing and debugging; testing was viewed as an activity to help remove bugs. In the "Demonstration-Oriented" period, a primary testing goal was to demonstrate that the software satisfied its specification. Testing and debugging were still linked in efforts to detect, locate, and correct faults. The "Destruction-Oriented" period focused on testing as an activity to detect implementation faults; debugging became a separate set of activities needed to locate and correct faults. In the "Evaluation-Oriented" period, testing became an activity that was integrated into the software lifecycle, and the value of review activities was recognized. The view of testing was broadened, and its goals were to detect requirements, design, and implementation faults. The Gelperin-Hetzel historical model culminates in what they call a "Prevention-Oriented" period, which reflects the optimizing Level 5 of both the CMM and the TMM. In this period, the scope of testing is broadly defined and includes review activities. A primary testing goal is to prevent requirements, design, and implementation faults, and review activities support test planning, test design, and product evaluation.

Industry Practices and the Beizer Model

A survey of current industrial practices has also provided input to TMM level definition [10]. It illustrates the best and worst testing environments in the software industry as of 1993 and has allowed us to extract realistic benchmarks for evaluating and improving testing practices. We have also incorporated into the TMM concepts associated with Beizer's evolutionary model of an individual tester's thinking process, which in many ways parallels the testing maturity growth pattern described in the Gelperin-Hetzel model [9,11]. Its influence on TMM development is based on the premise that a mature testing organization is built on the skills, abilities, and attitudes of the individuals who work within it.

Defining Testing Process Maturity

Paulk, Weber, et al., have compared and contrasted the behavioral characteristics of immature and mature software organizations and have described a set of fundamental concepts underlying software process maturity [1]. We use their basic set of concepts to define testing process maturity. Extrapolating from Paulk, a mature testing process is managed, measured, monitored, and effective. In our description of test process maturity we interpret the meaning of the term "managed" in its broadest sense to include planning, staffing, directing, controlling, and organizing components [12]. In addition, a mature testing process is applied on an institution-wide basis, is supported by management, and is a part of the organizational culture. Finally, the mature testing process is well-understood and has the capability of continuous growth and improvement.

The attributes of a mature testing process are described below, each supporting one or more of the process maturity criteria described in the CMM. We believe that a mature testing process has the following:

A set of defined testing policies. There is a set of well-defined and documented testing policies that is applied throughout the organization. The testing policies are supported by upper management, institutionalized, and integrated into the organizational culture.

A test planning process. There is a well-defined and documented test planning process used throughout the organization that allows for the specification of test objectives and goals, test resource allocation, test designs, test cases, test schedules, test costs, and test tasks. Test plans reflect the risks of failures; time and resources are allocated accordingly.

A test lifecycle. The test process is broad-based and includes activities in addition to execution-based testing. There is a well-defined test lifecycle with a set of phases and activities that is integrated into the software lifecycle. The test lifecycle encompasses all broad-based testing activities; for example, test planning, test plan reviews, test design, implementation of test-related software, and maintenance of test work products. It is applied to all projects.

A test group. There is an independent testing group. The position of tester is defined and supported by upper management. Instruction and training opportunities exist to educate and motivate the test staff.

A test process improvement group. There is a group devoted to test process improvement. It can be part of a general process improvement group, a software quality assurance group, or a component of the test group. Because the test process is well defined and measured, the test improvement group can exert leadership to fine-tune the process, apply incremental improvement techniques, and evaluate their impact.

A set of test-related metrics. The organization has a measurement program. A set of test-related metrics is defined, and data is collected and analyzed with automated support. The metrics are used to support the actions needed for test process improvement (a minimal illustration of such metrics appears after this list).

Tools and equipment. Appropriate tools are available to assist the testing group with testing tasks and to collect and analyze test-related data. The test process improvement group provides the leadership to evaluate potential tools and oversees the technology transfer issues associated with integrating the tools into the organizational environment.

Controlling and tracking. The test process is monitored and controlled by the test managers to track progress, take actions when problems occur, and evaluate performance and capability. Quantitative techniques are used to analyze the testing process and to determine test process capability and effectiveness.

Product quality control. Statistical methods are used in testing to meet quality standards. "Stop testing" criteria are quantitative; the sketch following this list illustrates one such criterion. Product quality is monitored, defects are tracked, and causal analysis is applied for defect prevention.
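
As a concrete illustration of the measurement and quality-control attributes above, the following Python sketch computes two simple test-related metrics and applies a quantitative stop-testing check. The metric names, data fields, and thresholds are our own illustrative assumptions; the TMM does not prescribe specific metrics or criteria, and a real measurement program would calibrate such thresholds against its own historical data.

# Minimal sketch of test-related metrics and a quantitative stop-testing
# check. Metric names, fields, and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class TestCycleData:
    tests_executed: int
    tests_passed: int
    defects_found: int      # defects discovered during this test cycle
    execution_hours: float  # effort spent executing tests this cycle

def pass_rate(cycle: TestCycleData) -> float:
    """Fraction of executed tests that passed."""
    return cycle.tests_passed / cycle.tests_executed if cycle.tests_executed else 0.0

def defect_discovery_rate(cycle: TestCycleData) -> float:
    """Defects found per hour of test execution."""
    return cycle.defects_found / cycle.execution_hours if cycle.execution_hours else 0.0

def stop_testing(cycle: TestCycleData,
                 min_pass_rate: float = 0.95,
                 max_discovery_rate: float = 0.1) -> bool:
    """A quantitative stop-testing criterion: stop when the pass rate is
    high enough and the defect discovery rate has fallen low enough."""
    return (pass_rate(cycle) >= min_pass_rate
            and defect_discovery_rate(cycle) <= max_discovery_rate)

if __name__ == "__main__":
    cycle = TestCycleData(tests_executed=400, tests_passed=388,
                          defects_found=3, execution_hours=40.0)
    print(f"pass rate: {pass_rate(cycle):.2%}")
    print(f"discovery rate: {defect_discovery_rate(cycle):.3f} defects/hour")
    print("stop testing:", stop_testing(cycle))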

The V-Model and Testing Maturity

A mature testing process requires a broadening of the traditional definition of what is considered to be a testing activity. The expanded definition of testing includes reviews, audits, walk-throughs, and inspections, activities considered to be part of verification and validation (V&V) processes [13]. It also requires integration of all of these activities into the software lifecycle, which allows quality to be built into the software from the beginning of the lifecycle.

The integration of test-related activities into the software lifecycle is represented by the Modified V-Model of software development, as described by Daich et al. [14]. This model illustrates how to integrate the specification and design of unit, integration, system, and acceptance tests into the software lifecycle. We have extended this model to include an expanded set of typical V&V activities, as shown in Figure 1. The Extended/Modified (E/M) V-Model illustrates the software development lifecycle with review and audit activities, test development, and test execution activities taking place throughout the lifecycle. It provides support for testing capabilities at the higher levels of the TMM.


Figure 1. The Extended/Modified V-Model.

The left side of the E/M V-Model shows the requirements, design, and coding phases and the test-related activities that need to be performed after these stages. For example, after the design phase, a design review can be held, and integration tests can be specified and designed. The right side of the model shows the test-related activities that need to be performed once the code is completed. Included are execution-based activities such as unit and integration testing as well as reviews and audits of the test plans that precede the actual execution.
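
To make the E/M V-Model's pairing of lifecycle phases with test-related activities easier to scan, the sketch below encodes the mapping as a simple table. The activity lists are condensed from the description above, and the exact groupings are our reading of the model rather than a definitive rendering of Figure 1.

# Sketch of the Extended/Modified V-Model as a mapping from lifecycle
# phases to the test-related activities performed alongside them. The
# groupings condense the prose description and are illustrative.

EM_V_MODEL = {
    # Left side: development phases and the V&V work that follows each.
    "requirements": ["requirements review",
                     "acceptance and system test specification and design"],
    "design":       ["design review",
                     "integration test specification and design"],
    "coding":       ["code review",
                     "unit test specification and design"],
    # Right side: execution-based activities once the code is complete,
    # each preceded by review and audit of the corresponding test plan.
    "unit testing":        ["test plan review", "unit test execution"],
    "integration testing": ["test plan review", "integration test execution"],
    "system testing":      ["test plan review", "system test execution"],
    "acceptance testing":  ["test plan review", "acceptance test execution"],
}

for phase, activities in EM_V_MODEL.items():
    print(f"{phase}: {', '.join(activities)}")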

Components of the TMM

The TMM has two major components: a set of maturity levels and an assessment model. Each has several subcomponents, which are described below.

The Set of Levels

The TMM has a set of well-defined levels. Each prescribes a position in the testing maturity hierarchy. The characteristics of each level are described in terms of testing capability and organizational goals. Each level, with the exception of Level 1, has a structure. The structure consists of maturity goals, maturity subgoals, and activities, tasks, and responsibilities, which are described in the framework section below.

Future versions of the TMM will also have a set of appropriate metrics and tools that will be associated with each level [14, 15].

The Assessment Model

The assessment model is currently under development and will be composed of the following items:

The Questionnaire: This will contain questions designed to determine an organization's level of testing maturity. The questions will relate to the maturity goals and process issues described at each level. They will help determine to what extent the organization has mechanisms in place to achieve those goals and to resolve the maturity issues.

The Assessment Procedure: Assessing a testing process to determine its maturity level depends on interviews with key personnel and responses to the questionnaire. The assessment procedure will give the assessment team guidelines on whom to interview and how to collect, organize, analyze, and interpret the data gathered from questionnaires and personal interviews. A procedure for determining maturity levels from the analyzed results will be part of the assessment procedure. A reporting mechanism will support dissemination of results and recommendations for test process improvement, with high-priority items identified.

The Team Selection and Training Procedure: The assessment procedure will be carried out by a trained assessment team internal to the organization being assessed. A training procedure will be in place to instruct personnel in test process assessment using the TMM. The training program will include instruction in maturity concepts, maturity models, the principles upon which the TMM levels are built, the internal structure of the levels, interviewing techniques, and an in-depth understanding of the assessment procedure. A set of qualifications for team membership and team leadership will be established.


Figure 2. The Framework of the TMM.

The Model Structure: A Framework for the Levels

The TMM is characterized by five testing maturity levels within a framework of goals, subgoals, activities, tasks, and responsibilities. The model structure is shown in Figure 2. Each level implies a specific level of testing maturity. With the exception of Level 1, each level has several maturity goals, which identify the key process areas that must be addressed to achieve maturity at that level. To be placed at a level, an organization must satisfy the maturity goals at that level. This requirement will be reflected in the questionnaire for process assessment.
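
A minimal sketch of how this placement rule could be operationalized is shown below. Consistent with the requirement that each level include the capabilities of all lower levels, an organization is placed at the highest level whose maturity goals, together with those of every lower level, are satisfied. The goal names and the structure are illustrative placeholders, not the TMM's actual maturity goals or assessment algorithm.

# Sketch of the TMM placement rule: an organization sits at the highest
# level for which it satisfies all maturity goals at that level and at
# every level below it. Goal names are placeholders, not actual TMM goals.

from typing import Dict, List, Set

# Hypothetical maturity goals per level (Level 1 has no maturity goals).
GOALS_BY_LEVEL: Dict[int, List[str]] = {
    2: ["goal-2a", "goal-2b"],
    3: ["goal-3a", "goal-3b", "goal-3c"],
    4: ["goal-4a"],
    5: ["goal-5a"],
}

def maturity_level(satisfied: Set[str]) -> int:
    """Return the highest level whose goals, and all lower levels'
    goals, are satisfied. Every organization is at least Level 1."""
    level = 1
    for lvl in sorted(GOALS_BY_LEVEL):
        if all(goal in satisfied for goal in GOALS_BY_LEVEL[lvl]):
            level = lvl
        else:
            break  # an unmet goal at this level blocks all higher levels
    return level

if __name__ == "__main__":
    # In an assessment, this set would be derived from questionnaire
    # responses and personal interviews.
    answers = {"goal-2a", "goal-2b", "goal-3a", "goal-3b"}
    print("Assessed TMM level:", maturity_level(answers))  # prints 2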

Each maturity goal is supported by one or more maturity subgoals (MSGs). The MSGs specify less abstract objectives, and they define the scope, boundaries, and needed accomplishments for a particular level.

The maturity subgoals are achieved through a group of activities and tasks with responsibilities (ATRs). The ATRs address implementation and organizational adaptation issues at a specific level. Activities and tasks are defined in terms of actions that must be performed at a given level to improve testing capability; they are linked to organizational commitments. Responsibility for these activities and tasks is assigned to three groups that we believe represent the key participants in the testing process: managers, developers and testers, and users and clients. In the model they are referred to as the "three critical views." Defining their roles is essential to developing a maturity framework. The manager's view involves commitment and the ability to perform activities and tasks related to improving testing capability. The developer and tester's view encompasses the technical activities and tasks that, when applied, constitute mature testing practices. The user and client's view is defined as a cooperating or supporting view: the developers and testers work with client and user groups on quality-related activities and tasks that concern user-oriented needs. The focus is on soliciting client and user support, consensus, and participation in activities such as requirements analysis, usability testing, and acceptance test planning.
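
The goal/subgoal/ATR hierarchy and the three critical views can be captured in a few small data types, as sketched below. The type and field names, and the Level 2 example content, are our own illustration: they paraphrase the model's structure and themes rather than reproduce its official definitions.

# Sketch of the TMM level framework: each level carries maturity goals,
# each goal is supported by maturity subgoals (MSGs), and each MSG is
# achieved through activities and tasks with responsibilities (ATRs)
# assigned to the three critical views. All names are illustrative.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class CriticalView(Enum):
    MANAGER = "manager"
    DEVELOPER_TESTER = "developer/tester"
    USER_CLIENT = "user/client"

@dataclass
class ATR:
    description: str
    responsible: List[CriticalView]

@dataclass
class MaturitySubgoal:
    statement: str
    atrs: List[ATR] = field(default_factory=list)

@dataclass
class MaturityGoal:
    statement: str
    subgoals: List[MaturitySubgoal] = field(default_factory=list)

@dataclass
class TMMLevel:
    number: int
    name: str
    goals: List[MaturityGoal] = field(default_factory=list)  # empty at Level 1

# Example instance; the goal and subgoal texts are paraphrased themes,
# not the model's official wording.
level2 = TMMLevel(
    number=2,
    name="Phase Definition",
    goals=[MaturityGoal(
        statement="Institutionalize basic testing techniques and methods",
        subgoals=[MaturitySubgoal(
            statement="Introduce a documented test planning process",
            atrs=[ATR("Develop and review test plans for all projects",
                      [CriticalView.MANAGER, CriticalView.DEVELOPER_TESTER])],
        )],
    )],
)

print(level2.name, "has", len(level2.goals), "maturity goal(s)")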

The Operational Framework of the TMM

We conclude this first article on the TMM with a brief introduction to its five-level structure. The level names are as follows:

Level 1: Initial
Level 2: Phase Definition
Level 3: Integration
Level 4: Management and Measurement
Level 5: Optimization/Defect Prevention and Quality Control

These levels reflect the evolution of the testing process from one that is chaotic and undefined to one that is measured and optimizable, which is compatible with the overall CMM maturity growth hierarchy. The levels also reflect the five periods in Gelperin and Hetzel's model of the evolution of testing, discussed earlier in this article [9]. The correspondence can be briefly described as follows: Level 1 of the TMM is related to the "Debugging-Oriented" period of the Gelperin-Hetzel model; Level 2, Phase Definition, has characteristics of both the "Demonstration-Oriented" and the "Destruction-Oriented" periods; Level 3, Integration, is related to Gelperin and Hetzel's "Evaluation-Oriented" period; and Levels 4 and 5 reflect the later trends and practices of the "Evaluation-Oriented" period and the growth in maturity of the "Prevention-Oriented" period.

In addition, the TMM levels parallel Beizer's five-phase model of an individual tester's maturity growth. Beizer describes an attitudinal progression that begins with Phase 0, in which the tester is unable to distinguish between debugging and testing. The progression culminates in Phase 4, where testers view themselves as disciplined professionals whose task is to support the development of highly testable, low-risk software [11].

Summary

This first article has described our approach to building a Testing Maturity Model. The major components of the model and an outline of its level structure have also been presented. The second article of this series will provide a more in-depth view of the model, focusing on the inner structure of the TMM: its framework of maturity goals, maturity subgoals, and activities, tasks, and responsibilities. The relationship between the CMM and the TMM will be discussed, as well as issues related to their concurrent application by a software development organization. We also will propose a Testers' Tool Workbench to support test process maturity growth.

About the Authors

Ilene Burnstein holds a doctorate from Illinois Institute of Technology. She is an associate professor of computer science at Illinois Institute of Technology and co-director of the Center for Software Engineering. Her research interests include testing process issues, testing techniques, and automated program recognition and debugging.

Ilene Burnstein
Illinois Institute of Technology
Computer Science Department
10 West 31st Street
Chicago, IL 60616-3793
Voice: 312-567-5155
Fax: 312-567-5067
E-mail: burnstein@minna.iit.edu

Taratip Suwannasart holds a doctorate in computer science from Illinois Institute of Technology. Her research interests are in test management, test process improvement, software metrics, and data modeling.

C.R. Carlson holds a doctorate from the University of Iowa. He is a professor and chairman of the computer science department at Illinois Institute of Technology. His research interests include object-oriented modeling, design and query languages, and software process issues.

References

1. Paulk, M., C. Weber, B. Curtis, and M. Chrissis, The Capability Maturity Model: Guidelines for Improving the Software Process, Addison-Wesley, Reading, Mass., 1995.

2. Humphrey, W., Managing the Software Process, Addison-Wesley, Reading, Mass., 1989.

3. Humphrey, W., T. Snyder, and R. Willis, "Software Process Improvement at Hughes Aircraft," IEEE Software, July 1991, pp. 11-23.

4. Paulk, M., B. Curtis, M. Chrissis, and C. Weber, "Capability Maturity Model, Version 1.1," IEEE Software, July 1993, pp. 18-27.

5. Paulk, M., et al., "Key Practices of the Capability Maturity Model, Version 1.1," Technical Report CMU/SEI-93-TR-25, Software Engineering Institute, Pittsburgh, Pa., 1993.

6. Coallier, F., "How ISO 9001 Fits into the Software World," IEEE Software, January 1994, pp. 98-100.

7. Bicego, A., and D. Kuvaja, "Bootstrap, Europe's Assessment Method," IEEE Software, May 1993, pp. 93-95.

8. Paulk, M., and M. Konrad, "An Overview of ISO's SPICE Project," American Programmer, Feb. 1994, pp. 16-20.

9. Gelperin, D., and B. Hetzel, "The Growth of Software Testing," Communications of the ACM, Vol. 31, No. 6, 1988, pp. 687-695.

10. Durant, J., Software Testing Practices Survey Report, Software Practices Research Center, Technical Report, TR5-93, May 1993.

11. Beizer, B., Software Testing Techniques, 2d ed., Van Nostrand Reinhold, New York, 1990.

12. Thayer, R., "Software Engineering Project Management, A Top Down View," IEEE Tutorial, Software Engineering Project Management, R. Thayer, ed., IEEE Computer Society Press, Los Alamitos, Calif., 1990, pp. 15-53.

13. Hetzel, B., The Complete Guide to Software Testing, 2d ed., QED Information Sciences, Inc., Wellesley, Mass., 1990.

14. Daich, G., G. Price, B. Ragland, and M. Dawood, Software Test Technologies Report, Software Technology Support Center, Hill Air Force Base, Utah, August 1994.

15. Pfleeger, S. and C. McGowan, "Software Metrics in the Process Maturity Framework," Journal of Systems and Software, Vol. 12, No. 3, July 1990, pp. 255-262.


This is the first of two articles that describe the development of the Testing Maturity Model. The second will offer an expanded view of the model, including its internal structure of maturity goals, maturity subgoals, and activities, tasks, and responsibilities. A Testers' Tool Workbench will also be proposed.