Developing a Testing Maturity Model, Part II

Ilene Burnstein
Taratip Suwannasart
C.R. Carlson
Illinois Institute of Technology



This article expands the level structure of the Testing Maturity Model (TMM). The maturity goals, subgoals, and the set of activities, tasks, and responsibilities associated with each level are shown. These are necessary to apply the TMM to test process assessment and improvement. Also discussed are automated tool support for test process maturity in the form of a Testers' Workbench and the nature of the relationship between the Capability Maturity Model and the Testing Maturity Model.

This section describes behavioral characteristics of organizations at the five Testing Maturity Model (TMM) levels. We give an informal view of each level and a perspective on maturity growth and maturity deficiencies as an organization progresses through the TMM hierarchy.

Behavioral Characteristics of the TMM Levels

Level 1 - Initial: Testing is a chaotic process; it is ill-defined and not distinguished from debugging. Tests are developed in an ad hoc way after coding is done. Testing and debugging are interleaved to get the bugs out of the software. The objective of testing is to show that the software works [1]. Software products are released without quality assurance. There is a lack of resources, tools, and properly trained staff. This type of organization would be on Level 1 of the Capability Maturity Model (CMM) developed by the Software Engineering Institute. There are no maturity goals at this level.

Level 2 - Phase Definition: Testing is separated from debugging and is defined as a phase that follows coding. It is a planned activity; however, test planning at Level 2 may occur after coding for reasons related to the immaturity of the test process. For example, at Level 2 there is the perception that all testing is execution-based and dependent on the code, and therefore it should be planned only when the code is complete.

The primary goal of testing at this level of maturity is to show that the software meets its specifications [2]. Basic testing techniques and methods are in place. Many quality problems at this TMM level occur because test planning occurs late in the software lifecycle. In addition, defects propagate into the code from the requirements and design phases, as there are no review programs that address this important issue. Post-code, execution-based testing is still considered the primary testing activity.

Level 3 - Integration: Testing is no longer a phase that follows coding; it is integrated into the entire software lifecycle. Organizations can build on the test planning skills they have acquired at Level 2. Unlike Level 2, planning for testing at TMM Level 3 begins at the requirements phase and continues throughout the lifecycle, supported by a version of the V-model [3]. Test objectives are established with respect to the requirements, based on user and client needs, and are used for test case design and success criteria. There is a test organization, and testing is recognized as a professional activity. There is a technical training organization with a testing focus. Basic tools support key testing activities. Although organizations at this level begin to realize the important role of reviews in quality control, there is no formal review program, and reviews do not yet take place across the lifecycle. A test measurement program has not yet been established to quantify process and product attributes.

Level 4 - Management and Measurement: Testing is a measured and quantified process. Reviews at all phases of the development process are now recognized as testing and quality control activities. Software products are tested for quality attributes such as reliability, usability, and maintainability. Test cases from all projects are collected and recorded in a test case database to support test case reuse and regression testing. Defects are logged and given a severity level. Deficiencies in the test process are now often due to the lack of a defect prevention philosophy and the paucity of automated support for the collection, analysis, and dissemination of test-related metrics.

Level 5 - Optimization, Defect Prevention, and Quality Control: Because of the infrastructure provided by the attainment of maturity goals at Levels 1 through 4 of the TMM, the testing process is now said to be defined and managed; its cost and effectiveness can be monitored. At Level 5, there are mechanisms that fine-tune and continuously improve testing. Defect prevention and quality control are practiced. The testing process is driven by statistical sampling, measurements of confidence levels, trustworthiness, and reliability. There is an established procedure to select and evaluate testing tools. Automated tools totally support the running and rerunning of test cases, providing support for test case design, maintenance of test-related items, defect collection and analysis, and the collection, analysis, and application of test-related metrics.

Maturity Goals at the TMM Levels

The operational framework of the TMM provides a sequence of hierarchical levels that contain the maturity goals, subgoals, activities and tasks, and responsibilities that define the testing capabilities of an organization at a particular level. These elements identify the areas where an organization must focus to improve its testing process. The hierarchy of testing maturity goals is shown in Figure 1. This section describes the maturity goals for all levels except Level 1, which has no maturity goals.


Figure 1. TMM Maturity Goals by Level.

Level 2 - Phase Definition

At Level 2 of the TMM, an organization has defined a testing phase in the software lifecycle that follows coding. It is planned and repeatable over all software projects. It is separated from debugging, which is an unplanned activity. The maturity goals at Level 2 follow.

1. Develop Testing and Debugging Goals

This means an organization must clearly distinguish between the processes of testing and debugging. The goals, tasks, activities, and tools for each must be identified. Responsibilities for each must be assigned. Management must develop plans and policies to accommodate and institutionalize both processes. The separation of these two processes is essential for testing maturity growth, since they are different in goals, methods, and psychology. Testing at this level is now a planned activity and therefore can be managed, whereas debugging cannot. Maturity subgoals that support this goal include

2. Initiate a Test Planning Process

Planning is essential for a process that is to be repeatable, defined, and managed. Test planning involves stating objectives, analyzing risks, outlining strategies, and developing test design specifications and test cases. In addition, the test plan must address the allocation of resources and the responsibilities for testing on the unit, integration, system, and acceptance levels. Maturity subgoals that support this goal include

3. Institutionalize Basic Testing Techniques and Methods

To improve test process capability, basic testing techniques and methods must be applied across the organization. How and when these techniques and methods are to be applied and any basic tool support for them should be clearly specified. Examples of basic techniques and methods are black-box and white-box testing strategies, use of a requirements validation matrix, and the division of execution-based testing into subphases such as unit, integration, system, and acceptance testing. Maturity subgoals that support this goal include
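One of these basic methods, the requirements validation matrix, can be illustrated with a short sketch. The code below maps requirement IDs to the test cases that exercise them and flags requirements with no covering test; the IDs, data structures, and function names are illustrative assumptions, not a prescribed format.

```python
# Sketch of a requirements validation matrix: each requirement is mapped
# to the test cases that exercise it; requirements with no mapped tests
# are flagged as coverage gaps. All names and IDs are hypothetical.

def build_validation_matrix(requirements, test_cases):
    """Return {req_id: [test ids covering it]} for every requirement."""
    matrix = {req: [] for req in requirements}
    for test_id, covered in test_cases.items():
        for req in covered:
            if req in matrix:
                matrix[req].append(test_id)
    return matrix

def uncovered(matrix):
    """Requirements that no test case exercises."""
    return sorted(req for req, tests in matrix.items() if not tests)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = {"TC-1": ["REQ-1"], "TC-2": ["REQ-1", "REQ-3"]}

matrix = build_validation_matrix(requirements, test_cases)
gaps = uncovered(matrix)   # REQ-2 has no covering test
```

Even this small form of the matrix makes the Level 2 point concrete: validation against requirements becomes a checkable artifact rather than an informal judgment.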

Level 3 - Integration

This critical maturity level is essential for building quality into software products early in the software lifecycle. At this level, the testing phase is no longer just a phase that follows coding. Instead, it is expanded into a set of well-defined activities that are integrated into the software lifecycle. All of the lifecycle phases have testing activities associated with them. Integration support is provided by the institutionalization of some variation of the V-model, which associates testing activities with lifecycle phases, such as requirements and design [3]. At this level, management supports the formation and training of a software test group: specialists responsible for testing. This group also acts as a liaison with the users or clients, ensuring their participation in the testing process. Basic testing tools now support institutionalized test techniques and methods. Technical and managerial staff are beginning to realize the value of review activities as a tool for defect detection and quality assurance. The maturity goals at Level 3 follow.

1. Establish a Software Test Organization

A software test organization is created to identify a group of people who are responsible for testing. Because testing in its fullest sense has a great influence on product quality, and because testing consists of complex activities usually performed under tight schedules and high pressure, management realizes it is necessary to have a well-trained and dedicated group of specialists in charge of this process. The test group is responsible for test planning, test execution and recording, test-related standards, test metrics, the test database, test reuse, test tracking, and evaluation. Maturity subgoals that support this goal include

2. Establish a Technical Training Program

A technical training program ensures a skilled staff is available to the testing group. Testers must be properly trained so they can perform their jobs efficiently and effectively. At Level 3 of the TMM, the staff is trained in test planning, testing methods, standards, techniques, and tools. At the higher levels of the TMM they learn how to define, collect, analyze, and apply test-related metrics. The training program also prepares the staff for the review process, instructing review leaders and instituting channels for user participation in the testing and review processes. Training includes in-house courses, self-study, mentoring programs, and support for attendance at academic institutions. Maturity subgoals that support this goal include

3. Integrate Testing into the Software Lifecycle

Management and technical staff now recognize that test process maturity and software product quality require that testing activities be conducted in parallel with all lifecycle phases. Test planning is now initiated early in the lifecycle. A variation of the V-model is used by the testers and developers. User input to the testing process is solicited through established channels for several of the testing phases. Maturity subgoals that support this goal include

4. Control and Monitor the Testing Process

According to R. Thayer, management consists of five principal activities: planning, directing, staffing, controlling, and organizing [4]. Level 2 of the TMM introduces planning capability to the testing process. In addition to staffing, directing, and organizing capabilities, Level 3 introduces several controlling and monitoring activities. These provide visibility and ensure the testing process proceeds according to plan. When actual activities deviate from test plans, management can take effective action to correct the deviations and return to test plan goals. Test progress is determined by comparing the actual test work products, test effort, costs, and schedule to the test plan. Support for controlling and monitoring comes from standards for test products, test milestones, test logs, test-related contingency plans, and test metrics that can be used to evaluate test progress and test effectiveness. Maturity subgoals that support this goal include
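The controlling and monitoring idea of comparing actual test progress against the plan can be sketched in a few lines. The metric names and the 10 percent tolerance below are assumptions for illustration, not TMM prescriptions.

```python
# Minimal sketch of test monitoring: compare actual effort and executed
# tests against the test plan and report metrics whose deviation exceeds
# a tolerance. Field names and the threshold are illustrative only.

def monitor(plan, actual, tolerance=0.10):
    """Return {metric: (planned, observed)} for each metric whose actual
    value deviates from plan by more than `tolerance` (a fraction of the
    planned value)."""
    deviations = {}
    for metric, planned in plan.items():
        observed = actual.get(metric, 0)
        if planned and abs(observed - planned) / planned > tolerance:
            deviations[metric] = (planned, observed)
    return deviations

plan   = {"tests_executed": 200, "effort_hours": 80, "defects_logged": 40}
actual = {"tests_executed": 150, "effort_hours": 84, "defects_logged": 41}

flagged = monitor(plan, actual)
# tests_executed deviates by 25 percent; the other metrics are within 10
```

A report like `flagged` is what gives management the visibility the text describes: deviations surface early enough that corrective action can return the project to its test plan goals.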

Level 4 - Management and Measurement

The principal focus areas at this level are to broaden the definition of a "testing activity" and to accurately measure the testing process. Controlling and monitoring functions can now be fully supported by an established test measurement program. Staffing activities are supported by a training program. The definition of a testing activity is expanded to include reviews, inspections, and walk-throughs at all phases of the lifecycle. Software work products as well as test-related work products such as test plans, test designs, and test procedures are all reviewed. This expanded definition of testing covers activities typically categorized as verification and validation activities. The primary goal of this broadened set of testing operations is to uncover defects occurring in all phases of the lifecycle and to uncover them as early as possible. Review results and defect data are saved as a part of project history. Test cases and procedures are stored for reuse and regression testing. The Extended/Modified V-Model illustrates the integration of these activities and provides support for several TMM Level 4 maturity goals. It was described in the first article of this series and in previous publications [1, 5]. The maturity goals at Level 4 follow.

1. Establish an Organization-Wide Review Program

At TMM Level 3, an organization integrates testing activities into the software lifecycle. At Level 4, this integration is augmented by the establishment of a review program. Reviews are conducted at all phases of the lifecycle to identify, catalog, and remove defects from software work products and to test work products early and effectively. Maturity subgoals that support this goal include

2. Establish a Test Measurement Program

A test measurement program is essential for evaluating the quality of the testing process, assessing the productivity of testing personnel and the effectiveness of the testing process, and supporting test process improvement. Test measurements are vital in the monitoring and controlling of the testing process. A test measurement program must be carefully planned and managed. Test data to be collected must be identified; how they are to be used and by whom must be decided. Measurement data for every test lifecycle phase must be specified. Measurements include those related to test progress, test costs, data on errors and defects, and product measures such as software reliability. Maturity subgoals that support this goal include
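As a minimal illustration of what such a measurement program might record, the sketch below logs defects by lifecycle phase and severity and derives one simple product measure, defect density. The schema and names are assumptions for illustration, not a prescribed TMM format.

```python
# Toy test measurement record: defects are logged per lifecycle phase
# and severity, and a simple product measure (defects per KSLOC) is
# derived from the log.

from collections import Counter

class TestMeasurements:
    def __init__(self):
        self.defects = []          # list of (phase, severity) pairs

    def log_defect(self, phase, severity):
        self.defects.append((phase, severity))

    def defects_by_phase(self):
        """Count of defects attributed to each lifecycle phase."""
        return Counter(phase for phase, _ in self.defects)

    def defect_density(self, ksloc):
        """Defects per thousand source lines of code."""
        return len(self.defects) / ksloc

m = TestMeasurements()
m.log_defect("requirements", "major")
m.log_defect("design", "minor")
m.log_defect("code", "major")
m.log_defect("code", "minor")

by_phase = m.defects_by_phase()       # two defects attributed to code
density = m.defect_density(ksloc=10)  # 4 defects / 10 KSLOC
```

Once data of this kind exists for every lifecycle phase, the monitoring, controlling, and quality evaluation goals described in this section have something quantitative to work with.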

3. Software Quality Evaluation

One purpose for software quality evaluation at this TMM level is to relate software quality issues to the adequacy of the testing process. Software quality evaluation involves defining measurable quality attributes and defining quality goals to evaluate software work products. Quality goals are tied to testing process adequacy because a mature testing process must lead to software that is at least correct, reliable, usable, maintainable, portable, and secure. Maturity subgoals that support this goal include

Level 5 - Optimization, Defect Prevention, and Quality Control

Several test-related objectives are at the highest level of the TMM. Organizations at this level test to ensure that the software satisfies its specification, that it is reliable, and that a quantifiable level of confidence in its reliability can be established. There are mechanisms to detect faults and to prevent them; prevention applies to requirements, design, and implementation faults. Because the testing process is now repeatable, defined, managed, and measured, it can be fine-tuned and continuously improved. Tool support is available to collect and analyze test-related data. Test planning and test execution also have tool support. Through achievement of the maturity goals at TMM Levels 1 through 4, the testing process is now planned, organized, staffed, controlled, and directed [5]. Management provides leadership and motivation and supports the infrastructure necessary for the continual improvement of product and process quality. Test-related measurements help suggest and evaluate test process improvement procedures, methods, tools, and activities. Changes can be monitored and managed. The maturity goals at Level 5 follow.

1. Application of Process Data for Defect Prevention

Mature organizations are able to learn from their history. Following this philosophy, organizations at the highest level of the TMM record defects, analyze defect patterns, and identify root causes of errors. Techniques such as Pareto diagrams are used [6]. Action plans are developed, actions are taken to prevent defect recurrence, and there is a mechanism to track action progress. At TMM Level 5, defect prevention is applied across all projects and across the organization. A defect prevention team is responsible for defect prevention activities, interacting with developers to apply defect prevention activities throughout the lifecycle. Maturity subgoals that support this goal include
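The Pareto analysis mentioned above can be sketched as follows: rank defect root causes by frequency and select the "vital few" causes that together account for roughly 80 percent of recorded defects. The cause names and counts below are invented for illustration.

```python
# Sketch of a Pareto analysis of defect root causes: the causes are
# ranked by defect count, and the smallest prefix of that ranking
# covering at least `cutoff` of all defects is selected for action.

def pareto(cause_counts, cutoff=0.80):
    """Return the causes, most frequent first, that together account
    for at least `cutoff` of all recorded defects."""
    total = sum(cause_counts.values())
    ranked = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)
    selected, running = [], 0
    for cause, count in ranked:
        selected.append(cause)
        running += count
        if running / total >= cutoff:
            break
    return selected

counts = {
    "ambiguous requirement": 45,
    "interface mismatch": 25,
    "logic error": 20,
    "typo": 10,
}
vital_few = pareto(counts)   # causes to target in the action plan
```

The output of an analysis like this is exactly what feeds the action plans described above: prevention effort is concentrated on the few causes that produce most of the defects.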

2. Quality Control

At TMM Level 4, organizations focus on testing for a group of quality-related attributes such as correctness, security, portability, interoperability, usability, and maintainability. At TMM Level 5, organizations use statistical sampling, measurements of confidence levels, trustworthiness, and reliability goals to drive the testing process. The testing group and the software quality assurance (SQA) group consist of quality leaders who work with software designers and implementers to incorporate techniques and tools that reduce defects and improve software quality. Automated tools support the running and rerunning of test cases and defect collection and analysis. Usage modeling is used to perform statistical testing [7, 8]. The cost to achieve quality goals is measured relative to the cost of not testing for quantitative quality goals. Subgoals that support statistical quality control include
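Usage-model-based statistical testing can be sketched briefly: user behavior is modeled as a Markov chain over operations, random test sequences are drawn from the model, and reliability is estimated from the failure rate over the sample. The model, its states, and the failure rule below are all illustrative assumptions, not drawn from the referenced works.

```python
# Sketch of statistical testing from a usage model: operations and their
# transition probabilities form a small Markov chain, test sequences are
# sampled from it, and reliability is estimated as the fraction of
# sampled sequences handled without failure.

import random

# Transition probabilities of a tiny, hypothetical usage model.
usage_model = {
    "start":  [("browse", 1.0)],
    "browse": [("browse", 0.5), ("buy", 0.3), ("exit", 0.2)],
    "buy":    [("exit", 1.0)],
}

def sample_sequence(model, rng):
    """Draw one operation sequence by walking the chain until exit."""
    state, path = "start", []
    while state != "exit":
        ops, probs = zip(*model[state])
        state = rng.choices(ops, weights=probs)[0]
        path.append(state)
    return path

def estimate_reliability(model, run_fails, n=1000, seed=42):
    """Fraction of n sampled usage sequences that do not fail."""
    rng = random.Random(seed)
    failures = sum(run_fails(sample_sequence(model, rng)) for _ in range(n))
    return 1 - failures / n

# Assume (purely for illustration) the software fails whenever "buy"
# follows three or more consecutive "browse" operations.
def fails(path):
    return any(path[i:i + 4] == ["browse"] * 3 + ["buy"]
               for i in range(len(path)))

reliability = estimate_reliability(usage_model, fails)
```

Sampling sequences this way weights testing toward the operations users actually perform, which is the premise of statistical testing from a usage model [7, 8].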

3. Test Process Optimization

At the highest level of the TMM, the testing process is subject to continuous improvement across projects and across the organization. The test process is quantified and can be fine-tuned so that capability growth is an ongoing process. An organizational infrastructure exists to support this continual growth. This infrastructure, which consists of policies, standards, training, facilities, tools, and organizational structures, has been put in place through the goal achievement processes that constitute the TMM hierarchy. Optimizing the testing process involves

Maturity subgoals needed for test process optimization include

An Example Set of Activities, Tasks, and Responsibilities

The structure of the TMM is such that each maturity goal (MG) is supported by several maturity subgoals (MSG). The MSGs are achieved by a set of activities, tasks, and responsibilities (ATRs) assigned to the three groups that play key roles in the testing process: managers, developers and testers, and user and client groups. Managers, developers, and testers are responsible for the development, implementation, and organizational adaptation of the policies, plans, standards, practices, and organizational structures associated with the testing process. They receive support, consensus, or both for these tasks and responsibilities from user and client groups. The relationship between the MGs, MSGs, and ATRs is shown in Figure 2. The following paragraphs describe one set of ATRs as an example. The complete set of MSGs and ATRs is described in detail by T. Suwannasart [9].


Figure 2. The Framework of the TMM.

The following example is from Level 2 of the TMM - Phase Definition. One of the maturity goals at this level is "Initiate a Test Planning Process." The complete set of maturity subgoals for this goal follow:

The activities, tasks, and responsibilities for these subgoals are described below. The three critical views are represented in the description. There are two important issues that should be noted.

The Managers' View

The Developers' View

The Users' and Clients' View

Correlation Between the CMM and TMM

We are developing the TMM as a complement to the CMM. We envision that organizations interested in assessing and improving their testing capabilities are likely to be involved in general software process improvement. Directly corresponding levels in the two maturity models would simplify these parallel process improvement drives, but the levels do not correspond exactly, since the CMM and TMM level structures are each based on the historical maturity growth patterns of the processes they represent. However, the testing process is a subset of the overall software development process; therefore, its maturity growth needs support from the key process areas (KPAs) associated with general process growth [10, 11, 12]. For this reason, we recommend that any organization that wishes to improve its testing process by use of the TMM first commit to improving its overall software development process by applying the CMM.

Our research shows that an organization striving to reach the "ith" level of the TMM must be at least at the "ith" level of the CMM. In many cases, a given TMM level needs specific support from KPAs in its corresponding CMM level and the CMM level beneath it. These KPAs should be addressed either prior to or in parallel with the TMM maturity goals. A brief summary of support from the CMM level required for TMM goal achievement is shown in Table 1. A detailed description of the correlation between the two models is given by T. Suwannasart [9].

Table 1. Examples of Support for TMM Maturity Levels from CMM Key Process Areas.
TMM Level CMM Level Supporting Key Process Areas
2 2 Requirements management, project planning, and software configuration management.
3 2 Software project tracking and oversight, and software quality assurance.
3 3 Organization process focus, organization process definition, and training programs.
4 3 Peer reviews.
4 4 Software quality management and quantitative process management.
5 5 Process change management, technology change management, and defect prevention.

A Proposal for a Testers' Workbench

We argue that testing process maturity growth should be supported by automated tools. We are developing a "Testers' Workbench" to provide that support. Suggestions for TMM tool support have their origins in a Software Test Technologies Report published by the Software Technology Support Center [3]. The workbench will be built from a basic set of testing tools for use at the lower TMM levels. Tools are added as the organization progresses through the maturity levels. Below we describe a group of tools suggested for an organization at TMM Level 2. The tools support that level's maturity goals, such as test planning, incorporation of basic testing techniques, testing as an execution-based activity, and the separation of testing and debugging.

Project Planning Tools: At Level 2 of the TMM, testing is becoming a planned activity. This implies a commitment to completion of testing activities. A project planning tool will aid the project manager in defining test activities, allocating time, money, resources, and personnel to the testing process.

Capture and Replay Tools: These tools automatically record test inputs from a keyboard or other device and replay them for subsequent testing when changes are made to the software. The tool user is able to prepare test scripts and use them with the recorded inputs to replay the tests under automated control. Program errors can be detected by using an associated comparator tool that can compare current screens, dialogs, and files with results recorded from previous tests. This type of tool is invaluable for regression testing and has a positive effect on productivity.
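The capture, replay, and comparator idea can be reduced to a toy sketch: record each input and the output the current software produces, replay the recorded inputs against a changed version, and report any outputs that differ. The functions below are stand-ins for a real application and its successor, not a real tool's API.

```python
# Toy capture-and-replay sketch: a reference run is recorded as
# (input, output) pairs; replay re-runs the inputs against a changed
# version and acts as the comparator, reporting mismatches.

def capture(func, inputs):
    """Record (input, output) pairs from a reference run."""
    return [(x, func(x)) for x in inputs]

def replay(func, recording):
    """Re-run recorded inputs; return mismatches as (input, expected, got)."""
    return [(x, expected, func(x))
            for x, expected in recording
            if func(x) != expected]

# Reference behavior, then a changed version with a deliberate regression.
def old_version(x):
    return x * 2

def new_version(x):
    return x * 2 if x < 3 else x * 3   # regression for x >= 3

recording = capture(old_version, [1, 2, 3])
mismatches = replay(new_version, recording)   # the comparator's report
```

Real capture/replay tools record keystrokes, screens, and files rather than function calls, but the regression-testing payoff is the same: recorded behavior becomes an automatic oracle for every later run.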

Simulators and Emulators: These are hardware or software components that can substitute for missing, unavailable, or costly system components during the testing process.

Syntax and Semantic Analyzers: Because code reviews are not a planned activity at Level 2 of the TMM, these tools are useful for detecting code anomalies. Use of these tools before the execution of test cases will reduce errors and test time.
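As one small example of the kind of anomaly such analyzers detect, the sketch below walks a program's syntax tree and flags variables that are assigned but never read, a classic semantic anomaly. This uses Python's standard ast module as a stand-in; commercial analyzers of the era targeted other languages and detected many more anomaly classes.

```python
# Minimal static-analysis sketch: parse source code into a syntax tree
# and flag variables that are assigned (Store context) but never read
# (Load context) -- one common code anomaly.

import ast

def unused_assignments(source):
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return sorted(assigned - used)

code = """
x = 1
y = 2
print(x)
"""
anomalies = unused_assignments(code)   # 'y' is assigned but never read
```

Running a check like this before executing test cases catches such defects statically, which is the time-saving role the text assigns these tools at Level 2.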

Debugging Tools: Because testing and debugging are recognized as separate activities, simple tools can be purchased to support debugging. Staff responsible for debugging duties can be trained to use these tools. A useful debugging tool allows the user to set breakpoints, examine memory during execution, and step through a set of instructions.

Tool support for higher levels of the TMM includes test generation tools, metrics collection tools such as complexity and size measurers, coverage and frequency analyzers, defect and change trackers, and statistical tools for defect analysis and defect prevention.

Model Evaluation and the Need for Industrial Partners

The CMM was developed by a group at the Software Engineering Institute with extensive feedback from government and industry [10, 11, 12]. The initial release of the CMM was reviewed and used by the software development community in the early 1990s. Since then, the CMM has been widely accepted by the software industry for process evaluation and self-improvement. Thus, it has paved the way for the development and application of multilevel maturity growth models rooted in similar design concepts. Based on the success of the CMM, we believe our approach to testing process assessment and improvement through the TMM structure is sound. However, the experiences of the Software Engineering Institute in developing the CMM illustrate that feedback from industry is vital for TMM assessment and application. The TMM must reflect industry needs.

Summary and Future Research

Software systems are playing an increasingly important role in our society. It is imperative that we address quality issues that relate to both the software development process and the software product. This series of articles has focused on process; it has described the development of a TMM that complements the CMM. It is designed to help software development organizations evaluate and improve their testing processes. Testing is used in its broadest sense to encompass review and execution-based testing activities. We believe that improving the testing process through application of the TMM maturity criteria will have a positive impact on software quality.

In this series of articles we have discussed our approach to developing the TMM, the model framework, the behavioral characteristics of the five TMM levels, and the model's internal framework. Required support from CMM KPAs was also described. Plans for the development of an assessment model were discussed, and we have presented a proposal for a Testers' Workbench to be associated with the TMM. We continue to refine the model, develop the assessment process, design the Testers' Workbench, and identify a set of metrics to be associated with TMM levels. We also seek to involve industrial partners in model development, application, and validation.

References
1. Burnstein, I., T. Suwannasart, and C. R. Carlson, "Developing a Testing Maturity Model: Part I," Crosstalk, STSC, Hill Air Force Base, Utah, August 1996, pp. 21-24.
2. Gelperin, D. and B. Hetzel, "The Growth of Software Testing," CACM, Vol. 31, No. 6, 1988, pp. 687-695.
3. Daich, G., G. Price, B. Ragland, and M. Dawood, Software Test Technologies Report, STSC, Hill Air Force Base, Utah, August 1994.
4. Thayer, R., "Software Engineering Project Management, A Top Down View," IEEE Tutorial, Software Engineering Project Management, IEEE Computer Society Press, Los Alamitos, Calif., 1990, pp. 15-53.
5. Burnstein, I., T. Suwannasart, and C.R. Carlson, "The Development of a Testing Maturity Model," Proceedings of the Ninth International Quality Week Conference, San Francisco, May 21-24, 1996.
6. McCabe, T. and G. Schulmeyer, "The Pareto Principle Applied to Software Quality Assurance," Handbook of Software Quality Assurance, G. Schulmeyer and J. McManus, eds., 2d ed., Van Nostrand Reinhold, New York, 1992, pp. 225-254.
7. Cho, C., "Statistical Methods Applied to Software Quality Control," Handbook of Software Quality Assurance, 2d ed., G. Schulmeyer and J. McManus, eds., Van Nostrand Reinhold, New York, 1992, pp. 427- 464.
8. Walton, G., J. Poore, and C. Trammell, "Statistical Testing of Software Based on a Usage Model," Software-Practice and Experience, Vol. 25, No. 1, 1995, pp. 97-108.
9. Suwannasart, T., "Towards the Development of a Testing Maturity Model," Diss., Illinois Institute of Technology, March 1996.
10. Paulk, M., C. Weber, B. Curtis, and M. Chrissis, The Capability Maturity Model: Guidelines for Improving the Software Process, Addison-Wesley, Reading, Mass., 1995.
11. Paulk, M., B. Curtis, M. Chrissis, and C. Weber, "Capability Maturity Model, Version 1.1," IEEE Software, July 1993, pp. 18-27.
12. Paulk, M., et al., "Key Practices of the Capability Maturity Model, Version 1.1," Technical Report CMU/SEI-93-TR-25, Software Engineering Institute, Pittsburgh, Pa., 1993.

About the Authors

Ilene Burnstein holds a doctorate from Illinois Institute of Technology. She is an associate professor of computer science at Illinois Institute of Technology and co-director of the Center for Software Engineering. Her research interests include testing process issues, testing techniques, and automated program recognition and debugging.

Illinois Institute of Technology
Computer Science Department
10 West 31st Street
Chicago, IL 60616-3793
Voice: 312-567-5155
Fax: 312-567-5067
E-mail: burnstein@minna.iit.edu

Taratip Suwannasart holds a doctorate in computer science from Illinois Institute of Technology. Her research interests are in test management, test process improvement, software metrics, and data modeling.

C.R. Carlson holds a doctorate from the University of Iowa. He is a professor and chairman of the computer science department at Illinois Institute of Technology. His research interests include object-oriented modeling, design and query languages, and software process issues.

Illinois Institute of Technology
Computer Science Department
10 West 31st Street
Chicago, IL 60616-3793
Voice: 312-567-5152
Fax: 312-567-5067


This is the second of a two-part series of articles on the development of a TMM, which provides a tool to evaluate and improve software testing processes. The first article discussed TMM development and described components and the top-level structure of the model [1].