Service Transition


4. Service Transition Processes


4.5 Service Validation And Testing

The underlying concept to which Service Testing and Validation contributes is quality assurance - establishing that the Service Design and release will deliver a new or changed service or service offering that is fit for purpose and fit for use. Testing is a vital area within Service Management and has often been the unseen underlying cause of what was taken to be inefficient Service Management processes. If services are not tested sufficiently then their introduction into the operational environment will bring a rise in:

4.5.1 Purpose, Goal And Objectives
The purpose of the Service Validation and Testing process is to:

The goal of Service Validation and Testing is to assure that a service will provide value to customers and their business.

The objectives of Service Validation and Testing are to:

4.5.2 Scope
The service provider takes responsibility for delivering, operating and/or maintaining customer or service assets at specified levels of warranty, under a service agreement. Service Validation and Testing can be applied throughout the service lifecycle to quality assure any aspect of a service and the service provider's capability, resources and capacity to deliver a service and/or service release successfully. In order to validate and test an end-to-end service the interfaces to suppliers, customers and partners are important. Service provider interface definitions define the boundaries of the service to be tested, e.g. process interfaces and organizational interfaces.

Testing is equally applicable to in-house developed or bought-in services, hardware, software or knowledge-based services. It includes the testing of new or changed services or service components and examines the behaviour of these in the target business unit, service unit, deployment group or environment. This environment could have aspects outside the control of the service provider, e.g. public networks, user skill levels or customer assets.

Testing directly supports the release and deployment process by ensuring that appropriate levels of testing are performed during the release, build and deployment activities. It evaluates the detailed service models to ensure that they are fit for purpose and fit for use before being authorized to enter Service Operations, through the service catalogue. The output from testing is used by the evaluation process to provide the information on whether the service is independently judged to be delivering the service performance with an acceptable risk profile.

4.5.3 Value To Business
Service failures can harm the service provider's business and the customer's assets and result in outcomes such as loss of reputation, loss of money, loss of time, injury and death. The key value to the business and customers from Service Testing and Validation is in terms of the established degree of confidence that a new or changed service will deliver the value and outcomes required of it and understanding the risks.

Successful testing depends on all parties understanding that it cannot give, indeed should not give, any guarantees but provides a measured degree of confidence. The required degree of confidence varies depending on the customer's business requirements and pressures of an organization.

4.5.4 Policies, Principles And Basic Concepts
4.5.4.1 Inputs From Service Design
A service is defined by a service package that comprises one or more service level packages (SLPs) and re-usable components, many of which themselves are services, e.g. supporting services. The service package defines the service utilities and warranties that are delivered through the correct functioning of the particular set of identified service assets. An SLP provides a definitive level of utility or warranty from the perspective of outcomes, assets and patterns of business activity (PBA) of customers. It is therefore a key input to test planning and design.

The design of a service is related to the context in which a service will be used (the categories of customer asset). The attributes of a service characterize the form and function of the service from a utilization perspective.

These attributes should be traceable to the predicted business outcomes that provide the utility from the service. Some attributes are more important than others for different sets of users and customers, e.g. basic, performance and excitement attributes. A well-designed service provides a combination of these to deliver an appropriate level of utility for the customer.

Figure 4.26 Service models describe the structure and dynamics of a service

The Service Design Package defines the agreed requirements of the service, expressed in terms of the service model and Service Operations plan that provide key input to test planning and design. Service models are described further in the Service Strategy publication.

The service model (Figure 4.26) describes the structure and dynamics of a service that will be delivered by Service Operations, through the Service Operations plan. Service Transition evaluates these during the validation and test stages.

Structure is defined in terms of particular core and supporting services and the service assets needed and the patterns in which they are configured. As the new or changed service is designed, developed and built, the service assets are tested and verified against the requirements and design specifications: is the service asset built correctly?

For example, the design for managed storage services must have input on how customer assets such as business applications utilize the storage, the way in which storage adds value to the applications, and what costs and risks the customer would like to avoid. The information on risks is of particular importance to service testing as this will influence the test coverage and prioritization.

Figure 4.27 Dynamics of a service model

Service models also describe the dynamics of creating value. Activities, flow of resources, coordination, and interactions describe the dynamics (see Figure 4.27). This includes the cooperation and communication between service users and service agents such as service provider staff, processes or systems that the user interacts with, for example, a self-service menu. The dynamics of a service include patterns of business activity, demand patterns, exceptions and variations.

Service Design uses process maps, workflow diagrams, queuing models, and activity patterns to define the service models. As Service Transition evaluates the detailed service models to ensure they are fit for purpose and fit for use it is important to have access to these models to develop the test models and plans. The Service Design package defines a set of design constraints (Figure 4.28) against which the service release and new or changed service will be developed and built.

Validation and testing should test the service at the boundaries to check that the design constraints are correctly defined and particularly if there is a design improvement to add or remove a constraint.

Figure 4.28 Design Constraints of a Service

4.5.4.2 Service Quality And Assurance
Service assurance is delivered through verification and validation, which in turn are delivered through testing (trying something out in conditions that represent the final live situation - a test environment) and by observation or review against a standard or specification.

Validation confirms, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. Validation in a lifecycle context is the set of activities ensuring and gaining confidence that a system or service is able to accomplish its intended use, goals and objectives.

The validation of the service requirements and the related service Acceptance Criteria begins from the time that the service requirements are defined. There will be increasing levels of service validation testing performed as a service release progresses through the service lifecycle.

Verification is confirmation, through the provision of objective evidence, that specified requirements have been fulfilled, e.g. a service asset meets its specification.

Early in the service lifecycle, validation confirms that the customer needs, contracts and service attributes, specified in the service package, are translated correctly into the Service Design as service level requirements and constraints, e.g. capacity and demand limitations. Later in the service lifecycle tests are performed to assess whether the actual service delivers the required levels of service, utilities and warranties. The warranty is an assurance that a product or service will be provided or will meet certain specifications. Value is created for customers if the utilities are fit for purpose and the warranties are fit for use (Figure 4.29). This is the focus of service validation.

4.5.4.3 Policies
Policies that drive and support Service Validation and Testing include:

4.5.4.4 Test Strategy
A test strategy defines the overall approach to organizing testing and allocating testing resources. It can apply to the whole organization, a set of services or an individual service. Any test strategy needs to be developed with appropriate stakeholders to ensure there is sufficient buy-in to the approach.

Early in the lifecycle the service validation and test role needs to work with Service Design and service evaluation to plan and design the test approach using information from the service package, SLPs, SDP and the interim evaluation report. The activities will include:

It is also vital to work with Project Managers to ensure that:

The aspects to consider and document in developing the test strategy and related plans are shown below. Some of the information may also be specified in the Service Transition plan or other test plans and it is important to structure the plans so that there is minimal duplication.

Test Strategy Contents

4.5.4.5 Test Models
A test model includes a test plan, what is to be tested and the test scripts that define how each element will be tested. A test model ensures that testing is executed consistently in a repeatable way that is effective and efficient. The test scripts define the release test conditions, associated expected results and test cycles. To ensure that the process is repeatable, test models need to be well structured in a way that:

Examples of test models are illustrated in Table 4.10.

Test model | Objective/target deliverable | Test conditions based on
Service contract test model | To validate that the customer can use the service to deliver a value proposition. | Contract requirements. Fit for purpose, fit for use criteria.
Service requirements test model | To validate that the service provider can deliver/has delivered the service required and expected by the customer. | Service requirements and Service Acceptance Criteria.
Service level test model | To ensure that the service provider can deliver the service level requirements, and that service level requirements can be met in the production environment, e.g. testing the response and fix time, availability, product delivery times, support services. | Service level requirements, SLA, OLA. Service model.
Service test model | To ensure that the service provider is capable of delivering, operating and managing the new or changed service using the 'as-designed' service model that includes the resource model, cost model, integrated process model, capacity and performance model etc. | Service model.
Operations test model | To ensure that the Service Operations teams can operate and support the new or changed service/service component, including the service desk, IT operations, application management and technical management. It includes local IT support staff and business representatives responsible for IT service support and operations. There may be different models at different release/test levels, e.g. technology infrastructure, applications. | Service model, Service Operations standards, processes and plans.
Deployment release test model | To verify that the deployment team, tools and procedures can deploy the release package into a target deployment group or environment within the estimated timeframe. To ensure that the release package contains all the service components required for deployment, e.g. by performing a configuration audit. | Release and deployment design and plan.
Deployment installation test model | To test that the deployment team, tools and procedures can install the release package into a target environment within the estimated timeframe. | Release and deployment design and plan.
Deployment verification test model | To test that a deployment has completed successfully and that all service assets and configurations are in place as planned and meet their quality criteria. | Tests and audits of 'actual' service assets and configurations.
Table 4.10 Examples of service test models
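
As an illustration only, a re-usable test model of the kind listed in Table 4.10 can be represented as structured data: a named model, its objective, the sources of its test conditions and the scripts that define test conditions and expected results. The sketch below uses Python purely to make the structure concrete; the class and field names (TestModel, TestScript and so on) are invented for this example and are not part of the ITIL material.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestScript:
        """Defines how one element is tested: a condition, its expected result and the test cycle."""
        test_id: str
        condition: str
        expected_result: str
        test_cycle: int = 1

    @dataclass
    class TestModel:
        """A re-usable test model: objective, basis for test conditions and its scripts (cf. Table 4.10)."""
        name: str
        objective: str
        conditions_based_on: List[str]
        scripts: List[TestScript] = field(default_factory=list)

        def add_script(self, script: TestScript) -> None:
            self.scripts.append(script)

    # Example: a fragment of a hypothetical service level test model
    slt = TestModel(
        name="Service level test model",
        objective="Ensure the service provider can deliver the service level requirements",
        conditions_based_on=["Service level requirements", "SLA", "OLA", "Service model"],
    )
    slt.add_script(TestScript(
        test_id="T101",
        condition="Incident response within 30 minutes",
        expected_result="95% of sampled incidents responded to within 30 minutes",
    ))
    print(f"{slt.name}: {len(slt.scripts)} script(s) defined")

Holding test models in a structured, repeatable form such as this makes them easier to re-use for regression testing in later releases.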

As the Service Design phase progresses, the tester can use the emerging Service Design and release plan to determine the specific requirements, validation and test conditions, cases and mechanisms to be tested. An example is shown in Table 4.11.

Validation reference | Validation condition | Test levels | Test case | Mechanism
1.1 | 20% improvement in user survey rating | 1 | M020 | Survey
1.2 | 20% reduction in user complaints | 1 | M023 | Process metrics
1.3 | 20% increase in use of self-service channel | 2 | M123 | Usage statistics
1.4 | Help function available on front page of self-service point application | 3 | T235 | Functional test
1.5 | Web pages comply with web accessibility standards | 4 (Application) | T201 | Usability test
1.6 | 10% increase in public self-service points | 4/5 (Technical infrastructure) | T234 | Installation statistics
1.7 | Public self-service points comply with standard IS1223 | 4/5 (Technical infrastructure) | T234 | Compliance test
Table 4.11 Service requirements, 1: improve user accessibility and usability
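
The traceability that Table 4.11 illustrates can also be held as simple structured data so that validation conditions without an associated test case are easy to spot. The fragment below is a hypothetical sketch; the identifiers follow the pattern of Table 4.11 but are otherwise illustrative.

    # Hypothetical traceability records in the spirit of Table 4.11: each
    # validation condition is linked to a test level, test case and mechanism
    # so that conditions without test coverage can be spotted early.
    requirements = {
        "1.1": {"condition": "20% improvement in user survey rating",
                "level": "1", "test_case": "M020", "mechanism": "Survey"},
        "1.4": {"condition": "Help function available on self-service front page",
                "level": "3", "test_case": "T235", "mechanism": "Functional test"},
        "1.8": {"condition": "New accessibility requirement (not yet covered)",
                "level": None, "test_case": None, "mechanism": None},
    }

    uncovered = [ref for ref, req in requirements.items() if not req["test_case"]]
    print("Validation conditions without a test case:", uncovered)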

4.5.4.6 Validation And Testing Perspectives
Effective validation and testing focuses on whether the service will deliver as required. This is based on the perspective of those who will use, deliver, deploy, manage and operate the service. The test entry and exit criteria are developed as the Service Design Package is developed. These will cover all aspects of the service provision from different perspectives including:

Service acceptance testing starts with the verification of the service requirements. For example, customers, customer representatives and other stakeholders who sign off the agreed service requirements will also sign off the service Acceptance Criteria and service acceptance test plan. The stakeholders include:

Business Users And Customer Perspective
The business involvement in acceptance testing is central to its success, and is included in the Service Design package, enabling adequate resource planning.

From the business's perspective this is important in order to:

From the service provider's perspective the business involvement is important to:

Even when in live operation, a service is not 'emotionally' accepted by customers and users until they become familiar and content with it. The full benefit of a service will not be realized until that emotional acceptance has been achieved.

Emotional (non) acceptance
A steel mill in the southern USA implemented a new order manufacturing service. It was commissioned, designed and delivered by an outside vendor. The service delivered was innovative and fully met the agreed criteria. The end result was that the company sued the vendor, citing that the service was not usable because factory personnel (due to lack of training) did not know how to use the system and therefore did not emotionally accept it.

Testing is a situation where 'use cases', focusing on the usable results from a service can be a valuable aid to effective assessment of a service's usefulness to the business.

User Testing - Application, System, Service
User testing comprises tests to determine whether the service meets the functional and quality requirements of the end users (customers) by executing defined business processes in an environment that, as closely as possible, simulates the live operational environment. This will include changes to the system or business process. Full details of the scope and coverage will be defined in the user test and user acceptance test (UAT) plans. The end users will test the functional requirements, establishing to the customer's agreed degree of confidence that the service will deliver as they require. They will also perform tests of the Service Management activities that they are involved with, e.g. ability to contact and use the service desk, response to diagnostic scripts, incident management, request fulfilment, change request management.

A key practice is to make sure that business users participating in testing have their expectations clearly set and understand that this is a test, in which some things may not go well. There is a risk that they may form an opinion too early about the quality of the service being tested, and word may spread that the quality of the service is poor and that it should not be used.

Operations And Service Improvement Perspective
Steps must be taken to ensure that IT staff requirements have been delivered before deployment of the service.

Operations staff will use the service acceptance step to ensure that appropriate:

Continual Service Improvement will also inherit the new or changed service into the scope of their improvement programme, and should satisfy themselves that they have sufficient understanding of its objectives and characteristics.

4.5.4.7 Levels Of Testing And Test Models
Testing is related directly to the building of service assets and products so that each one has an associated acceptance test and activity to ensure it meets requirements. This involves testing individual service assets and components before they are used in the new or changed service.

Each service model and associated service deliverable is supported by its own re-usable test model that can be used for regression testing during the deployment of a specific release as well as for regression testing in future releases. Test models help with building quality early into the service lifecycle rather than waiting for results from tests on a release at the end.

Levels of build and testing are described in the release and deployment section (paragraph 4.4.5.3). The levels of testing that are to be performed are defined by the selected test model.

Using a model such as the V-model (Figure 4.30) builds in Service Validation and Testing early in the service lifecycle. It provides a framework to organize the levels of configuration items to be managed through the lifecycle and the associated validation and testing activities both within and across stages.

The level of test is derived from the way a system is designed and built up. This is known as a V-model, which maps the types of test to each stage of development. The V-model provides one example of how the Service Transition levels of testing can be matched to corresponding stages of service requirements and design.

Figure 4.30 Example of service V-model

The left-hand side represents the specification of the service requirements down to the detailed Service Design. The right-hand side focuses on the validation activities that are performed against the specifications defined on the left-hand side. At each stage on the left-hand side, there is direct involvement by the equivalent party on the right-hand side. It shows that service validation and acceptance test planning should start with the definition of the service requirements. For example, customers who sign off the agreed service requirements will also sign off the service Acceptance Criteria and test plan.
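
As an indicative sketch only, the pairing of specification levels with validation levels in a service V-model can be written down explicitly. The level names below follow the general pattern of Figure 4.30 but should not be read as a definitive reproduction of it.

    # One possible pairing of left-hand specification levels with right-hand
    # validation levels, in the spirit of the service V-model (Figure 4.30).
    # The exact level names are indicative, not a definitive reproduction.
    V_MODEL = [
        ("Define customer/business requirements", "Validate service packages, offerings and contracts"),
        ("Define service requirements",           "Service acceptance test"),
        ("Design service solution",               "Service operational readiness test"),
        ("Design service release",                "Service release package test"),
        ("Develop service solution",              "Component and assembly test"),
    ]

    for level, (specify, validate) in enumerate(V_MODEL, start=1):
        print(f"Level {level}: {specify}  <-->  {validate}")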

4.5.4.8 Testing Approaches And Techniques
There are many approaches that can be combined to conduct validation activities and tests, depending on the constraints. Different approaches can be combined to match the requirements of different types of service, service models, risk profiles, skill levels, test objectives and levels of testing. Examples include:

In order to optimize the testing resources, test activities must be allocated against service importance, anticipated business impact and risk. Business impact analyses carried out during design for business and service continuity management and availability purposes are often very relevant to establishing testing priorities and schedules and should be available, subject to confidentiality and security concerns.

4.5.4.9 Design Considerations
Service test design aims to develop test models and test cases that measure the correct things in order to establish whether the service will meet its intended use within the specified constraints. It is important to avoid focusing too much on the lower level components that are often easier to test and measure. Adopting a structured approach to scoping and designing the tests helps to ensure that priority is given to testing the right things. Test models must be well structured and repeatable to facilitate auditability and maintainability.

The service is designed in response to the agreed business and service requirements, and testing aims to identify whether these have been achieved. Service validation and test designs consider potential changes in circumstances and are flexible enough to be changed. They may need to be changed when failures in early service tests identify a change in the environment or circumstances, and therefore a change to the testing approach.

Design considerations are applicable for service test models, test cases and test scripts and include:

Aspects that generally need to be considered in designing service tests include:

Awareness of current technological environments for different types of business, customer, staff and user is essential to maintaining a valid test environment. The design of the test environments must consider the current and anticipated live environment when the service is due for operational handover and for the period of its expected operation. In practice, for most organizations, looking more than six to nine months into the business or technological future is about the practical limit. In some sectors, however, much longer lead times require the need to predict further into the future, even to the extent of restricting technological innovation in the interests of thorough and expansive testing - examples are military systems, NASA and other safety critical environments.

Designing the management and maintenance of test data needs to address relevant issues such as:

4.5.4.10 Types Of Testing
The following types of test are used to verify that the service meets the user and customer requirements as well as the service provider's requirements for managing, operating and supporting the service. Care must be taken to establish the full range of likely users, and then to test all the aspects of the service, including support and reporting.

Functional testing will depend on the type of service and channel of delivery. Functional testing is covered in many testing standards and best practices (see Further information).

Service testing will include many non-functional tests. These tests can be conducted at several test levels to help build up confidence in the service release. They include:

There are several types of testing from different perspectives, which are described below.

Figure 4.31 Designing tests to cover range of service assets, utilities and warranties

Service requirements and structure testing - service provider, users and customers
Validation of the service attributes against the contract, service package and service model includes evaluating the integration or 'fit' of the utilities across the core and supporting services and service assets to ensure there is complete coverage and no conflicts.

Figure 4.31 shows a matrix of service utility to service warranty and the service assets that support each utility. This matrix is one that can be used to design the service tests to ensure that the service structure and test design coverage is appropriate. Service test cases are designed to test the service requirements in terms of utility, capacity, resource utilization, finance and risks. For example, approaches to testing the risk of service failure include performance, stress, usability and security testing.

Service level testing - service level managers, operations managers and customers
Validate that the service provider can deliver the service level requirements, e.g. testing the response and fix time, availability, product delivery times and support services.

The performance from a service asset should deliver the utility or service expected. This is not necessarily the same as the asset delivering everything it is capable of in its own right. For example, a car's factory specification may assert that it is capable of 150 kph, but for most customers delivering 100 kph will fully meet the requirement.

Warranty And Assurance Tests - Fit For Use Testing
As discussed earlier in this section, the customers see the service delivered in terms of warranties against the utilities that add value to their assets in order to deliver the expected business support. For any service, the warranties are expressed in measurable terms that enable tests to be designed to establish that the warranty can be delivered (within the agreed degree of confidence). The degree of detail may vary considerably, but will always reflect the agreement established during Service Design. In all cases the warranty will be described, and should be measurable, in terms of the customer's business and the potential effects on it of success or failure of the service to meet that warranty. The following tests are used to provide confidence that the warranties can be delivered, i.e. the service is fit for use:

Usability - Users And Maintainers
Usability testing is likely to be of increasing importance as more services become widely used as a part of everyday life and ordinary business usage. Focusing on the intuitiveness of a service can significantly increase the efficiency and reduce the unit costs of both using and supporting a service. User accessibility testing considers the restricted abilities of actual or potential users of a new or changed service and is commonly used for testing web services. Care must be taken to establish the types of likely users, e.g. hearing impaired users may be able to operate a PC-based service but would not be supported by a telephone-only-based service-desk support system. This testing might focus on usability for:

Contract And Regulation Testing
Audits and tests are conducted to check that the criteria in contracts have been accepted before acceptance of the end-to-end service. Service providers may have a contractual requirement to comply with the requirements of ISO/IEC 20000 or other standards and they would need to ensure that the relevant clauses of the standard are met during implementation of a new or changed service and release.

Regulatory acceptance testing is required in some industries such as defence, financial services and pharmaceuticals.

Compliance Testing
Testing is conducted to check compliance against internal regulations and existing commitments of the organization, e.g. fraud checks.

Service Management Testing
The service models will dictate the approach to testing the integrated Service Management processes. ISO/IEC 20000 covers the minimum requirements for each process to be compliant with the standard and maintenance of the process interrelationships.

Examples of Service Management manageability tests are shown in Table 4.12.

Service Management function | Examples of: design phase manageability checks | build phase manageability checks | deployment phase manageability checks | operating phase manageability checks | early life support and CSI manageability checks
Configuration Management | Are the designers aware of the corporate standards used for Configuration Management? How does the design meet organizational standards for acceptable configurations? Does the design support the concept of version control? Is the design created in a way that allows for the logical breakdown of the service into configuration items (CIs)? | Have the developers built the service, application and infrastructure to conform to the corporate standards that are used for Configuration Management? Does the service use only standard supporting systems and tools that are considered acceptable? Does the service include support for version, build, baseline and release control and management? Have the developers built in the chosen CI structure to the service, application and infrastructure? | Does the service deployment update the CMS at each stage of the rollout? Is the deployment team using an updated inventory to complete the plan and the deployment? | Can the operations team gain access to the CMS so that they can confirm the service they are managing is the correct version and configured correctly? Are the operating instructions under version and build control, similar to those used for the application builds? | As the service is reviewed within the optimize phase, is the CMS used to assist with the review? Are Configuration Management personnel involved in the optimization process, including providing advice on the use of, and updating, the inventory?
Change Management | Does the Service Design cope with change? Do the designers understand the Change Management process used by the organization? | Have the service assets and components been built and tested against the corporate Change Management process? Has the emergency change process been tested? Is the impact assessment procedure for the CI type clearly defined and has it been tested? | Are the corporate Change Management process and standards used during deployment? | Is the operations team involved in the Change Management process; is it part of the sign-off and verification process? Does a member of the operations team attend the Change Management meetings? | As modifications are identified within this phase, does the team use the Change Management system to coordinate the changes? Does the optimization team understand the Change Management process?
Release and Deployment Management | Do the service designers understand the standards and tools used for releasing and deploying services? How will the design ensure that the new or changed service can be deployed into the environment in a simple and efficient way? | Has the service, application and infrastructure been built and tested in ways that ensure it can be released into the environment in a simple and efficient way? | Is the service being deployed in a manner that minimizes risks, such as a phased deployment? Has a remediation/back-out option been included in the release package or process for the service and its constituent components? | Does the release and deployment process ensure that deployment information is available to the operations teams? Do the Service Operations teams have access to release and deployment information even before the service or application is deployed into the live environment? | Do members of the CSI team understand the release process, and are they using this for planning the deployment of improvements? Is Release and Deployment Management involved in providing advice to the assessment process?
Security management | How does the design ensure that the service is designed with security in the forefront? | Is the build process following security best practice for this activity? | Can the service be deployed in a manner that meets organizational security standards and requirements? | Does the service support the ongoing and periodic checks that security management needs to complete while the service is in operational use? | -
Incident management | Does the design facilitate simple creation of incidents when something goes wrong? Is the design compatible with the organizational incident management system? Does the design accommodate automatic logging and detection of incidents? | Is a simple creation-of-incidents process, for when something goes wrong, built into services and tested (e.g. notification from applications)? Has the compatibility with the organizational incident management system been tested? | Does the deployment use the incident management system for reporting issues and problems? Do the members of the deployment team have access to the incident management system so that they can record incidents and also view incidents that relate to the deployment? | Does the operations team have access to the incident management system and can it update information within this system? Does the operations team understand its responsibilities in dealing with incidents? Is the operations team provided with reports on how well it deals with incidents, and does it act on these? | Do members of the CSI team have access to the incident management system so that they can record incidents and also view incidents that may be addressed in optimization?
Problem management | How does the design facilitate the methods for root cause analysis used within the organization? | Has the method of providing information to facilitate root cause analysis and problem management been tested? | Has a problem manager been appointed for this deployment and does the deployment team know who it is? | Does the operations team contribute to the problem management process, ideally by assisting with and facilitating root cause analysis? Does the operations team meet problem management staff regularly? Does the operations team see the weekly/monthly problem management report? | Is the optimization process being provided with information by problem management to incorporate into the assessment process?
Capacity management | Are the designers aware of the approach to capacity management used within the organization? How will operations and performance be measured? Is modelling being used to ensure that the design meets capacity needs? | Has the service been built and tested to ensure that it meets the capacity requirements? Has the capacity information provided by the service been tested and verified? Are stress and volume characteristics built into the services and constituent applications? | Is capacity management involved in the deployment process so that it can monitor the capacity of the resources involved in the deployment? | Is capacity management information being monitored and reported on as this service is used, and is this information provided to capacity management? | Is capacity management feeding information into the optimization process?
Availability management | Does the design address the availability requirements of the service? Has the service been designed to fit in with the backup and recovery capabilities of the organization? | How has the service been built to address the availability requirements, and how has this been tested? What testing has been done to ensure that the service meets the backup and recovery capabilities of the organization? What happens when the service and underlying applications are under stress? | Is availability management monitoring the availability of the service, the applications being deployed and the rest of the technology infrastructure to ensure that the deployment is not affecting availability? How is the ability to back up and recover the service during deployment being dealt with? | How is the service's availability being measured, and is this information being fed back to the availability management function within the IT organization? | Does the assessment use the availability information to complete the proposal of modifications that are needed for the service? Is any improvement required in the service's ability to be backed up and recovered?
Service continuity management | How does the design meet the service continuity requirements of the organization? Will the design meet the needs of the business recovery process following a disaster? | Has the service been built to support the business recovery process following a disaster, and how has this been tested? | Will any changes be required to the business recovery process following a disaster if one should occur during or after the deployment of this service? | Is the business recovery process for the service tested regularly by operations? | What optimization is required in the business recovery process to meet the business needs?
Service level management | How does the design meet the SLA requirements of the organization? | Does the service meet the SLA and performance requirements, and has this been tested? | Is service level management aware of the deployment of this service? Does this service have an initial SLA for the deployment phase? Does the service affect the SLA requirements during deployment? | Is the SLA visible and understood by the operations team so that it appreciates how its running of the service affects the delivery of the SLA? Does operations see the weekly/monthly service level report? | Is service level management information available for inclusion in the optimization process?
Financial management | Does the design meet the financial requirements for this service? How does the design ensure that the final new or changed service will meet return on investment expectations? | Has the service been built to deliver financial information, and how is this being tested? | Is management accounting being done during the deployment so that the total cost of deployment can be included within the cost of ownership? | Does operations provide input into the financial information about the service? For example, if a service requires an operator to perform additional tasks at night, is this recorded? | Is financial information available to be included in the assessment process?
Table 4.12 Examples of Service Management manageability tests

Operational Tests - Systems, Services
There will be many operational tests depending on the type of service. Typical tests include:

Regression Testing
Regression testing means 'repeating a test already run successfully, and comparing the new results with the earlier valid results'. On each iteration of true regression testing, all existing, validated tests are run, and the new results are compared with the already-achieved standards. Regression testing ensures that a new or changed service does not introduce errors into aspects of the services or IT infrastructure that previously worked without error. Simple examples of the types of error that can be detected are software contention issues and hardware and network incompatibility. Regression testing also applies to other elements such as Service Management process testing and measurement. In reality it is the integrated concept of service testing - assessing whether the service will deliver the business benefit - that makes regression testing so important in modern organizations, and will make it ever more important.
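
A minimal sketch of the regression principle described above: re-run the existing validated tests and compare the new results with the earlier valid baseline, flagging anything that previously passed and now does not. The test names and results below are invented for illustration.

    # Re-run the existing validated tests and compare the new outcomes with the
    # previously achieved baseline; anything that regresses is reported.
    baseline = {"login": "pass", "create_order": "pass", "nightly_batch": "pass"}
    new_run  = {"login": "pass", "create_order": "fail", "nightly_batch": "pass"}

    regressions = [name for name, result in baseline.items()
                   if result == "pass" and new_run.get(name) != "pass"]

    if regressions:
        print("Regression detected in:", ", ".join(regressions))
    else:
        print("No regressions: new results match the earlier valid results")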

4.5.5 Process Activities, Methods And Techniques
The testing process is shown schematically in Figure 4.32. The test activities are not necessarily undertaken in a fixed sequence; several activities may be done in parallel, e.g. test execution may begin before all test design is complete. The activities are described below; a brief illustrative sketch of recording test results and evaluating exit criteria follows the numbered list.

Figure 4.32 Example of a validation and testing process

  1. Validation and Test Management
    Test management includes the planning, control and reporting of activities through the test stages of Service Transition. These activities include:

    Test management includes managing issues, mitigating risks and implementing changes identified from the testing activities as these can impose delays and create dependencies that need to be proactively managed.

    Test metrics are used to measure the test process and manage and control the testing activities. They enable the test manager to determine the progress of testing, the earned value and the outstanding testing, and this helps the test manager to estimate when testing will be completed. Good metrics provide information for management decisions that are required for prioritization, scheduling and risk management. They also provide useful information for estimating and scheduling for future releases.

  2. Plan And Design Test
    Test planning and design activities start early in the service lifecycle and include:

  3. Verify test plan and test design
    Verify the test plans and test design to ensure that:

  4. Prepare Test Environment
    Prepare the test environment by using the services of the build and test environment resource and also use the release and deployment processes to prepare the test environment where possible; see paragraph 4.4.5.2. Capture a configuration baseline of the initial test environment.

  5. Perform Tests
    Carry out the tests using manual or automated techniques and procedures. Testers must record their findings during the tests. If a test fails, the reasons for failure must be fully documented. Testing should continue according to the test plans and scripts, if at all possible. When part of a test fails, the incident or issues should be resolved or documented (e.g. as a known error) and the appropriate re-tests should be performed by the same tester.

    An example of the test execution activities is shown in Figure 4.33. The deliverables from testing are:

  6. Evaluate Exit Criteria and Report
    The actual results are compared to the expected results. The results may be interpreted in terms of pass/fail; risk to the business/service provider; or if there is a change in a projected value, e.g. higher cost to deliver intended benefits.

    To produce the report, gather the test metrics and summarize the results of the tests. Examples of exit criteria are:

  7. Test Clean Up and Closure
    Ensure that the test environments are cleaned up or initialized. Review the testing approach and identify improvements to input to design/build, buy/build decision parameters and future testing policy/procedures.
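
To make activities 5 and 6 above more concrete, the sketch below records a documented finding for each executed test, derives simple progress metrics and checks illustrative exit criteria. All names, thresholds and figures are hypothetical examples rather than prescribed values.

    # A combined, minimal sketch of activities 5 and 6: record findings as tests
    # are executed, derive simple progress metrics and check example exit criteria.
    from datetime import datetime, timezone

    def run_test(test_id, execute):
        """Execute one test script and return a documented finding."""
        try:
            ok, detail = execute()
        except Exception as exc:        # an unexpected error is still a documented failure
            ok, detail = False, f"exception: {exc}"
        return {
            "test_id": test_id,
            "result": "pass" if ok else "fail",
            "detail": detail,           # reasons for failure must be fully documented
            "retest_required": not ok,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    def exit_criteria_met(findings, planned, min_pass_rate=0.95):
        """Compare actual results with expected results against simple exit criteria."""
        executed = len(findings)
        passed = sum(1 for f in findings if f["result"] == "pass")
        progress = executed / planned if planned else 0.0
        pass_rate = passed / executed if executed else 0.0
        met = progress == 1.0 and pass_rate >= min_pass_rate
        return met, {"progress": progress, "pass_rate": pass_rate,
                     "outstanding": planned - executed}

    findings = [
        run_test("T101", lambda: (True, "response time within target")),
        run_test("T102", lambda: (False, "fix time exceeded the agreed target by 40 minutes")),
    ]
    met, metrics = exit_criteria_met(findings, planned=2)
    print(f"Exit criteria met: {met}; metrics: {metrics}")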

4.5.6 Trigger, Input And Outputs, And Inter-process Interfaces
Figure 4.33 Example of perform test activities

4.5.6.1 Trigger
The trigger for testing is a scheduled activity on a release plan, test plan or quality assurance plan.

4.5.6.2 Inputs
The key inputs to the process are:

4.5.6.3 Outputs
The direct output from testing is the report delivered to service evaluation (see section 4.6). This sets out:

After the service has been in use for a reasonable time there should be sufficient data to perform an evaluation of the actual vs predicted service capability and performance. If the evaluation is successful, an evaluation report is sent to Change Management with a recommendation to promote the service release out of early life support and into normal operation.

Other outputs include:

4.5.6.4 Interfaces To Other Lifecycle Stages
Testing supports all of the release and deployment steps within Service Transition.

Although this chapter focuses on the application of testing within the Service Transition phase, the test strategy will ensure that the testing process works with all stages of the lifecycle, for example:

Working with Service Design to ensure that designs are inherently testable, and providing positive support in achieving this; examples range from including self-monitoring within hardware and software, through the re-use of previously tested and known service elements, to ensuring rights of access to third-party suppliers to carry out inspection and observation on delivered service elements easily.

4.5.7 Information Management
The nature of IT Service Management is repetitive, and this ability to benefit from re-use is recognized in the suggested use of transition models. Testing benefits greatly from re-use and to this end it is sensible to create and maintain a library of relevant tests and an updated and maintained data set for applying and performing tests. The test management group within an organization should take responsibility for creating, cataloguing and maintaining test scripts, test cases and test data that can be re-used.

Similarly, the use of automated testing tools (Computer Aided Software Testing - CAST) is becoming ever more central to effective testing in complex software environments. Equivalently, standard and automated hardware testing approaches are fast and effective.

Test data
However well a test has been designed, it relies on the relevance of the data used to run it. This clearly applies strongly to software testing, but equivalent concerns relate to the environments within which hardware, documentation etc. are tested. Testing electrical equipment in a protected environment, with smoothed power supply and dust, temperature and humidity control, will not be a valuable test if the equipment will be used in a normal office.
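
One common way of keeping test data relevant without exposing live information is to mask identifying fields while preserving the usage patterns that the tests depend on. The sketch below is a hypothetical illustration; the record layout and masking rules are invented for the example.

    # Produce test data that stays representative of live data while masking
    # identifying details; the record layout here is purely illustrative.
    import hashlib

    live_records = [
        {"customer": "Alice Jones", "region": "North", "monthly_orders": 14},
        {"customer": "Bob Smith",   "region": "South", "monthly_orders": 3},
    ]

    def mask(record):
        """Replace identifying fields with stable pseudonyms; keep usage patterns."""
        pseudo = hashlib.sha256(record["customer"].encode()).hexdigest()[:8]
        return {"customer": f"cust-{pseudo}",
                "region": record["region"],                  # retained: drives routing logic
                "monthly_orders": record["monthly_orders"]}  # retained: drives volume tests

    test_data = [mask(r) for r in live_records]
    print(test_data)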

Test Environments
Test environments must be actively maintained and protected. For any significant change to a service, the question should be asked (as it is for the continued relevance of the continuity and capacity plans, should the change be accepted and implemented): 'If this change goes ahead, will there be a consequential impact on the test data or test environment?' If so, updating the test data may need to be included as part of the change; the dependency of a service, or service element, on test data or a test environment will be evident from the SKMS, via records and relationships held within the CMS. Outcomes from this question include:

Maintenance of test data should be an active exercise and should address relevant issues including:

An established test database can also be used as a safe and realistic training environment for a service.

4.5.8 Key Performance Indicators And Metrics
4.5.8.1 Primary (of Value To The Business/customers)
The business will judge testing performance as a component of the Service Design and Service Transition stages of the service lifecycle. Specifically, the effectiveness of testing in delivering to the business can be judged through:

The business will also be concerned with the economy of the testing process - in terms of:

4.5.8.2 Secondary (internal)
The testing function and process itself must strive to be effective and efficient, and so measures of its effectiveness and costs need to be taken. These include:

Testing is about measuring the ability of a service to perform as required in a simulated (or occasionally the actual) environment, and so to that extent is focused on measurement. Care must be taken to separate out the measures that actually relate to the testing process from the number of errors introduced into services and systems. Careless measurement can make testing effectiveness appear to improve even as development practices worsen - it is simply easier to find defects when there are lots of them. The point here is that testing is actually a stage of the design, build, release and deployment processes, and the important measure is the overall one - delivering services that deliver benefits and fail less often.

4.5.9 Challenges, Critical Success Factors And Risks
The most frequent challenges to effective testing are still based on a lack of respect for, and understanding of, the role of testing. Traditionally, testing has been starved of funding, and this results in:

Critical success factors include:

Risks to successful Service Validation and Testing include:

