In this section, I provide a basic introduction for each of the ITIL best practice categories and offer more granular outlines of possible service improvement initiatives. Many of the suggestions are derived from the proposal selection criteria advanced in the previous section.
Incident Management (sometimes referred to as Event Management) will be one of the first processes an organization implements. Indeed, all organizations have some capability to handle Incidents. One of the major insights ITIL offers is its preoccupation with restoring service as an exercise separate and distinct from identifying and solving the underlying root cause of the incident. The logic is simple and sublime, but it requires consistent and dedicated application by the organization, because there will be a natural tendency to seek out root cause during the investigation of the incident. Unfortunately, all too often doing so lengthens the time to restore service. Documented, easily available "workarounds" which can be implemented quickly to restore service are the stock-in-trade of good Incident Management. At the same time that Incident Management is seeking to restore a service to defined operating parameters, Availability and Problem Management are working to ensure that problems and "Known Errors" are removed from the infrastructure - so they do not recur. Proactive measures by the organizations responsible for these areas will reduce the workload on Incident Management teams. Mature organizations seek to redirect activity from addressing incidents to preventing them in the first place. Ensuring quality control in recording information on Incident Tickets provides the data needed to undertake this kind of analysis. Accurate and complete information is, therefore, an important quality of service (QoS) consideration which pays dividends to organizations capable of effectively translating this data into concrete service improvement initiatives.
In small organizations the distinction between Incident Management, the Service Desk and Problem Management may be less pronounced than in larger enterprises; a single person might perform all three roles. The Service Desk should be the first line of contact for daily user interaction with the IT Department. It specializes in a quick turnaround of the most frequent interactions. Typically these will include service requests, requests for information, and recurring incidents which have known parameters and solutions. These functions are subject to automation by way of online request systems, online knowledge bases, and bulletin boards that keep users informed of outages. However, if the organization chooses to rely heavily upon automated service desk functionality, care must be taken to ensure that the important roles of communication traffic controller and initial Ticket Owner are re-assigned accordingly within the Incident Management System.
Poorly executed releases and changes are a primary cause of incidents. Therefore, improvements in change management processes can produce a quick return on investment for the organization. The evidence linking infrastructure changes to user incidents is large and conclusive. Many organizations, however, are reluctant to undertake basic changes in this area. Doing so requires the many distinct platforms (e.g., mainframe, desktop, operating environments, major business applications) to relinquish elements of control to a central body. The organizational "silo" in this case loses some control over the timing and manner in which it makes changes, while many of the benefits are realized in other areas of the enterprise. Whenever gains accrue to a different organizational unit from the one incurring the costs, senior management must be involved to approve and direct any financial re-alignment or to enforce the change in responsibility on behalf of the overall organization. In many organizations separate and distinct Change Management forums proliferate across subject areas (e.g., servers, desktop, application areas). There may even be separate Help Desks to record and initiate activity - each doing its own thing, unmindful of the cascading effects that a change to its area may have on other infrastructure elements or other business areas. Consolidation of Change Schedules is one of the first change management procedures that should be implemented. This should be quickly followed by either consolidating Change Advisory forums or ensuring proper communications amongst them. A Configuration Management Database will enhance Change Management's ability to analyze the effect of changes by quickly displaying the potential impact of a change - including other forums and areas affected. Any information which improves the organization's ability to review the risks associated with a change will have a positive effect.
Combinations of different boards might be used, depending upon the assigned importance and risk associated with the change.
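The impact-analysis benefit of a CMDB described above can be sketched as a simple dependency traversal: given one Configuration Item, walk the recorded relationships to find every other CI or service a change could affect. The CI names and relationships below are invented for illustration; real CMDBs model many more relationship types.

```python
from collections import defaultdict

# Hypothetical CI dependency map: each key depends on the listed CIs.
DEPENDS_ON = {
    "payroll-app":  ["app-server-1", "db-server-1"],
    "hr-portal":    ["app-server-1"],
    "db-server-1":  ["san-array-1"],
    "app-server-1": ["network-switch-2"],
}

def impacted_by(ci):
    """Return every CI and service that directly or transitively
    depends on the given CI (the blast radius of a change)."""
    # Invert the dependency map: CI -> things that depend on it.
    dependents = defaultdict(set)
    for item, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(item)
    # Walk upward through the dependents until nothing new appears.
    seen, frontier = set(), [ci]
    while frontier:
        current = frontier.pop()
        for item in dependents[current]:
            if item not in seen:
                seen.add(item)
                frontier.append(item)
    return seen

print(sorted(impacted_by("san-array-1")))  # db-server-1 and everything above it
```

A Change Advisory Board reviewing a change to `san-array-1` would see at a glance that `db-server-1` and the payroll application are exposed, and could invite the affected forums accordingly.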
An important insight advanced by ITIL is the distinction to be drawn between Incident and Problem Management and the recognition of the distinct goals, governance, processes and procedures separating the two services. Problem Management engages the more analytic, quantitative procedures often found in more mature organizations. Initiating Problem Management practices will challenge organizations unwilling or incapable of assuming a more proactive or predictive view of the infrastructure. Sometimes an organization will embrace Problem Management, but lack the necessary management commitment or resources to devote to the effort. In these situations attention will almost always be redirected to fire-fighting (i.e., urgent, immediate) activities the moment any resourcing pressures arise. Nonetheless, there is strong evidence that effort spent identifying and removing infrastructure faults will produce significant benefit to the infrastructure's overall stability and a positive rate of return on the investment. Yet, like Change Management, issues can arise with regard to the distribution of the rewards. It is in the nature of process re-engineering undertakings that benefits will often accrue to a different part of the organization than where the costs are incurred. Without supporting benefit-cost data, and in the absence of senior management support (above the organizations incurring the costs and benefits), the promise of benefits may not be sufficient to overcome bureaucratic hurdles.
Problem Management Process
Because there is a shared corporate obligation to monitor the assets of an organization, most corporations will have developed an Asset Management system and an Asset database describing the financial attributes of items. This database, while highly useful for tracking software licensing and machine movement, does not describe the inter-relationships of IT devices within the infrastructure. These dependencies are important for developing incident containment strategies and assessing the impact of releases and changes. The Asset database must either be amended to include additional fields (given the nature of inter-relationships, most financial databases are ill-suited for this extension) or a CMDB purchased or developed. Ideally, asset data can be migrated from the Asset system into the CMDB; alternatively, both systems may continue to operate with bridges built between them - a logical CMDB. Many organizations have purchased or built a CMDB and populated it, only to find that these tasks are easy compared to keeping the data accurate and current. Doing so requires a concerted organizational commitment. Discovery Tools can be used to search out and discover infrastructure components (i.e., Configuration Items). Using these tools to perform monthly updates of the CMDB is a low-cost way to keep the CMDB accurate. It may not, however, be easy to update other important fields, so the attributes of any individual Configuration Item may be in varying states of currency. An important consideration in establishing a CMDB is the level of granularity at which Configuration Items should be described. The fewer the items, the easier the maintenance - but the less useful the database will be. A good rule of thumb is to get control of the organization's major application areas (where the business impact of a service outage is most severe). This data can then be used to support the Incident and Change Management processes.

The organization can then continue to enhance the CMDB's inclusiveness until a point of diminishing returns is reached, where further infrastructure elements (coverage) and/or details on each item (granularity) cease to be worth the additional effort of maintaining them.
This does not mean that all the tool's features need be implemented immediately. Populating the CMDB may be difficult, but it is a minor problem compared to keeping the information current and accurate. The logic is to start small and build upon your experience.
Note the following caution.
"the jury's out on whether CMDBs can be made to work, at least in their holistic, 'framework' sense. Most successful CMDB implementations to date are targeted at a subset of possible functions and specific management processes and applications." Dennis Drogseth, Vice President, Enterprise Management Associates, quoted in The Fastest Payback for Your ITIL Investment, Integrien, White Paper, October, 2005 |
Discovery Tools might also be employed to update the CMDB. At a minimum, all anomalies identified should be investigated, since they might indicate improperly implemented Changes (these are reported to Change Management - an important metric for them).
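A sketch of that reconciliation step, assuming invented CI records with a single `os` attribute (real discovery tools capture far richer data): each anomaly class maps to a different follow-up action.

```python
# Hypothetical CMDB records and a hypothetical discovery-tool scan.
cmdb = {
    "srv-001": {"os": "RHEL 8", "owner": "payroll"},
    "srv-002": {"os": "Windows 2019", "owner": "hr"},
}
discovered = {
    "srv-001": {"os": "RHEL 9"},        # OS changed since the last update
    "srv-003": {"os": "Ubuntu 22.04"},  # unknown to the CMDB
}

def reconcile(cmdb, discovered):
    """Classify anomalies so each can be routed to Change Management."""
    unknown = sorted(set(discovered) - set(cmdb))   # possible unauthorized installs
    missing = sorted(set(cmdb) - set(discovered))   # possibly retired or unreachable
    drifted = sorted(ci for ci in set(cmdb) & set(discovered)
                     if cmdb[ci]["os"] != discovered[ci]["os"])  # unrecorded changes
    return {"unknown": unknown, "missing": missing, "drifted": drifted}

print(reconcile(cmdb, discovered))
```

Counting the `drifted` items each month gives Change Management the "improperly implemented Changes" metric mentioned above.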
Many organizations confuse Release and Change Management. Both processes are concerned with introducing changes to the "known and trusted environment," and often Release Boards are established to manage changes associated with specific applications. Change Management should be the final arbiter of what gets introduced into the infrastructure. Release Management is like a furniture mover: its prime task is to ensure a smooth migration of furniture (i.e., code and machinery) from one location to another (i.e., from test environments into the production environment). Its tools are project management and migration tools (such as the Microsoft Solutions Framework). Its jobs are frequently large, typically involving multiple milestones and interactions with Change Management.
Release Management Process
A Release Policy is revised or extended when an organization adopts a new technical infrastructure. Piloting of new Release Management procedures should form part of a project to implement a new infrastructure. For example, a new approach to releasing software may need to be developed when an organization decides to adopt a new hardware or software platform. This may be something as small as a new programming language or as major as a completely new hardware platform with its own operating system, or a network management system.
The exact configuration of the Definitive Software Library (DSL) required for Release Management should be defined before development commences. The DSL forms part of the Release Policy or the Change and Configuration Management plan for the organization.
The quality of a Service Level Management process determines how well the IT service provider promotes its clients' interests. The definitive document in this process is the SLA. It replaces anecdote with fact and, in doing so, provides a more solid basis for discussion and negotiation between service provider and service customer(s). It positions the IT Department as a seller of services at defined costs, thereby promoting a more business-focused relationship. Many organizations have developed SLAs which failed to deliver on their promise because the organization lacked the ability to monitor and report consistently on the performance of the included services. Quickly, the SLA documents get ignored - just another piece of paper. The approach advocated here recognizes the inherent difficulties associated with negotiating the many varied relationships involved. Much internal discussion within the IT division (i.e., getting the house in order) should precede negotiating service parameters and levels with business units. This can be accomplished through the publication of a Service Catalogue by the IT provider - listing available services, service options and costs. Unlike many other ITIL areas, which can be targeted for a generalized implementation in concert with the achievement of a specific level of organizational maturity, there are aspects of Service Level Management at all maturity levels. Few organizations will achieve a Level 5 implementation - optimized service provision through continuous adjustment based upon real-time performance information. Many, however, can put in place key elements reflective of the organization's level of maturity at any point in time. These elements will establish and promote improvements in the relationship between IT provider and customer and establish key enablers for other service improvement initiatives.
Based on the service chain associated with the New Employee service, develop an Operational Level Agreement between the participants and the Senior Leadership Team of the Division offering the service.
The third type is often the most efficient. It can outline a broad spectrum of services offered to the entire customer base for generalized services (including level variations - i.e., silver, gold, platinum). More specialized or targeted services, such as application-specific idiosyncratic factors, can then be offered as addenda or side deals to affected clients.
The exercise may be coordinated by a Service Manager who, based upon the collected OLOs, develops composite Service Level Objectives (SLOs). These measures are often contrasted with industry benchmarks; discrepancies require explanation. The objective is to advance SLOs which describe expected (and perhaps stretch) service performance.
All current contracts with third party providers should be reviewed for their internal consistency with the OLOs and SLOs. Performance requirement deficiencies should be noted and remedied at the earliest contract renewal opportunity.
It should be understood that the Incident resolution targets included in SLAs will typically not mirror the targets included in contracts or OLAs. SLA targets will include performance specs for all process steps in the support cycle (e.g., detection time, Service Desk logging time, escalation time, referral time between groups, Service Desk review and closure time - as well as the actual time spent fixing the failure). Each OLO target will have achievement parameters (e.g., achieved 95% of the time over the course of a month). The achievement parameters for the individual process stages are not multiplied together (i.e., 95% times 95% times 95%). This simply means that, over the course of an entire service chain, a service delay is not expected at every stage. Rather, one might expect a service deficiency to occur at one of the stages 5% of the time - without necessarily knowing at which stage the delay will be experienced.
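The arithmetic point can be shown directly, using the 95% figure from the example above. If a 5% shortfall could occur independently at every one of three stages, the end-to-end achievement would be the product of the stage targets (about 85.7%); the SLA reading instead expects a deficiency at some one stage about 5% of the time, leaving the end-to-end target near 95%.

```python
# Three support stages, each with a 95% monthly achievement target.
stage_targets = [0.95, 0.95, 0.95]

# Multiplicative interpretation (NOT how the SLA targets are read):
product = 1.0
for t in stage_targets:
    product *= t
print(f"product of stage targets: {product:.3f}")  # about 0.857

# SLA interpretation: a deficiency occurs at one (unknown) stage
# roughly 5% of the time, so the end-to-end target stays near 95%.
end_to_end = min(stage_targets)
print(f"end-to-end target as read in the SLA: {end_to_end:.2f}")
```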
Before committing to SLAs, it is therefore important that existing contractual arrangements are investigated and where necessary, upgraded. This is likely to incur additional costs, which must either be absorbed by IT, or passed on to the Customer. In the latter case the Customer must agree to this, or the more relaxed targets in existing contracts should be agreed for inclusion in SLAs.
OLAs should be monitored against these targets and feedback given to the Managers of the support groups. This highlights potential problem areas, which may need to be addressed internally or by a further review of the SLA.
It can be difficult to draw out requirements, as the business may not know what they want - especially if this is their initial introduction to the concept of negotiated service. They may need help in understanding and defining their needs. Be aware that the requirements initially expressed may not be those ultimately agreed - they are more likely to change where charging is in place (since the Business Customer may have to make compromises based upon the costs of delivering the service). Several iterations of negotiations may be required before an affordable balance is struck between what is desired and what is achievable and affordable.
Many organizations have found it valuable to produce a pro-forma that can be used as a starting point for all SLAs. The pro-forma can often be developed alongside the pilot SLA.
Using the draft agreement as a basis, negotiations must be held with the Customer(s), or Customer representatives to finalize the contents of the SLA and the initial service level targets, and with the service providers to ensure that these are achievable.
Where charges are being made for the services provided, customer demands can be expected to be more reasonable than in environments where there is no charge mechanism in place. Where direct charges are not made, the support of senior business managers must be enlisted to ensure that excessive or unrealistic demands are not placed upon the IT provider by any individual customer group.
Existing monitoring capabilities should be reviewed and upgraded as necessary. Ideally this should be done ahead of, or in parallel with, the drafting of SLAs, so that monitoring can be in place to assist with the validation of proposed targets. It is essential that monitoring matches the Customer's true perception of the service - in practice achieving this is often elusive.
Reporting can be broadly divided into two categories: real-time and periodic. Reviewing and improving services is the validation of Service Level Management. Because it bears on performance (and, hence, on internal compensation and rewards), performance reporting is sensitive and may imply major cultural and organizational shifts. It should, therefore, be approached cautiously and in stages corresponding to increasing organizational maturity. This usually implies a movement from ad hoc, to regular, to automated, to real-time reporting.
Real-time reporting should be the ultimate goal for service-level reporting. On a real-time basis, clients need to know the status (i.e., health) of the service. A simple, positive confirmation that the service is believed to be functioning normally is often sufficient. Some companies with SLM tools denote this status check with an icon similar to a stop light, with red, amber and green lights, with the obvious meanings.
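A minimal sketch of such a stop-light indicator; the availability thresholds below are assumed values for illustration, not taken from any particular SLM tool.

```python
def service_status(availability_pct,
                   green_at=99.5,   # assumed SLA target
                   amber_at=98.0):  # assumed degradation threshold
    """Map a measured availability percentage to a red/amber/green light."""
    if availability_pct >= green_at:
        return "green"   # service believed to be functioning normally
    if availability_pct >= amber_at:
        return "amber"   # degraded; investigation under way
    return "red"         # service outage

print(service_status(99.9))  # green
print(service_status(98.7))  # amber
print(service_status(95.0))  # red
```

The simple, positive confirmation the text describes is exactly the "green" case: clients need no more detail than that to trust the service is healthy.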
The expected process performance can be used in establishing the project's quality and process-performance objectives and can be used as a baseline against which actual project performance can be compared. This information is used to quantitatively manage the project. Each quantitatively managed project, in turn, provides actual performance results that become a part of the baseline data for the organizational process assets. The associated process performance models are used to represent past and current process performance and to predict future results of the process. For example, the latent defects in the delivered product can be predicted using Six Sigma techniques from defects identified during service verification activities.
When the organization has measures, data, and analytic techniques for critical services and service characteristics, it is able to do the following:
A key to proliferating organizational successes is the establishment and maintenance of baselines and models which characterize the expected process performance of the organization's set of standard processes.
Undertaking quantitative SLM requires an enhanced ability to measure Availability in real time; it therefore presupposes that Availability Management has achieved Level 4 maturity.
Maturity level 5 focuses on continually improving services through both incremental and innovative technological improvements. Quantitative service-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing service improvement projects (SIPs). The effects of deployed service improvements are measured and evaluated against the quantitative SLOs. Both defined processes and the organization's set of standard processes are targets of measurable improvement activities.
Process improvements to address common causes of process variation, and to measurably improve the organization's processes, are identified, evaluated, and deployed. Improvements are selected based on a quantitative understanding of their expected contribution to achieving the organization's process-improvement objectives versus the cost and impact to the organization. The performance of the organization's processes and services is continually improved.
Optimizing processes that are agile and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization's ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. Improvement of the processes is inherently part of everybody's role, resulting in a cycle of continual improvement.
All organizations, regardless of their maturity level, must undertake some availability management. Organizations that ignore Availability Management experience chronic outages with severe and persistent business impacts. Like the other service delivery elements, Availability Management must be acknowledged within the organization even if actions are haphazard, uncoordinated and unfocused. ITIL's contribution is the recognition of these elements as identified tasks common across technology platforms.
Creating identified responsibilities for these functions - whether by creating identifiable organizational units or through cooperative venues in which individuals resident in subject areas collaborate on common processes and documents, such as an Availability Plan - may benefit organizations large enough and mature enough to entertain this kind of functional specialization.
Availability Management Process
These infrastructure components can be analyzed for evolving trends which will provide information and insights into potential availability issues and suggest opportunities to strengthen overall availability.
Collection techniques fall into one of two primary categories:
Capacity Management, like Availability Management, will be done within the IT organization. ITIL's main contribution to service management in this regard is the recognition of these elements as identified tasks common to many technology areas. Grouping these functions together, whether by creating identifiable organizational units or through cooperative venues in which individuals resident in subject areas collaborate on common processes and documents (i.e., a Capacity Plan), will benefit the organization. The timing and nature of Capacity Management processes differ from those of Availability Management by virtue of a heavy reliance upon common technologies and toolsets. Because there are economies to be achieved in purchasing bandwidth and software licensing, there is a great need to consider overall capacity requirements for the IT Department. Availability requirements, on the other hand, will be specific to individual application areas. Thus Capacity Management requires much greater coordination of activity than Availability, and this need is frequently reflected in units charged with consideration of overall capacity issues.
Current utilizations should then be compared to maximum capacities. The intent here is to determine how much excess capacity is available for selected components. The utilization or performance of each component measured should be compared to its maximum usable capacity.
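The comparison can be sketched as follows; component names and figures are illustrative only.

```python
def headroom(components):
    """Compare current utilization to maximum usable capacity and
    report percent used plus remaining excess capacity per component."""
    report = {}
    for name, (used, maximum) in components.items():
        report[name] = {
            "pct_used": round(100.0 * used / maximum, 1),
            "headroom": maximum - used,
        }
    return report

# Illustrative measurements: (current utilization, maximum usable capacity)
components = {
    "db-server CPU (cores busy)": (11.0, 16.0),
    "SAN storage (TB used)":      (38.0, 50.0),
    "WAN link (Mbps)":            (88.0, 100.0),
}

for name, row in headroom(components).items():
    print(f"{name}: {row['pct_used']}% used, headroom {row['headroom']:g}")
```

Components whose headroom falls below a planning threshold become candidates for the workload forecasting step that follows.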
Then collect meaningful workload forecasts from representative users. User workload forecast worksheets should be customized as much as possible to meet the unique requirements of your particular environment. These forecasts are then re-stated as resource requirements. Measurement tools or a senior analyst's expertise can help in translating projected transaction loads into, for example, increased server processor capacity. The worksheets also allow you to project the estimated time frames during which workload increases will occur.
The capacity plan should document the current levels of resource utilization and service performance, and take into account business strategy and plans in its forecast of future resource requirements in support of IT services delivered or planned. Any recommendations made in the plan should include quantified details of the necessary resources, any relevant impact, and the associated costs and benefits.
The production and update of a capacity plan needs to occur at pre-defined intervals, preferably yearly in line with the business or budget cycle. A quarterly re-issue of the updated plan may be necessary to take into account changes in business plans, to report on the accuracy of forecasts, and to make or refine recommendations.
Inputs to consider for the CDB are:
Other detailed inputs for the CDB include:
Capacity management issues can dramatically affect the business if they cause unplanned downtime of a vital business function. This requires that considerations for capacity and availability management be intertwined and that solution designs be consistent. Service continuity management weighs risk versus cost for scenarios outside the normal availability design. Its contingency planning relies on capacity forecasts and recommendations to move forward in documenting a chosen contingency measure. It follows that the clear and distinct requirements of each process must have correlated capacity and performance data identified and properly recorded in the CDB. It is important that detailed capacity data in the CDB relates to OLAs, and that the associated OLA and/or SLA information is tracked in the configuration management database (CMDB). Because availability management information depends on the proper integration of performance and capacity measurement data, capacity and availability staff often share common monitoring tools and management solutions.
In considering the data that needs to be included, a distinction needs to be drawn between the data collected to monitor Capacity (e.g. throughput), and the data to monitor performance (e.g. response times). Data of both types is required by the Service and Resource Capacity Management sub-processes. The data should be gathered at total resource utilization level and at a more detailed profile for the load that each service places on each particular resource. This needs to be carried out across the whole Infrastructure, host or server, the network, local server and client or workstation.
Part of the monitoring activity should cover thresholds and baselines, or profiles of the normal operating levels. If these are exceeded, alarms should be raised and exception reports produced. These thresholds and baselines should be determined from the analysis of previously recorded data, and can be set on:
The data collected from the monitoring should be analyzed to identify trends from which the normal utilization and service level, or baseline, can be established. By regular monitoring and comparison with this baseline, exception conditions in the utilization of individual components or service thresholds can be defined, and breaches or near misses in the SLAs can be reported upon. Also, the data can be used to predict future resource usage, or to monitor actual business growth against predicted growth.
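A sketch of that baseline-and-exception analysis, under the assumed policy that the threshold sits two standard deviations above the baseline (the utilization history is invented sample data):

```python
from statistics import mean, stdev

# Invented history of a component's daily peak utilization (%).
history = [42, 45, 44, 41, 43, 46, 44, 43]

baseline = mean(history)
threshold = baseline + 2 * stdev(history)  # assumed policy: baseline + 2 sigma

def check(sample):
    """Classify a new reading against the established baseline."""
    if sample > threshold:
        return "exception"       # raise an alarm, produce an exception report
    if sample > baseline:
        return "above baseline"  # worth watching as a possible trend
    return "normal"

print(f"baseline={baseline:.1f}, threshold={threshold:.1f}")
print(check(60))  # exception
print(check(44))  # above baseline
```

Recomputing the baseline periodically from fresh data keeps the exception conditions aligned with actual business growth, as the text goes on to describe.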
The use of each resource and service needs to be considered over the short, medium and long term, and the minimum, maximum and average utilization for these periods recorded. Typically, the short term pattern covers the utilization over a 24-hour period, while the medium term may cover a one-week to four-week period, and the long term, a year-long period. Over time the trend in the use of the resource by the various IT Services will become apparent.
It is important to understand the utilization in each of these periods, so that Changes in the use of any service can be related to predicted Changes in the level of utilization of individual resources. The ability to identify the specific hardware or software resource on which a particular IT Service depends is improved greatly by an accurate, up-to-date and comprehensive CMDB.
When the utilization of a particular resource is considered, it is important to understand both the total level of utilization and the utilization by individual services of the resource.
The analysis of the monitored data may identify areas of the configuration that could be tuned to better utilize the system resource or improve the performance of the particular service.
The implementation of any Changes arising from these activities must be undertaken through a formal Change Management process. System tuning changes can have major implications for the Customers of the service, and the impact and risk associated with them are likely to be greater than those of other types of change. Implementing tuning changes under formal Change Management procedures results in: less adverse impact on the Users of the service; increased User productivity; increased productivity of IT personnel; a reduction in the number of Changes that need to be backed out (and the ability to do so more easily); and greater management and control of business-critical application services.
It is important that further monitoring takes place, so that the effect of the Change can be assessed. It may be necessary to make further Changes or to regress some of the original Changes.
Demand Management can be carried out as part of any one of the sub-processes of Capacity Management. However, Demand Management must be carried out sensitively, without causing damage to the relationship with the business Customers or to the reputation of the IT organization. It is necessary to understand fully the requirements of the business and the demands on the IT Services, and to ensure that the Customers are kept informed of all the actions being taken.
Baseline Modeling: the first stage in modeling is to create a baseline model that accurately reflects the performance being achieved. When this baseline model has been created, predictive modeling can be done (i.e., asking the 'what if?' questions that reflect planned Changes to the hardware and/or the volume/variety of workloads). If the baseline model is accurate, then the accuracy of the predicted Changes can be trusted.
Analytical Modeling involves representing the behavior of computer systems using mathematical techniques (e.g., multi-class network queuing theory). Typically, a model is built using a PC-based software package by specifying within the package the components and structure of the configuration that needs to be modeled, and the utilization of the components (e.g., CPU, memory and disks) by the various workloads or applications. When the model is run, queuing theory is used to calculate the response times in the computer system. If the response times predicted by the model are sufficiently close to the response times recorded in real life, the model can be regarded as an accurate representation of the computer system.
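As a much-simplified illustration of the analytic approach, the single-queue M/M/1 formula R = 1/(mu - lambda) predicts mean response time from an arrival rate and a service rate. Commercial packages use far richer multi-class network models, but the principle of calibrating against observed response times and then asking 'what if?' is the same.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time R = 1 / (mu - lambda) for an M/M/1 queue.
    Rates are in requests per second; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("system is saturated (utilization >= 100%)")
    return 1.0 / (service_rate - arrival_rate)

# Baseline model: 40 req/s arriving, the server completes 50 req/s.
print(f"baseline: {mm1_response_time(40, 50):.3f} s")  # 0.100 s
# 'What if' the workload grows to 48 req/s on the same server?
print(f"what if:  {mm1_response_time(48, 50):.3f} s")  # 0.500 s
```

Note how a 20% workload increase multiplies the predicted response time fivefold as the server approaches saturation; it is exactly this non-linearity that makes modeling worth the effort.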
Simulation Modeling is even more sophisticated and involves the modeling of discrete events, e.g. transaction arrival rates, against a given hardware configuration. This type of modeling can be very accurate in sizing new applications or predicting the effects of Changes on existing applications, but can also be very time-consuming and therefore costly.
Transaction arrival rates can be simulated by having a number of staff enter a series of transactions from prepared scripts, or by using software to input the same scripted transactions at a random arrival rate. Either approach takes time and effort to prepare and run. However, it can be cost-justified for organizations with very large systems, where the cost (millions of dollars) and the associated performance implications assume great importance.
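The software-driven variant, scripted transactions arriving at random intervals, can be sketched as a tiny discrete-event simulation. This is a minimal illustration, not a production simulator: exponential inter-arrival times feed a single server with a fixed service time, and all figures are assumed.

```python
# Simulation-modeling sketch: random (exponential) transaction arrivals
# queued at a single server with a fixed service time. Rates are hypothetical.

import random

def simulate(arrival_rate: float, service_time: float,
             n_transactions: int, seed: int = 42) -> float:
    """Return the mean response time observed over n_transactions."""
    rng = random.Random(seed)
    clock = 0.0          # arrival time of the current transaction
    server_free = 0.0    # time at which the server next becomes free
    total_response = 0.0
    for _ in range(n_transactions):
        clock += rng.expovariate(arrival_rate)   # random arrival
        start = max(clock, server_free)          # wait if the server is busy
        server_free = start + service_time       # serve the transaction
        total_response += server_free - clock    # queuing + service time
    return total_response / n_transactions

# 30 transactions/sec offered to a server needing 20 ms per transaction:
print(f"Mean response time: {simulate(30.0, 0.02, 100_000):.4f} s")
```

Even this toy model shows the characteristic simulation output: a measured mean response time that includes queuing delay, which can then be re-run against 'what if?' workload changes.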
Effective Service and Resource Capacity Management, together with modeling techniques, enable Capacity Management to answer the 'what if?' questions: 'What if the throughput of Service A doubles?' 'What if Service B is moved from the current processor onto a new processor - how will the response times in the two services be altered?'
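The first of those questions can be answered with the same single-server queuing approximation. The sketch below, with an assumed processor capacity of 100 transactions/sec, shows why doubling throughput does not merely double response time: delay grows non-linearly as utilization approaches capacity.

```python
# 'What if the throughput of Service A doubles?' - a sketch using the M/M/1
# mean response time formula. The capacity figure is a hypothetical example.

def response_time(arrival_rate: float, service_rate: float) -> float:
    """M/M/1 mean response time: 1 / (mu - lambda)."""
    return 1.0 / (service_rate - arrival_rate)

capacity = 100.0   # transactions/sec the processor can handle (assumed)
current = 40.0     # current throughput of Service A (assumed)

for load in (current, current * 2):
    print(f"{load:5.0f} tx/s -> {response_time(load, capacity) * 1000:5.1f} ms")
```

Here doubling the load from 40 to 80 transactions/sec roughly triples the predicted response time (16.7 ms to 50 ms), which is exactly the kind of counter-intuitive result modeling is meant to surface before the Change is made.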
![]() | As an IT management tool, budgeting is often ignored or under-used. Budgeting provides a starting point for cost cutting, as it forces one to think about how to provide the same service with less money. In an IT environment, budgeting is notoriously difficult: while logic demands that an IT budget follow the business budget, reality mandates a simultaneous budgeting process. An IT budget should be made after the goals for the business are set, after the budgets for all other supporting departments are set, and after the actual business needs are known. IT budgeting is made more difficult when the Capacity Management process is incomplete or does not function properly. IT groups typically are organized in highly politicized functional silos, and planning for the needed capacity often is pure guesswork or - even worse - a result of carefully crafted political compromises. Accounting practices in many IT organizations can also be chaotic and misunderstood. Changes in the IT department, and the lack of control over those changes, create headaches for most IT accounting people, as uncontrolled change creates chaos. Reporting, in conjunction with Service Level Management, is non-existent in most companies, and "fire fighting" considerations determine where the dollars go. Internal billing practices often are considered "funny money" and a burdensome bureaucratic control. As a result, the potential benefit of billing - and thereby influencing end-users' behavior - often is ignored or unknown. Even in companies where sophisticated service level monitoring is practiced, end-users typically face a lack of transparency around charging practices: the end-user seldom has a clue what the service actually costs or what his or her department is being charged for the use of any IT service. |
Determine how the costs of service delivery to customers will be allocated and accounted for. The cost structures must consider any economies of scale that arise from delivering several combined services or from increases in volume.
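One simple way to express an economy of scale in a cost structure is a tiered rate that discounts volume above a threshold. The sketch below is purely illustrative; the unit cost, threshold, and discount are hypothetical values, not drawn from any ITIL costing model.

```python
# Illustrative cost-allocation sketch: full rate up to a volume threshold,
# a discounted rate beyond it. All figures are hypothetical.

def allocate_cost(units: int, unit_cost: float,
                  discount_threshold: int = 1000,
                  discount: float = 0.15) -> float:
    """Charge the full rate up to the threshold, discounted rate above it."""
    if units <= discount_threshold:
        return units * unit_cost
    base = discount_threshold * unit_cost
    extra = (units - discount_threshold) * unit_cost * (1 - discount)
    return base + extra

print(allocate_cost(800, 2.0))    # below threshold: 1600.0
print(allocate_cost(1500, 2.0))   # 1000*2 + 500*2*0.85 = 2850.0
```

A real charging model would also apportion shared costs (combined services, fixed infrastructure) across customers, but the principle is the same: the allocation rule must be explicit so that customers can see how volume affects what they are charged.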
In reality, linking any IT initiative into the total existing IT environment involves a cost. The amount of that cost depends not only on the size and complexity of the initiative being undertaken, but also on the state of the current IT environment.
Mark D. Lutchen, Managing IT as a Business, John Wiley & Sons, 2004, ISBN: 0-471-47104-6, p. 137 |
![]() | Almost all large organizations have Disaster Recovery Plans for their major, business-critical applications. There is an abundance of evidence about the pitfalls awaiting organizations that have tried to save money by cutting corners on these plans. Efforts to maintain and update the plans, however, often suffer from the absence of a highly visible and permanent responsibility for their maintenance. Consequently, efforts in this area are often sporadic and inconsistent. Theoretically, Service Continuity Management is an adjunct of Availability Management, and the two areas share a common goal. However, because this area represents such a severe threat to business interests, most business units will wish to maintain tight control over plans and recovery strategies. Because of this pre-occupation, Service Continuity is frequently not considered during ITSM improvement projects; it is treated as tangential to, or outside the sphere of, the IT Division. IT service management's efforts devoted to Availability Management are therefore seen as the most appropriate vehicle for the provider to exercise some influence and direction over continuity plans. |
In addition to disaster avoidance, a risk assessment or analysis is performed to identify and define potential risks to the organization. Based upon these risks, preventative measures are implemented to mitigate the effect of each risk or threat.
These strategies are placed under Change Management and will form part of SLAs negotiated with lines of business.
![]() | Getting Started ![]() |