Service Design
5. Service Design Technology-Related Activities
This chapter considers the technology-related activities of requirement engineering and the development of technology architectures. The technology architectures cover aspects of Service Design in the following areas:
- Infrastructure Management
- Environmental Management
- Data and Information Management
- Application Management.
5.1 Requirements Engineering
Requirements engineering is the approach by which sufficient rigour is introduced into the process of understanding and documenting the business's and users' requirements, and by which traceability of changes to each requirement is ensured. This process comprises the stages of elicitation, analysis (which feeds back into the elicitation) and validation. All of these contribute to the production of a rigorous, complete requirements document. The core of this document is a repository of individual requirements that is developed and managed. Often these requirements are instigated by IT, but ultimately they need to be documented and agreed with the business.
There are many guidelines on requirements engineering, including the Recommended Practice for Software Requirements Specifications (IEEE 830), The Software Engineering Body of Knowledge (SWEBOK), CMMI and the V-Model, which is described in detail in the Service Transition publication.
5.1.1 Different Requirement Types
A fundamental assumption here is that the analysis of the current and required business processes results in functional requirements met through IT services (comprising applications, data, infrastructure, environment and support skills).
It is important to note that there are commonly said to be three major types of requirements for any system - functional requirements, management and operational requirements, and usability requirements.
- Functional requirements are those specifically required to support a particular business function.
- Management and operational requirements (sometimes referred to as non-functional requirements) address the need for a responsive, available and secure service, and deal with such issues as ease of deployment, operability, management needs and security.
- Usability requirements are those that address the 'look and feel' needs of the user and result in features of the service that facilitate its ease of use. This requirement type is often seen as part of management and operational requirements, but for the purposes of this section it will be addressed separately.
5.1.1.1 Functional Requirements
Functional requirements describe the things a service is intended to do, and can be expressed as tasks or functions that the component is required to perform. One approach for specifying functional requirements is through such methods as a system context diagram or a use case model. Other approaches show how the inputs are to be transformed into the outputs (data flow or object diagrams) and textual descriptions.
A system context diagram, for instance, captures all information exchanges between, on the one hand, the IT service and its environment and, on the other, sources or destinations of data used by the service. These information exchanges and data sources represent constraints on the service under development.
A use case model defines a goal-oriented set of interactions between external actors and the service under consideration. Actors are parties outside the service that interact with it. An actor may represent a role that users can play, or another service and its requirements. The main purpose of use case modeling is to establish the boundary of the proposed system and fully state the functional capabilities to be delivered to the users. Use cases are also helpful for establishing communication between business and application developers. They provide a basis for sizing and feed the definition of usability requirements. Use cases define all the scenarios that an application has to support and can therefore easily be expanded into test cases. Since use cases describe a service's functionality at a level that is understandable to both business and IT, they can serve as a vehicle to specify the functional elements of an SLA, such as the actual business deliverables from the service.
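To make the link between use cases and test cases concrete, here is a minimal sketch, assuming a hypothetical 'place an order' use case - the actor, goal and scenarios are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A goal-oriented set of interactions between an actor and the service."""
    actor: str
    goal: str
    main_scenario: list[str]
    alternative_scenarios: dict[str, list[str]] = field(default_factory=dict)

    def test_cases(self) -> list[str]:
        """Every scenario the application must support becomes a candidate test case."""
        cases = [f"Test '{self.goal}' - main scenario"]
        cases += [f"Test '{self.goal}' - alternative: {name}"
                  for name in self.alternative_scenarios]
        return cases

# Hypothetical example: a customer placing an order through the service
place_order = UseCase(
    actor="Customer",
    goal="Place an order",
    main_scenario=["browse catalogue", "add item to basket", "pay", "receive confirmation"],
    alternative_scenarios={"payment declined": ["pay", "see error", "retry with another card"]},
)
print(place_order.test_cases())
```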
One level 'below' the use case and the context diagram, many other modeling techniques can be applied. These models depict the static and dynamic characteristics of the services under development. A conceptual data model (whether called object or data) describes the different 'objects' in the service, their mutual relationships and their internal structure. Dynamics of the service can be described using state models (e.g. state transition diagrams) that show the various states of the entities or objects, together with events that may cause state changes. Interactions between the different application components can be described using interaction diagrams (e.g. object interaction diagrams). Alongside a mature requirements modelling process, CASE tools can help in getting and keeping these models consistent, correct and complete.
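As an illustrative sketch of the state models mentioned above, the transition table below captures the states of a hypothetical 'Order' entity and the events that may cause state changes; all names are invented:

```python
# A minimal state-transition model for a hypothetical 'Order' entity;
# the states and events are invented for illustration.
TRANSITIONS = {
    ("draft", "submit"): "placed",
    ("placed", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("placed", "cancel"): "cancelled",
}

def next_state(state: str, event: str) -> str:
    """Return the new state, or reject an event not permitted in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not permitted in state '{state}'") from None

# Walk the main path: draft -> placed -> paid -> shipped
state = "draft"
for event in ("submit", "pay", "ship"):
    state = next_state(state, event)
print(state)  # shipped
```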
5.1.1.2 Management And Operational Requirements
Management and operational requirements (or nonfunctional requirements) are used to define requirements and constraints on the IT service. The requirements serve as a basis for early systems and service sizing and estimates of cost, and can support the assessment of the viability of the proposed IT service. Management and operational requirements should also encourage developers to take a broader view of project goals.
Categories of management and operational requirements include:
- Manageability: Does it run? Does it fail? How does it fail?
- Efficiency: How many resources does it consume?
- Availability and reliability: How reliable does it need to be?
- Capacity and performance: What level of capacity do we need?
- Security: What classification of security is required?
- Installation: How much effort does it take to install the application? Is it using automated install procedures?
- Continuity: What level of resilience and recovery is required?
- Controllability: Can it be monitored, managed and adjusted?
- Maintainability: How well can the application be adjusted, corrected, maintained and changed for future requirements?
- Operability: Does the application interfere with the functioning of other applications?
- Measurability and reportability: Can we measure and report on all of the required aspects of the application?
The management and operational requirements can be used to prescribe the quality attributes of the application being built. These quality attributes can then be used to design test plans that check the application's compliance with its management and operational requirements.
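For example, quality attributes derived from management and operational requirements can be given measurable thresholds and checked mechanically in a test plan. A minimal sketch, with invented thresholds:

```python
# Hypothetical quality attributes derived from management and operational
# requirements, each paired with a measurable threshold for the test plan.
quality_attributes = {
    "availability_percent": {"measured": 99.95, "required_min": 99.9},
    "response_time_ms":     {"measured": 520,   "required_max": 500},
    "recovery_time_hours":  {"measured": 3,     "required_max": 4},  # continuity
}

def compliant(attr: dict) -> bool:
    """A measured value complies if it meets its minimum or maximum threshold."""
    if "required_min" in attr:
        return attr["measured"] >= attr["required_min"]
    return attr["measured"] <= attr["required_max"]

for name, attr in quality_attributes.items():
    print(name, "PASS" if compliant(attr) else "FAIL")
# availability_percent PASS / response_time_ms FAIL / recovery_time_hours PASS
```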
5.1.1.3 Usability Requirements
The primary purpose of usability requirements is to ensure that the service meets the expectations of its users with regard to its ease of use. To achieve this:
- Establish performance standards for usability evaluations
- Define test scenarios for usability test plans and usability testing.
Like the management and operational requirements, usability requirements can be used as quality attributes and as the basis of test plans that check the application's compliance with its usability requirements.
5.1.2 Requirements For Support - The User View
Users have formally defined roles and activities as user representatives in requirements definition and acceptance testing. They should be actively involved in identifying all aspects of service requirements, including the three categories above, and also in:
- User training procedures and facilities
- Support activities and Service Desk procedures.
5.1.3 Requirements Investigation Techniques
There is a range of techniques that may be used to investigate business situations and elicit service requirements. Sometimes the customers and the business are not completely sure of what their requirements actually are and will need some assistance and prompting from the designer or requirements gatherer. This must be completed in a sensitive way to ensure that it is not seen as IT dictating business requirements again. The two most commonly used techniques are interviewing and workshops, but these are usually supplemented by other techniques, such as observation and scenarios.
5.1.3.1 Interviews
The interview is a key tool and can be vital in achieving a number of objectives, such as:
- Making initial contact with key stakeholders and establishing a basis for progress
- Building and developing rapport with different users and managers
- Acquiring information about the business situation, including issues and problems.
There are three areas that are considered during interviews:
- Current business processes that need to be fulfilled in any new business systems and services
- Problems with the current operations that need to be addressed
- New features required from the new business system or service and any supporting IT service.
The interviewing process is improved when the interviewer has prepared thoroughly as this saves time by avoiding unnecessary explanations and demonstrates interest and professionalism. The classic questioning structure of 'Why, What, Who, When, Where, How' provides an excellent framework for preparing for interviews.
It is equally important to formally close the interview by:
- Summarizing the points covered and the actions agreed
- Explaining what happens next, both following the interview and beyond
- Asking the interviewee how any further contact should be made.
It is always a good idea to write up the notes of the interview as soon as possible - ideally straight away and usually by the next day.
The advantages of interviewing are:
- Builds a relationship with the users
- Can yield important information
- Opportunity to understand different viewpoints and attitudes across the user group
- Opportunity to investigate new areas that arise
- Collection of examples of documents and reports
- Appreciation of political factors
- Study of the environment in which the new service will operate.
The disadvantages of interviewing are:
- Expensive in elapsed time
- No opportunity for conflict resolution.
5.1.3.2 Workshops
Figure 5.1 Requirements workshop techniques
Workshops provide a forum in which issues can be discussed, conflicts resolved and requirements elicited. Workshops are especially valuable when time and budgets are tightly constrained, several viewpoints need to be canvassed, and an iterative and incremental view of service development is being taken.
The advantages of the workshop are:
- Gain a broad view of the area under investigation - having a group of stakeholders in one room will allow a more complete understanding of the issues and problems
- Increase speed and productivity - it is much quicker to have one meeting with a group of people than interviewing them one by one
- Obtain buy-in and acceptance for the IT service
- Gain a consensus view - if all the stakeholders are involved, the chance of them taking ownership of the results is improved.
There are some disadvantages, including:
- Can be time-consuming to organize - for example, it is not always easy to get all the necessary people together at the same time
- It can be difficult to get all of the participants with the required level of authority
- It can be difficult to get a mix of business and operational people to understand the different requirements.
The success or failure of a workshop session depends, in large part, on the preparatory work by the facilitator and the business sponsor for the workshop. They should spend time before the event planning the following areas:
- The objective of the workshop - this has to be an objective that can be achieved within the time constraints of the workshop.
- Who will be invited to participate in the workshop - it is important that all stakeholders interested in the objective should be invited to attend or be represented.
- The structure of the workshop and the techniques to be used. These need to be geared towards achieving the defined objective, e.g. requirements gathering or prioritization, and should take the needs of the participants into account.
- Arranging a suitable venue - this may be within the organization, but it is better to use a 'neutral' venue out of the office.
During the workshop, a facilitator needs to ensure that the issues are discussed, views are aired and progress is made towards achieving the stated objective. A record needs to be kept of the key points emerging from the discussion.
At the end of the workshop, the facilitator needs to summarize the key points and actions. Each action should be assigned to an owner.
There are two main categories of technique required for a requirements workshop - techniques for discovery and techniques for documentation, as shown in Figure 5.1.
5.1.3.3 Observation
Observing the workplace is very useful in obtaining information about the business environment and the work practices. This has two advantages:
- A much better understanding of the problems and difficulties faced by the business users
- It will help devise workable solutions that are more likely to be acceptable to the business.
Conversely, being observed can be rather unnerving, and the old adage that 'people change their behaviour when being observed' needs to be factored into your approach and findings.
Formal observation involves watching a specific task being performed. There is a danger of being shown just the 'front-story' without any of the everyday variances, but it is still a useful tool.
5.1.3.4 Protocol Analysis
Protocol Analysis is simply getting the users to perform a task and having them describe each step as they perform it.
5.1.3.5 Shadowing
Shadowing involves following a user for a period such as a day to find out about a particular job. It is a powerful way to understand a particular user role. Asking for explanations of how the work is done, or the workflow, clarifies some of the already assumed aspects.
5.1.3.6 Scenario Analysis
Scenario Analysis is essentially telling the story of a task or transaction. Its value is that it helps a user who is uncertain what is needed from a new service to realize it more clearly. Scenarios are also useful when analyzing or redesigning business processes. A scenario will trace the course of a transaction from an initial business trigger through each of the steps needed to achieve a successful outcome.
Scenarios provide a framework for discovering alternative paths that may be followed to complete the transaction. This is extremely useful in requirements elicitation and analysis because real-life situations, including the exceptional circumstances, are debated.
Scenarios offer significant advantages:
- They force the user to include every step, so there are no taken-for-granted elements and the problem of tacit knowledge is addressed
- By helping the user to visualize all contingencies, they help to cope with the uncertainty about future systems and services
- A workshop group refining a scenario will identify those paths that do not suit the corporate culture
- They provide a tool for preparing test scripts.
The disadvantages of scenarios are that they can be time consuming to develop, and some scenarios can become very complex. Where this is the case, it is easier to analyse if each of the main alternative paths is considered as a separate scenario.
A popular approach to documenting scenario descriptions is to develop use case descriptions to support use case diagrams. However, there are also a number of graphical methods of documenting a scenario, such as storyboards, activity diagrams, task models and decision tree diagrams.
5.1.3.7 Prototyping
Prototyping is an important technique for eliciting, analyzing, demonstrating and validating requirements. It is difficult for users to envisage the new service before it is actually built. Prototypes offer a way of showing the user how the new service might work and the ways in which it can be used. If a user is unclear what they need the service to do for them, utilizing a prototype often releases blocks to thinking and can produce a new wave of requirements. Incremental and iterative approaches to service development, such as the Dynamic Systems Development Method (DSDM), use evolutionary prototyping as an integral part of their development lifecycle.
There is a range of approaches to building prototypes. They may be built using an application development environment so that they mirror the service; images of the screens and navigations may be built using presentation software; or they may simply be 'mock-ups' on paper.
There are two basic methods of prototyping:
- The throw-away mock-up, which is only used to demonstrate the look and feel
- The incremental implementation, where the prototype is developed into the final system.
It is important to select consciously which is to be used, otherwise there is a danger that a poor-quality mock-up becomes the basis for the real system, causing problems later on.
There is a strong link between scenarios and prototyping because scenarios can be used as the basis for developing prototypes. In addition to confirming the users' requirements, prototyping can often help the users to identify new requirements.
Prototypes are successfully used to:
- Clarify any uncertainty on the part of the service developers and confirm to the user that what they have asked for has been understood
- Open the user up to new requirements as they understand what the service will be able to do to support them
- Show users the 'look and feel' of the proposed service and elicit usability requirements
- Validate the requirements and identify any errors.
Potential problems include:
- Endless iteration
- A view that if the prototype works, the full service can be ready tomorrow.
5.1.3.8 Other Techniques
A number of other techniques, beyond those described above, may be used to investigate business situations and elicit requirements where the circumstances warrant them.
5.1.4 Problems With Requirements Engineering
Requirements, seen by users as the uncomplicated part of a new service development, are actually the most problematic aspect, and yet the time allocated to them is far less than that given to the other phases.
Tight timescales and tight budgets - both the result of constraints on the business - place pressures on the development team to deliver a service. The trouble is that without the due time to understand and define the requirements properly, the service that is delivered on time may not be the service that the business thought it was asking for.
Studies carried out into IT project failures tell a common story. Analysis of failed projects and unsatisfactory IT services suggests the following conclusions:
- A large proportion of errors (over 80%) are introduced at the requirements phase
- Very few faults (fewer than 10%) are introduced at design and development - developers are developing things right, but frequently not developing the right things
- Most of the project time is allocated to the development and testing phases of the project
- Less than 12% of the project time is allocated to requirements.
These findings are particularly significant because the cost of correcting errors in requirements increases dramatically the later into the development lifecycle they are found.
One of the main problems with requirements engineering is a lack of detailed skill in, and overall understanding of, the area among those who practise it. When performed accurately, the work can integrate requirements from numerous areas with only a few questions.
Other typical problems with requirements have been identified as:
- Lack of relevance to the objectives of the service
- Lack of clarity in the wording
- Ambiguity
- Duplication between requirements
- Conflicts between requirements
- Requirements expressed in such a way that it is difficult to assess whether or not they have been achieved
- Requirements that assume a solution rather than stating what is to be delivered by the service
- Uncertainty amongst users about what they need from the new service
- Users omitting to identify requirements
- Inconsistent levels of detail
- An assumption that user and IT staff have knowledge that they do not possess and therefore failing to ensure that there is a common understanding
- Requirements creep - the gradual addition of seemingly small requirements without taking the extra effort into account in the project plan.
Another problem is an apparent inability on the part of the users to articulate clearly what it is they wish the service to do for them. Very often they are deterred from doing so because the nature of the requirement is hard to express in a straightforward statement.
5.1.4.1 Resolving Requirements Engineering Problems
Defining actors
Certain participants must take part in the requirements process. They represent three broad stakeholder groups:
- The business
- The user community
- The service development team.
The user community should be represented by the domain expert (or subject-matter expert) and end-users.
Dealing with tacit knowledge
When developing a new service, the users will pass on to us their explicit knowledge, i.e. knowledge of procedures and data that is at the front of their minds and that they can easily articulate. A major problem when eliciting requirements is that of tacit knowledge, i.e. those other aspects of the work that a user is unable to articulate or explain.
Some common elements that cause problems and misunderstandings are:
- Skills - explaining how to carry out actions using words alone is extremely difficult.
- Taken-for-granted information - even experienced and expert business users may fail to mention information or clarify terminology, and the analyst may not realize that further questioning is required.
- Front-story/back-story - this issue concerns a tendency to frame a description of current working practices, or a workplace, in order to give a more positive view than is actually the case.
- Future systems knowledge - if the study is for a new service development, with no existing expertise or knowledge in the organization, how can the prospective users know what they want?
- The difficulty of an outsider assuming a common language for discourse, and common norms of communication. (If they do not have this, then the scope for misrepresentation of the situation can grow considerably.)
- Intuitive understanding, usually born of considerable experience. Decision makers are often thought to follow a logical, linear path of enquiry while making their decisions. In reality though, as improved decision-making skills and knowledge are acquired, the linear path is often abandoned in favour of intuitive pattern recognition.
- Organizational culture - without an understanding of the culture of an organization, the requirements exercise may be flawed.
Communities of practice are discrete groups of workers - maybe related by task, by department, by geographical location or some other factor - that have their own sets of norms and practices, distinct from other groups within the organization and the organization as a whole.
 | Tacit | Explicit
Individual | Skills, values, taken-for-granted, intuitiveness | Tasks, job descriptions, targets, volumes and frequencies
Corporate | Norms, back-story, culture, communities of practice | Procedures, style guides, processes, knowledge sharing

Table 5.1 Requirements engineering - tacit and explicit knowledge
Example levels of tacit and explicit knowledge: the techniques of interviewing, shadowing, workshops, prototyping, scenario analysis and protocol analysis are each assessed against four categories - explicit knowledge, tacit knowledge, skills and future requirements.

Table 5.2 Requirements engineering - examples of tacit and explicit knowledge for each technique (Maiden and Rugg, 1995)
5.1.5 Documenting Requirements
The requirements document is at the heart of the process and can take a number of forms. Typically the document will include a catalogue of requirements, with each individual requirement documented using a standard template. One or more models showing specific aspects, such as the processing or data requirements, may supplement this catalogue.
Before they are formally entered into the catalogue, requirements are subject to careful scrutiny. This scrutiny may involve organizing the requirements into groupings and checking that each requirement is 'well-formed'.
Once the document is considered to be complete, it must be reviewed by business representatives and confirmed to be a true statement of the requirements, at this point in time. During this stage the reviewers examine the requirements and question whether they are well-defined, clear and complete.
As we uncover the requirements from our various users, we need to document them. This is best done in two distinct phases - building the requirements list and, later, developing an organized requirements catalogue. The list tends to be an informal document and can be presented as four columns, as shown in Table 5.3.
Requirements | Source | Comments | Detail level

Table 5.3 Requirements list
Each requirement in the list must be checked to see whether or not it is well formed and SMART (Specific, Measurable, Achievable, Realistic and Timely).
When checking the individual and totality of requirements, the following checklist can be used:
- Are the requirements, as captured, unambiguous?
- Is the meaning clear?
- Is the requirement aligned to the service development and business objectives, or is it irrelevant?
- Is the requirement reasonable, or would it be expensive and time-consuming to satisfy?
- Do any requirements conflict with one another such that only one may be implemented?
- Do they imply a solution rather than state a requirement?
- Are they atomic, or are they really several requirements grouped into one entry?
- Do several requirements overlap or duplicate each other?
There are several potential outcomes from the exercise:
- Accept the requirement as it stands
- Re-word the requirement to remove jargon and ambiguity
- Merge duplicated/overlapping requirements
- Take unclear and ambiguous requirements back to the users for clarification.
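Parts of this scrutiny can be supported mechanically. The sketch below applies a few of the checklist questions to each requirement in the list; the word lists and heuristics are illustrative assumptions, not a standard:

```python
# Illustrative checks for 'well-formed' requirements; the word lists and
# heuristics here are assumptions for demonstration, not a standard.
AMBIGUOUS_WORDS = {"fast", "easy", "user-friendly", "flexible", "etc"}
SOLUTION_WORDS = {"database", "spreadsheet", "java"}  # implies a solution, not a need

def review(requirement: str) -> list[str]:
    """Return a list of checklist warnings for one requirement."""
    words = set(requirement.lower().replace(",", " ").split())
    warnings = []
    if words & AMBIGUOUS_WORDS:
        warnings.append("possibly ambiguous wording")
    if words & SOLUTION_WORDS:
        warnings.append("may assume a solution rather than state a need")
    if " and " in requirement.lower():
        warnings.append("may not be atomic - consider splitting")
    return warnings

print(review("The service must be fast and store results in a database"))
# -> all three warnings fire for this deliberately poor requirement
```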
5.1.5.1 The Requirements Catalogue
The Requirements Catalogue is the central repository of the users' requirements, and all the requirements should be documented here, following the analysis of the list defined above. The Requirements Catalogue should form part of the overall Service Pipeline within the overall Service Portfolio. Each requirement that has been analyzed is documented using a standard template, such as that shown in Table 5.4.
IT service | Author | Date
Requirement ID | Requirement Name
Source | Owner | Priority | Business Process
Functional Requirement Description
Management and Operational and Usability Requirements | Description
Justification
Related Documents
Related Requirements
Comments
Resolution
Version No | Change History | Date | Change request

Table 5.4 Requirements template
The key entries in the template are as follows:
- Requirement ID - this is a unique ID that never changes and is used for traceability - for example, to reference the requirement in design documents, test specifications or implemented code. This ensures that all requirements have been met and that all implemented functions are based on requirements.
- Source - the business area or users who requested the requirement or the document where the requirement was raised. Recording the source of a requirement helps ensure that questions can be answered or the need can be re-assessed in the future if necessary.
- Owner - the user who accepts ownership of the individual requirement will agree that it is worded and documented correctly, and will sign it off at acceptance testing when satisfied.
- Priority - the level of importance and need for a requirement. Normally approaches such as MoSCoW are used, where the following interpretation of the mnemonic applies:
- Must have - a key requirement without which the service has no value.
- Should have - an important requirement that must be delivered but, where time is short, could be delayed for a future delivery. Any such delay should be short term; the service would still have value without the requirement.
- Could have - a requirement that it would be beneficial to include if it does not cost too much or take too long to deliver, but it is not central to the service.
- Won't have (but want next time) - a requirement that will be needed in the future but is not required for this delivery. In a future service release, this requirement may be upgraded to a 'must have'.
The following should be clearly agreed during this prioritization activity:
- Requirement priorities can and do change over the life of a service development project.
- 'Should have' requirements need to be carefully considered because, if they are not delivered within the initial design stage, they may be impossible to implement later.
- Requirements are invariably more difficult and more expensive to meet later in the Service Lifecycle.
- It is not just the functional requirements that can be 'must haves' - some of the management and operational requirements should be 'must haves'.
- Requirement description - a succinct description of the requirement. A useful approach is to describe the requirement using the following structure:
- Actor (or user role)
- Verb phrase
- Object (noun or noun phrase).
- Where the requirement incorporates complex business rules, a decision table or decision tree may be a more useful way of defining them, whilst data validation rules may be defined in a repository. If a supplementary technique is used to specify or model the requirement, there should be a cross-reference to the related document.
- Business process - a simple phrase to group together requirements that support a specific activity, such as sales, inventory, customer service, and so on.
- Justification - not all the requirements that are requested will be met. This may be due to time and budget constraints, or may be because the requirement is dropped in favour of a conflicting requirement. Often the requirement is not met because it adds little value to the business. The justification sets out the reasons for requesting the requirement.
- Related requirements - requirements may be related to each other for several reasons. Sometimes there is a link between the functionality required by the requirements or a high-level requirement is clarified by a series of more detailed requirements.
- Change history - the entries in this section provide a record of all the changes that have affected the requirement. This is required for Configuration Management and traceability purposes.
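A minimal sketch of the Table 5.4 template as a structured record, with the MoSCoW priorities as an enumeration - the field names follow the template above, while the example values are invented:

```python
from dataclasses import dataclass, field
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have (but want next time)"

@dataclass
class CatalogueEntry:
    requirement_id: str          # unique, never changes - used for traceability
    name: str
    source: str                  # business area or user who raised it
    owner: str                   # user who signs it off at acceptance testing
    priority: MoSCoW
    business_process: str
    description: str             # actor + verb phrase + object
    justification: str = ""
    related_requirements: list[str] = field(default_factory=list)
    change_history: list[str] = field(default_factory=list)

req = CatalogueEntry(
    requirement_id="REQ-001",
    name="Order confirmation",
    source="Sales",
    owner="Sales manager",
    priority=MoSCoW.MUST,
    business_process="sales",
    description="Customer receives an order confirmation",  # actor-verb-object
)
print(req.priority.value)  # Must have
```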
5.1.5.2 Full Requirements Documentation
An effective requirements document should comprise the following elements:
- A glossary of terms, to define each organizational term used within the requirements document. This will help manage the problem of local jargon and will clarify synonyms and homonyms for anyone using the document
- A scoping model, such as a system context diagram
- The Requirements Catalogue, ideally maintained as part of an overall Service Portfolio
- Supporting models, such as business process models, data flow diagrams or interaction diagrams.
Managing changes to the documentation
Changes may come about because:
- The scope of the new service has altered through budget constraints
- The service must comply with new regulation or legislation
- Changes in business priorities have been announced
- Stakeholders have understood a requirement better after some detailed analysis - for example, using scenarios or prototyping - and amended the original requirement accordingly.
There are a number of specialist support tools on the market to support requirements processes. These are sometimes called CARE (Computer Aided Requirements Engineering) or CASE (Computer Aided Software Engineering). Features include:
- Maintaining cross-references between requirements
- Storing requirements documentation
- Managing changes to the requirements documentation
- Managing versions of the requirements documentation
- Producing formatted requirements specification documents from the database
- Ensuring documents delivered by any solution project are suitable to enable support.
5.1.6 Requirements And Outsourcing
The aim is to select standard packaged solutions wherever possible to meet service requirements. However, whether IT requirements are to be purchased off-the-shelf, developed in-house or outsourced, all the activities up to the production of a specification of business requirements are done in-house. Many IT service development contracts assume it is possible to know what the requirements are at the start, and that it is possible to produce a specification that unambiguously expresses the requirements. For all but the simplest services this is rarely true. Requirements analysis is an iterative process - the requirements will change during the period the application and service are being developed. It will require user involvement throughout the development process, as in the DSDM and other 'agile' approaches.
5.1.6.1 Typical Requirements Outsourcing Scenarios
Typical approaches to contract for the development of IT systems to be delivered in support of an IT service are as follows:
- Low-level requirements specification - the boundary between 'customer' and provider is drawn between the detailed requirements specification and any design activities. All the requirements that have an impact on the user have been specified in detail, giving the provider a very clear and precise implementation target. However, there is increased specification effort, and the added value of the provider is restricted to the less difficult aspects of development.
- High-level requirements specification - the customer/provider boundary is between the high-level requirements and all other phases. The provider contract covers everything below the line. The customer is responsible for testing the delivered service against the business requirements. As it is easier to specify high-level requirements, there is reduced effort to develop contract inputs. However, there may be significant problems of increased cost and risk for both customer and provider, together with increased room for mistakes, instability of requirements and increased difficulty in knowing what information systems you want.
5.2 Data and Information Management
Data is one of the critical asset types that need to be managed in order to develop, deliver and support IT services effectively. Data/Information Management is how an organization plans, collects, creates, organizes, uses, controls, disseminates and disposes of its data/information, both structured records and unstructured data. It also ensures that the value of that data/information is identified and exploited, both in support of its internal operations and in adding value to its customer-facing business processes.
A number of terms are common in this area, including 'Data Management', 'Information Management' and 'Information Resource Management'. For the purposes of this publication, the term 'Data Management' is used as shorthand for all of the three above.
The role of Data Management described is not just about managing raw data: it's about managing all the contextual metadata - additional 'data about the data' - that goes with it, and when added to the raw data gives 'information' or 'data in context'.
Data, as the basis for the organization's information, has all the necessary attributes to be treated as an asset (or resource). For example, it is essential for 'the achievement of business objectives and the successful daily workings of an organization'. In addition, it can be 'obtained and preserved by an organization, but only at a financial cost'. Finally it can, along with other resources/assets, be used to 'further the achievement of the aims of an organization'.
Key factors for successful Data Management are as follows:
- All users have ready access through a variety of channels to the information they need to do their jobs.
- Data assets are fully exploited, through data sharing within the organization and with other bodies.
- The quality of the organization's data is maintained at an acceptable level, and the information used in the business is accurate, reliable and consistent.
- Legal requirements for maintaining the privacy, security, confidentiality and integrity of data are observed.
- The organization achieves a high level of efficiency and effectiveness in its data and information-handling activities.
- An enterprise data model is in place to define the most important entities and their relationships - this helps to avoid redundancies and to prevent the architecture from deteriorating as it changes over the years.
5.2.1 Managing Data Assets
If data isn't managed effectively:
- People maintain and collect data that isn't needed
- The organization may have historic information that is no longer used
- The organization may hold a lot of data that is inaccessible to potential users
- Information may be disseminated to more people than it should be, or not to those people to whom it should be
- The organization may use inefficient and out-of-date methods to collect, analyse, store and retrieve the data
- The organization may fail to collect data that it needs; data quality is reduced, and data integrity is lost, e.g. between related data sources.
In addition, whether or not information is derived from good-quality data is a difficult question to answer, because there are no measurements in place against which to compare it. For example, poor data quality often arises because of poor checks on input and/or updating procedures. Once inaccurate or incomplete data have been stored in the IT system, any reports produced using these data will reflect these inaccuracies or gaps. There may also be a lack of consistency between internally generated management information from the operational systems and that from other internal, locally used systems, created because the central data is not trusted.
One way of improving the quality of data is to use a Data Management process that establishes policy and standards, provides expertise and makes it easier to handle the data aspects of new services. This should then allow full Data/Information Asset Management to:
- Add value to the services delivered to customers
- Reduce risks in the business
- Reduce the costs of business processes
- Stimulate innovation in internal business processes.
5.2.2 Scope Of Data Management
There are four areas of management included within the scope of Data/Information Management:
- Management of data resources: the governance of information in the organization must ensure that all these resources are known and that responsibilities have been assigned for their management, including ownership of data and metadata. This process is normally referred to as data administration and includes responsibility for:
- Defining information needs
- Constructing a data inventory and an enterprise data model
- Identifying data duplication and deficiencies
- Maintaining a catalogue/index of data/information content
- Measuring the cost and value of the organization's data.
- Management of data/information technology: the management of the IT that underpins the organization's information systems; this includes processes such as database design and database administration. This aspect is normally handled by specialists within the IT function - see the Service Operation publication for more details.
- Management of information processes: business processes will lead to IT services involving one or other of the data resources of the organization. The processes of creating, collecting, accessing, modifying, storing, deleting and archiving data - i.e. the data lifecycle - must be properly controlled, often jointly with the applications management process.
- Management of data standards and policies: the organization will need to define standards and policies for its Data Management as an element of an IT strategy. Policies will govern the procedures and responsibilities for Data Management in the organization, while technical policies, architectures and standards will apply to the IT infrastructure that supports the organization's information systems.
The best-practice scope of the Data Management process includes managing non-structured data that is not held in conventional database systems - for example, data in formats such as text, image and audio. It is also responsible for ensuring process quality at all stages of the data lifecycle, from requirements to retirement. The main focus in this publication is on its role in the requirements, design and development phases of the asset and Service Lifecycle.
The team supporting the Data Management process may also provide a business information support service. In this case they are able to answer questions about the meaning, format and availability of data internal to the organization, because they manage the metadata. They also are able to understand and explain what external data might be needed in order to carry out necessary business processes and will take the necessary action to source this.
Critically, when creating or redesigning processes and supporting IT services, it is good practice to consider reusing data and metadata across different areas of the organization. The ability to do this may be supported by a corporate data model - sometimes known as a common information model - to help support re-use, often a major objective for data management.
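As a hedged illustration of such re-use, the fragment below shows a hypothetical corporate data model entity shared by two services, so that 'Customer' carries the same meaning and metadata in both; all names are invented:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Defined once in the corporate (common information) model and re-used everywhere."""
    customer_id: str
    name: str
    postcode: str  # format governed by corporate reference data standards

@dataclass
class Invoice:
    """A billing service re-uses Customer rather than redefining it."""
    invoice_id: str
    customer: Customer
    amount: float

@dataclass
class SupportTicket:
    """The Service Desk application re-uses the same Customer definition."""
    ticket_id: str
    customer: Customer
    summary: str
```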
5.2.3 Data Management and the Service Lifecycle
It is recommended that a lifecycle approach be adopted in understanding the use of data in business processes. General issues include:
- What data is currently held and how can it be classified?
- What data needs to be collected or created by the business processes?
- How will the data be stored and maintained?
- How will the data be accessed, by whom and in what ways?
- How will the data be disposed of, and under whose authority?
- How will the quality of the data be maintained (accuracy, consistency, currency, etc.)?
- How can the data be made more accessible/available?
5.2.4 Supporting the Service Lifecycle
During requirements and initial design, Data Management can assist design and development teams with service specific data modeling and give advice on the use of various techniques to model data.
During detailed ('physical') design and development, the Data Management function can provide technical expertise on database management systems and on how to convert initial 'logical' models of data into physical, product specific, implementations.
Many new services have failed because poor data quality has not been addressed during the development of the service, or because a particular development created its own data and metadata, without consultation with other service owners, or with Data Management.
5.2.5 Valuing Data
Data is an asset and has value. Clearly in some organizations this is more obvious than in others. Organizations that are providers of data to other organizations - Yell, Dun and Bradstreet, and Reuters - can value data as an 'output' in terms of the price that they are charging external organizations to receive it. It's also possible to think of value in terms of what the internal data would be worth to another organization.
It's more common to value data in terms of what it's worth to the owner organization. There are a number of suggested ways of doing this:
- Valuing data by availability: one approach often used is to consider which business processes would not be possible if a particular piece of data were unavailable, and how much that non-availability of data would cost the business.
- Valuing lost data: another approach that's often used is to think about the costs of obtaining some data if it were to be destroyed.
- Valuing data by considering the data lifecycle: this involves thinking about how data is created or obtained in the first place, how it is made available to people to use, and how data is retired, either through archiving or physical destruction. It may be that some data is provided from an external source and then held internally, or it may be that data has to be created by the organization's internal systems. In these two cases, the lifecycle is different and the processes that take place for data capture will be entirely separate. In both cases the costs of redoing these stages can be evaluated. The more highly valued the data, the more the effort that needs to be expended on ensuring its integrity, availability and confidentiality.
5.2.6 Classifying Data
Data can be initially classified as operational, tactical or strategic:
- Operational data: necessary for the ongoing functioning of an organization and can be regarded as the lowest, most specific, level.
- Tactical data: usually needed by second-line management - or higher - and typically concerned with summarized and historical data, such as year-to-year or quarterly data. Often the data used here appears in management information systems that require summary data from a number of operational systems, for example to deal with an accounting requirement.
- Strategic data: often concerned with longer-term trends and with comparison with the outside world, so providing the necessary data for a strategic support system involves bringing together the operational and tactical data from many different areas with relevant external data. Much more data is required from external sources.
An alternative method is to use a security classification of data and documents. This is normally adopted as a corporate policy within an organization.
An orthogonal classification distinguishes between organization-wide data, functional-area data and service specific data.
- Organization-wide data needs to be centrally managed.
- The next level of data is functional-area data, which should be shared across a complete business function. This involves sharing data 'instances' - for example, individual customer records - and also ensuring that consistent metadata, such as standard address formats, is used across that functional area.
- The final level is IT service-specific, where the data and metadata are valid for one IT service and do not need to be shared with other services.
5.2.7 Setting Data Standards
One of the critical aspects of data administration is to ensure that standards for metadata are in place - for example, what metadata is to be kept for different underlying 'data types'. Different details are kept for structured tabular data than for other kinds of data. 'Ownership' is a critical item of this metadata; a unique identifier is another, a description in business-meaningful terms another, and a format might be another. The custodian or steward - someone in the IT department who takes responsibility for the day-to-day management of the data - is also recorded.
Another benefit of a Data Management process would be in the field of reference data. Certain types of data, such as postcodes or names of countries, may be needed across a variety of systems and need to be consistent. It is part of the responsibility of data administration to manage reference data on behalf of the whole business, and to make sure that the same reference data is used by all systems in the organization.
Standards for naming must be in place; so, for example, if a new type of data is requested in a new service, then there is a need to use names that meet these standards. An example standard might be 'all capitals, no underlining and no abbreviations'.
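Such a standard can be checked mechanically when new data items are requested. A small sketch based on the example standard above; the abbreviation list is an illustrative assumption:

```python
import re

# Illustrative check of the example naming standard above:
# 'all capitals, no underlining and no abbreviations'.
KNOWN_ABBREVIATIONS = {"CUST", "ADDR", "QTY"}  # assumption, for illustration only

def meets_naming_standard(name: str) -> bool:
    if name != name.upper():
        return False                       # must be all capitals
    if "_" in name:
        return False                       # no underlining
    if name.endswith(".") or name in KNOWN_ABBREVIATIONS:
        return False                       # crude abbreviation check
    return bool(re.fullmatch(r"[A-Z][A-Z ]*", name))

print(meets_naming_standard("CUSTOMER NAME"))  # True
print(meets_naming_standard("cust_name"))      # False
```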
5.2.8 Data Ownership
Data administration can assist the service developer by making sure responsibilities for data ownership are taken seriously by the business and by the IT department. One of the most successful ways of doing this is to get the business and the IT department to sign up to a data charter - a set of procedural standards and guidance for the careful management of data in the organization, by adherence to corporately defined standards. Responsibilities of a data owner are often defined here and may include:
- Agreeing a business description and a purpose for the data
- Defining who can create, amend, read and delete occurrences of the data
- Authorizing changes in the way data is captured or derived
- Approving any format, domain and value ranges
- Approving the relevant level of security, including making sure that legal requirements and internal policies about data security are adhered to.
5.2.9 Data Migration
Data migration is an issue where a new service is replacing a number of (possibly just one) existing services, and it's necessary to carry across, into the new service, good-quality data from the existing systems and services. There are two types of data migration of interest to projects here: one is the data migration into data warehouses etc., for business intelligence/analytics purposes; the other is data migration to a new transactional, operational service. In both cases it will be beneficial if data migration standards, procedures and processes are laid down by Data Management. Data migration tools may have already been purchased on behalf of the organization by the Data Management team. Without this support, it's very easy to underestimate the amount of effort that's required, particularly if data consolidation and cleaning has to take place between multiple source systems, and the quality of the existing services' data is known to be questionable.
5.2.10 Data Storage
One area where technology has moved on very rapidly is the storage of data. There is a need to consider different storage media - for example, optical storage - and to be aware of the size and cost implications associated with them. The main reason for understanding the developments in this area is that they make affordable many types of data management that were previously considered too expensive. For example, storing real-time video, which uses enormous bandwidth, was until the last two to three years regarded as too expensive; the same was true of scanning large numbers of paper documents, particularly where those documents are not text-based but contain detailed diagrams or pictures. Understanding technology developments with regard to electronic storage of data is critical to understanding the opportunities for the business to exploit the information resource effectively by making the best use of new technology.
5.2.11 Data Capture
It is also very important to work with Data Management on effective measures for data capture. The aim here is to capture data as quickly and accurately as possible. There is a need to ensure that the data capture processes require the minimum amount of keying, and exploit the advantages that graphical user interfaces provide in terms of minimizing the number of keystrokes needed, also decreasing the opportunity for errors during data capture. It is reasonable to expect that the Data Management process has standards for, and can provide expertise on, effective methods of data capture in various environments, including 'non-structured' data capture using mechanisms such as scanning.
5.2.12 Data Retrieval And Usage
Once the data has been captured and stored, the next aspect to consider is the retrieval of information from the data. Services to allow easy access to structured data via query tools of various levels of sophistication are needed by all organizations, and generate their own specific architectural demands.
The whole area of searching within scanned text and other non-structured data, such as video, still images or sound, is a major area of expansion. Techniques such as automatic indexing, and the use of search engines to give efficient keyword access to relevant parts of a document, are essential technologies that have been widely implemented, particularly on the internet. Expertise in the use of data or content within websites should exist within Data Management, as should Content Management standards and procedures, which are vital for websites.
5.2.13 Data Integrity And Related Issues
When defining requirements for IT services, it is vital that management and operational requirements related to data are considered. In particular, the following areas must be addressed:
- Recovery of lost or corrupted data
- Controlled access to data
- Implementation of policies on archiving of data, including compliance with regulatory retention periods
- Periodic data integrity checks.
Data integrity is concerned with ensuring that the data is of high quality and uncorrupted. It is also about preventing uncontrolled data duplication, and hence avoiding any confusion about what is the valid version of the data. There are several approaches that may assist with this. Various technology devices such as 'database locking' are used to prevent multiple, inconsistent, updating of data. In addition, prevention of illegal updating may be achieved through access control mechanisms.
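As a hedged sketch of one such device, the fragment below uses optimistic locking - a version counter per record - to prevent inconsistent concurrent updates. The schema and values are invented; a real database would enforce this with a versioned UPDATE statement:

```python
# Minimal optimistic-locking sketch: each record carries a version counter,
# and an update is rejected if the record changed since it was read.
records = {"cust-42": {"version": 1, "address": "1 High Street"}}

def update(record_id: str, expected_version: int, **changes) -> None:
    record = records[record_id]
    if record["version"] != expected_version:
        raise RuntimeError("stale read - record was changed by another update")
    record.update(changes)
    record["version"] += 1

update("cust-42", expected_version=1, address="2 Low Road")      # succeeds
try:
    update("cust-42", expected_version=1, address="3 Mid Way")   # stale version
except RuntimeError as err:
    print(err)
```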
5.3 Application Management
An application is defined here as the software program(s) that perform those specific functions that directly support the execution of business processes and/or procedures.
Applications, along with data and infrastructure components such as hardware, the operating system and middleware, make up the technology components that are part of an IT service. The application itself is only one component, albeit an important one, of the service. Therefore it is important that the application delivered matches the agreed requirements of the business. However, too many organizations spend too much time focusing on the functional requirements of the new service and application, and insufficient time designing the management and operational requirements (non-functional requirements) of the service. This means that when the service becomes operational, it meets all of the required functionality but totally fails to meet the expectations of the business and the customers in terms of its quality and performance, and therefore becomes unusable.
Two complementary approaches are necessary to implement Application Management fully. One approach employs an extended Service Development Lifecycle (SDLC) to support the development of an IT service. SDLC is a systematic approach to problem solving and is composed of the following steps:
- Feasibility study
- Analysis
- Design
- Testing
- Implementation
- Evaluation
- Maintenance.
The other approach takes a global view of all services to ensure the ongoing maintainability and manageability of the applications:
- All applications are described in a consistent manner, via an Application Portfolio that is managed and maintained to enable alignment with dynamic business needs.
- Consistency of approach to development is enforced through a limited number of application frameworks and design patterns and through a 're-use' first philosophy.
- Common software components, usually to meet management and operational requirements, are created or acquired at an 'organizational' level and used by individual systems as they are designed and built.
Application name | IT operations owner | New development cost
Application identifier | IT development owner | Annual operational costs
Application description | Support contacts | Annual support cost
Business process supported | Database technologies | Annual maintenance costs
IT services supported | Dependent applications | Outsourced components
Executive sponsor | IT systems supported | Outsource partners
Geographies supported | User interfaces | Production metrics
Business criticality | IT Architecture, including Network topology | OLA link
SLA link | Application technologies used | Support metrics
Business owner | Number of users |

Table 5.5 Application Portfolio attributes example
5.3.1 The Application Portfolio
This is simply a full record of all applications within the organization and is dynamic in its content.
5.3.2 Linking Application and Service Portfolios
Some organizations maintain a separate Application Portfolio with separate attributes, while in other organizations the Application Portfolio is stored within the CMS, together with the appropriate relationships. Other organizations combine the Application Portfolio together with the Service Portfolio. It is for each organization to decide the most appropriate strategy for its own needs. What is clear is that there should be very close relationships and links between the applications and the services they support and the infrastructure components used.
5.3.3 Application Frameworks
The concept of an application framework is a very powerful one. An application framework covers all management and operational aspects and, in effect, provides solutions for all the management and operational requirements that surround an application.
Implied in the use of application frameworks is the concept of standardization. If an organization has to maintain a separate application framework for every single application, few of the benefits of application frameworks will be realized.
An organization that wants to develop and maintain application frameworks, and to ensure that they comply with the needs of the application developers, must invest in doing so. It is essential that application framework architectures are not developed in isolation, but are closely related to and integrated with all other framework and architectural activities. The Service, Infrastructure, Environment and Data Architectures must all be closely integrated with the Application Architecture and framework.
Architecture, application frameworks and standards
Architecture-related activities have to be planned and managed separately from individual system-based software projects. It is also important that architecture-related activities be performed for the benefit of more than just one application. Application developers should focus on a single application, while application framework developers should focus on more than one application, and on the common features of those applications in particular.
A common practice is to distinguish between various types of applications. For instance, not every application can be built on top of a Microsoft® Windows operating system platform, connected to a UNIX server, using HTML, Java applets, JavaBeans and a relational database. The various types of applications can be regarded as application families. All applications in the same family are based on the same application framework.
When utilizing the concept of an application framework, the first step of the application design phase is to identify the appropriate application framework. If the framework is mature, a large number of the design decisions are already given. If it is not mature, and the management and operational requirements cannot all be met on top of an existing application framework, the preferred strategy is to collect and analyse the requirements that cannot be dealt with in the current version of the framework. Based on these, new requirements can be defined for the application framework, which can then be modified to cope with them. The whole family of applications that corresponds to the application framework can then use the newly added or changed framework features.
Developing and maintaining an application framework is a demanding task and, like all other design activities, should be performed by competent and experienced people. Alternatively, application frameworks can be acquired from third parties.
5.3.4 The Need For CASE Tools And Repositories
One important aspect of overall alignment is the need to align applications with their underlying support structures. Application development environments traditionally have their own Computer Assisted/Aided Software Engineering (CASE) tools that offer the means to specify requirements, draw design diagrams (according to particular modeling standards), or even generate complete applications or nearly complete application skeletons, almost ready to be deployed. These environments also provide a central location, generally called a repository, for storing and managing all the elements that are created during application development. Repository functionality includes version control and consistency checking across various models. The current approach is to use metaCASE tools to model domain-specific languages and use these to align the CASE work more closely with the needs of the business.
5.3.5 Design Of Specific Applications
The requirements phase of the lifecycle was addressed earlier in the requirements engineering section. The design phase is one of the most important phases within the application lifecycle. It ensures that an application is conceived with operability and Application Management in mind. This phase takes the outputs from the requirements phase and turns them into the specification that will be used to develop the application.
The goal of design should be to satisfy the organization's requirements. Design includes the design of the application itself, and the design of the infrastructure and environment within which the application operates. Architectural considerations are the most important aspect of this phase, since they can impact the structure and content of both the application and the operational model. Architectural considerations for the application (design of the Application Architecture) and for the environment (design of the IT Architecture) are strongly related and need to be aligned. Application Architecture and design should not be considered in isolation, but should form an integrated component of the overall service architecture and design.
Generally, in the design phase, the same models will be produced as have been delivered in the requirements phase, but during design many more details are added. New models include the architecture models, where the way in which the different functional components are mapped to the physical components (e.g. desktops, servers, databases and network) needs to be defined. The mapping, together with the estimated load of the system, should allow for the sizing of the infrastructure required.
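As a purely illustrative example of such sizing, suppose an estimated peak load of 50 transactions per second and a measured capacity of 12 transactions per second per server; every figure below is an assumption made only for the sake of the arithmetic.

```java
// Hypothetical sizing arithmetic: all figures are illustrative assumptions
// showing how an estimated load drives the sizing of the infrastructure.
public class SizingEstimate {
    public static void main(String[] args) {
        double peakTransactionsPerSecond = 50.0; // from the estimated load of the system
        double serverCapacityTps = 12.0;         // from benchmarking a single server
        double headroomFactor = 1.3;             // 30% margin for growth and failover

        int serversRequired = (int) Math.ceil(
                peakTransactionsPerSecond * headroomFactor / serverCapacityTps);
        System.out.println("Servers required: " + serversRequired); // prints 6
    }
}
```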
Another important aspect of the architecture model is the embedding of the application in the existing environment. Which pieces of the existing infrastructure will be used to support the required new functions? Can existing servers or networks be used? With what impact? Are required functions available in existing applications that can be utilized? Do packages exist that offer the functionality needed or should the functions be built from scratch?
The design phase takes all requirements into consideration and starts assembling them into an initial design for the solution. Doing this not only gives developers a basis to begin working; it is also likely to raise questions that need to be asked of the customers/users. If possible, application frameworks should be applied as a starting point.
It is not always possible to foresee every aspect of a solution's design ahead of time. As a solution is developed, new things will be learned about how to do things and also how not to.
The key is to create a flexible design, so that making a change does not send developers all the way back to the beginning of the design phase. There are a number of approaches that can minimize the chance of this happening, including:
- Designing for management and operational requirements
- Managing trade-offs
- Using application-independent design guidelines and application frameworks
- Employing a structured design process/manageability checklist.
Design for management and operational requirements means giving management and operational requirements a level of importance similar to that for the functional requirements, and including them as a mandatory part of the design phase. This includes a number of management and operational requirements such as availability, capacity, maintainability, reliability and security. It is now inconceivable in modern application development projects that user interface design (usability requirements) would be omitted as a key design activity. However, many organizations ignore or forget manageability. Details of the necessary management and operational requirements are contained within the SDP and SAC in Appendices A and B.
5.3.6 Managing Trade-offs
Managing trade-off decisions focuses on balancing the relationship among resources, the project schedule, and those features that need to be included in the application for the sake of quality.
When development teams attempt this balancing act, it is often at the expense of the management and operational requirements. One way to avoid this is to include management and operational requirements in the application-independent design guidelines, for example in the form of an application framework. Operability and manageability then become standard components of all design processes and are embedded into the working practices and culture of the development organization.
5.3.7 Typical Design Outputs
The following are examples of the outputs from an applications design forming part of the overall Service Design:
- Input and output design, including forms and reports
- A usable user interface (human computer interaction) design
- A suitable data/object model
- A process flow or workflow model
- Detailed specifications for update and read-only processes
- Mechanisms for achieving audit controls, security, confidentiality and privacy
- A technology specific 'physical' design
- Scripts for testing the systems design
- Interfaces and dependencies on other applications.
There are guidelines and frameworks that can be adopted to determine and define design outputs within Application Management, such as Capability Maturity Model Integration (CMMI).
5.3.8 Design Patterns
A design pattern is a general, repeatable solution to a commonly occurring problem in software design. Object oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Design patterns describe both a problem and a solution for common issues encountered during application development.
An important design principle used as the basis for a large number of the design patterns found in recent literature is that of separation of concern. Separation of concerns will lead to applications divided into components, with a strong cohesion and minimal coupling between components. The advantage of such an application is that modification can be made to individual components with little or no impact on other components.
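As a hedged illustration of this principle, the Java sketch below isolates persistence behind an interface so that the ordering component's storage mechanism can change with little or no impact on its callers. All names are hypothetical; this is a sketch of the technique, not a prescribed structure.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Separation of concerns: OrderService depends only on the OrderRepository
// interface, so the storage component can change without affecting callers.
interface OrderRepository {
    void save(String orderId, String payload);
    Optional<String> find(String orderId);
}

// One interchangeable implementation; a database-backed one could replace it.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, String> store = new HashMap<>();
    public void save(String orderId, String payload) { store.put(orderId, payload); }
    public Optional<String> find(String orderId) { return Optional.ofNullable(store.get(orderId)); }
}

// Strong cohesion: this class contains only ordering logic.
class OrderService {
    private final OrderRepository repository; // coupling limited to the interface
    OrderService(OrderRepository repository) { this.repository = repository; }

    void placeOrder(String orderId, String payload) {
        repository.save(orderId, payload);    // business logic stays storage-agnostic
    }
}
```

Swapping InMemoryOrderRepository for a database-backed implementation would then affect only the wiring code, which is exactly the 'little or no impact on other components' described above.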
In typical application development projects, more than 70% of the effort is spent on designing and developing generic functions and on satisfying the management and operational requirements. That is because each individual application needs to provide a solution for such generic features as printing, error handling and security.
Among others, the Object Management Group (OMG, www.omg.org) has defined a large number of services that are needed in every application. OMG's Object Management Architecture (OMA) clearly distinguishes between the functional and the management and operational aspects of an application. It builds on the concept of providing a run-time environment that offers all sorts of facilities to an application.
In this concept, the application covers the functional aspects, and the environment covers all management and operational aspects. Application developers should, by definition, focus on the functional aspects of an application, while others can focus on the creation of the environment that provides the necessary management and operational services. This means that the application developers focus on the requirements of the business, while the architecture developers or application framework developers focus on the requirements of the application developers.
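This division of labour can be pictured with a small Java sketch, in which a hypothetical run-time environment supplies generic operational services (here, event logging and access control) while the application class contains only functional logic. The interfaces and names are assumptions for illustration, not part of OMG's actual OMA APIs.

```java
// Illustrative split between functional and operational concerns.
// The environment implements this interface; applications only consume it.
interface OperationalServices {
    void logEvent(String event);                       // generic error/event handling
    boolean isAuthorized(String user, String action);  // generic security
}

// The application developer writes only the functional aspects.
class PayrollCalculation {
    private final OperationalServices env; // supplied by the run-time environment

    PayrollCalculation(OperationalServices env) { this.env = env; }

    double calculateNetPay(String user, double gross, double taxRate) {
        if (!env.isAuthorized(user, "payroll.calculate")) {
            env.logEvent("Unauthorized payroll access attempt by " + user);
            throw new SecurityException("Not authorized");
        }
        return gross * (1.0 - taxRate); // the only genuinely functional line
    }
}
```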
5.3.9 Developing Individual Applications
Once the design phase is completed, the application development team will take the designs that have been produced and move on to developing the application. Both the application and the related environment are made ready for deployment. Application components are coded or acquired, integrated and tested.
To ensure that the application is developed with management at its core, the development team needs to ensure that the development phase continues to address the management and operational aspects of the design correctly (e.g. responsiveness, availability, security).
The development phase guidance covers the following topics:
- Consistent coding conventions
- Application-independent building guidelines
- Operability testing
- Management checklist for the building phase
- Organization of the build team roles.
5.3.10 Consistent Coding Conventions
The main reason for using a consistent set of design and coding conventions is to standardize the structure and coding style of an application so that everyone can easily read, understand and manage the application development process. Good design and coding conventions result in precise, readable and unambiguous source code that is consistent with the organizational coding and management standards and is as intuitive to follow as possible. Adding application operability into this convention ensures that all applications are built in a way that ensures that they can be fully managed all the way through their lifecycles.
A coding convention itself can be a significant aid to managing the application, as consistency allows the management tools to interact with the application in a known way. It is better to introduce a minimum set of conventions that everyone will follow rather than to create an overly complex set that encompasses every facet but is not followed or used consistently across the organization.
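As a hedged sketch of what such a minimal convention set might look like in practice, the Java fragment below follows three assumed conventions: descriptive camel-case naming, documented public methods, and no silently swallowed exceptions, so that failures remain visible to those managing the application. The conventions and names are illustrative, not mandated.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Assumed conventions illustrated here:
//  1. Class names are nouns in UpperCamelCase; methods are verbs in lowerCamelCase.
//  2. Every public method documents its purpose and failure behaviour.
//  3. Exceptions are logged with context, never swallowed, so operations
//     staff can manage the application in production.
public class CustomerImporter {
    private static final Logger LOGGER = Logger.getLogger(CustomerImporter.class.getName());

    /** Imports one customer record; logs and rethrows on bad input (convention 3). */
    public void importRecord(String record) {
        try {
            parse(record);
        } catch (IllegalArgumentException e) {
            LOGGER.log(Level.WARNING, "Rejected customer record: " + record, e);
            throw e;
        }
    }

    private void parse(String record) {
        if (record == null || record.isEmpty()) {
            throw new IllegalArgumentException("empty record");
        }
        // Parsing logic would go here.
    }
}
```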
5.3.11 Templates And Code Generation
A number of development tools provide a variety of templates for creating common application components. Rather than creating all the pieces of an application from scratch, developers can customize an existing template. They can also re-use custom components in multiple applications by creating their own templates. Other development tools will generate large pieces of code (skeletons) based on the design models and coding conventions. The generated code can include hooks at the points where application-specific code pieces need to be added.
In this respect, templates and application frameworks should be considered IT assets. These assets not only guide the development of applications, but also incorporate the lessons learned, or intellectual capital, from previous application development efforts. The more standard components are designed into the solution, the faster applications can be developed and the lower the long-term costs (not ignoring the fact that developing templates, code generators and application frameworks requires significant investment).
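The skeleton-with-hooks idea can be sketched with the template method pattern: a generated base class fixes the control flow, and the marked hook method is the code piece the application developer adds. This is an assumed illustration of the technique, not the output of any particular tool.

```java
// Illustrative generated skeleton. The fixed control flow is supplied by
// the template; the abstract hook is the piece the developer fills in.
abstract class GeneratedBatchJob {

    // Generated, fixed control flow -- not edited by the application developer.
    public final void run() {
        openResources();
        try {
            process(); // hook: business-specific logic is added here
        } finally {
            closeResources();
        }
    }

    // Hook to be implemented per application.
    protected abstract void process();

    // Default operational behaviour supplied by the template; may be overridden.
    protected void openResources()  { /* e.g. acquire connections */ }
    protected void closeResources() { /* e.g. release connections */ }
}
```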
5.3.12 Embedded Application Instrumentation
The development phase deals with incorporating instrumentation into the fabric of the application. Developers need a consistent way to provide instrumentation for application drivers/middleware components (e.g. database drivers) and applications that is efficient and easy to implement. To keep application developers from reinventing the wheel with every new application they develop, the computer industry provides methods and technologies to simplify and facilitate the instrumentation process.
These include:
- Application Response Measurement (ARM)
- IBM Application Management Specification (AMS)
- Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) from the Distributed Management Task Force (DMTF)
- Desktop Management Instrumentation (DMI)
- Microsoft Windows® Management Instrumentation (WMI)
- Java Management Extensions (JMX).
Each of these technologies provides a consistent and richly descriptive model of the configuration, status and operational aspects of applications and services. These are provided through Application Programming Interfaces (APIs) that the developer incorporates into an application, normally through the use of standard programming templates.
It is important to ensure that all applications are built to conform to an agreed level of compliance for application instrumentation. Ways to do this could include:
- Providing access to management data through the instrumentation API
- Publishing management data to other management systems, again through the instrumentation API
- Providing application event handling
- Providing diagnostic hooks.
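Since JMX is one of the technologies listed above, the sketch below shows a minimal standard MBean that publishes two counters as management data, which a JMX console such as jconsole could then read. The class names and object name are assumptions; in real code each public type would live in its own file, as the comments indicate.

```java
// OrderServiceMBean.java -- the public management interface (a standard
// MBean's interface must be public and named ClassName + "MBean").
public interface OrderServiceMBean {
    long getRequestsHandled(); // management data exposed to JMX consoles
    long getErrorsSeen();
}

// OrderService.java -- the instrumented application class.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class OrderService implements OrderServiceMBean {
    private volatile long requestsHandled;
    private volatile long errorsSeen;

    public long getRequestsHandled() { return requestsHandled; }
    public long getErrorsSeen()      { return errorsSeen; }

    public void handleRequest() {
        try {
            requestsHandled++;
            // ... functional work would go here ...
        } catch (RuntimeException e) {
            errorsSeen++;
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        OrderService service = new OrderService();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Registration makes the counters visible to management systems.
        mbs.registerMBean(service, new ObjectName("com.example:type=OrderService"));
        service.handleRequest();
        Thread.sleep(60_000); // keep the JVM alive so a console can attach
    }
}
```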
5.3.13 Diagnostic Hooks
Diagnostic hooks are of greatest value during testing and when an error has been discovered in the production service. They mainly provide the information necessary to solve problems and application errors rapidly and restore service, but they can also be used to provide measurement and management information about applications.
The main categories of information provided are:
- System-level information provided by the OS and hardware
- Software-level information provided by the application infrastructure components such as database, web server or messaging systems
- Custom information provided by the applications
- Information on component and service performance.
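A custom, application-provided hook (the third category above) could be as simple as a method that reports internal state on demand, as in the hypothetical sketch below; the class and the state it exposes are assumptions for illustration.

```java
import java.time.Instant;

// Hypothetical diagnostic hook: a method that support staff or a
// management tool can call to capture custom application information.
public class TransferEngine {
    private volatile int queueDepth;
    private volatile Instant lastSuccessfulRun = Instant.now();

    public void submit() { queueDepth++; }

    /** Diagnostic hook: returns a snapshot of internal state for diagnosis. */
    public String diagnosticSnapshot() {
        return "TransferEngine{queueDepth=" + queueDepth
                + ", lastSuccessfulRun=" + lastSuccessfulRun
                + ", freeMemoryBytes=" + Runtime.getRuntime().freeMemory() + "}";
    }
}
```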
5.3.14 Major Service Outputs From Development
The major outputs from the development phase are:
- Scripts to be run before or after deployment
- Scripts to start or stop the application
- Scripts to check hardware and software configurations of target environments before deployment or installation
- Specification of metrics and events that can be retrieved from the application and that indicate the performance status of the application
- Customized scripts initiated by Service Operation staff to manage the application (including the handling of application upgrades)
- Specification of access control information for the system resources used by an application
- Specification of the details required to track an application's major transactions
- SLA targets and requirements
- Operational requirements and documentation
- Support requirements
- Application recovery and backups
- Other ITSM requirements and targets.
