Service Design


5. Service Design Technology-Related Activities


This chapter considers the technology-related activities of requirement engineering and the development of technology architectures. The technology architectures cover aspects of Service Design in the following areas:

5.1 Requirements Engineering

Requirements engineering is the approach by which sufficient rigour is introduced into the process of understanding and documenting the business's and users' requirements, and ensuring traceability of changes to each requirement. This process comprises the stages of elicitation, analysis (which feeds back into elicitation) and validation. All of these contribute to the production of a rigorous, complete requirements document. The core of this document is a repository of individual requirements that is developed and managed. Often these requirements are instigated by IT, but ultimately they need to be documented and agreed with the business.

There are many guidelines on requirements engineering, including the Recommended Practice for Software Requirements Specifications (IEEE 830), The Software Engineering Body of Knowledge (SWEBOK), CMMI and the V-Model, which is described in detail in the Service Transition publication.

5.1.1 Different Requirement Types
A fundamental assumption here is that the analysis of the current and required business processes results in functional requirements met through IT services (comprising applications, data, infrastructure, environment and support skills).

It is important to note that there are commonly said to be three major types of requirements for any system - functional requirements, management and operational requirements, and usability requirements.

Usability requirements are those that address the 'look and feel' needs of the user and result in features of the service that facilitate its ease of use. This requirement type is often seen as part of management and operational requirements, but for the purposes of this section it will be addressed separately.

5.1.1.1 Functional Requirements
Functional requirements describe the things a service is intended to do, and can be expressed as tasks or functions that the component is required to perform. One approach for specifying functional requirements is through such methods as a system context diagram or a use case model. Other approaches show how the inputs are to be transformed into the outputs (data flow or object diagrams) and textual descriptions.

A system context diagram, for instance, captures all information exchanges between, on the one hand, the IT service and its environment and, on the other, sources or destinations of data used by the service. These information exchanges and data sources represent constraints on the service under development.

A use case model defines a goal-oriented set of interactions between external actors and the service under consideration. Actors are parties outside the service that interact with it. An actor may represent a class of users, roles that users can play, or other services and their requirements. The main purpose of use case modeling is to establish the boundary of the proposed system and fully state the functional capabilities to be delivered to the users. Use cases are also helpful for establishing communication between business and application developers. They provide a basis for sizing and feed the definition of usability requirements. Use cases define all scenarios that an application has to support and can therefore easily be expanded into test cases. Since use cases describe a service's functionality at a level that is understandable to both business and IT, they can serve as a vehicle to specify the functional elements of an SLA, such as the actual business deliverables from the service.
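As an illustrative sketch (not part of the ITIL guidance), a use case and its expansion into test cases might be represented as follows; the 'check balance' example, the actor names and the field names are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A goal-oriented set of interactions between an actor and the service."""
    actor: str
    goal: str
    main_flow: list = field(default_factory=list)          # numbered steps
    alternative_flows: dict = field(default_factory=dict)  # name -> variant steps

    def to_test_cases(self):
        """Expand the use case into test cases: one for the main flow,
        plus one per alternative flow."""
        cases = [("main", self.main_flow)]
        for branch, steps in self.alternative_flows.items():
            cases.append((branch, steps))
        return cases

# Hypothetical 'check balance' use case for a banking service
uc = UseCase(
    actor="Account holder",
    goal="View current account balance",
    main_flow=["log in", "select account", "display balance"],
    alternative_flows={"invalid login": ["log in", "show error", "re-prompt"]},
)
for name, steps in uc.to_test_cases():
    print(name, "->", steps)
```

Because every scenario the use case supports becomes a test case, the model doubles as the starting point for acceptance testing.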

One level 'below' the use case and the context diagram, many other modeling techniques can be applied. These models depict the static and dynamic characteristics of the services under development. A conceptual data model (whether called object or data) describes the different 'objects' in the service, their mutual relationships and their internal structure. Dynamics of the service can be described using state models (e.g. state transition diagrams) that show the various states of the entities or objects, together with events that may cause state changes. Interactions between the different application components can be described using interaction diagrams (e.g. object interaction diagrams). Alongside a mature requirements modelling process, CASE tools can help in getting and keeping these models consistent, correct and complete.
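A state model of the kind mentioned above can be sketched as a simple transition table: valid (state, event) pairs map to new states, and anything else is rejected. The 'order' entity and its states and events below are hypothetical:

```python
# Minimal state model for a hypothetical 'order' entity, mirroring a
# state transition diagram: only the listed transitions are valid.
TRANSITIONS = {
    ("created", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "rejected",
    ("approved", "fulfil"): "closed",
}

def next_state(state, event):
    """Return the new state, or raise if the event is invalid in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

state = "created"
for event in ("submit", "approve", "fulfil"):
    state = next_state(state, event)
print(state)  # closed
```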

5.1.1.2 Management And Operational Requirements
Management and operational requirements (or nonfunctional requirements) are used to define requirements and constraints on the IT service. The requirements serve as a basis for early systems and service sizing and estimates of cost, and can support the assessment of the viability of the proposed IT service. Management and operational requirements should also encourage developers to take a broader view of project goals.

Categories of management and operational requirements include:

The management and operational requirements can be used to prescribe the quality attributes of the application being built. These quality attributes can in turn be used to design test plans for testing the applications' compliance with the management and operational requirements.

5.1.1.3 Usability Requirements
The primary purpose of usability requirements is to ensure that the service meets the expectations of its users with regard to its ease of use. To achieve this:

Like the management and operational requirements, usability requirements can also be used as quality attributes in the design of test plans for testing the applications' compliance with the usability requirements.

5.1.2 Requirements For Support - The User View
Users have formally defined roles and activities as user representatives in requirements definition and acceptance testing. They should be actively involved in identifying all aspects of service requirements, including the three categories above, and also in:

5.1.3 Requirements Investigation Techniques
There is a range of techniques that may be used to investigate business situations and elicit service requirements. Sometimes the customers and the business are not completely sure of what their requirements actually are and will need some assistance and prompting from the designer or requirements gatherer. This must be completed in a sensitive way to ensure that it is not seen as IT dictating business requirements again. The two most commonly used techniques are interviewing and workshops, but these are usually supplemented by other techniques, such as observation and scenarios.

5.1.3.1 Interviews
The interview is a key tool and can be vital in achieving a number of objectives, such as:

There are three areas that are considered during interviews:

The interviewing process is improved when the interviewer has prepared thoroughly as this saves time by avoiding unnecessary explanations and demonstrates interest and professionalism. The classic questioning structure of 'Why, What, Who, When, Where, How' provides an excellent framework for preparing for interviews. It is equally important to formally close the interview by:

It is always a good idea to write up the notes of the interview as soon as possible - ideally straight away and usually by the next day. The advantages of interviewing are:

The disadvantages of interviewing are:

5.1.3.2 Workshops
Figure 5.1 Requirements workshop techniques

Workshops provide a forum in which issues can be discussed, conflicts resolved and requirements elicited. Workshops are especially valuable when time and budgets are tightly constrained, several viewpoints need to be canvassed and an iterative and incremental view of service development is being taken.

The advantages of the workshop are:

There are some disadvantages, including:

The success or failure of a workshop session depends, in large part, on the preparatory work by the facilitator and the business sponsor for the workshop. They should spend time before the event planning the following areas:

During the workshop, a facilitator needs to ensure that the issues are discussed, views are aired and progress is made towards achieving the stated objective. A record needs to be kept of the key points emerging from the discussion.

At the end of the workshop, the facilitator needs to summarize the key points and actions. Each action should be assigned to an owner.

There are two main categories of technique required for a requirements workshop - techniques for discovery and techniques for documentation, as shown in Figure 5.1.

5.1.3.3 Observation
Observing the workplace is very useful in obtaining information about the business environment and the work practices. This has two advantages:

Conversely, being observed can be rather unnerving, and the old saying 'you change when being observed' needs to be factored into your approach and findings.

Formal observation involves watching a specific task being performed. There is a danger of being shown just the 'front-story' without any of the everyday variances, but it is still a useful tool.

5.1.3.4 Protocol Analysis
Protocol Analysis is simply getting the users to perform a task, and for them to describe each step as they perform it.

5.1.3.5 Shadowing
Shadowing involves following a user for a period such as a day to find out about a particular job. It is a powerful way to understand a particular user role. Asking for explanations of how the work is done, or the workflow, clarifies some of the already assumed aspects.

5.1.3.6 Scenario Analysis
Scenario Analysis is essentially telling the story of a task or transaction. Its value is that it helps a user who is uncertain what is needed from a new service to realize it more clearly. Scenarios are also useful when analyzing or redesigning business processes. A scenario will trace the course of a transaction from an initial business trigger through each of the steps needed to achieve a successful outcome.

Scenarios provide a framework for discovering alternative paths that may be followed to complete the transaction. This is extremely useful in requirements elicitation and analysis because real-life situations, including the exceptional circumstances, are debated.

Scenarios offer significant advantages:

The disadvantages of scenarios are that they can be time consuming to develop, and some scenarios can become very complex. Where this is the case, it is easier to analyse if each of the main alternative paths is considered as a separate scenario.

A popular approach to documenting scenario descriptions is to develop use case descriptions to support use case diagrams. However, there are also a number of graphical methods of documenting a scenario, such as storyboards, activity diagrams, task models and decision tree diagrams.

5.1.3.7 Prototyping
Prototyping is an important technique for eliciting, analyzing, demonstrating and validating requirements. It is difficult for users to envisage the new service before it is actually built. Prototypes offer a way of showing the user how the new service might work and the ways in which it can be used. If a user is unclear what they need the service to do for them, utilizing a prototype often releases blocks to thinking and can produce a new wave of requirements. Incremental and iterative approaches to service development, such as the Dynamic Systems Development Method (DSDM), use evolutionary prototyping as an integral part of their development lifecycle.

There is a range of approaches to building prototypes. They may be built using an application development environment so that they mirror the service; images of the screens and navigations may be built using presentation software; or they may simply be 'mock-ups' on paper.

There are two basic methods of prototyping:

It is important to select consciously which is to be used, otherwise there is a danger that a poor-quality mock-up becomes the basis for the real system, causing problems later on.

There is a strong link between scenarios and prototyping because scenarios can be used as the basis for developing prototypes. In addition to confirming the users' requirements, prototyping can often help the users to identify new requirements.

Prototypes are successfully used to:

Potential problems include:

5.1.3.8 Other Techniques
Other techniques that could be used, include:

5.1.4 Problems With Requirements Engineering
Requirements, often seen by users as the uncomplicated part of a new service development, are actually the most problematic aspect, yet the time allocated to them is far less than that given to the other phases.

Tight timescales and tight budgets - both the result of constraints on the business - place pressures on the development team to deliver a service. The trouble is that without the due time to understand and define the requirements properly, the service that is delivered on time may not be the service that the business thought it was asking for.

Studies carried out into IT project failures tell a common story. Many of the projects and unsatisfactory IT services suggest the following conclusions:

These findings are particularly significant because the cost of correcting errors in requirements increases dramatically the later into the development lifecycle they are found.

One of the main problems with requirements engineering is a lack of detailed skill in, and overall understanding of, the area among those who practise it. When performed accurately, the work can integrate requirements from numerous areas through a few well-directed questions.

Other typical problems with requirements have been identified as:

Another problem is an apparent inability on the part of the users to articulate clearly what it is they wish the service to do for them. Very often they are deterred from doing so because the nature of the requirement is difficult to explain in a straightforward statement.

5.1.4.1 Resolving Requirements Engineering Problems
Defining actors
There are some participants that must take part in the requirements process. They represent three broad stakeholder groups:

The user community should be represented by the domain expert (or subject-matter expert) and end-users.

Dealing with tacit knowledge
When developing a new service, the users will pass on to us their explicit knowledge, i.e. knowledge of procedures and data that is at the front of their minds and that they can easily articulate. A major problem when eliciting requirements is that of tacit knowledge, i.e. those other aspects of the work that a user is unable to articulate or explain.

Some common elements that cause problems and misunderstandings are:

Communities of practice are discrete groups of workers - maybe related by task, by department, by geographical location or some other factor - that have their own sets of norms and practices, distinct from other groups within the organization and the organization as a whole.

             Tacit                                Explicit
Individual   Skills, values, taken-for-granted,   Tasks, job descriptions, targets,
             intuitiveness                        volumes and frequencies
Corporate    Norms, back-story, culture,          Procedures, style guides, processes,
             communities of practice              knowledge sharing
Table 5.1 Requirements engineering - tacit and explicit knowledge

Example levels of tacit and explicit knowledge:

Technique Explicit knowledge Tacit knowledge Skills Future requirements
Interviewing
Shadowing
Workshops
Prototyping
Scenario analysis
Protocol analysis
Table 5.2 Requirements engineering; examples of tacit and explicit knowledge (Maiden and Rugg, 1995)

5.1.5 Documenting Requirements
The requirements document is at the heart of the process and can take a number of forms. Typically the document will include a catalogue of requirements, with each individual requirement documented using a standard template. One or more models showing specific aspects, such as the processing or data requirements, may supplement this catalogue.

Before they are formally entered into the catalogue, requirements are subject to careful scrutiny. This scrutiny may involve organizing the requirements into groupings and checking that each requirement is 'well-formed'.

Once the document is considered to be complete, it must be reviewed by business representatives and confirmed to be a true statement of the requirements, at this point in time. During this stage the reviewers examine the requirements and question whether they are well-defined, clear and complete.

As we uncover the requirements from our various users, we need to document them. This is best done in two distinct phases - building the requirements list and, later, developing an organized requirements catalogue. The list tends to be an informal document and can be presented as four columns, as shown in Table 5.3.

Requirements | Source | Comments | Detail level
Table 5.3 Requirements list

Each requirement in the list must be checked to see whether or not it is well formed and SMART (Specific, Measurable, Achievable, Realistic and Timely).
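Parts of the well-formed and SMART check can be automated. The sketch below applies two illustrative heuristics - a vague-word list (unmeasurable terms) and a check for bundled requirements; the specific rules are assumptions for illustration, not part of the ITIL checklist:

```python
# Heuristic 'well-formed' checks for a requirements list entry.
# The vague-word list and the 'and' heuristic are illustrative assumptions.
VAGUE_WORDS = {"fast", "easy", "user-friendly", "flexible", "robust"}

def check_requirement(text):
    """Return a list of problems found; an empty list means the entry
    passes these basic checks."""
    problems = []
    if not text.strip():
        problems.append("empty requirement")
        return problems
    words = {w.strip(".,").lower() for w in text.split()}
    vague = words & VAGUE_WORDS
    if vague:
        problems.append("contains unmeasurable terms: " + ", ".join(sorted(vague)))
    if " and " in text.lower():
        problems.append("may bundle two requirements (contains 'and')")
    return problems

print(check_requirement("The service must be fast and easy to use"))
print(check_requirement("Balance enquiries must complete within 2 seconds"))
```

A check like this only flags candidates for scrutiny; the judgement of whether a requirement is achievable, realistic and timely remains a human activity.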

When checking the individual and totality of requirements, the following checklist can be used:

There are several potential outcomes from the exercise:

5.1.5.1 The Requirements Catalogue
The Requirements Catalogue is the central repository of the users' requirements, and all the requirements should be documented here, following the analysis of the list defined above. The Requirements Catalogue should form part of the overall Service Pipeline within the overall Service Portfolio. Each requirement that has been analyzed is documented using a standard template, such as that shown in Table 5.4.

IT service | Author | Date
Requirement ID | Requirement Name
Source | Owner | Priority | Business Process
Functional Requirement Description
Management and Operational and Usability Requirements Description
Justification
Related Documents
Related Requirements
Comments
Resolution
Version No | Change History | Date | Change request
Table 5.4 Requirements template
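As a sketch, the rows of Table 5.4 could be captured as a record structure in a tooled catalogue; the field types, the MoSCoW priority convention and the example values below are assumptions for illustration:

```python
from dataclasses import dataclass, field

# Field names mirror the template in Table 5.4; defaults and example
# values are illustrative assumptions, not prescribed by the text.
@dataclass
class CatalogueEntry:
    requirement_id: str
    requirement_name: str
    source: str
    owner: str
    priority: str            # e.g. MoSCoW: Must/Should/Could/Won't
    business_process: str
    description: str
    requirement_type: str = "functional"  # or 'management and operational', 'usability'
    justification: str = ""
    related_requirements: list = field(default_factory=list)
    change_history: list = field(default_factory=list)  # (version, date, change request)

entry = CatalogueEntry(
    requirement_id="REQ-042",
    requirement_name="Balance enquiry response time",
    source="Branch operations workshop",
    owner="Retail banking product owner",
    priority="Must",
    business_process="Account servicing",
    description="Balance enquiries complete within 2 seconds",
    requirement_type="management and operational",
)
entry.change_history.append(("1.0", "2010-01-15", "initial entry"))
print(entry.requirement_id, entry.priority)
```

Keeping the change history as part of the record supports the traceability of changes to each requirement that the process demands.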

The key entries in the template are as follows:

The following should be clearly agreed during this prioritization activity:

5.1.5.2 Full Requirements Documentation
An effective requirements document should comprise the following elements:

Managing changes to the documentation
Changes may come about because:

There are a number of specialist support tools on the market to support requirements processes. These are sometimes called CARE (Computer Aided Requirements Engineering) or CASE (Computer Aided Software Engineering). Features include:

5.1.6 Requirements And Outsourcing
The aim is to select standard packaged solutions wherever possible to meet service requirements. However, whether IT requirements are to be purchased off-the-shelf, developed in-house or outsourced, all the activities up to the production of a specification of business requirements are done in-house. Many IT service development contracts assume it is possible to know what the requirements are at the start, and that it is possible to produce a specification that unambiguously expresses the requirements. For all but the simplest services this is rarely true. Requirements analysis is an iterative process - the requirements will change during the period the application and service are being developed. It will require user involvement throughout the development process, as in the DSDM and other 'agile' approaches.

5.1.6.1 Typical Requirements Outsourcing Scenarios
Typical approaches to contract for the development of IT systems to be delivered in support of an IT service are as follows:


5.2 Data and Information Management

Data is one of the critical asset types that need to be managed in order to develop, deliver and support IT services effectively. Data/Information Management is how an organization plans, collects, creates, organizes, uses, controls, disseminates and disposes of its data/information, both structured records and unstructured data. It also ensures that the value of that data/information is identified and exploited, both in support of its internal operations and in adding value to its customer-facing business processes.

A number of terms are common in this area, including 'Data Management', 'Information Management' and 'Information Resource Management'. For the purposes of this publication, the term 'Data Management' is used as shorthand for all three of the above.

The role of Data Management described is not just about managing raw data: it's about managing all the contextual metadata - additional 'data about the data' - that goes with it, and when added to the raw data gives 'information' or 'data in context'.

Data, as the basis for the organization's information, has all the necessary attributes to be treated as an asset (or resource). For example, it is essential for 'the achievement of business objectives and the successful daily workings of an organization'. In addition, it can be 'obtained and preserved by an organization, but only at a financial cost'. Finally it can, along with other resources/assets, be used to 'further the achievement of the aims of an organization'.

Key factors for successful Data Management are as follows:

5.2.1 Managing Data Assets
If data isn't managed effectively:

In addition, whether or not information is derived from good-quality data is a difficult question to answer, because there are no measurements in place against which to compare it. For example, poor data quality often arises because of poor checks on input and/or updating procedures. Once inaccurate or incomplete data have been stored in the IT system, any reports produced using these data will reflect these inaccuracies or gaps. There may also be a lack of consistency between internally generated management information from the operational systems and that from other internal, locally used systems, created because the central data is not trusted.

One way of improving the quality of data is to use a Data Management process that establishes policy and standards, provides expertise and makes it easier to handle the data aspects of new services. This should then allow full Data/Information Asset Management to:

5.2.2 Scope Of Data Management
There are four areas of management included within the scope of Data/Information Management:

The best-practice scope of the Data Management process includes managing non-structured data that is not held in conventional database systems - for example, data in formats such as text, image and audio. It is also responsible for ensuring process quality at all stages of the data lifecycle, from requirements to retirement. The main focus in this publication is on its role in the requirements, design and development phases of the asset and Service Lifecycle.

The team supporting the Data Management process may also provide a business information support service. In this case they are able to answer questions about the meaning, format and availability of data internal to the organization, because they manage the metadata. They also are able to understand and explain what external data might be needed in order to carry out necessary business processes and will take the necessary action to source this.

Critically, when creating or redesigning processes and supporting IT services, it is good practice to consider reusing data and metadata across different areas of the organization. The ability to do this may be supported by a corporate data model - sometimes known as a common information model - to help support re-use, often a major objective for data management.

5.2.3 Data Management and the Service Lifecycle
It is recommended that a lifecycle approach be adopted in understanding the use of data in business processes. General issues include:

5.2.4 Supporting the Service Lifecycle
During requirements and initial design, Data Management can assist design and development teams with service specific data modeling and give advice on the use of various techniques to model data. During detailed ('physical') design and development, the Data Management function can provide technical expertise on database management systems and on how to convert initial 'logical' models of data into physical, product specific, implementations.

Many new services have failed because poor data quality has not been addressed during the development of the service, or because a particular development created its own data and metadata, without consultation with other service owners, or with Data Management.

5.2.5 Valuing Data
Data is an asset and has value. Clearly in some organizations this is more obvious than in others. Organizations that are providers of data to other organizations - Yell, Dun and Bradstreet, and Reuters - can value data as an 'output' in terms of the price that they are charging external organizations to receive it. It's also possible to think of value in terms of what the internal data would be worth to another organization. It's more common to value data in terms of what it's worth to the owner organization. There are a number of suggested ways of doing this:

5.2.6 Classifying Data
Data can be initially classified as operational, tactical or strategic:

An alternative method is to use a security classification of data and documents. This is normally adopted as a corporate policy within an organization.

An orthogonal classification distinguishes between organization-wide data, functional-area data and service specific data.

5.2.7 Setting Data Standards
One of the critical aspects of data administration is to ensure that standards for metadata are in place - for example, what metadata is to be kept for different underlying 'data types'. Different details are kept for structured tabular data than for other kinds of data. 'Ownership' is a critical item of this metadata; others include some sort of unique identifier, a description in business-meaningful terms and a format. The custodian or steward - someone in the IT department who takes responsibility for the day-to-day management of the data - is also recorded.

Another benefit of a Data Management process would be in the field of reference data. Certain types of data, such as postcodes or names of countries, may be needed across a variety of systems and need to be consistent. It is part of the responsibility of data administration to manage reference data on behalf of the whole business, and to make sure that the same reference data is used by all systems in the organization. Standards for naming must be in place; so, for example, if a new type of data is requested in a new service, then there is a need to use names that meet these standards. An example standard might be 'all capitals, no underlining and no abbreviations'.
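A naming standard like the example quoted above ('all capitals, no underlining and no abbreviations') lends itself to automated enforcement. The sketch below is illustrative; the abbreviation list is a hypothetical placeholder that a real data administration function would maintain:

```python
import re

# Validator for the example naming standard above. The abbreviation
# list is a hypothetical placeholder.
KNOWN_ABBREVIATIONS = {"CUST", "ACCT", "NO"}

def valid_data_name(name):
    """Check a proposed data name against the example standard."""
    if not re.fullmatch(r"[A-Z][A-Z ]*", name):
        return False  # must be all capitals (spaces allowed), no underscores
    if any(word in KNOWN_ABBREVIATIONS for word in name.split()):
        return False  # abbreviations are disallowed
    return True

print(valid_data_name("CUSTOMER POSTCODE"))  # True
print(valid_data_name("CUST_POSTCODE"))      # False: underscore and abbreviation
```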

5.2.8 Data Ownership
Data administration can assist the service developer by making sure responsibilities for data ownership are taken seriously by the business and by the IT department. One of the most successful ways of doing this is to get the business and the IT department to sign up to a data charter - a set of procedural standards and guidance for the careful management of data in the organization, by adherence to corporately defined standards. Responsibilities of a data owner are often defined here and may include:

5.2.9 Data Migration
Data migration is an issue where a new service is replacing a number of (possibly just one) existing services, and it's necessary to carry across, into the new service, good-quality data from the existing systems and services. There are two types of data migration of interest to projects here: one is the data migration into data warehouses etc., for business intelligence/analytics purposes; the other is data migration to a new transactional, operational service. In both cases it will be beneficial if data migration standards, procedures and processes are laid down by Data Management. Data migration tools may have already been purchased on behalf of the organization by the Data Management team. Without this support, it's very easy to underestimate the amount of effort that's required, particularly if data consolidation and cleaning has to take place between multiple source systems, and the quality of the existing services' data is known to be questionable.
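A minimal sketch of the consolidation and cleaning step might look like the following; the record fields, the normalization rules and the 'first source wins' merge policy are all illustrative assumptions, and real migrations also need reconciliation reporting for the records that cannot be merged automatically:

```python
# Consolidate customer records from two legacy sources into one target set,
# with basic cleaning and de-duplication. Fields and rules are illustrative.
def clean(record):
    """Normalize a record before loading."""
    return {
        "id": record["id"].strip().upper(),
        "name": " ".join(record["name"].split()).title(),
        "postcode": record.get("postcode", "").replace(" ", "").upper() or None,
    }

def migrate(*sources):
    """Merge sources; the first source seen wins on duplicate ids."""
    target = {}
    for source in sources:
        for raw in source:
            rec = clean(raw)
            target.setdefault(rec["id"], rec)
    return target

legacy_a = [{"id": "c001 ", "name": "alice  smith", "postcode": "ab1 2cd"}]
legacy_b = [{"id": "C001", "name": "Alice Smith"},  # duplicate of the above
            {"id": "C002", "name": "bob jones", "postcode": ""}]
merged = migrate(legacy_a, legacy_b)
print(len(merged))  # 2
```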

5.2.10 Data Storage
One area where technology has moved on very rapidly is the storage of data. There is a need to consider different storage media - for example, optical storage - and to be aware of the size and cost implications associated with them. The main reason for understanding the developments in this area is that they make possible many types of data management that were previously considered too expensive. For example, storing real-time video, which uses enormous bandwidth, was until the last two to three years regarded as too expensive. The same is true of scanning large numbers of paper documents, particularly where those documents are not text-based but contain detailed diagrams or pictures. Understanding technology developments in the electronic storage of data is critical to understanding the opportunities for the business to exploit the information resource effectively by making the best use of new technology.

5.2.11 Data Capture
It is also very important to work with Data Management on effective measures for data capture. The aim here is to capture data as quickly and accurately as possible. There is a need to ensure that the data capture processes require the minimum amount of keying, and exploit the advantages that graphical user interfaces provide in terms of minimizing the number of keystrokes needed, also decreasing the opportunity for errors during data capture. It is reasonable to expect that the Data Management process has standards for, and can provide expertise on, effective methods of data capture in various environments, including 'non-structured' data capture using mechanisms such as scanning.

5.2.12 Data Retrieval And Usage
Once the data has been captured and stored, the next aspect to consider is the retrieval of information from the data. Services that allow easy access to structured data via query tools of various levels of sophistication are needed by all organizations, and generate their own specific architectural demands. The whole area of searching within scanned text and other non-structured data, such as video, still images or sound, is a major area of expansion. Techniques such as automatic indexing, and the use of search engines to give efficient access via keywords to relevant parts of a document, are essential technologies that have been widely implemented, particularly on the internet. Expertise in the use of data and content within websites should exist within Data Management, as should Content Management standards and procedures, which are vital for websites.
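The automatic indexing described above can be sketched as an inverted index: a map from each keyword to the set of documents containing it, which is what makes keyword lookup efficient. The example documents are hypothetical:

```python
from collections import defaultdict

# Minimal automatic indexing: build an inverted index mapping keywords
# to the documents that contain them.
def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word.strip(".,")].add(doc_id)
    return index

docs = {
    "d1": "service design covers data management",
    "d2": "data capture and data retrieval",
}
index = build_index(docs)
print(sorted(index["data"]))  # ['d1', 'd2']
```

Real search engines add stemming, stop-word removal and relevance ranking on top of this basic structure.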

5.2.13 Data Integrity And Related Issues
When defining requirements for IT services, it is vital that management and operational requirements related to data are considered. In particular, the following areas must be addressed:

Data integrity is concerned with ensuring that the data is of high quality and uncorrupted. It is also about preventing uncontrolled data duplication, and hence avoiding any confusion about what is the valid version of the data. There are several approaches that may assist with this. Various technology devices such as 'database locking' are used to prevent multiple, inconsistent, updating of data. In addition, prevention of illegal updating may be achieved through access control mechanisms.
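One variant of the locking devices mentioned above is optimistic locking: each record carries a version number, and an update succeeds only if the caller's copy is still current, preventing two parties from silently overwriting each other. This sketch is a simplified in-memory illustration, not a description of any particular database product:

```python
# Optimistic locking sketch: an update succeeds only if the caller read
# the current version, preventing inconsistent concurrent updates.
class Record:
    def __init__(self, value):
        self.value = value
        self.version = 1

    def update(self, new_value, expected_version):
        if expected_version != self.version:
            raise RuntimeError("stale update: record changed since it was read")
        self.value = new_value
        self.version += 1

rec = Record("draft")
v = rec.version            # reader A takes a copy at version 1
rec.update("approved", v)  # A's update succeeds; version becomes 2
try:
    rec.update("rejected", v)  # reader B still holds version 1: rejected
except RuntimeError as e:
    print(e)
```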


5.3 Application Management

An application is defined here as the software program(s) that perform those specific functions that directly support the execution of business processes and/or procedures. Applications, along with data and infrastructure components such as hardware, the operating system and middleware, make up the technology components that are part of an IT service. The application itself is only one component, albeit an important one, of the service. It is therefore important that the application delivered matches the agreed requirements of the business. However, many organizations spend too much time focusing on the functional requirements of the new service and application, and insufficient time designing the management and operational requirements (non-functional requirements) of the service. When such a service becomes operational, it delivers all of the required functionality but fails to meet the expectations of the business and the customers in terms of quality and performance, and therefore becomes unusable.

Two complementary approaches are necessary to fully implement Application Management. One approach employs an extended Service Development Lifecycle (SDLC) to support the development of an IT service. The SDLC is a systematic approach to problem solving, composed of the following steps:

The other approach takes a global view of all services to ensure the ongoing maintainability and manageability of the applications:

Application name           | IT operations owner                          | New development cost
Application identifier     | IT development owner                         | Annual operational costs
Application description    | Support contacts                             | Annual support cost
Business process supported | Database technologies                        | Annual maintenance costs
IT services supported      | Dependent applications                       | Outsourced components
Executive sponsor          | IT systems supported                         | Outsource partners
Geographies supported      | User interfaces                              | Production metrics
Business criticality       | IT Architecture, including network topology  | OLA link
SLA link                   | Application technologies used                | Support metrics
Business owner             | Number of users                              |

Table 5.5 Application Portfolio attributes example

5.3.1 The Application Portfolio
This is simply a full record of all applications within the organization and is dynamic in its content.

5.3.2 Linking Application and Service Portfolios
Some organizations maintain a separate Application Portfolio with separate attributes, while in others the Application Portfolio is stored within the CMS, together with the appropriate relationships. Others combine the Application Portfolio with the Service Portfolio. It is for each organization to decide the most appropriate strategy for its own needs. What is clear is that there should be very close relationships and links between the applications, the services they support and the infrastructure components used.
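Whichever storage strategy is chosen, the essential point is that each portfolio entry carries explicit links to the services and infrastructure components it relates to. A minimal sketch of such a linked record, using hypothetical identifiers and a simple in-memory portfolio:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    identifier: str
    name: str
    services: list = field(default_factory=list)    # services this application supports
    components: list = field(default_factory=list)  # infrastructure CIs it depends on

# Hypothetical portfolio keyed by application identifier.
portfolio = {
    "APP-001": Application("APP-001", "Order Entry",
                           services=["SVC-Retail"],
                           components=["SRV-12", "DB-03"]),
}

def services_for(portfolio, app_id):
    """Follow the link from an application to the services it supports."""
    return portfolio[app_id].services
```

In a real CMS these links would be stored as relationships between configuration items rather than plain lists, but the traversal from application to service to component is the same.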

5.3.3 Application Frameworks
The concept of an application framework is a very powerful one. The application framework covers all management and operational aspects and actually provides solutions for all the management and operational requirements that surround an application.

Implied in the use of application frameworks is the concept of standardization. If an organization has to develop and maintain a separate application framework for every single application, the benefits of using frameworks are largely lost; the value comes from sharing a small number of standard frameworks across many applications.

An organization that wants to develop and maintain application frameworks, and to ensure the application frameworks comply with the needs of the application developers, must invest in doing so. It is essential that applications framework architectures are not developed in isolation, but are closely related and integrated with all other framework and architectural activities. The Service, Infrastructure, Environment and Data Architectures must all be closely integrated with the Application Architecture and framework.

Architecture, application frameworks and standards
Architecture-related activities have to be planned and managed separately from individual system-based software projects. It is also important that architecture-related activities be performed for the benefit of more than just one application. Application developers should focus on a single application, while application framework developers should focus on more than one application, and on the common features of those applications in particular.

A common practice is to distinguish between various types of applications. For instance, not every application can be built on top of a Microsoft® Windows operating system platform, connected to a UNIX server, using HTML, Java applets, JavaBeans and a relational database. The various types of applications can be regarded as application families. All applications in the same family are based on the same application framework.

Utilizing the concept of an application framework, the first step of the application design phase is to identify the appropriate application framework. If the application framework is mature, a large number of the design decisions are given. If it is not mature, and all management and operational requirements cannot be met on top of an existing application framework, the preferred strategy is to collect and analyse the requirements that cannot be dealt with in the current version of the application framework. Based on the application requirements, new requirements can be defined for the application framework. Next, the application framework can be modified so that it can cope with the application requirements. In fact, the whole family of applications that corresponds to the application framework can then use the newly added or changed framework features.

Developing and maintaining an application framework is a demanding task and, like all other design activities, should be performed by competent and experienced people. Alternatively, application frameworks can be acquired from third parties.

5.3.4 The Need For Case Tools And Repositories
One important aspect of that overall alignment is the need to align applications with their underlying support structures. Application development environments traditionally have their own Computer-Aided Software Engineering (CASE) tools that offer the means to specify requirements, draw design diagrams (according to particular modelling standards), or even generate complete applications, or nearly complete application skeletons, almost ready to be deployed. These environments also provide a central location for storing and managing all the elements that are created during application development, generally called a repository. Repository functionality includes version control and consistency checking across various models. The current approach is to use meta-CASE tools to model domain-specific languages and use these to make the CASE work more aligned to the needs of the business.

5.3.5 Design Of Specific Applications
The requirements phase of the lifecycle was addressed earlier in the requirements engineering section. The design phase is one of the most important phases within the application lifecycle. It ensures that an application is conceived with operability and Application Management in mind. This phase takes the outputs from the requirements phase and turns them into the specification that will be used to develop the application. The goal for designs should be satisfying the organization's requirements. Design includes the design of the application itself, and the design of the infrastructure and environment within which the application operates. Architectural considerations are the most important aspect of this phase, since they can impact on the structure and content of both application and operational model. Architectural considerations for the application (design of the Application Architecture) and architectural considerations for the environment (design of the IT Architecture) are strongly related and need to be aligned. Application Architecture and design should not be considered in isolation but should form an overall integrated component of service architecture and design.

Generally, in the design phase, the same models will be produced as have been delivered in the requirements phase, but during design many more details are added. New models include the architecture models, where the way in which the different functional components are mapped to the physical components (e.g. desktops, servers, databases and network) needs to be defined. The mapping, together with the estimated load of the system, should allow for the sizing of the infrastructure required.

Another important aspect of the architecture model is the embedding of the application in the existing environment. Which pieces of the existing infrastructure will be used to support the required new functions? Can existing servers or networks be used? With what impact? Are required functions available in existing applications that can be utilized? Do packages exist that offer the functionality needed or should the functions be built from scratch?

The design phase takes all requirements into consideration and starts assembling them into an initial design for the solution. Doing this not only gives developers a basis to begin working: it is also likely to bring up questions that need to be asked of the customers/users. If possible, application frameworks should be applied as a starting point.

It is not always possible to foresee every aspect of a solution's design ahead of time. As a solution is developed, new things will be learned about how to do things and also how not to. The key is to create a flexible design, so that making a change does not send developers all the way back to the beginning of the design phase. There are a number of approaches that can minimize the chance of this happening, including:

Design for management and operational requirements means giving management and operational requirements a level of importance similar to that for the functional requirements, and including them as a mandatory part of the design phase. This includes a number of management and operational requirements such as availability, capacity, maintainability, reliability and security. It is now inconceivable in modern application development projects that user interface design (usability requirements) would be omitted as a key design activity. However, many organizations ignore or forget manageability. Details of the necessary management and operational requirements are contained within the SDP and SAC in Appendices A and B.

5.3.6 Managing Trade-offs
Managing trade-off decisions focuses on balancing the relationship among resources, the project schedule, and the features that need to be included in the application for the sake of quality. When development teams attempt this balancing, it is often at the expense of the management and operational requirements. One way to avoid that is to include management and operational requirements in application-independent design guidelines - for example, in the form of an application framework. Operability and manageability then become standard components of all design processes and get embedded into the working practices and culture of the development organization.

5.3.7 Typical Design Outputs
The following are examples of the outputs from an applications design forming part of the overall Service Design:

There are guidelines and frameworks that can be adopted to determine and define design outputs within Application Management, such as Capability Maturity Model Integration (CMMI).

5.3.8 Design Patterns
A design pattern is a general, repeatable solution to a commonly occurring problem in software design. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Design patterns describe both a problem and a solution for common issues encountered during application development.

An important design principle used as the basis for a large number of the design patterns found in recent literature is that of separation of concerns. Separation of concerns leads to applications divided into components, with strong cohesion within components and minimal coupling between them. The advantage of such an application is that modifications can be made to individual components with little or no impact on other components.
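Separation of concerns can be illustrated in miniature. In this sketch (with made-up tax and formatting rules), the pricing logic and the presentation logic are separate components that meet only at a narrow interface, so either can change without touching the other:

```python
# Pricing logic (one concern) knows nothing about presentation.
def net_price(gross, vat_rate=0.20):
    """Strip VAT from a gross amount; tax rules live only here."""
    return round(gross / (1 + vat_rate), 2)

# Presentation (another concern) knows nothing about tax rules.
def format_price(amount, currency="GBP"):
    """Render an amount for display; formatting rules live only here."""
    return f"{amount:.2f} {currency}"

# The two components are coupled only through a plain number, so a
# change to VAT handling cannot break the display code, and vice versa.
label = format_price(net_price(120.00))
```

The same principle, applied at architectural scale, is what allows a management framework to change without forcing every application component to change with it.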

In typical application development projects, more than 70% of the effort is spent on designing and developing generic functions and on satisfying the management and operational requirements. That is because each individual application needs to provide a solution for such generic features as printing, error handling and security.

Among others, the Object Management Group (OMG, www.omg.org) has defined a large number of services that are needed in every application. OMG's Object Management Architecture (OMA) clearly distinguishes between functional and management and operational aspects of an application. It builds on the concept of providing a run-time environment that offers all sorts of facilities to an application.

In this concept, the application covers the functional aspects, and the environment covers all management and operational aspects. Application developers should, by definition, focus on the functional aspects of an application, while others can focus on the creation of the environment that provides the necessary management and operational services. This means that the application developers focus on the requirements of the business, while the architecture developers or application framework developers focus on the requirements of the application developers.
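The division of labour described above, where the environment supplies management and operational services so that application code stays purely functional, can be sketched with a wrapper. This is an illustrative sketch only: `managed` stands in for whatever facilities the run-time environment actually provides, and `approve_order` is a made-up business function.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def managed(func):
    """Environment-supplied wrapper: logging and error handling live here,
    so the wrapped application function carries only business logic."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s", func.__name__)
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("failure in %s", func.__name__)
            raise
    return wrapper

@managed
def approve_order(order_id):
    # Pure business logic: no logging or error-handling clutter.
    return f"order {order_id} approved"
```

The application developer writes only the body of `approve_order`; the framework developer owns `managed`, mirroring the split between business-facing and framework-facing roles described above.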

5.3.9 Developing Individual Applications
Once the design phase is completed, the application development team will take the designs that have been produced and move on to developing the application. Both the application and the related environment are made ready for deployment. Application components are coded or acquired, integrated and tested.

To ensure that the application is developed with management at the core, the development team needs to focus on ensuring that the developing phase continues to correctly address the management and operational aspects of the design (e.g. responsiveness, availability, security).

The development phase guidance covers the following topics:

5.3.10 Consistent Coding Conventions
The main reason for using a consistent set of design and coding conventions is to standardize the structure and coding style of an application so that everyone can easily read, understand and manage the application development process. Good design and coding conventions result in precise, readable and unambiguous source code that is consistent with the organizational coding and management standards and is as intuitive to follow as possible. Adding application operability into this convention ensures that all applications are built in a way that ensures that they can be fully managed all the way through their lifecycles.

A coding convention itself can be a significant aid to managing the application, as consistency allows the management tools to interact with the application in a known way. It is better to introduce a minimum set of conventions that everyone will follow rather than to create an overly complex set that encompasses every facet but is not followed or used consistently across the organization.

5.3.11 Templates And Code Generation
A number of development tools provide a variety of templates for creating common application components. Rather than creating all the pieces of an application from scratch, developers can customize an existing template. They can also re-use custom components in multiple applications by creating their own templates. Other development tools will generate large pieces of code (skeletons) based on the design models and coding conventions. The generated code can include hooks marking the places where application-specific code needs to be added.
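The skeleton-with-hooks idea can be shown with the standard library's `string.Template`. This is a deliberately tiny sketch of code generation; real tools generate from design models rather than a string, and the `${name}Handler` class shape is a made-up convention.

```python
from string import Template

# A generation template: everything fixed by convention is written once,
# and each HOOK comment marks where component-specific code belongs.
SKELETON = Template('''\
class ${name}Handler:
    """Generated skeleton for the ${name} component."""

    def handle(self, request):
        # HOOK: add ${name}-specific processing here
        raise NotImplementedError
''')

generated = SKELETON.substitute(name="Billing")
```

Because every skeleton comes from the same template, the coding conventions of the previous section are applied automatically rather than relying on each developer to remember them.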

In this respect, templates and application frameworks should be considered IT assets. These assets not only guide the development of applications, but also incorporate the lessons learned, or intellectual capital, from previous application development efforts. The more standard components are designed into the solution, the faster applications can be developed and at lower long-term cost (not ignoring the fact that developing templates, code generators and application frameworks requires significant investment).

5.3.12 Embedded Application Instrumentation
The development phase deals with incorporating instrumentation into the fabric of the application. Developers need a consistent way to provide instrumentation for application drivers/middleware components (e.g. database drivers) and applications that is efficient and easy to implement. To keep application developers from reinventing the wheel with every new application they develop, the computer industry provides methods and technologies to simplify and facilitate the instrumentation process.

These include:

Each of these technologies provides a consistent and richly descriptive model of the configuration, status and operational aspects of applications and services. These are provided through Application Programming Interfaces (APIs) that the developer incorporates into an application, normally through the use of standard programming templates.
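The shape of such an instrumentation API can be sketched as a small registry that application code reports into and management tools read from. This is an assumption-laden miniature, not any real instrumentation standard: the class and counter names are invented for illustration.

```python
import time

class Instrumentation:
    """Minimal sketch of an instrumentation API: applications increment
    named counters, and a management tool reads a consistent snapshot."""

    def __init__(self):
        self.counters = {}

    def increment(self, name, by=1):
        self.counters[name] = self.counters.get(name, 0) + by

    def snapshot(self):
        # A management tool would poll this through a common interface.
        return dict(self.counters, timestamp=time.time())

probe = Instrumentation()
probe.increment("requests")
probe.increment("requests")
probe.increment("errors")
```

The value of a shared model is that every application exposes its status the same way, so one monitoring tool can manage all of them without per-application adapters.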

It is important to ensure that all applications are built to conform to some level of compliance for the application instrumentation. Ways to do this could include:

5.3.13 Diagnostic Hooks
Diagnostic hooks are of greatest value during testing and when an error has been discovered in the production service. Diagnostic hooks mainly provide the information necessary to solve problems and application errors rapidly and restore service. They can also be used to provide measurement and management information of applications.
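A diagnostic hook can be sketched as a wrapper that records the inputs, duration and traceback of a failing call, so the error can be diagnosed without re-running the transaction. The decorator name and the `divide` example are invented for illustration; real hooks would write to the service's logging and event infrastructure.

```python
import functools
import time
import traceback

def diagnostic_hook(func):
    """Capture inputs, elapsed time and the traceback of any failure,
    exposing them for rapid diagnosis after a production error."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        started = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            wrapper.last_error = {
                "args": args,
                "kwargs": kwargs,
                "elapsed": time.perf_counter() - started,
                "traceback": traceback.format_exc(),
            }
            raise
    wrapper.last_error = None
    return wrapper

@diagnostic_hook
def divide(a, b):
    return a / b
```

On the happy path the hook costs almost nothing; after a failure, `divide.last_error` holds exactly the context a support analyst needs to restore service quickly.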

The three main categories are:

5.3.14 Major Service Outputs From Development
The major outputs from the development phase are:
