Support - Quality Control

Practice ID | Artifacts | Considerations
| | Direct | Indirect
| Goal 1: Adherence of the performed process and associated work products and services to applicable process descriptions, standards, and procedures is objectively evaluated.
| | Practice 1.1 - Objectively evaluate the designated performed processes against the applicable process descriptions, standards, and procedures.
Practice 1.2 - Objectively evaluate the designated work products and services against the applicable process descriptions, standards, and procedures.
| - Evaluation reports
- Noncompliance reports
| - Corrective actions
- Quality assurance plan, identifying the processes subject to evaluation, and procedures for performing evaluations.
- Applicable process descriptions, standards, and procedures.
- Action items for noncompliance issues, tracked to closure.
- Criteria and checklists used for process and work product evaluations (e.g., what, when, how, who).
- Schedule for performing process evaluations (planned, actual) at selected milestones throughout the product development life cycle.
- Quality assurance records, reports, or database.
- Records of reviews or events indicating QA involvement (e.g., attendance lists, signatures)
| - Objectivity in quality assurance evaluations is critical to the success of
the project. A description of the quality assurance reporting chain and
how it ensures objectivity should be defined.
- The frequency of evaluations or audits is typically defined in a quality assurance plan. Look for evaluations performed throughout the lifecycle, not just at the end of a project or in close proximity to the appraisal. For example, management processes lend themselves to periodic examination; technical processes lend themselves to event-driven examination based on project schedules and progress.
- A typical implementation of this practice is through the development and use of a quality assurance plan that may be a standalone document or incorporated into another plan.
- Depending on the culture of the organization, the process and product quality assurance role may be performed, partially or completely, by peers, and the quality assurance function may be embedded in the process. Where the quality assurance function is embedded in the process, there must be a review of the objectivity aspect to ensure that the evaluation mechanism is being adequately applied.
- A common misunderstanding is that the existence of a work product indicates satisfactory execution of the process. For all process evaluations, examine how the evaluation was done to ensure that the evaluation mechanism considers more than just work product existence.
| | Goal 2: Noncompliance issues are objectively tracked and communicated, and resolution is ensured.
| | Practice 2.1 - Communicate quality issues and ensure resolution of noncompliance issues with the staff and managers.
| - Corrective action and evaluation reports
| - Quality trends
- Action items for noncompliance issues, tracked to closure.
- Revised work products, standards and procedures, or waivers issued to resolve noncompliance issues.
- Reports or briefings communicating noncompliance issues to relevant stakeholders.
- Evidence of reviews held periodically to receive and act upon noncompliance issues.
- Quality metrics and trend analyses.
- Tracking system or database for noncompliance issues.
| - The status of noncompliance issues provides an indication of quality trends (a minimal sketch follows these considerations).
- When local resolution of noncompliance issues cannot be obtained, established escalation mechanisms should be used to ensure that the appropriate level of management can resolve the issue. Track noncompliance issues to resolution.
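A minimal sketch of how noncompliance status records might be turned into a quality trend, assuming hypothetical issue records with opened/closed dates; the data, field layout, and snapshot dates are illustrative only:

```python
from datetime import date

# Hypothetical noncompliance issues: (opened, closed) dates; None = still open.
issues = [
    (date(2024, 1, 10), date(2024, 1, 20)),
    (date(2024, 2, 3), None),
    (date(2024, 2, 15), date(2024, 3, 1)),
    (date(2024, 3, 5), None),
]

def open_count(as_of: date) -> int:
    """Count noncompliance issues still open on a given date."""
    return sum(1 for opened, closed in issues
               if opened <= as_of and (closed is None or closed > as_of))

# Month-end snapshots give a simple trend of open noncompliance issues.
for snapshot in (date(2024, 1, 31), date(2024, 2, 29), date(2024, 3, 31)):
    print(snapshot.isoformat(), open_count(snapshot))
```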
| | Practice 2.2 - Establish and maintain records of the quality assurance activities.
| - Evaluation logs
- Quality assurance reports and records of activities.
| - Status reports of corrective actions
- Status reports of quality assurance activities
- Reports of quality trends
- Noncompliance actions, reports, logs, or database
- Completed evaluation checklists
- Schedule for performing process and product evaluations (planned, actual).
- Records of reviews or events indicating QA involvement (e.g., attendance lists, signatures)
- Metrics or analyses used for quality assurance of processes and work products.
| - Recording of activities in sufficient detail such that status and results are known
- Examination of records to ensure that appropriate actions are being taken, and resources are being applied, to manage the effectiveness of the quality assurance function.
Support - Measurement & Analysis

Practice ID | Artifacts | Considerations
| | Direct | Indirect
| Goal 1: Measurement objectives and activities are aligned with identified information needs and objectives.
| | Practice 1.1 - Establish and maintain measurement objectives that are derived from identified information needs and objectives.
|
| - Alignment between business goals, measurement objectives/goals, information needs/objectives
- Identified information needs, objectives, and priorities
- Documented sources of information needs
- Reviews of measurement objectives with affected stakeholders.
| - The sources for measurement objectives may be management,
technical, project, product, or process implementation needs
- Measurement objectives may be constrained by existing
processes, available resources, or other measurement considerations
- Judgments may need to be made about whether the value of the results
will be commensurate with the resources devoted to doing the work
- Modifications to identified information needs and objectives may, in
turn, be indicated as a consequence of the process and results of
measurement and analysis
- Sources of information needs and objectives:
- Project plans
- Monitoring of project performance
- Interviews with managers and others who have information needs
- Established management objectives
- Strategic plans
- Business plans
- Formal requirements or contractual obligations
- Recurring or other troublesome management or technical problems
- Experiences of other projects or organizational entities
- External industry benchmarks
- Process-improvement plans
- Reference the Project Planning process area for more information about
estimating project attributes and other planning information needs
- Reference the Project Monitoring and Control process area for more
information about project performance information needs
- Reference the Requirements Development process area for more
information about meeting customer requirements and related
information needs
- Reference the Requirements Management process area for more
information about maintaining requirements traceability and related
information needs.
| | Practice 1.2 - Specify measures to address the measurement objectives.
|
| - Linkage between measures and project / organization measurement objectives and information needs
- Algorithms, templates, checklists, procedures, or other means of consistently collecting and recording measures for the identified product, project, and process attributes.
- Evidence of review of proposed specifications with stakeholders and other end users.
- List of prioritized measures.
| - Measures may be (a minimal example follows these considerations):
- base: obtained by direct measurement
- derived: calculated from other data, typically by combining two or more base measures.
- The measurement objectives are refined into specific measures:
- KEY GOAL INDICATORS (KGI): define measures that tell management — after the fact — whether an IT process has achieved its business requirements, usually expressed in terms of information criteria:
- Availability of information needed to support the business needs
- Absence of integrity and confidentiality risks
- Cost-efficiency of processes and operations
- Confirmation of reliability, effectiveness and compliance.
- KEY PERFORMANCE INDICATORS (KPI): define measures to determine how well the IT process is performing in enabling the goal to be reached; are lead indicators of whether a goal will likely be reached or not; and are good indicators of capabilities, practices and skills.
- Operational definitions are stated in precise and unambiguous terms. They
address two important criteria:
- Communication: What has been measured, how was it measured, what are the
units of measure, and what has been included or excluded?
- Repeatability: Can the measurement be repeated, given the same definition, to
get the same results?
- Proposed specifications of the measures are reviewed for their appropriateness
with potential end users and other relevant stakeholders. Priorities are set or
changed, and specifications of the measures are updated as necessary.
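A minimal illustration of the base/derived distinction referenced in the first bullet above; the defect and size figures are invented, and defect density is just one common derived measure, not one prescribed here:

```python
# Base measures: obtained by direct measurement.
defects_found = 42   # e.g., count exported from a defect tracking system
size_kloc = 12.5     # e.g., thousands of lines of code from a size tool

# Derived measure: combined from two or more base measures.
defect_density = defects_found / size_kloc  # defects per KLOC
print(f"Defect density: {defect_density:.2f} defects/KLOC")
```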
| | Practice 1.3 - Specify how measurement data will be obtained and stored.
| - Data collection and storage procedures - who (responsibilities), how (procedures and tools), when (frequency), where (repository).
| - Data collection tools
- Data collection mechanisms and supporting tools
- Raw data collected, time tagged, and stored.
- Analysis reports and trending indicating completeness of collected data.
- Measurement repository.
- Reports of invalid or discarded data.
| - Explicit specifications are made of how, where, and when the data will be
collected. Procedures for collecting valid data are specified. The data are stored in
an accessible manner for analysis, and it is determined whether they will be saved
for possible re-analysis or documentation purposes.
- Explicit specification of collection methods helps ensure that the right
data are collected properly. It may also aid in further clarifying
information needs and measurement objectives (a minimal sketch of a collection specification follows these considerations)
- Proper attention to storage and retrieval procedures helps ensure that
data are available and accessible for future use.
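One way such a who/how/when/where specification might be captured, sketched as a plain data structure; every field value (names, frequency, repository path) is a hypothetical placeholder:

```python
# Hypothetical collection and storage specification for a single measure.
collection_spec = {
    "measure": "defects_found",
    "who": "QA lead",                                  # responsibility
    "how": "weekly export from the defect tracker",    # procedure and tools
    "when": "every Friday, close of business",         # frequency
    "where": "measurement_repository/defects.csv",     # storage location
    "retention": "keep raw data for possible re-analysis",
}
print(collection_spec["measure"], "->", collection_spec["where"])
```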
| | Practice 1.4 - Specify how measurement data will be analyzed and reported.
| - Analysis specification and procedures - Analysis descriptions, including who (responsibilities), how (procedures and tools), when (frequency), where (repository), and how the results will be used.
| - Data analysis tools
- Results of data analyses
- Alignment of data analyses with measurement objectives
- Evidence of evaluations or meetings held to review measurement analyses
- Criteria for evaluating the utility of measurement and analysis data
- Revisions to measures and measurement objectives.
| - Early attention should be paid to the analyses that will be conducted and to the
manner in which the results will be reported. These should explicitly address the documented measurement objectives and presentation of the results should be clearly understandable by the target audiences
- Clarification of analysis criteria
can affect measurement. Specifications for some measures may be refined further
based on the specifications established for data analysis procedures. Other
measures may prove to be unnecessary, or a need for additional measures may
be recognized.
- Criteria for evaluating the utility of the analysis:
- The results are (1) provided on a timely basis, (2) understandable, and (3) used
for decision making
- The work does not cost more to perform than is justified by the benefits that it
provides.
- Criteria for evaluating the conduct of the measurement and analysis:
- The amount of missing data or the number of flagged inconsistencies is beyond
specified thresholds.
- There is selection bias in sampling.
- The measurement data are repeatable.
- Statistical assumptions are acceptable.
| | Goal 2: Measurement results that address identified information needs and objectives are provided.
| | Practice 2.1 - Obtain specified measurement data.
| - Base and derived measurement data sets
- Raw data collected, time tagged, and stored in accordance with defined data collection procedures (Practice 1.3)
- Derived measures calculated from collected base measures.
| - Results of data integrity tests
- Measurement repository populated with the specified measures
- Analysis reports and trending indicating completeness of collected data
- Results of integrity checks (e.g., tools, forms, reviews); reports of invalid or discarded data.
| - All measurements are subject to error in specifying or recording data. It is always
better to identify such errors and to identify sources of missing data early in the
measurement and analysis cycle.
- Checks can include scans for missing data, out-of-bounds data values, and
unusual patterns and correlation across measures (a minimal sketch of such checks follows these considerations).
- It is important to test and correct for inconsistency of classifications made by human judgment
(i.e., to determine how frequently people make differing classification decisions
based on the same information, otherwise known as “inter-coder reliability”).
- It is important to empirically examine the relationships among the measures that are used to
calculate additional derived measures.
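A minimal sketch of the checks described above, assuming invented samples, assumed bounds, and two coders' classification lists; Cohen's kappa is used here as one common inter-coder reliability statistic:

```python
# Integrity checks: missing data and out-of-bounds values (bounds are assumed).
samples = [3.2, None, 7.8, 42.0, 5.1]
LOW, HIGH = 0.0, 10.0
missing = [i for i, v in enumerate(samples) if v is None]
out_of_bounds = [i for i, v in enumerate(samples)
                 if v is not None and not (LOW <= v <= HIGH)]
print("missing:", missing, "out of bounds:", out_of_bounds)

# Inter-coder reliability: Cohen's kappa for two coders classifying the
# same defects (observed agreement corrected for chance agreement).
coder_a = ["design", "code", "code", "docs", "code"]
coder_b = ["design", "code", "docs", "docs", "code"]

def cohens_kappa(a: list, b: list) -> float:
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(c) / n) * (b.count(c) / n)
                     for c in set(a) | set(b))
    return (p_observed - p_expected) / (1 - p_expected)

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # 0.69 here
```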
| | Practice 2.2 - Analyze and interpret measurement data.
| - Analysis results and draft reports.
| - Representations for analysis results
- Evidence of evaluations or meetings held to review measurement analyses
- Follow-up analyses performed to address areas of concern, if necessary.
- Revisions of criteria for future analysis.
| - The results of data analyses are rarely self-evident. Criteria for interpreting the
results and drawing conclusions should be stated explicitly
- The results of planned analyses may suggest (or require) additional, unanticipated
analyses, or they may identify needs to refine existing measures, to
calculate additional derived measures, or even to collect data for additional
base measures to properly complete the planned analysis
| | Practice 2.3 - Manage and store measurement data, measurement specifications, and analysis results.
| - Measurement repository with historical data and results.
| - Contextual information for understanding and interpreting the measures, and assessing them for reasonableness and applicability.
- Measurement repository, with access restriction to the stored data
| - Information stored typically includes measurement plans, specifications of measures, sets of data that have been collected, and analysis reports and presentations
- The stored information contains or references the information needed to
understand and interpret the measures and assess them for
reasonableness and applicability
- Data sets for derived measures typically can be recalculated and need
not be stored. However, it may be appropriate to store summaries
based on derived measures. Interim analysis results need not be stored
separately if they can be efficiently reconstructed
- Projects may choose to store project-specific data and results in a
project-specific repository. When data are shared more widely across
projects, the data may reside in the organization’s measurement
repository.
- Reference the Establish the Organization’s Measurement Repository
specific practice of the Organizational Process Definition process area
for more information about establishing the organization’s measurement
repository
- Reference the Configuration Management process area for information on
managing measurement work products.
| | Practice 2.4 - Report results of measurement and analysis activities to all relevant stakeholders.
| - Delivered reports and related analysis results.
| - Contextual data or guidance to aid in interpretation of analysis results
- Presentations of data analyses and reports
- Measurement indicator templates
- Distribution lists or web pages for communicating measurement results.
| - Measurement results are communicated in time to be used for their intended
purposes. Reports are unlikely to be used if they are distributed with little effort to
follow up with those who need to know the results
- To the extent possible and as part of the normal way they do business, users of
measurement results are kept personally involved in setting objectives and
deciding on plans of action for measurement and analysis. The users are regularly
kept apprised of progress and interim results.
- Reference the Project Monitoring and Control process area for more
information on the use of measurement results.
Support - Decision Analysis & Resolution

Practice ID | Artifacts | Considerations
| | Direct | Indirect
| Goal 1: Decisions are based on an evaluation of alternatives using established
criteria.
| | Practice 1.1 - Establish and maintain guidelines to determine which issues are subject to a formal evaluation process.
| - Guidelines for when to apply a formal evaluation process
| - Criteria or checklists for determining when to apply a formal evaluation process
- Process description for conducting formal evaluations and selection of applicable decision-making techniques
- Identified set of typical issues subject to a formal evaluation process
| - Organizations may have different approaches for how the formal evaluation processes are architected and documented. They could be embedded within several associated processes (e.g., supplier selection process or trade studies) rather than a separate "Decision Analysis and Resolution process". If distributed across several processes, this may also already indicate the decision reached by the organization as to which processes need formal evaluation techniques, rather than a separate, integrated set of guidelines.
- Identification of the issues subject to formal evaluation (i.e., applying these guidelines) is not explicitly addressed in the other Decision Analysis and Resolution practices; therefore, it should be considered here. An outcome of this would be the identified set of issues subject to application of the formal evaluation process (which is detailed in the remainder of the Decision Analysis and Resolution practices).
- There may be a variety of suitable evaluation processes that could be selected from, as appropriate to the situation. Formal evaluation processes can vary in formality, type of criteria, and methods employed.
- Typical guidelines for determining when to require a formal evaluation
process include:
- When a decision is directly related to topics assessed as being of
medium or high risk
- When a decision is related to changing work products under
configuration management
- When a decision would cause schedule delays over a certain
percentage or specific amount of time
- When a decision affects the ability to achieve project objectives
- When the costs of the formal evaluation process are reasonable
when compared to the decision’s impact
- Reference the Risk Management process area for more information about determining which issues are medium or high risk.
| | Practice 1.2 - Establish and maintain the criteria for evaluating alternatives, and the relative ranking of these criteria.
| - Documented evaluation criteria
- Rankings of criteria importance
| - Traceability of criteria to documented sources (e.g., requirements, assumptions, business objectives)
- Guidance for determining and applying evaluation criteria (e.g., ranges, scales, formulas, rationale)
- Rationale for selection and rejection of evaluation criteria.
| - This process area is referenced by many other process areas in the
model, and there are many contexts in which a formal evaluation
process can be used. Therefore, in some situations you may find that
criteria have already been defined as part of another process. This
specific practice does not suggest that a second development of criteria
be conducted
- Document the evaluation criteria to minimize the possibility that
decisions will be second-guessed, or that the reason for making the
decision will be forgotten. Decisions based on criteria that are explicitly
defined and established remove barriers to stakeholder buy-in.
- Criteria should be traceable to requirements, scenarios, business case
assumptions, business objectives, or other documented sources
- Scales of relative importance for evaluation criteria can be established with
nonnumeric values or with formulas that relate the evaluation parameter to a
numerical weight (a minimal sketch follows these considerations)
- Documentation of selection criteria and rationale may be needed to justify
solutions or for future reference and use.
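A minimal sketch of recording evaluation criteria with relative rankings; the criterion names and raw weights are hypothetical, and normalizing raw rankings to weights is just one of many workable scales:

```python
# Hypothetical criteria with raw importance rankings, normalized to weights.
raw = {"cost": 3, "schedule_impact": 2, "technical_risk": 4, "maintainability": 1}
total = sum(raw.values())
weights = {name: rank / total for name, rank in raw.items()}

# Most important criteria first; weights sum to 1.0.
for name, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {w:.2f}")
```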
| | Practice 1.3 - Identify alternative solutions to address issues.
|
| - Results of brainstorming sessions, interviews, or other techniques used to identify potential solutions
- Research resources and references (e.g., literature surveys)
| - A wider range of alternatives can surface by soliciting as many
stakeholders as practical for input. Input from stakeholders with diverse
skills and backgrounds can help teams identify and address
assumptions, constraints, and biases
- Evaluation criteria are an effective starting point for identifying alternatives. The
evaluation criteria identify the priorities of the relevant stakeholders and the
importance of technical challenges
- Combining key attributes of existing alternatives can generate additional and
sometimes stronger alternatives.
- Brainstorming sessions may
stimulate innovative alternatives through rapid interaction and feedback.
- As the analysis proceeds, other alternatives should be added to the list of
potential candidate solutions. The generation and consideration of
multiple alternatives early in a decision analysis and resolution process
increases the likelihood that an acceptable decision will be made, and
that consequences of the decision will be understood.
| | Practice 1.4 - Select the evaluation methods.
| - Selected evaluation methods.
| - List of candidate or preferred evaluation methods
- Guidance on selection of appropriate evaluation methods
| - Methods for evaluating alternative solutions against established criteria
can range from simulations to the use of probabilistic models and
decision theory. These methods need to be carefully selected. The level
of detail of a method should be commensurate with cost, schedule,
performance, and risk impacts
- While many problems may need only one evaluation method, some
problems may require multiple methods. For instance, simulations may
augment a trade study to determine which design alternative best
meets a given criterion.
| | Practice 1.5 - Evaluate alternative solutions using the documented criteria.
| - Conclusions or findings from evaluations.
| - Evaluated assumptions and constraints for application of evaluation criteria or interpretation of results (e.g., uncertainty, significance)
- Completed evaluation forms, checklists, or assigned criteria.
- Results of simulations, modeling, prototypes, pilots, life cycle cost analyses, studies, etc., performed on potential solutions.
| - Iterative cycles of analysis are sometimes necessary.
- Supporting analyses, experimentation, prototyping, or simulations may
be needed to substantiate scoring and conclusions
- The relative importance of criteria may be imprecise, and the total effect
on a solution may not be apparent until after the analysis is performed. In
cases where the resulting scores differ by relatively small amounts, the
best selection among alternative solutions may not be clear-cut
- Challenges to criteria and assumptions should be encouraged.
- Untested criteria, their relative importance, and supporting data or functions may
cause the validity of solutions to be questioned
- Criteria and their relative priorities
and scales can be tested with trial runs against a set of alternatives. These trial
runs of a select set of criteria allow for the evaluation of the cumulative impact of
the criteria on a solution. If the trials reveal problems, different criteria or
alternatives might be considered to avoid biases (a minimal scoring sketch follows these considerations).
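A minimal weighted-scoring sketch of evaluating alternatives against documented criteria; the alternatives, scores, and weights are all invented, and a real evaluation would also record the rationale behind each score:

```python
# Hypothetical normalized criteria weights and 1-5 scores per alternative.
weights = {"cost": 0.3, "schedule_impact": 0.2,
           "technical_risk": 0.4, "maintainability": 0.1}
scores = {
    "buy_off_the_shelf": {"cost": 4, "schedule_impact": 5,
                          "technical_risk": 3, "maintainability": 2},
    "build_in_house":    {"cost": 2, "schedule_impact": 2,
                          "technical_risk": 4, "maintainability": 5},
}

def weighted_score(alt: dict) -> float:
    return sum(weights[c] * alt[c] for c in weights)

# Rank alternatives; totals that differ by small amounts are not clear-cut,
# so trial runs with adjusted weights help expose sensitivity to the criteria.
for name, s in sorted(scores.items(), key=lambda kv: weighted_score(kv[1]),
                      reverse=True):
    print(f"{name}: {weighted_score(s):.2f}")
```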
| | Practice 1.6 - Select solutions from the alternatives based on the evaluation criteria.
| - Recommended solutions to address significant issues
- Documented results and rationale of the decision.
|
| - Decisions must often be made with incomplete information. There can be
substantial risk associated with the decision resulting from incomplete information
- Identified risks should be monitored
- When decisions must be made according to a specific schedule, time and
resources may not be available for gathering complete information. Consequently,
risky decisions made with incomplete information may require re-analysis later
- It is important to record both why a solution is selected and why another solution
was rejected.
Support - Organizational Environment for Integration

Practice ID | Artifacts | Considerations
| | Direct | Indirect
| Goal 1: An infrastructure that maximizes the productivity of people and affects the collaboration necessary for integration is provided.
| | Practice 1.1 - Establish and maintain a shared vision for the organization.
| - Revision history for the organization's shared vision
| - Evaluations of the organization's shared vision
- Guidelines for shared-vision building within projects and integrated teams
- Minutes from meetings at which the organizational shared vision is developed or reviewed
- Documented agreement and commitment to the organization's shared vision
- Communications of organizational expectations for projects and integrated teams (newsletters, email, bulletin board postings, presentations, posters, etc.)
| - Creating a shared vision involves establishing and actively maintaining
agreement and commitment about what is to be done and how it will be
accomplished, both procedurally and behaviorally.
- The organization’s shared vision must speak to every element of the
organization
- The shared vision should set reasonable
expectations on the rate of change in an organization. Unrealistic
proclamations can transform the shared vision into a source of
frustration and cause the organization to retreat from it after initial pilot
demonstrations.
| | Practice 1.2 - Establish and maintain an integrated work environment that supports IPPD by enabling collaboration and concurrent development.
| - Integrated work environment
- Tools and facilities designed and used to facilitate and enable collaboration and teamwork in the organization.
| - Requirements for the integrated work environment
- Design of the integrated work environment
- Sufficient meeting rooms, email, fax, FTP or web sites, video teleconferencing facilities
- Evaluations of effectiveness of the integrated work environment
- Maintenance and support of the integrated work environment
- Training materials and records for the integrated work environment
- Evidence of awareness of current and emerging integrated work environment technologies, resources and tools.
| - The integrated work environment must accommodate both collocated
and distributed integrated teams as required. Two-way communications
media should be easily accessible by all relevant stakeholders
- Integrated communication tool sets reduce time spent converting
information from one medium or platform to another, and correcting
transcriptions or misunderstandings when people do the conversions.
Requirements for product and process information usability throughout
the life of the product should also be considered
- Integrated work environments are developed with the same, or greater,
rigor as that used to develop a specific product or service. Integrated
work environments are capital assets that are often expensive, have
unique implementations, are irreversible (their implementation can
destroy or make unusable the assets being replaced), and whose
modification disrupts ongoing activities. The rigor appropriate to the
development should be matched to the magnitude of the needs to be
resolved and the deployment risks
- The work environment should be monitored throughout its existence to ascertain
if, and when, its performance degrades below that expected (or specified) as well
as to identify opportunities for improvements.
| | Practice 1.3 - Identify the unique skills needed to support the IPPD environment.
| - IPPD strategic and tactical training needs.
| - Skills matrix for people to collaborate, integrate, and lead others
- Training modules to develop and reinforce collaborative skills in an IPPD environment
- Plan or schedule for training to support the introduction and sustainment of skills for IPPD
| - Leadership challenges
include:
- ensuring that all team members mutually understand their roles
and responsibilities;
- employing people in their intended roles; and
- effectively accessing the depth and wealth of specific expertise resident
in the organization and integrating it into the overall integrated team effort.
- Reference the Organizational Training process area for more information about determining training needs and delivering the training.
- The organization’s leadership and work force will typically need to develop new skills.
IPPD requires integrative leadership and interpersonal skills beyond
those typically found in traditional environments where people tend to
work alone or primarily interact with others from their own, or similar,
functions or disciplines.
| | Goal 2: People are managed to nurture the integrative and collaborative behaviors of an IPPD environment.
| | Practice 2.1 - Establish and maintain leadership mechanisms to enable timely collaboration.
| - Guidelines for determining the degree of empowerment of people and integrated teams
- Guidelines for setting leadership and decision-making context
- Organizational process documentation for issue resolution
| - Process descriptions and training for leadership mechanisms in an IPPD environment
- Documented reporting relationships and decision-making authority.
| - Standard processes enable, promote, and reinforce the integrative behaviors expected from projects, integrated teams, and people
- Organizational guidelines that scope the degree of empowerment for
integrated teams serve an issue-prevention role. Best practices
promote documented and deployed organizational guidelines that can
preclude issues arising from empowerment and authority
misinterpretation
- In establishing the context for decision making,
the various kinds of issues are described and agreements are reached
on the decision type that will be used to resolve each kind of issue.
- An organizational process for issue resolution can form the basis for
project- and integrated-team-specific procedures and help ensure that
basic issue-resolution avenues are available to projects and integrated
teams when unresolved issues must be escalated.
| | Practice 2.2 - Establish and maintain incentives for adopting and demonstrating integrative and collaborative behaviors at all levels of the organization.
| - Policies and procedures for performance appraisal and recognition that reinforce collaboration
- Integrated team and individual recognition and rewards
- Incentive program that recognizes individual achievement as well as team performance.
| - Criteria for distinguishing collaborative behaviors
- Performance review process that considers both functional supervisor and team leader input.
| - Individual excellence still should be recognized, but criteria should
discern whether such excellence was achieved at the expense of the
integrative behaviors expected or in support of them.
- Incentives should be consistent with the objectives of the organization
and applied to achieve desired behavior at all levels of the organization.
- Criteria can establish guidelines for the reassignment of people who are
unable to demonstrate desired behavior and the selection of people
who can exhibit desired behavior for challenging or important jobs.
| | Practice 2.3 - Establish and maintain organizational guidelines to balance team and home organization responsibilities.
| - Organizational guidelines for balancing team and home organization responsibilities.
| - Performance review process that considers both functional supervisor and team leader input
- Role descriptions and organization charts that identify responsibilities to both the project team and home organization
- Department budgets and financial reports identifying individual participation in home department activities
- Compensation policies, procedures, and criteria that recognize both individual and team achievement.
| - The balance must be reflected in the personal or career development
plans for each individual. The knowledge and skills needed for an
individual to succeed in both their functional and integrated team role
should be honed, taking into account current and future assignments.
- Guidelines should also be in place for disbanding teams and
maintaining home organizations. It has been observed that sometimes
teams attempt to remain in place beyond their productive life in
organizations that do not have a home organization for the team
members to report back to after the team is dissolved.
Support - Causal Analysis & Resolution

Practice ID | Artifacts | Considerations
| | Direct | Indirect
| Goal 1: Root causes of defects and other problems are systematically determined.
| | Practice 1.1 - Select the defects and other problems for analysis.
| - Defect and problem data selected for further analysis.
| - Defects and other problems to be analyzed further
- Defect selection criteria
|
| | Practice 1.2 - Perform causal analysis of selected defects and other problems and propose actions to address them.
|
| - Defect categories
- Enhanced infrastructure integrity
- Fewer latent errors in environment
| - Causal analysis is performed with those people who have an understanding of the
selected defect or problem under study, typically in meetings.
- Depending on the type and number of defects, it may make sense to first group
the defects before identifying their root causes (a minimal grouping sketch follows these considerations).
- Reference the Quantitative Project Management process area for
more information about achieving the project’s quality and process performance objectives.
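A minimal sketch of grouping defects before root cause analysis, here a simple Pareto-style count over hypothetical category labels:

```python
from collections import Counter

# Hypothetical defect records, each tagged with a category.
defect_categories = ["interface", "logic", "interface",
                     "data", "interface", "logic"]

# Pareto-style grouping: most frequent categories first, so causal analysis
# can focus on the few categories accounting for most defects.
for category, count in Counter(defect_categories).most_common():
    print(f"{category}: {count}")
```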
| | Goal 2: Root causes of defects and other problems are systematically addressed to
prevent their future occurrence.
| | Practice 2.1 - Implement the selected action proposals that were developed in causal analysis.
| - Action proposals selected for implementation
- Improvement proposals
| - Criteria for prioritizing action proposals
- Removal of like defects (current and future) through implementation of the action proposal
- Information provided in an action item
| - Only changes that prove to be of value should be considered for broad implementation.
- Reference the Organizational Innovation and Deployment process
area for more information about the selection and deployment of
improvement proposals for the set of standard
processes.
| | Practice 2.2 - Evaluate the effect of changes on process performance.
| - Measures of performance and performance change.
| - Influence on the ability of a process to meet its quality and process-performance objectives
|
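A minimal before/after sketch of evaluating whether an implemented action changed process performance; the defect-density samples are invented, and a real evaluation would compare the change against the project's quality and process-performance objectives (and, where data allow, test statistical significance):

```python
from statistics import mean

# Hypothetical defect densities (defects/KLOC) before and after the change.
before = [4.1, 3.8, 4.4, 4.0]
after = [3.2, 3.0, 3.5, 3.1]

change = mean(after) - mean(before)
print(f"before: {mean(before):.2f}  after: {mean(after):.2f}  "
      f"change: {change:+.2f} defects/KLOC")
# A negative change suggests improvement; whether it satisfies the
# process-performance objective still has to be checked explicitly.
```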
| | Practice 2.3 - Record causal analysis and resolution data for use across the project and organization.
| - Causal analysis and resolution records
| - Data on defects and other problems that were analyzed
- Rationale for decisions
- Action proposals from causal analysis meetings
- Action items resulting from action proposals
- Cost of the analysis and resolution activities
- Measures of changes to the performance of the defined process resulting from resolutions
|