Metrics and Models in Software Quality Engineering, Second Edition

Book description

"This is the single best book on software quality engineering and metrics that I've encountered."
--Capers Jones, from the Foreword

Metrics and Models in Software Quality Engineering, Second Edition, is the definitive book on this essential topic of software development. Comprehensive in scope with extensive industry examples, it shows how to measure software quality and use measurements to improve the software development process. Four major categories of quality metrics and models are addressed: quality management, software reliability and projection, complexity, and customer view. In addition, the book discusses the fundamentals of measurement theory, specific quality metrics and tools, and methods for applying metrics to the software development process.

New chapters bring coverage of critical topics, including:

  • In-process metrics for software testing

  • Metrics for object-oriented software development

  • Availability metrics

  • Methods for conducting in-process quality assessments and software project assessments

  • Dos and Don'ts of Software Process Improvement, by Patrick O'Toole

  • Using Function Point Metrics to Measure Software Process Improvement, by Capers Jones

In addition to the excellent balance of theory, techniques, and examples, this book is highly instructive and practical, covering one of the most important topics in software development--quality engineering.




    Table of contents

    1. Copyright
      1. Dedication
    2. Foreword to the Second Edition
    3. Foreword to the First Edition
    4. Preface
      1. Themes of This Book
      2. Organization of This Book
      3. Suggested Ways to Read This Book
      4. Acknowledgments
    5. 1. What Is Software Quality?
      1. 1.1. Quality: Popular Views
      2. 1.2. Quality: Professional Views
        1. 1.2.1. The Role of the Customer
      3. 1.3. Software Quality
      4. 1.4. Total Quality Management
      5. 1.5. Summary
      6. References
    6. 2. Software Development Process Models
      1. 2.1. The Waterfall Development Model
        1. High-Level Design
        2. Low-Level Design
        3. Code Stage
        4. Unit Test
        5. Component Test
        6. System-Level Test
        7. Early Customer Programs
      2. 2.2. The Prototyping Approach
        1. Rapid Throwaway Prototyping
        2. Evolutionary Prototyping
      3. 2.3. The Spiral Model
      4. 2.4. The Iterative Development Process Model
      5. 2.5. The Object-Oriented Development Process
      6. 2.6. The Cleanroom Methodology
      7. 2.7. The Defect Prevention Process
      8. 2.8. Process Maturity Framework and Quality Standards
        1. 2.8.1. The SEI Process Capability Maturity Model
          1. Level 1: Initial
          2. Level 2: Repeatable
          3. Level 3: Defined
          4. Level 4: Managed
          5. Level 5: Optimizing
          6. Maturity Level 1: Initial
          7. Maturity Level 2: Managed
          8. Maturity Level 3: Defined
          9. Maturity Level 4: Quantitatively Managed
          10. Maturity Level 5: Optimizing
        2. 2.8.2. The SPR Assessment
        3. 2.8.3. The Malcolm Baldrige Assessment
        4. 2.8.4. ISO 9000
      9. 2.9. Summary
      10. References
    7. 3. Fundamentals of Measurement Theory
      1. 3.1. Definition, Operational Definition, and Measurement
      2. 3.2. Level of Measurement
        1. Nominal Scale
        2. Ordinal Scale
        3. Interval and Ratio Scales
      3. 3.3. Some Basic Measures
        1. Ratio
        2. Proportion
        3. Percentage
        4. Rate
        5. Six Sigma
      4. 3.4. Reliability and Validity
      5. 3.5. Measurement Errors
        1. 3.5.1. Assessing Reliability
        2. 3.5.2. Correction for Attenuation
      6. 3.6. Be Careful with Correlation
      7. 3.7. Criteria for Causality
      8. 3.8. Summary
      9. References
    8. 4. Software Quality Metrics Overview
      1. 4.1. Product Quality Metrics
        1. 4.1.1. The Defect Density Metric
          1. Lines of Code
          2. Example: Lines of Code Defect Rates
          3. Customer’s Perspective
          4. Function Points
          5. Example: Function Point Defect Rates
        2. 4.1.2. Customer Problems Metric
        3. 4.1.3. Customer Satisfaction Metrics
      2. 4.2. In-Process Quality Metrics
        1. 4.2.1. Defect Density During Machine Testing
        2. 4.2.2. Defect Arrival Pattern During Machine Testing
        3. 4.2.3. Phase-Based Defect Removal Pattern
        4. 4.2.4. Defect Removal Effectiveness
      3. 4.3. Metrics for Software Maintenance
        1. 4.3.1. Fix Backlog and Backlog Management Index
        2. 4.3.2. Fix Response Time and Fix Responsiveness
        3. 4.3.3. Percent Delinquent Fixes
        4. 4.3.4. Fix Quality
      4. 4.4. Examples of Metrics Programs
        1. 4.4.1. Motorola
        2. 4.4.2. Hewlett-Packard
        3. 4.4.3. IBM Rochester
      5. 4.5. Collecting Software Engineering Data
      6. 4.6. Summary
      7. References
    9. 5. Applying the Seven Basic Quality Tools in Software Development
      1. 5.1. Ishikawa’s Seven Basic Tools
      2. 5.2. Checklist
      3. 5.3. Pareto Diagram
      4. 5.4. Histogram
      5. 5.5. Run Charts
      6. 5.6. Scatter Diagram
      7. 5.7. Control Chart
      8. 5.8. Cause-and-Effect Diagram
      9. 5.9. Relations Diagram
      10. 5.10. Summary
      11. References
    10. 6. Defect Removal Effectiveness
      1. 6.1. Literature Review
      2. 6.2. A Closer Look at Defect Removal Effectiveness
      3. 6.3. Defect Removal Effectiveness and Quality Planning
        1. 6.3.1. Phase-Based Defect Removal Model
        2. 6.3.2. Some Characteristics of a Special Case Two-Phase Model
      4. 6.4. Cost Effectiveness of Phase Defect Removal
      5. 6.5. Defect Removal Effectiveness and Process Maturity Level
      6. 6.6. Summary
      7. References
    11. 7. The Rayleigh Model
      1. 7.1. Reliability Models
      2. 7.2. The Rayleigh Model
      3. 7.3. Basic Assumptions
      4. 7.4. Implementation
      5. 7.5. Reliability and Predictive Validity
      6. 7.6. Summary
      7. References
    12. 8. Exponential Distribution and Reliability Growth Models
      1. 8.1. The Exponential Model
      2. 8.2. Reliability Growth Models
        1. 8.2.1. Jelinski-Moranda Model
        2. 8.2.2. Littlewood Models
        3. 8.2.3. Goel-Okumoto Imperfect Debugging Model
        4. 8.2.4. Goel-Okumoto Nonhomogeneous Poisson Process Model
        5. 8.2.5. Musa-Okumoto Logarithmic Poisson Execution Time Model
        6. 8.2.6. The Delayed S and Inflection S Models
      3. 8.3. Model Assumptions
      4. 8.4. Criteria for Model Evaluation
      5. 8.5. Modeling Process
        1. Step 1
        2. Step 2
        3. Step 3
        4. Step 4
        5. Step 5
        6. Step 6
      6. 8.6. Test Compression Factor
      7. 8.7. Estimating the Distribution of Total Defects over Time
      8. 8.8. Summary
      9. References
    13. 9. Quality Management Models
      1. 9.1. The Rayleigh Model Framework
      2. 9.2. Code Integration Pattern
      3. 9.3. The PTR Submodel
      4. 9.4. The PTR Arrival and Backlog Projection Model
      5. 9.5. Reliability Growth Models
      6. 9.6. Criteria for Model Evaluation
      7. 9.7. In-Process Metrics and Reports
      8. 9.8. Orthogonal Defect Classification
      9. 9.9. Summary
      10. References
    14. 10. In-Process Metrics for Software Testing
      1. 10.1. In-Process Metrics for Software Testing
        1. 10.1.1. Test Progress S Curve (Planned, Attempted, Actual)
        2. 10.1.2. Testing Defect Arrivals over Time
        3. 10.1.3. Testing Defect Backlog over Time
        4. 10.1.4. Product Size over Time
        5. 10.1.5. CPU Utilization During Test
        6. 10.1.6. System Crashes and Hangs
        7. 10.1.7. Mean Time to Unplanned IPL
        8. 10.1.8. Critical Problems: Showstoppers
      2. 10.2. In-Process Metrics and Quality Management
        1. 10.2.1. Effort/Outcome Model
      3. 10.3. Possible Metrics for Acceptance Testing to Evaluate Vendor-Developed Software
      4. 10.4. How Do You Know Your Product Is Good Enough to Ship?
      5. 10.5. Summary
      6. References
    15. 11. Complexity Metrics and Models
      1. 11.1. Lines of Code
      2. 11.2. Halstead’s Software Science
      3. 11.3. Cyclomatic Complexity
      4. 11.4. Syntactic Constructs
      5. 11.5. Structure Metrics
      6. 11.6. An Example of Module Design Metrics in Practice
      7. 11.7. Summary
      8. References
    16. 12. Metrics and Lessons Learned for Object-Oriented Projects
      1. 12.1. Object-Oriented Concepts and Constructs
      2. 12.2. Design and Complexity Metrics
        1. 12.2.1. Lorenz Metrics and Rules of Thumb
        2. 12.2.2. Some Metrics Examples
        3. 12.2.3. The CK OO Metrics Suite
        4. 12.2.4. Validation Studies and Further Examples
      3. 12.3. Productivity Metrics
      4. 12.4. Quality and Quality Management Metrics
      5. 12.5. Lessons Learned from OO Projects
        1. Education and Skills Level
        2. Tools and Development Environment
        3. Project Management
        4. Reuse
        5. Performance
        6. Quality and Development Practices
      6. 12.6. Summary
      7. References
    17. 13. Availability Metrics
      1. 13.1. Definition and Measurements of System Availability
      2. 13.2. Reliability, Availability, and Defect Rate
      3. 13.3. Collecting Customer Outage Data for Quality Improvement
      4. 13.4. In-Process Metrics for Outage and Availability
      5. 13.5. Summary
      6. References
    18. 14. Measuring and Analyzing Customer Satisfaction
      1. 14.1. Customer Satisfaction Surveys
        1. 14.1.1. Methods of Survey Data Collection
        2. 14.1.2. Sampling Methods
        3. 14.1.3. Sample Size
      2. 14.2. Analyzing Satisfaction Data
        1. 14.2.1. Specific Attributes and Overall Satisfaction
      3. 14.3. Satisfaction with Company
      4. 14.4. How Good Is Good Enough?
      5. 14.5. Summary
      6. References
    19. 15. Conducting In-Process Quality Assessments
      1. 15.1. The Preparation Phase
        1. 15.1.1. What Data Should I Look At?
        2. 15.1.2. Don’t Overlook Qualitative Data
      2. 15.2. The Evaluation Phase
        1. 15.2.1. Quantitative Data
        2. 15.2.2. Qualitative Data
        3. 15.2.3. Evaluation Criteria
      3. 15.3. The Summarization Phase
        1. 15.3.1. Summarization Strategy
        2. 15.3.2. The Overall Assessment
      4. 15.4. Recommendations and Risk Mitigation
      5. 15.5. Summary
      6. References
    20. 16. Conducting Software Project Assessments
      1. 16.1. Audit and Assessment
      2. 16.2. Software Process Maturity Assessment and Software Project Assessment
      3. 16.3. Software Process Assessment Cycle
      4. 16.4. A Proposed Software Project Assessment Method
        1. 16.4.1. Preparation Phase
        2. 16.4.2. Facts Gathering Phase 1
        3. 16.4.3. Questionnaire Customization and Finalization
        4. 16.4.4. Facts Gathering Phase 2
        5. 16.4.5. Possible Improvement Opportunities and Recommendations
        6. 16.4.6. Team Discussions of Assessment Results and Recommendations
        7. 16.4.7. Assessment Report
        8. 16.4.8. Summary
      5. 16.5. Summary
      6. References
    21. 17. Dos and Don’ts of Software Process Improvement
      1. 17.1. Measuring Process Maturity
      2. 17.2. Measuring Process Capability
      3. 17.3. Staged versus Continuous—Debating Religion
      4. 17.4. Measuring Levels Is Not Enough
      5. 17.5. Establishing the Alignment Principle
      6. 17.6. Take Time Getting Faster
      7. 17.7. Keep It Simple — or Face Decomplexification
      8. 17.8. Measuring the Value of Process Improvement
      9. 17.9. Measuring Process Adoption
      10. 17.10. Measuring Process Compliance
      11. 17.11. Celebrate the Journey, Not Just the Destination
      12. 17.12. Summary
      13. References
    22. 18. Using Function Point Metrics to Measure Software Process Improvements
      1. 18.1. Software Process Improvement Sequences
        1. 18.1.1. Stage 0: Software Process Assessment and Baseline
        2. 18.1.2. Stage 1: Focus on Management Technologies
        3. 18.1.3. Stage 2: Focus on Software Processes and Methodologies
        4. 18.1.4. Stage 3: Focus on New Tools and Approaches
        5. 18.1.5. Stage 4: Focus on Infrastructure and Specialization
        6. 18.1.6. Stage 5: Focus on Reusability
        7. 18.1.7. Stage 6: Focus on Industry Leadership
      2. 18.2. Process Improvement Economics
      3. 18.3. Measuring Process Improvements at Activity Levels
      4. 18.4. Summary
      5. References
    23. 19. Concluding Remarks
      1. 19.1. Data Quality Control
      2. 19.2. Getting Started with a Software Metrics Program
      3. 19.3. Software Quality Engineering Modeling
      4. 19.4. Statistical Process Control in Software Development
      5. 19.5. Measurement and the Future
      6. References
    24. A Project Assessment Questionnaire

    Product information

    • Title: Metrics and Models in Software Quality Engineering, Second Edition
    • Author(s):
    • Release date: September 2002
    • Publisher(s): Addison-Wesley Professional
    • ISBN: 9780201729153