Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd Edition

Book description

This classic text on multiple regression is noted for its nonmathematical, applied, and data-analytic approach. Readers profit from its verbal-conceptual exposition and frequent use of examples.

The applied emphasis provides clear illustrations of the principles and worked examples of the types of applications that are possible. Researchers learn how to specify regression models that directly address their research questions. An overview of the fundamental ideas of multiple regression and a review of bivariate correlation and regression and other elementary statistical concepts provide a strong foundation for understanding the rest of the text. The third edition features an increased emphasis on graphics and on the use of confidence intervals and effect size measures, and an accompanying website with data for most of the numerical examples, along with computer code for SPSS, SAS, and SYSTAT, at www.psypress.com/9780805822236.

Applied Multiple Regression serves both as a textbook for graduate students and as a reference tool for researchers in psychology, education, health sciences, communications, business, sociology, political science, anthropology, and economics. An introductory knowledge of statistics is required. Self-standing chapters minimize the need for researchers to refer to previous chapters.

Table of contents

  1. Cover
  2. Halftitle
  3. Title
  4. Copyright
  5. Dedication
  6. Contents
  7. Preface
  8. Chapter 1: Introduction
    1. 1.1 Multiple Regression/Correlation as a General Data-Analytic System
      1. 1.1.1 Overview
      2. 1.1.2 Testing Hypotheses Using Multiple Regression/Correlation: Some Examples
      3. 1.1.3 Multiple Regression/Correlation in Prediction Models
    2. 1.2 A Comparison of Multiple Regression/Correlation and Analysis of Variance Approaches
      1. 1.2.1 Historical Background
      2. 1.2.2 Hypothesis Testing and Effect Sizes
    3. 1.3 Multiple Regression/Correlation and the Complexity of Behavioral Science
      1. 1.3.1 Multiplicity of Influences
      2. 1.3.2 Correlation Among Research Factors and Partialing
      3. 1.3.3 Form of Information
      4. 1.3.4 Shape of Relationship
      5. 1.3.5 General and Conditional Relationships
    4. 1.4 Orientation of the Book
      1. 1.4.1 Nonmathematical
      2. 1.4.2 Applied
      3. 1.4.3 Data-Analytic
      4. 1.4.4 Inference Orientation and Specification Error
    5. 1.5 Computation, the Computer, and Numerical Results
      1. 1.5.1 Computation
      2. 1.5.2 Numerical Results: Reporting and Rounding
      3. 1.5.3 Significance Tests, Confidence Intervals, and Appendix Tables
    6. 1.6 The Spectrum of Behavioral Science
    7. 1.7 Plan for the Book
      1. 1.7.1 Content
      2. 1.7.2 Structure: Numbering of Sections, Tables, and Equations
    8. 1.8 Summary
  9. Chapter 2: Bivariate Correlation and Regression
    1. 2.1 Tabular and Graphic Representations of Relationships
    2. 2.2 The Index of Linear Correlation Between Two Variables: The Pearson Product Moment Correlation Coefficient
      1. 2.2.1 Standard Scores: Making Units Comparable
      2. 2.2.2 The Product Moment Correlation as a Function of Differences Between z Scores
    3. 2.3 Alternative Formulas for the Product Moment Correlation Coefficient
      1. 2.3.1 r as the Average Product of z Scores
      2. 2.3.2 Raw Score Formulas for r
      3. 2.3.3 Point Biserial r
      4. 2.3.4 Phi (Φ) Coefficient
      5. 2.3.5 Rank Correlation
    4. 2.4 Regression Coefficients: Estimating Y From X
    5. 2.5 Regression Toward the Mean
    6. 2.6 The Standard Error of Estimate and Measures of the Strength of Association
    7. 2.7 Summary of Definitions and Interpretations
    8. 2.8 Statistical Inference With Regression and Correlation Coefficients
      1. 2.8.1 Assumptions Underlying Statistical Inference With BYX, B0, Ŷi, and rXY
      2. 2.8.2 Estimation With Confidence Intervals
      3. 2.8.3 Null Hypothesis Significance Tests (NHSTs)
      4. 2.8.4 Confidence Limits and Null Hypothesis Significance Testing
    9. 2.9 Precision and Power
      1. 2.9.1 Precision of Estimation
      2. 2.9.2 Power of Null Hypothesis Significance Tests
    10. 2.10 Factors Affecting the Size of r
      1. 2.10.1 The Distributions of X and Y
      2. 2.10.2 The Reliability of the Variables
      3. 2.10.3 Restriction of Range
      4. 2.10.4 Part-Whole Correlations
      5. 2.10.5 Ratio or Index Variables
      6. 2.10.6 Curvilinear Relationships
    11. 2.11 Summary
  10. Chapter 3: Multiple Regression/Correlation With Two or More Independent Variables
    1. 3.1 Introduction: Regression and Causal Models
      1. 3.1.1 What Is a Cause?
      2. 3.1.2 Diagrammatic Representation of Causal Models
    2. 3.2 Regression With Two Independent Variables
    3. 3.3 Measures of Association With Two Independent Variables
      1. 3.3.1 Multiple R and R2
      2. 3.3.2 Semipartial Correlation Coefficients and Increments to R2
      3. 3.3.3 Partial Correlation Coefficients
    4. 3.4 Patterns of Association Between Y and Two Independent Variables
      1. 3.4.1 Direct and Indirect Effects
      2. 3.4.2 Partial Redundancy
      3. 3.4.3 Suppression in Regression Models
      4. 3.4.4 Spurious Effects and Entirely Indirect Effects
    5. 3.5 Multiple Regression/Correlation With k Independent Variables
      1. 3.5.1 Introduction: Components of the Prediction Equation
      2. 3.5.2 Partial Regression Coefficients
      3. 3.5.3 R, R2, and Shrunken R2
      4. 3.5.4 sr and sr2
      5. 3.5.5 pr and pr2
      6. 3.5.6 Example of Interpretation of Partial Coefficients
    6. 3.6 Statistical Inference With k Independent Variables
      1. 3.6.1 Standard Errors and Confidence Intervals for B and β
      2. 3.6.2 Confidence Intervals for R2
      3. 3.6.3 Confidence Intervals for Differences Between Independent R2s
      4. 3.6.4 Statistical Tests on Multiple and Partial Coefficients
    7. 3.7 Statistical Precision and Power Analysis
      1. 3.7.1 Introduction: Research Goals and the Null Hypothesis
      2. 3.7.2 The Precision and Power of R2
      3. 3.7.3 Precision and Power Analysis for Partial Coefficients
    8. 3.8 Using Multiple Regression Equations in Prediction
      1. 3.8.1 Prediction of Y for a New Observation
      2. 3.8.2 Correlation of Individual Variables With Predicted Values
      3. 3.8.3 Cross-Validation and Unit Weighting
      4. 3.8.4 Multicollinearity
    9. 3.9 Summary
  11. Chapter 4: Data Visualization, Exploration, and Assumption Checking: Diagnosing and Solving Regression Problems I
    1. 4.1 Introduction
    2. 4.2 Some Useful Graphical Displays of the Original Data
      1. 4.2.1 Univariate Displays
      2. 4.2.2 Bivariate Displays
      3. 4.2.3 Correlation and Scatterplot Matrices
    3. 4.3 Assumptions and Ordinary Least Squares Regression
      1. 4.3.1 Assumptions Underlying Multiple Linear Regression
      2. 4.3.2 Ordinary Least Squares Estimation
    4. 4.4 Detecting Violations of Assumptions
      1. 4.4.1 Form of the Relationship
      2. 4.4.2 Omitted Independent Variables
      3. 4.4.3 Measurement Error
      4. 4.4.4 Homoscedasticity of Residuals
      5. 4.4.5 Nonindependence of Residuals
      6. 4.4.6 Normality of Residuals
    5. 4.5 Remedies: Alternative Approaches When Problems Are Detected
      1. 4.5.1 Form of the Relationship
      2. 4.5.2 Inclusion of All Relevant Independent Variables
      3. 4.5.3 Measurement Error in the Independent Variables
      4. 4.5.4 Nonconstant Variance
      5. 4.5.5 Nonindependence of Residuals
    6. 4.6 Summary
  12. Chapter 5: Data-Analytic Strategies Using Multiple Regression/Correlation
    1. 5.1 Research Questions Answered by Correlations and Their Squares
      1. 5.1.1 Net Contribution to Prediction
      2. 5.1.2 Indices of Differential Validity
      3. 5.1.3 Comparisons of Predictive Utility
      4. 5.1.4 Attribution of a Fraction of the XY Relationship to a Third Variable
      5. 5.1.5 Which of Two Variables Accounts for More of the XY Relationship?
      6. 5.1.6 Are the Various Squared Correlations in One Population Different From Those in Another Given the Same Variables?
    2. 5.2 Research Questions Answered by B or β
      1. 5.2.1 Regression Coefficients as Reflections of Causal Effects
      2. 5.2.2 Alternative Approaches to Making BYX Substantively Meaningful
      3. 5.2.3 Are the Effects of a Set of Independent Variables on Two Different Outcomes in a Sample Different?
      4. 5.2.4 What Are the Reciprocal Effects of Two Variables on One Another?
    3. 5.3 Hierarchical Analysis of Variables in Multiple Regression/Correlation
      1. 5.3.1 Causal Priority and the Removal of Confounding Variables
      2. 5.3.2 Research Relevance
      3. 5.3.3 Examination of Alternative Hierarchical Sequences of Independent Variable Sets
      4. 5.3.4 Stepwise Regression
    4. 5.4 The Analysis of Sets of Independent Variables
      1. 5.4.1 Types of Sets
      2. 5.4.2 The Simultaneous and Hierarchical Analyses of Sets
      3. 5.4.3 Variance Proportions for Sets and the Ballantine Again
      4. 5.4.4 B and β Coefficients for Variables Within Sets
    5. 5.5 Significance Testing for Sets
      1. 5.5.1 Application in Hierarchical Analysis
      2. 5.5.2 Application in Simultaneous Analysis
      3. 5.5.3 Using Computer Output to Determine Statistical Significance
      4. 5.5.4 An Alternative F Test: Using Model 2 Error Estimate From the Final Model
    6. 5.6 Power Analysis for Sets
      1. 5.6.1 Determining n* for the F Test of sR2B with Model 1 or Model 2 Error
      2. 5.6.2 Estimating the Population sR2 Values
      3. 5.6.3 Setting Power for n*
      4. 5.6.4 Reconciling Different n*s
      5. 5.6.5 Power as a Function of n
      6. 5.6.6 Tactics of Power Analysis
    7. 5.7 Statistical Inference Strategy in Multiple Regression/Correlation
      1. 5.7.1 Controlling and Balancing Type I and Type II Errors in Inference
      2. 5.7.2 Less Is More
      3. 5.7.3 Least Is Last
      4. 5.7.4 Adaptation of Fisher’s Protected t Test
      5. 5.7.5 Statistical Inference and the Stage of Scientific Investigations
    8. 5.8 Summary
  13. Chapter 6: Quantitative Scales, Curvilinear Relationships, and Transformations
    1. 6.1 Introduction
      1. 6.1.1 What Do We Mean by Linear Regression?
      2. 6.1.2 Linearity in the Variables and Linear Multiple Regression
      3. 6.1.3 Four Approaches to Examining Nonlinear Relationships in Multiple Regression
    2. 6.2 Power Polynomials
      1. 6.2.1 Method
      2. 6.2.2 An Example: Quadratic Fit
      3. 6.2.3 Centering Predictors in Polynomial Equations
      4. 6.2.4 Relationship of Test of Significance of Highest Order Coefficient and Gain in Prediction
      5. 6.2.5 Interpreting Polynomial Regression Results
      6. 6.2.6 Another Example: A Cubic Fit
      7. 6.2.7 Strategy and Limitations
      8. 6.2.8 More Complex Equations
    3. 6.3 Orthogonal Polynomials
      1. 6.3.1 The Cubic Example Revisited
      2. 6.3.2 Unequal n and Unequal Intervals
      3. 6.3.3 Applications and Discussion
    4. 6.4 Nonlinear Transformations
      1. 6.4.1 Purposes of Transformation and the Nature of Transformations
      2. 6.4.2 The Conceptual Basis of Transformations and Model Checking Before and After Transformation—Is It Always Ideal to Transform?
      3. 6.4.3 Logarithms and Exponents; Additive and Proportional Relationships
      4. 6.4.4 Linearizing Relationships
      5. 6.4.5 Linearizing Relationships Based on Strong Theoretical Models
      6. 6.4.6 Linearizing Relationships Based on Weak Theoretical Models
      7. 6.4.7 Empirically Driven Transformations in the Absence of Strong or Weak Models
      8. 6.4.8 Empirically Driven Transformation for Linearization: The Ladder of Re-expression and the Bulging Rule
      9. 6.4.9 Empirically Driven Transformation for Linearization in the Absence of Models: Box-Cox Family of Power Transformations on Y
      10. 6.4.10 Empirically Driven Transformation for Linearization in the Absence of Models: Box-Tidwell Family of Power Transformations on X
      11. 6.4.11 Linearization of Relationships With Correlations: Fisher z′ Transform of r
      12. 6.4.12 Transformations That Linearize Relationships for Counts and Proportions
      13. 6.4.13 Variance Stabilizing Transformations and Alternatives for Treatment of Heteroscedasticity
      14. 6.4.14 Transformations to Normalize Variables
      15. 6.4.15 Diagnostics Following Transformation
      16. 6.4.16 Measuring and Comparing Model Fit
      17. 6.4.17 Second-Order Polynomial Numerical Example Revisited
      18. 6.4.18 When to Transform and the Choice of Transformation
    5. 6.5 Nonlinear Regression
    6. 6.6 Nonparametric Regression
    7. 6.7 Summary
  14. Chapter 7: Interactions Among Continuous Variables
    1. 7.1 Introduction
      1. 7.1.1 Interactions Versus Additive Effects
      2. 7.1.2 Conditional First-Order Effects in Equations Containing Interactions
    2. 7.2 Centering Predictors and the Interpretation of Regression Coefficients in Equations Containing Interactions
      1. 7.2.1 Regression with Centered Predictors
      2. 7.2.2 Relationship Between Regression Coefficients in the Uncentered and Centered Equations
      3. 7.2.3 Centered Equations With No Interaction
      4. 7.2.4 Essential Versus Nonessential Multicollinearity
      5. 7.2.5 Centered Equations With Interactions
      6. 7.2.6 The Highest Order Interaction in the Centered Versus Uncentered Equation
      7. 7.2.7 Do Not Center Y
      8. 7.2.8 A Recommendation for Centering
    3. 7.3 Simple Regression Equations and Simple Slopes
      1. 7.3.1 Plotting Interactions
      2. 7.3.2 Moderator Variables
      3. 7.3.3 Simple Regression Equations
      4. 7.3.4 Overall Regression Coefficient and Simple Slope at the Mean
      5. 7.3.5 Simple Slopes From Uncentered Versus Centered Equations Are Identical
      6. 7.3.6 Linear by Linear Interactions
      7. 7.3.7 Interpreting Interactions in Multiple Regression and Analysis of Variance
    4. 7.4 Post Hoc Probing of Interactions
      1. 7.4.1 Standard Error of Simple Slopes
      2. 7.4.2 Equation Dependence of Simple Slopes and Their Standard Errors
      3. 7.4.3 Tests of Significance of Simple Slopes
      4. 7.4.4 Confidence Intervals Around Simple Slopes
      5. 7.4.5 A Numerical Example
      6. 7.4.6 The Uncentered Regression Equation Revisited
      7. 7.4.7 First-Order Coefficients in Equations Without and With Interactions
      8. 7.4.8 Interpretation and the Range of Data
    5. 7.5 Standardized Estimates for Equations Containing Interactions
    6. 7.6 Interactions as Partialed Effects: Building Regression Equations With Interactions
    7. 7.7 Patterns of First-Order and Interactive Effects
      1. 7.7.1 Three Theoretically Meaningful Patterns of First-Order and Interaction Effects
      2. 7.7.2 Ordinal Versus Disordinal Interactions
    8. 7.8 Three-Predictor Interactions in Multiple Regression
    9. 7.9 Curvilinear by Linear Interactions
    10. 7.10 Interactions Among Sets of Variables
    11. 7.11 Issues in the Detection of Interactions: Reliability, Predictor Distributions, Model Specification
      1. 7.11.1 Variable Reliability and Power to Detect Interactions
      2. 7.11.2 Sampling Designs to Enhance Power to Detect Interactions—Optimal Design
      3. 7.11.3 Difficulty in Distinguishing Interactions Versus Curvilinear Effects
    12. 7.12 Summary
  15. Chapter 8: Categorical or Nominal Independent Variables
    1. 8.1 Introduction
      1. 8.1.1 Categories as a Set of Independent Variables
      2. 8.1.2 The Representation of Categories or Nominal Scales
    2. 8.2 Dummy-Variable Coding
      1. 8.2.1 Coding the Groups
      2. 8.2.2 Pearson Correlations of Dummy Variables With Y
      3. 8.2.3 Correlations Among Dummy-Coded Variables
      4. 8.2.4 Multiple Correlation of the Dummy-Variable Set With Y
      5. 8.2.5 Regression Coefficients for Dummy Variables
      6. 8.2.6 Partial and Semipartial Correlations for Dummy Variables
      7. 8.2.7 Dummy-Variable Multiple Regression/Correlation and One-Way Analysis of Variance
      8. 8.2.8 A Cautionary Note: Dummy-Variable-Like Coding Systems
      9. 8.2.9 Dummy-Variable Coding When Groups Are Not Mutually Exclusive
    3. 8.3 Unweighted Effects Coding
      1. 8.3.1 Introduction: Unweighted and Weighted Effects Coding
      2. 8.3.2 Constructing Unweighted Effects Codes
      3. 8.3.3 The R2 and the rYis for Unweighted Effects Codes
      4. 8.3.4 Regression Coefficients and Other Partial Effects in Unweighted Code Sets
    4. 8.4 Weighted Effects Coding
      1. 8.4.1 Selection Considerations for Weighted Effects Coding
      2. 8.4.2 Constructing Weighted Effects
      3. 8.4.3 The R2 and the rYis for Weighted Effects Codes
      4. 8.4.4 Interpretation and Testing of B With Weighted Codes
    5. 8.5 Contrast Coding
      1. 8.5.1 Considerations in the Selection of a Contrast Coding Scheme
      2. 8.5.2 Constructing Contrast Codes
      3. 8.5.3 The R2 and the rYis
      4. 8.5.4 Partial Regression Coefficients
      5. 8.5.5 Statistical Power and the Choice of Contrast Codes
    6. 8.6 Nonsense Coding
    7. 8.7 Coding Schemes in the Context of Other Independent Variables
      1. 8.7.1 Combining Nominal and Continuous Independent Variables
      2. 8.7.2 Calculating Adjusted Means for Nominal Independent Variables
      3. 8.7.3 Adjusted Means for Combinations of Nominal and Quantitative Independent Variables
      4. 8.7.4 Adjusted Means for More Than Two Groups and Alternative Coding Methods
      5. 8.7.5 Multiple Regression/Correlation With Nominal Independent Variables and the Analysis of Covariance
    8. 8.8 Summary
  16. Chapter 9: Interactions With Categorical Variables
    1. 9.1 Nominal Scale by Nominal Scale Interactions
      1. 9.1.1 The 2 by 2 Design
      2. 9.1.2 Regression Analyses of Multiple Sets of Nominal Variables With More Than Two Categories
    2. 9.2 Interactions Involving More Than Two Nominal Scales
      1. 9.2.1 An Example of Three Nominal Scales Coded by Alternative Methods
      2. 9.2.2 Interactions Among Nominal Scales in Which Not All Combinations Are Considered
      3. 9.2.3 What If the Categories for One or More Nominal “Scales” Are Not Mutually Exclusive?
      4. 9.2.4 Consideration of pr, β, and Variance Proportions for Nominal Scale Interaction Variables
      5. 9.2.5 Summary of Issues and Recommendations for Interactions Among Nominal Scales
    3. 9.3 Nominal Scale by Continuous Variable Interactions
      1. 9.3.1 A Reminder on Centering
      2. 9.3.2 Interactions of a Continuous Variable With Dummy-Variable Coded Groups
      3. 9.3.3 Interactions Using Weighted or Unweighted Effects Codes
      4. 9.3.4 Interactions With a Contrast-Coded Nominal Scale
      5. 9.3.5 Interactions Coded to Estimate Simple Slopes of Groups
      6. 9.3.6 Categorical Variable Interactions With Nonlinear Effects of Scaled Independent Variables
      7. 9.3.7 Interactions of a Scale With Two or More Categorical Variables
    4. 9.4 Summary
  17. Chapter 10: Outliers and Multicollinearity: Diagnosing and Solving Regression Problems II
    1. 10.1 Introduction
    2. 10.2 Outliers: Introduction and Illustration
    3. 10.3 Detecting Outliers: Regression Diagnostics
      1. 10.3.1 Extremity on the Independent Variables: Leverage
      2. 10.3.2 Extremity on Y: Discrepancy
      3. 10.3.3 Influence on the Regression Estimates
      4. 10.3.4 Location of Outlying Points and Diagnostic Statistics
      5. 10.3.5 Summary and Suggestions
    4. 10.4 Sources of Outliers and Possible Remedial Actions
      1. 10.4.1 Sources of Outliers
      2. 10.4.2 Remedial Actions
    5. 10.5 Multicollinearity
      1. 10.5.1 Exact Collinearity
      2. 10.5.2 Multicollinearity: A Numerical Illustration
      3. 10.5.3 Measures of the Degree of Multicollinearity
    6. 10.6 Remedies for Multicollinearity
      1. 10.6.1 Model Respecification
      2. 10.6.2 Collection of Additional Data
      3. 10.6.3 Ridge Regression
      4. 10.6.4 Principal Components Regression
      5. 10.6.5 Summary of Multicollinearity Considerations
    7. 10.7 Summary
  18. Chapter 11: Missing Data
    1. 11.1 Basic Issues in Handling Missing Data
      1. 11.1.1 Minimize Missing Data
      2. 11.1.2 Types of Missing Data
      3. 11.1.3 Traditional Approaches to Missing Data
    2. 11.2 Missing Data in Nominal Scales
      1. 11.2.1 Coding Nominal Scale X for Missing Data
      2. 11.2.2 Missing Data on Two Dichotomies
      3. 11.2.3 Estimation Using the EM Algorithm
    3. 11.3 Missing Data in Quantitative Scales
      1. 11.3.1 Available Alternatives
      2. 11.3.2 Imputation of Values for Missing Cases
      3. 11.3.3 Modeling Solutions to Missing Data in Scaled Variables
      4. 11.3.4 An Illustrative Comparison of Alternative Methods
      5. 11.3.5 Rules of Thumb
    4. 11.4 Summary
  19. Chapter 12: Multiple Regression/Correlation and Causal Models
    1. 12.1 Introduction
      1. 12.1.1 Limits on the Current Discussion and the Relationship Between Causal Analysis and Analysis of Covariance
      2. 12.1.2 Theories and Multiple Regression/Correlation Models That Estimate and Test Them
      3. 12.1.3 Kinds of Variables in Causal Models
      4. 12.1.4 Regression Models as Causal Models
    2. 12.2 Models Without Reciprocal Causation
      1. 12.2.1 Direct and Indirect Effects
      2. 12.2.2 Path Analysis and Path Coefficients
      3. 12.2.3 Hierarchical Analysis and Reduced Form Equations
      4. 12.2.4 Partial Causal Models and the Hierarchical Analysis of Sets
      5. 12.2.5 Testing Model Elements
    3. 12.3 Models With Reciprocal Causation
    4. 12.4 Identification and Overidentification
      1. 12.4.1 Just Identified Models
      2. 12.4.2 Overidentification
      3. 12.4.3 Underidentification
    5. 12.5 Latent Variable Models
      1. 12.5.1 An Example of a Latent Variable Model
      2. 12.5.2 How Latent Variables Are Estimated
      3. 12.5.3 Fixed and Free Estimates in Latent Variable Models
      4. 12.5.4 Goodness-of-Fit Tests of Latent Variable Models
      5. 12.5.5 Latent Variable Models and the Correction for Attenuation
      6. 12.5.6 Characteristics of Data Sets That Make Latent Variable Analysis the Method of Choice
    6. 12.6 A Review of Causal Model and Statistical Assumptions
      1. 12.6.1 Specification Error
      2. 12.6.2 Identification Error
    7. 12.7 Comparisons of Causal Models
      1. 12.7.1 Nested Models
      2. 12.7.2 Longitudinal Data in Causal Models
    8. 12.8 Summary
  20. Chapter 13: Alternative Regression Models: Logistic, Poisson Regression, and the Generalized Linear Model
    1. 13.1 Ordinary Least Squares Regression Revisited
      1. 13.1.1 Three Characteristics of Ordinary Least Squares Regression
      2. 13.1.2 The Generalized Linear Model
      3. 13.1.3 Relationship of Dichotomous and Count Dependent Variables Y to a Predictor
    2. 13.2 Dichotomous Outcomes and Logistic Regression
      1. 13.2.1 Extending Linear Regression: The Linear Probability Model and Discriminant Analysis
      2. 13.2.2 The Nonlinear Transformation From Predictor to Predicted Scores: Probit and Logistic Transformation
      3. 13.2.3 The Logistic Regression Equation
      4. 13.2.4 Numerical Example: Three Forms of the Logistic Regression Equation
      5. 13.2.5 Understanding the Coefficients for the Predictor in Logistic Regression
      6. 13.2.6 Multiple Logistic Regression
      7. 13.2.7 Numerical Example
      8. 13.2.8 Confidence Intervals on Regression Coefficients and Odds Ratios
      9. 13.2.9 Estimation of the Regression Model: Maximum Likelihood
      10. 13.2.10 Deviances: Indices of Overall Fit of the Logistic Regression Model
      11. 13.2.11 Multiple R2 Analogs in Logistic Regression
      12. 13.2.12 Testing Significance of Overall Model Fit: The Likelihood Ratio Test and the Test of Model Deviance
      13. 13.2.13 χ2 Test for the Significance of a Single Predictor in a Multiple Logistic Regression Equation
      14. 13.2.14 Hierarchical Logistic Regression: Likelihood Ratio χ2 Test for the Significance of a Set of Predictors Above and Beyond Another Set
      15. 13.2.15 Akaike’s Information Criterion and the Bayesian Information Criterion for Model Comparison
      16. 13.2.16 Some Treachery in Variable Scaling and Interpretation of the Odds Ratio
      17. 13.2.17 Regression Diagnostics in Logistic Regression
      18. 13.2.18 Sparseness of Data
      19. 13.2.19 Classification of Cases
    3. 13.3 Extensions of Logistic Regression to Multiple Response Categories: Polytomous Logistic Regression and Ordinal Logistic Regression
      1. 13.3.1 Polytomous Logistic Regression
      2. 13.3.2 Nested Dichotomies
      3. 13.3.3 Ordinal Logistic Regression
    4. 13.4 Models for Count Data: Poisson Regression and Alternatives
      1. 13.4.1 Linear Regression Applied to Count Data
      2. 13.4.2 Poisson Probability Distribution
      3. 13.4.3 Poisson Regression Analysis
      4. 13.4.4 Overdispersion and Alternative Models
      5. 13.4.5 Independence of Observations
      6. 13.4.6 Sources on Poisson Regression
    5. 13.5 Full Circle: Parallels Between Logistic and Poisson Regression, and the Generalized Linear Model
      1. 13.5.1 Parallels Between Poisson and Logistic Regression
      2. 13.5.2 The Generalized Linear Model Revisited
    6. 13.6 Summary
  21. Chapter 14: Random Coefficient Regression and Multilevel Models
    1. 14.1 Clustering Within Data Sets
      1. 14.1.1 Clustering, Alpha Inflation, and the Intraclass Correlation
      2. 14.1.2 Estimating the Intraclass Correlation
    2. 14.2 Analysis of Clustered Data With Ordinary Least Squares Approaches
      1. 14.2.1 Numerical Example, Analysis of Clustered Data With Ordinary Least Squares Regression
    3. 14.3 The Random Coefficient Regression Model
    4. 14.4 Random Coefficient Regression Model and Multilevel Data Structure
      1. 14.4.1 Ordinary Least Squares (Fixed Effects) Regression Revisited
      2. 14.4.2 Fixed and Random Variables
      3. 14.4.3 Clustering and Hierarchically Structured Data
      4. 14.4.4 Structure of the Random Coefficient Regression Model
      5. 14.4.5 Level 1 Equations
      6. 14.4.6 Level 2 Equations
      7. 14.4.7 Mixed Model Equation for Random Coefficient Regression
      8. 14.4.8 Variance Components—New Parameters in the Multilevel Model
      9. 14.4.9 Variance Components and Random Coefficient Versus Ordinary Least Squares (Fixed Effects) Regression
      10. 14.4.10 Parameters of the Random Coefficient Regression Model: Fixed and Random Effects
    5. 14.5 Numerical Example: Analysis of Clustered Data With Random Coefficient Regression
      1. 14.5.1 Unconditional Cell Means Model and the Intraclass Correlation
      2. 14.5.2 Testing the Fixed and Random Parts of the Random Coefficient Regression Model
    6. 14.6 Clustering as a Meaningful Aspect of the Data
    7. 14.7 Multilevel Modeling With a Predictor at Level 2
      1. 14.7.1 Level 1 Equations
      2. 14.7.2 Revised Level 2 Equations
      3. 14.7.3 Mixed Model Equation With Level 1 Predictor and Level 2 Predictor of Intercept and Slope and the Cross-Level Interaction
    8. 14.8 An Experimental Design as a Multilevel Data Structure: Combining Experimental Manipulation With Individual Differences
    9. 14.9 Numerical Example: Multilevel Analysis
    10. 14.10 Estimation of the Multilevel Model Parameters: Fixed Effects, Variance Components, and Level 1 Equations
      1. 14.10.1 Fixed Effects and Variance Components
      2. 14.10.2 An Equation for Each Group: Empirical Bayes Estimates of Level 1 Coefficients
    11. 14.11 Statistical Tests in Multilevel Models
      1. 14.11.1 Fixed Effects
      2. 14.11.2 Variance Components
    12. 14.12 Some Model Specification Issues
      1. 14.12.1 The Same Variable at Two Levels
      2. 14.12.2 Centering in Multilevel Models
    13. 14.13 Statistical Power of Multilevel Models
    14. 14.14 Choosing Between the Fixed Effects Model and the Random Coefficient Model
    15. 14.15 Sources on Multilevel Modeling
    16. 14.16 Multilevel Models Applied to Repeated Measures Data
    17. 14.17 Summary
  22. Chapter 15: Longitudinal Regression Methods
    1. 15.1 Introduction
      1. 15.1.1 Chapter Goals
      2. 15.1.2 Purposes of Gathering Data on Multiple Occasions
    2. 15.2 Analyses of Two-Time-Point Data
      1. 15.2.1 Change or Regressed Change?
      2. 15.2.2 Alternative Regression Models for Effects Over a Single Unit of Time
      3. 15.2.3 Three- or Four-Time-Point Data
    3. 15.3 Repeated Measure Analysis of Variance
      1. 15.3.1 Multiple Error Terms in Repeated Measure Analysis of Variance
      2. 15.3.2 Trend Analysis in Analysis of Variance
      3. 15.3.3 Repeated Measure Analysis of Variance in Which Time Is Not the Issue
    4. 15.4 Multilevel Regression of Individual Changes Over Time
      1. 15.4.1 Patterns of Individual Change Over Time
      2. 15.4.2 Adding Other Fixed Predictors to the Model
      3. 15.4.3 Individual Differences in Variation Around Individual Slopes
      4. 15.4.4 Alternative Developmental Models and Error Structures
      5. 15.4.5 Alternative Link Functions for Predicting Y From Time
      6. 15.4.6 Unbalanced Data: Variable Timing and Missing Data
    5. 15.5 Latent Growth Models: Structural Equation Model Representation of Multilevel Data
      1. 15.5.1 Estimation of Changes in True Scores
      2. 15.5.2 Representation of Latent Growth Models in Structural Equation Model Diagrams
      3. 15.5.3 Comparison of Multilevel Regression and Structural Equation Model Analysis of Change
    6. 15.6 Time Varying Independent Variables
    7. 15.7 Survival Analysis
      1. 15.7.1 Regression Analysis of Time Until Outcome and the Problem of Censoring
      2. 15.7.2 Extension to Time-Varying Independent Variables
      3. 15.7.3 Extension to Multiple Episode Data
      4. 15.7.4 Extension to a Categorical Outcome: Event-History Analysis
    8. 15.8 Time Series Analysis
      1. 15.8.1 Units of Observation in Time Series Analyses
      2. 15.8.2 Time Series Analyses Applications
      3. 15.8.3 Time Effects in Time Series
      4. 15.8.4 Extension of Time Series Analyses to Multiple Units or Subjects
    9. 15.9 Dynamic System Analysis
    10. 15.10 Statistical Inference and Power Analysis in Longitudinal Analyses
    11. 15.11 Summary
  23. Chapter 16: Multiple Dependent Variables: Set Correlation
    1. 16.1 Introduction to Ordinary Least Squares Treatment of Multiple Dependent Variables
      1. 16.1.1 Set Correlation Analysis
      2. 16.1.2 Canonical Analysis
      3. 16.1.3 Elements of Set Correlation
    2. 16.2 Measures of Multivariate Association
      1. 16.2.1 R2Y,X, the Proportion of Generalized Variance
      2. 16.2.2 T2Y,X and P2Y,X, Proportions of Additive Variance
    3. 16.3 Partialing in Set Correlation
      1. 16.3.1 Frequent Reasons for Partialing Variable Sets From the Basic Sets
      2. 16.3.2 The Five Types of Association Between Basic Y and X Sets
    4. 16.4 Tests of Statistical Significance and Statistical Power
      1. 16.4.1 Testing the Null Hypothesis
      2. 16.4.2 Estimators of the Population R2Y,X, T2Y,X and P2Y,X
      3. 16.4.3 Guarding Against Type I Error Inflation
    5. 16.5 Statistical Power Analysis in Set Correlation
    6. 16.6 Comparison of Set Correlation With Multiple Analysis of Variance
    7. 16.7 New Analytic Possibilities With Set Correlation
    8. 16.8 Illustrative Examples
      1. 16.8.1 A Simple Whole Association
      2. 16.8.2 A Multivariate Analysis of Partial Variance
      3. 16.8.3 A Hierarchical Analysis of a Quantitative Set and Its Unique Components
      4. 16.8.4 Bipartial Association Among Three Sets
    9. 16.9 Summary
  24. Appendices
    1. Appendix 1: The Mathematical Basis for Multiple Regression/Correlation and Identification of the Inverse Matrix Elements
      1. A1.1 Alternative Matrix Methods
      2. A1.2 Determinants
    2. Appendix 2: Determination of the Inverse Matrix and Applications Thereof
      1. A2.1 Hand Calculation of the Multiple Regression/Correlation Problem
      2. A2.2 Testing the Difference Between Partial βs and Bs From the Same Sample
      3. A2.3 Testing the Difference Between βs for Different Dependent Variables From a Single Sample
    3. Appendix Tables
      1. Table A t Values for α = .01, .05 (Two Tailed)
      2. Table B z′ Transformation of r
      3. Table C Normal Distribution
      4. Table D F Values for α = .01, .05
      5. Table E L Values for α = .01, .05
      6. Table F Power of Significance Test of r at α = .01, .05 (Two Tailed)
      7. Table G n* to Detect r by t Test at α = .01, .05 (Two Tailed)
  25. References
  26. Glossary
  27. Statistical Symbols and Abbreviations
  28. Author Index
  29. Subject Index

Product information

  • Title: Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd Edition
  • Author(s): Jacob Cohen, Patricia Cohen, Stephen G. West, Leona S. Aiken
  • Release date: June 2013
  • Publisher(s): Routledge
  • ISBN: 9781134801015