JMP® Start Statistics: A Guide to Statistics and Data Analysis Using JMP®, 4th Edition

Book description

JMP Start Statistics: A Guide to Statistics and Data Analysis Using JMP, Fourth Edition, is a complete and orderly introduction to analyzing data using JMP statistical discovery software from SAS. A mix of software manual and statistics text, this book provides hands-on tutorials with just the right amount of conceptual and motivational material to illustrate how to use JMP's intuitive interface for data analysis. Each chapter features concept-specific tutorials, examples, brief reviews of concepts, step-by-step illustrations, and exercises. Written by John Sall, Lee Creighton, and Ann Lehman, this book is a great tool for statistics students or practitioners needing a software-related statistics review.

Table of contents

  1. Copyright
  2. Preface
    1. The Software
    2. JMP Start Statistics, Fourth Edition
    3. SAS
    4. This Book
  3. 1. Preliminaries
    1. 1.1. What You Need to Know
      1. 1.1.1. ...about your computer
      2. 1.1.2. ...about statistics
    2. 1.2. Learning About JMP
      1. 1.2.1. ...on your own with JMP Help
      2. 1.2.2. ...hands-on examples
      3. 1.2.3. ...using Tutorials
      4. 1.2.4. ...reading about JMP
    3. 1.3. Chapter Organization
    4. 1.4. Typographical Conventions
  4. 2. JMP Right In
    1. 2.1. Hello!
    2. 2.2. First Session
      1. 2.2.1. Open a JMP Data Table
      2. 2.2.2. Launch an Analysis Platform
      3. 2.2.3. Interact with the Surface of the Report
        1. 2.2.3.1. Row Highlighting
        2. 2.2.3.2. Disclosure Icons
        3. 2.2.3.3. Contextual Popup Menus
        4. 2.2.3.4. Resizing Graphs
      4. 2.2.4. Special Tools
    3. 2.3. Modeling Type
      1. 2.3.1. Analyze and Graph
      2. 2.3.2. The Analyze Menu
        1. 2.3.2.1. Modeling
        2. 2.3.2.2. Multivariate Methods
        3. 2.3.2.3. Survival and Reliability
      3. 2.3.3. The Graph Menu
      4. 2.3.4. Navigating Platforms and Building Context
      5. 2.3.5. Contexts for a Histogram
      6. 2.3.6. Contexts for the t-Test
      7. 2.3.7. Contexts for a Scatterplot
      8. 2.3.8. Contexts for Nonparametric Statistics
    4. 2.4. The Personality of JMP
  5. 3. Data Tables, Reports, and Scripts
    1. 3.1. Overview
    2. 3.2. The Ins and Outs of a JMP Data Table
      1. 3.2.1. Selecting and Deselecting Rows and Columns
      2. 3.2.2. Mousing Around a Spreadsheet: Cursor Forms
        1. 3.2.2.1. Arrow cursor
        2. 3.2.2.2. I-beam cursor
        3. 3.2.2.3. Selection cursor
        4. 3.2.2.4. Double Arrow cursor
        5. 3.2.2.5. List Check and Range Check cursors
        6. 3.2.2.6. Popup Pointer cursor
    3. 3.3. Creating a New JMP Table
      1. 3.3.1. Define Rows and Columns
        1. 3.3.1.1. Add Columns
        2. 3.3.1.2. Set Column Characteristics
        3. 3.3.1.3. Add Rows
      2. 3.3.2. Enter Data
      3. 3.3.3. The New Column Command
      4. 3.3.4. Plot the Data
      5. 3.3.5. Importing Data
      6. 3.3.6. Importing Text Files
      7. 3.3.7. Importing Microsoft Excel Files
      8. 3.3.8. Using ODBC
      9. 3.3.9. Opening Other File Types
      10. 3.3.10. Copy, Paste, and Drag Data
        1. 3.3.10.1. Copy
        2. 3.3.10.2. Paste
        3. 3.3.10.3. Drag
    4. 3.4. Moving Data Out of JMP
      1. 3.4.1. Windows
      2. 3.4.2. Macintosh
      3. 3.4.3. Linux
    5. 3.5. Working with Graphs and Reports
      1. 3.5.1. Copy and Paste
      2. 3.5.2. Drag Report Elements
      3. 3.5.3. Context Menu Commands
        1. 3.5.3.1. Context Commands for Report Tables
    6. 3.6. Juggling Data Tables
      1. 3.6.1. Data Management
      2. 3.6.2. Give New Shape to a Table: Stack Columns
    7. 3.7. The Summary Command
      1. 3.7.1. Create a Table of Summary Statistics
    8. 3.8. Working with Scripts
      1. 3.8.1. Opening and Running Scripts on Windows and Linux
      2. 3.8.2. Opening and Running Scripts on the Macintosh
  6. 4. Formula Editor Adventures
    1. 4.1. Overview
    2. 4.2. The Formula Editor Window
    3. 4.3. A Quick Example
    4. 4.4. Formula Editor: Pieces and Parts
      1. 4.4.1. Terminology
      2. 4.4.2. The Formula Editor Control Panel
    5. 4.5. The Keypad Functions
      1. 4.5.1. Section21
        1. 4.5.1.1. Arithmetic keys
        2. 4.5.1.2. Insert and Delete keys
        3. 4.5.1.3. Raise to a Power
        4. 4.5.1.4. Root
        5. 4.5.1.5. Switch Terms
        6. 4.5.1.6. Unary Sign Function
        7. 4.5.1.7. Local Variable Assignment Key
        8. 4.5.1.8. Peel Expression
    6. 4.6. The Formula Display Area
    7. 4.7. Function Browser Definitions
      1. 4.7.1. Row Function Examples
        1. 4.7.1.1. Lag(column,n)
        2. 4.7.1.2. Row()
        3. 4.7.1.3. Dif(column,n)
        4. 4.7.1.4. NRow()
        5. 4.7.1.5. Subscript
        6. 4.7.1.6. Using a Subscript
      2. 4.7.2. Conditional Expressions and Comparison Operators
        1. 4.7.2.1. Using the If function
        2. 4.7.2.2. Using the Match Function
      3. 4.7.3. Summarize Down Columns or Across Rows
        1. 4.7.3.1. The Quantile Function
        2. 4.7.3.2. Using the Summation Function
      4. 4.7.4. Random Number Functions
        1. 4.7.4.1. The Uniform Distribution
        2. 4.7.4.2. The Normal Distribution
        3. 4.7.4.3. The Col Shuffle Command
        4. 4.7.4.4. Local Variables and Table Variables
    8. 4.8. Tips on Building Formulas
      1. 4.8.1. Examining Expression Values
      2. 4.8.2. Cutting, Dragging, and Pasting Formulas
      3. 4.8.3. Selecting Expressions
      4. 4.8.4. Tips on Editing a Formula
    9. 4.9. Exercises
  7. 5. What Are Statistics?
    1. 5.1. Overview
    2. 5.2. Ponderings
      1. 5.2.1. The Business of Statistics
      2. 5.2.2. The Yin and Yang of Statistics
      3. 5.2.3. The Faces of Statistics
      4. 5.2.4. Don't Panic
        1. 5.2.4.1. Abstract Mathematics
        2. 5.2.4.2. Lingo
        3. 5.2.4.3. Awkward Phrasing
        4. 5.2.4.4. A Bad Reputation
        5. 5.2.4.5. Uncertainty
    3. 5.3. Preparations
      1. 5.3.1. Three Levels of Uncertainty
        1. 5.3.1.1. Random Events
        2. 5.3.1.2. Unknown Parameters
        3. 5.3.1.3. Unknown Models
      2. 5.3.2. Probability and Randomness
      3. 5.3.3. Assumptions
        1. 5.3.3.1. Ceteris Paribus
        2. 5.3.3.2. Is the Model Correct?
        3. 5.3.3.3. Is the Sample Valid?
      4. 5.3.4. Data Mining?
    4. 5.4. Statistical Terms
      1. 5.4.1. Model
      2. 5.4.2. Parameters
      3. 5.4.3. Hypotheses
      4. 5.4.4. Two-Sided versus One-Sided, Two-Tailed versus One-Tailed
      5. 5.4.5. Statistical Significance
      6. 5.4.6. Significance Level, p-value, α-level
      7. 5.4.7. Power, β-level
      8. 5.4.8. Confidence Intervals
      9. 5.4.9. Biased, Unbiased
      10. 5.4.10. Sample Mean versus True Mean
      11. 5.4.11. Variance and Standard Deviation, Standard Error
      12. 5.4.12. Degrees of Freedom
  8. 6. Simulations
    1. 6.1. Overview
    2. 6.2. Rolling Dice
      1. 6.2.1. Rolling Several Dice
      2. 6.2.2. Flipping Coins, Sampling Candy, or Drawing Marbles
    3. 6.3. Probability of Making a Triangle
    4. 6.4. Confidence Intervals
  9. 7. Univariate Distributions: One Variable, One Sample
    1. 7.1. Overview
    2. 7.2. Looking at Distributions
      1. 7.2.1. Probability Distributions
      2. 7.2.2. True Distribution Function or Real-World Sample Distribution
      3. 7.2.3. The Normal Distribution
    3. 7.3. Describing Distributions of Values
      1. 7.3.1. Generating Random Data
      2. 7.3.2. Histograms
      3. 7.3.3. Stem-and-Leaf Plots
      4. 7.3.4. Outlier and Quantile Box Plots
      5. 7.3.5. Mean and Standard Deviation
      6. 7.3.6. Median and Other Quantiles
      7. 7.3.7. Mean versus Median
      8. 7.3.8. Higher Moments: Skewness and Kurtosis
      9. 7.3.9. Extremes, Tail Detail
    4. 7.4. Statistical Inference on the Mean
      1. 7.4.1. Standard Error of the Mean
      2. 7.4.2. Confidence Intervals for the Mean
      3. 7.4.3. Testing Hypotheses: Terminology
      4. 7.4.4. The Normal z-Test for the Mean
      5. 7.4.5. Case Study: The Earth's Ecliptic
      6. 7.4.6. Student's t-Test
      7. 7.4.7. Comparing the Normal and Student's t Distributions
      8. 7.4.8. Testing the Mean
      9. 7.4.9. The p-Value Animation
      10. 7.4.10. Power of the t-Test
    5. 7.5. Practical Significance vs. Statistical Significance
    6. 7.6. Examining for Normality
      1. 7.6.1. Normal Quantile Plots
      2. 7.6.2. Statistical Tests for Normality
    7. 7.7. Special Topic: Practical Difference
    8. 7.8. Special Topic: Simulating the Central Limit Theorem
    9. 7.9. Seeing Kernel Density Estimates
    10. 7.10. Exercises
  10. 8. The Difference between Two Means
    1. 8.1. Overview
    2. 8.2. Two Independent Groups
      1. 8.2.1. When the Difference Isn't Significant
      2. 8.2.2. Check the Data
      3. 8.2.3. Launch the Fit Y by X Platform
      4. 8.2.4. Examine the Plot
      5. 8.2.5. Display and Compare the Means
      6. 8.2.6. Inside the Student's t-Test
      7. 8.2.7. Equal or Unequal Variances?
      8. 8.2.8. One-Sided Version of the Test
      9. 8.2.9. Analysis of Variance and the All-Purpose F-Test
      10. 8.2.10. How Sensitive Is the Test?
      11. 8.2.11. How Many More Observations Are Needed?
      12. 8.2.12. When the Difference Is Significant
    3. 8.3. Normality and Normal Quantile Plots
    4. 8.4. Testing Means for Matched Pairs
      1. 8.4.1. Thermometer Tests
      2. 8.4.2. Look at the Data
      3. 8.4.3. Look at the Distribution of the Difference
      4. 8.4.4. Student's t-Test
      5. 8.4.5. The Matched Pairs Platform for a Paired t-Test
      6. 8.4.6. Optional Topic: An Equivalent Test for Stacked Data
    5. 8.5. The Normality Assumption
    6. 8.6. Two Extremes of Neglecting the Pairing Situation: A Dramatization
    7. 8.7. A Nonparametric Approach
      1. 8.7.1. Introduction to Nonparametric Methods
      2. 8.7.2. Paired Means: The Wilcoxon Signed-Rank Test
      3. 8.7.3. Independent Means: The Wilcoxon Rank Sum Test
    8. 8.8. Exercises
  11. 9. Comparing Many Means: One-Way Analysis of Variance
    1. 9.1. Overview
    2. 9.2. What Is a One-Way Layout?
    3. 9.3. Comparing and Testing Means
    4. 9.4. Means Diamonds: A Graphical Description of Group Means
    5. 9.5. Statistical Tests to Compare Means
    6. 9.6. Means Comparisons for Balanced Data
    7. 9.7. Means Comparisons for Unbalanced Data
    8. 9.8. Adjusting for Multiple Comparisons
    9. 9.9. Are the Variances Equal Across the Groups?
      1. 9.9.1. Testing Means with Unequal Variances
    10. 9.10. Nonparametric Methods
      1. 9.10.1. Review of Rank-Based Nonparametric Methods
      2. 9.10.2. The Three Rank Tests in JMP
    11. 9.11. Exercises
  12. 10. Fitting Curves through Points: Regression
    1. 10.1. Overview
    2. 10.2. Regression
      1. 10.2.1. Least Squares
      2. 10.2.2. Seeing Least Squares
      3. 10.2.3. Fitting a Line and Testing the Slope
      4. 10.2.4. Testing the Slope by Comparing Models
        1. 10.2.4.1. C Total
        2. 10.2.4.2. Error
        3. 10.2.4.3. Model
        4. 10.2.4.4. Mean Square
        5. 10.2.4.5. Root Mean Square Error
      5. 10.2.5. The Distribution of the Parameter Estimates
        1. 10.2.5.1. Std Error
        2. 10.2.5.2. t-Ratio
        3. 10.2.5.3. Prob>|t|
      6. 10.2.6. Confidence Intervals on the Estimates
      7. 10.2.7. Examine Residuals
      8. 10.2.8. Exclusion of Rows
      9. 10.2.9. Time to Clean Up
    3. 10.3. Polynomial Models
      1. 10.3.1. Look at the Residuals
      2. 10.3.2. Higher-Order Polynomials
      3. 10.3.3. Distribution of Residuals
    4. 10.4. Transformed Fits
      1. 10.4.1. Spline Fit
    5. 10.5. Are Graphics Important?
    6. 10.6. Why It's Called Regression
    7. 10.7. What Happens When X and Y Are Switched?
    8. 10.8. Curiosities
      1. 10.8.1. Sometimes It's the Picture That Fools You
      2. 10.8.2. High-Order Polynomial Pitfall
      3. 10.8.3. The Pappus Mystery on the Obliquity of the Ecliptic
    9. 10.9. Exercises
  13. 11. Categorical Distributions
    1. 11.1. Overview
    2. 11.2. Categorical Situations
    3. 11.3. Categorical Responses and Count Data: Two Outlooks
    4. 11.4. A Simulated Categorical Response
      1. 11.4.1. Simulating Some Categorical Response Data
      2. 11.4.2. Variability in the Estimates
      3. 11.4.3. Larger Sample Sizes
      4. 11.4.4. Monte Carlo Simulations for the Estimators
      5. 11.4.5. Distribution of the Estimates
    5. 11.5. The X² Pearson Chi-Square Test Statistic
    6. 11.6. The G² Likelihood-Ratio Chi-Square Test Statistic
      1. 11.6.1. Likelihood Ratio Tests
      2. 11.6.2. The G² Likelihood Ratio Chi-Square Test
    7. 11.7. Univariate Categorical Chi-Square Tests
      1. 11.7.1. Comparing Univariate Distributions
      2. 11.7.2. Charting to Compare Results
    8. 11.8. Exercises
  14. 12. Categorical Models
    1. 12.1. Overview
    2. 12.2. Fitting Categorical Responses to Categorical Factors: Contingency Tables
      1. 12.2.1. Testing with G² and X²
      2. 12.2.2. Looking at Survey Data
        1. 12.2.2.1. Contingency Table: Country by Sex
        2. 12.2.2.2. Mosaic Plot
        3. 12.2.2.3. Testing Marginal Homogeneity
      3. 12.2.3. Car Brand by Marital Status
      4. 12.2.4. Car Brand by Size of Vehicle
    3. 12.3. Two-Way Tables: Entering Count Data
      1. 12.3.1. Expected Values Under Independence
      2. 12.3.2. Entering Two-Way Data into JMP
      3. 12.3.3. Testing for Independence
    4. 12.4. If You Have a Perfect Fit
    5. 12.5. Special Topic: Correspondence Analysis—Looking at Data with Many Levels
    6. 12.6. Continuous Factors with Categorical Responses: Logistic Regression
      1. 12.6.1. Fitting a Logistic Model
      2. 12.6.2. Degrees of Fit
      3. 12.6.3. A Discriminant Alternative
      4. 12.6.4. Inverse Prediction
      5. 12.6.5. Polytomous Responses: More Than Two Levels
      6. 12.6.6. Ordinal Responses: Cumulative Ordinal Logistic Regression
    7. 12.7. Surprise: Simpson's Paradox: Aggregate Data versus Grouped Data
    8. 12.8. Generalized Linear Models
    9. 12.9. Exercises
  15. 13. Multiple Regression
    1. 13.1. Overview
    2. 13.2. Parts of a Regression Model
      1. 13.2.1. response, Y
      2. 13.2.2. regressors, X's
      3. 13.2.3. coefficients, parameters
      4. 13.2.4. intercept term
      5. 13.2.5. error, residual
    3. 13.3. A Multiple Regression Example
      1. 13.3.1. Residuals and Predicted Values
      2. 13.3.2. The Analysis of Variance Table
      3. 13.3.3. The Whole Model F-Test
      4. 13.3.4. Whole-Model Leverage Plot
      5. 13.3.5. Details on Effect Tests
      6. 13.3.6. Effect Leverage Plots
    4. 13.4. Collinearity
      1. 13.4.1. Exact Collinearity, Singularity, Linear Dependency
    5. 13.5. The Longley Data: An Example of Collinearity
    6. 13.6. The Case of the Hidden Leverage Point
    7. 13.7. Mining Data with Stepwise Regression
    8. 13.8. Exercises
  16. 14. Fitting Linear Models
    1. 14.1. Overview
    2. 14.2. The General Linear Model
      1. 14.2.1. Kinds of Effects in Linear Models
        1. 14.2.1.1. Intercept term
        2. 14.2.1.2. Continuous effects
        3. 14.2.1.3. Categorical effects
        4. 14.2.1.4. Interactions
        5. 14.2.1.5. Nested effects
      2. 14.2.2. Coding Scheme to Fit a One-Way ANOVA as a Linear Model
      3. 14.2.3. Regressor Construction
      4. 14.2.4. Interpretation of Parameters
      5. 14.2.5. Predictions Are the Means
      6. 14.2.6. Parameters and Means
      7. 14.2.7. Analysis of Covariance: Putting Continuous and Classification Terms into the Same Model
      8. 14.2.8. The Prediction Equation
      9. 14.2.9. The Whole-Model Test and Leverage Plot
      10. 14.2.10. Effect Tests and Leverage Plots
      11. 14.2.11. Least Squares Means
      12. 14.2.12. Lack of Fit
      13. 14.2.13. Separate Slopes: When the Covariate Interacts with the Classification Effect
    3. 14.3. Two-Way Analysis of Variance and Interactions
    4. 14.4. Optional Topic: Random Effects and Nested Effects
      1. 14.4.1. Nesting
      2. 14.4.2. Repeated Measures
      3. 14.4.3. Method 1: Random Effects, Mixed Model
      4. 14.4.4. Method 2: Reduction to the Experimental Unit
      5. 14.4.5. Method 3: Correlated Measurements, Multivariate Model
      6. 14.4.6. Varieties of Analysis
      7. 14.4.7. Summary
    5. 14.5. Exercises
  17. 15. Bivariate and Multivariate Relationships
    1. 15.1. Overview
    2. 15.2. Bivariate Distributions
    3. 15.3. Density Estimation
      1. 15.3.1. Bivariate Density Estimation
      2. 15.3.2. Mixtures, Modes, and Clusters
      3. 15.3.3. The Elliptical Contours of the Normal Distribution
    4. 15.4. Correlations and the Bivariate Normal
      1. 15.4.1. Simulation Exercise
      2. 15.4.2. Correlations Across Many Variables
      3. 15.4.3. Bivariate Outliers
    5. 15.5. Three and More Dimensions
      1. 15.5.1. Principal Components
      2. 15.5.2. Principal Components for Six Variables
      3. 15.5.3. Correlation Patterns in Biplots
      4. 15.5.4. Outliers in Six Dimensions
    6. 15.6. Summary
    7. 15.7. Exercises
  18. 16. Design of Experiments
    1. 16.1. Overview
    2. 16.2. Introduction
      1. 16.2.1. Experimentation Is Learning
      2. 16.2.2. Controlling Experimental Conditions Is Essential
      3. 16.2.3. Experiments Manage Random Variation within a Statistical Framework
    3. 16.3. JMP DOE
    4. 16.4. A Simple Design
      1. 16.4.1. The Experiment
      2. 16.4.2. The Response
      3. 16.4.3. The Factors
      4. 16.4.4. The Budget
      5. 16.4.5. Enter and Name the Factors
      6. 16.4.6. Define the Model
      7. 16.4.7. Is the Design Balanced?
      8. 16.4.8. Perform Experiment and Enter Data
        1. 16.4.8.1. Examine the Response Data
      9. 16.4.9. Analyze the Model
      10. 16.4.10. Details of the Design
        1. 16.4.10.1. Confounding Structure
      11. 16.4.11. Using the Custom Designer
        1. 16.4.11.1. Modify a Design Interactively
        2. 16.4.11.2. How the Custom Designer Works
      12. 16.4.12. Using the Screening Platform
        1. 16.4.12.1. Contrasts and p-values
        2. 16.4.12.2. Half-Normal Plot
    5. 16.5. Screening for Interactions: The Reactor Data
    6. 16.6. Response Surface Designs
      1. 16.6.1. The Experiment
      2. 16.6.2. Response Surface Designs in JMP
      3. 16.6.3. Plotting Surface Effects
      4. 16.6.4. Designating RSM Designs Manually
      5. 16.6.5. The Prediction Variance Profiler
        1. 16.6.5.1. A Quadratic Model
        2. 16.6.5.2. A Cubic Model
    7. 16.7. Design Issues
    8. 16.8. Routine Screening Examples
      1. 16.8.1. Main Effects Only
        1. 16.8.1.1. All Two-Factor Interactions Involving Only One Factor
        2. 16.8.1.2. All Two-Factor Interactions
    9. 16.9. Design Strategies Glossary
  19. 17. Exploratory Modeling
    1. 17.1. Overview
    2. 17.2. The Partition Platform
      1. 17.2.1. Modeling with Recursive Trees
      2. 17.2.2. Viewing Large Trees
      3. 17.2.3. Saving Results
    3. 17.3. Neural Networks
      1. 17.3.1. Modeling with Neural Networks
      2. 17.3.2. Profiles in Neural Nets
      3. 17.3.3. Using Cross-Validation
      4. 17.3.4. Saving Columns
    4. 17.4. Exercises
  20. 18. Discriminant and Cluster Analysis
    1. 18.1. Overview
    2. 18.2. Discriminant Analysis
      1. 18.2.1. Canonical Plot
      2. 18.2.2. Discriminant Scores
    3. 18.3. Cluster Analysis
      1. 18.3.1. A Real-World Example
    4. 18.4. Exercises
  21. 19. Statistical Quality Control
    1. 19.1. Overview
    2. 19.2. Control Charts and Shewhart Charts
      1. 19.2.1. Variables Charts
      2. 19.2.2. Attributes Charts
    3. 19.3. The Control Chart Launch Dialog
      1. 19.3.1. Process Information
        1. 19.3.1.1. Process
        2. 19.3.1.2. Sample Label
      2. 19.3.2. Chart Type Information
      3. 19.3.3. Limits Specification Panel
        1. 19.3.3.1. K Sigma
        2. 19.3.3.2. Alpha
      4. 19.3.4. Using Known Statistics
      5. 19.3.5. Types of Control Charts for Variables
        1. 19.3.5.1. Mean, R, and S Charts
        2. 19.3.5.2. Individual Measurement and Moving Range Charts
      6. 19.3.6. Types of Control Charts for Attributes
        1. 19.3.6.1. p- and np-Charts
        2. 19.3.6.2. u-Charts
      7. 19.3.7. Moving Average Charts
        1. 19.3.7.1. Uniformly Weighted Moving Average (UWMA) Charts
        2. 19.3.7.2. Exponentially Weighted Moving Average (EWMA) Chart
      8. 19.3.8. Levey-Jennings Plots
      9. 19.3.9. Tailoring the Horizontal Axis
      10. 19.3.10. Tests for Special Causes
      11. 19.3.11. Westgard Rules
    4. 19.4. Multivariate Control Charts
  22. 20. Time Series
    1. 20.1. Overview
    2. 20.2. Introduction
    3. 20.3. Lagged Values
      1. 20.3.1. Testing for Autocorrelation
    4. 20.4. White Noise
    5. 20.5. Autoregressive Processes
      1. 20.5.1. Correlation Plots of AR Series
    6. 20.6. Estimating the Parameters of an Autoregressive Process
    7. 20.7. Moving Average Processes
      1. 20.7.1. Correlation Plots of MA Series
    8. 20.8. Example of Diagnosing a Time Series
    9. 20.9. ARMA Models and the Model Comparison Table
    10. 20.10. Stationarity and Differencing
    11. 20.11. Seasonal Models
    12. 20.12. Spectral Density
    13. 20.13. Forecasting
    14. 20.14. Exercises
  23. 21. Machines of Fit
    1. 21.1. Overview
    2. 21.2. Springs for Continuous Responses
      1. 21.2.1. Fitting a Mean
      2. 21.2.2. Testing a Hypothesis
      3. 21.2.3. One-Way Layout
      4. 21.2.4. Effect of Sample Size on Significance
      5. 21.2.5. Effect of Error Variance on Significance
      6. 21.2.6. Experimental Design's Effect on Significance
      7. 21.2.7. Simple Regression
      8. 21.2.8. Leverage
      9. 21.2.9. Multiple Regression
      10. 21.2.10. Summary: Significance and Power
    3. 21.3. Machine of Fit for Categorical Responses
      1. 21.3.1. How Do Pressure Cylinders Behave?
      2. 21.3.2. Estimating Probabilities
      3. 21.3.3. One-Way Layout for Categorical Data
      4. 21.3.4. Logistic Regression
  24. References and Data Sources
  25. Answers to Selected Exercises
    1. 21.4. Chapter 4, "Formula Editor Adventures"
    2. 21.5. Chapter 7, "Univariate Distributions: One Variable, One Sample"
    3. 21.6. Chapter 8, "The Difference between Two Means"
    4. 21.7. Chapter 9, "Comparing Many Means: One-Way Analysis of Variance"
    5. 21.8. Chapter 10, "Fitting Curves through Points: Regression"
    6. 21.9. Chapter 11, "Categorical Distributions"
    7. 21.10. Chapter 12, "Categorical Models"
    8. 21.11. Chapter 13, "Multiple Regression"
    9. 21.12. Chapter 14, "Fitting Linear Models"
    10. 21.13. Chapter 15, "Bivariate and Multivariate Relationships"
    11. 21.14. Chapter 17, "Exploratory Modeling"
    12. 21.15. Chapter 18, "Discriminant and Cluster Analysis"
    13. 21.16. Chapter 20, "Time Series"
  26. Technology License Notices

Product information

  • Title: JMP® Start Statistics: A Guide to Statistics and Data Analysis Using JMP®, 4th Edition
  • Author(s): John Sall, Lee Creighton, Ann Lehman
  • Release date: September 2007
  • Publisher(s): SAS Institute
  • ISBN: 9781599945729