Book description
JMP Start Statistics: A Guide to Statistics and Data Analysis Using JMP, Fourth Edition, is a complete and orderly introduction to analyzing data using JMP statistical discovery software from SAS. A mix of software manual and statistics text, this book provides hands-on tutorials with just the right amount of conceptual and motivational material to illustrate how to use JMP's intuitive interface for data analysis. Each chapter features concept-specific tutorials, examples, brief reviews of concepts, step-by-step illustrations, and exercises. Written by John Sall, Lee Creighton, and Ann Lehman, this book is a great tool for statistics students or practitioners needing a software-related statistics review.
Table of contents
- Copyright
- Preface
- 1. Preliminaries
- 2. JMP Right In
- 2.1. Hello!
- 2.2. First Session
- 2.3. Modeling Type
- 2.4. The Personality of JMP
- 3. Data Tables, Reports, and Scripts
- 3.1. Overview
- 3.2. The Ins and Outs of a JMP Data Table
- 3.3. Creating a New JMP Table
- 3.4. Moving Data Out of JMP
- 3.5. Working with Graphs and Reports
- 3.6. Juggling Data Tables
- 3.7. The Summary Command
- 3.8. Working with Scripts
- 4. Formula Editor Adventures
- 4.1. Overview
- 4.2. The Formula Editor Window
- 4.3. A Quick Example
- 4.4. Formula Editor: Pieces and Parts
- 4.5. The Keypad Functions
- 4.6. The Formula Display Area
- 4.7. Function Browser Definitions
- 4.8. Tips on Building Formulas
- 4.9. Exercises
- 5. What Are Statistics?
- 5.1. Overview
- 5.2. Ponderings
- 5.3. Preparations
- 5.4. Statistical Terms
- 5.4.1. Model
- 5.4.2. Parameters
- 5.4.3. Hypotheses
- 5.4.4. Two-Sided versus One-Sided, Two-Tailed versus One-Tailed
- 5.4.5. Statistical Significance
- 5.4.6. Significance Level, p-value, α-level
- 5.4.7. Power, β-level
- 5.4.8. Confidence Intervals
- 5.4.9. Biased, Unbiased
- 5.4.10. Sample Mean versus True Mean
- 5.4.11. Variance and Standard Deviation, Standard Error
- 5.4.12. Degrees of Freedom
- 6. Simulations
- 7. Univariate Distributions: One Variable, One Sample
- 7.1. Overview
- 7.2. Looking at Distributions
- 7.3. Describing Distributions of Values
- 7.4. Statistical Inference on the Mean
- 7.4.1. Standard Error of the Mean
- 7.4.2. Confidence Intervals for the Mean
- 7.4.3. Testing Hypotheses: Terminology
- 7.4.4. The Normal z-Test for the Mean
- 7.4.5. Case Study: The Earth's Ecliptic
- 7.4.6. Student's t-Test
- 7.4.7. Comparing the Normal and Student's t Distributions
- 7.4.8. Testing the Mean
- 7.4.9. The p-Value Animation
- 7.4.10. Power of the t-Test
- 7.5. Practical Significance vs. Statistical Significance
- 7.6. Examining for Normality
- 7.7. Special Topic: Practical Difference
- 7.8. Special Topic: Simulating the Central Limit Theorem
- 7.9. Seeing Kernel Density Estimates
- 7.10. Exercises
- 8. The Difference between Two Means
- 8.1. Overview
- 8.2. Two Independent Groups
- 8.2.1. When the Difference Isn't Significant
- 8.2.2. Check the Data
- 8.2.3. Launch the Fit Y by X Platform
- 8.2.4. Examine the Plot
- 8.2.5. Display and Compare the Means
- 8.2.6. Inside the Student's t-Test
- 8.2.7. Equal or Unequal Variances?
- 8.2.8. One-Sided Version of the Test
- 8.2.9. Analysis of Variance and the All-Purpose F-Test
- 8.2.10. How Sensitive Is the Test?
- 8.2.11. How Many More Observations Are Needed?
- 8.2.12. When the Difference Is Significant
- 8.3. Normality and Normal Quantile Plots
- 8.4. Testing Means for Matched Pairs
- 8.5. The Normality Assumption
- 8.6. Two Extremes of Neglecting the Pairing Situation: A Dramatization
- 8.7. A Nonparametric Approach
- 8.8. Exercises
- 9. Comparing Many Means: One-Way Analysis of Variance
- 9.1. Overview
- 9.2. What Is a One-Way Layout?
- 9.3. Comparing and Testing Means
- 9.4. Means Diamonds: A Graphical Description of Group Means
- 9.5. Statistical Tests to Compare Means
- 9.6. Means Comparisons for Balanced Data
- 9.7. Means Comparisons for Unbalanced Data
- 9.8. Adjusting for Multiple Comparisons
- 9.9. Are the Variances Equal Across the Groups?
- 9.10. Nonparametric Methods
- 9.11. Exercises
- 10. Fitting Curves through Points: Regression
- 10.1. Overview
- 10.2. Regression
- 10.2.1. Least Squares
- 10.2.2. Seeing Least Squares
- 10.2.3. Fitting a Line and Testing the Slope
- 10.2.4. Testing the Slope by Comparing Models
- 10.2.5. The Distribution of the Parameter Estimates
- 10.2.6. Confidence Intervals on the Estimates
- 10.2.7. Examine Residuals
- 10.2.8. Exclusion of Rows
- 10.2.9. Time to Clean Up
- 10.3. Polynomial Models
- 10.4. Transformed Fits
- 10.5. Are Graphics Important?
- 10.6. Why It's Called Regression
- 10.7. What Happens When X and Y Are Switched?
- 10.8. Curiosities
- 10.9. Exercises
- 11. Categorical Distributions
- 11.1. Overview
- 11.2. Categorical Situations
- 11.3. Categorical Responses and Count Data: Two Outlooks
- 11.4. A Simulated Categorical Response
- 11.5. The X² Pearson Chi-Square Test Statistic
- 11.6. The G² Likelihood-Ratio Chi-Square Test Statistic
- 11.7. Univariate Categorical Chi-Square Tests
- 11.8. Exercises
- 12. Categorical Models
- 12.1. Overview
- 12.2. Fitting Categorical Responses to Categorical Factors: Contingency Tables
- 12.3. Two-Way Tables: Entering Count Data
- 12.4. If You Have a Perfect Fit
- 12.5. Special Topic: Correspondence Analysis—Looking at Data with Many Levels
- 12.6. Continuous Factors with Categorical Responses: Logistic Regression
- 12.7. Surprise! Simpson's Paradox: Aggregate Data versus Grouped Data
- 12.8. Generalized Linear Models
- 12.9. Exercises
- 13. Multiple Regression
- 14. Fitting Linear Models
- 14.1. Overview
- 14.2. The General Linear Model
- 14.2.1. Kinds of Effects in Linear Models
- 14.2.2. Coding Scheme to Fit a One-Way ANOVA as a Linear Model
- 14.2.3. Regressor Construction
- 14.2.4. Interpretation of Parameters
- 14.2.5. Predictions Are the Means
- 14.2.6. Parameters and Means
- 14.2.7. Analysis of Covariance: Putting Continuous and Classification Terms into the Same Model
- 14.2.8. The Prediction Equation
- 14.2.9. The Whole-Model Test and Leverage Plot
- 14.2.10. Effect Tests and Leverage Plots
- 14.2.11. Least Squares Means
- 14.2.12. Lack of Fit
- 14.2.13. Separate Slopes: When the Covariate Interacts with the Classification Effect
- 14.3. Two-Way Analysis of Variance and Interactions
- 14.4. Optional Topic: Random Effects and Nested Effects
- 14.5. Exercises
- 15. Bivariate and Multivariate Relationships
- 16. Design of Experiments
- 16.1. Overview
- 16.2. Introduction
- 16.3. JMP DOE
- 16.4. A Simple Design
- 16.4.1. The Experiment
- 16.4.2. The Response
- 16.4.3. The Factors
- 16.4.4. The Budget
- 16.4.5. Enter and Name the Factors
- 16.4.6. Define the Model
- 16.4.7. Is the Design Balanced?
- 16.4.8. Perform Experiment and Enter Data
- 16.4.9. Analyze the Model
- 16.4.10. Details of the Design
- 16.4.11. Using the Custom Designer
- 16.4.12. Using the Screening Platform
- 16.5. Screening for Interactions: The Reactor Data
- 16.6. Response Surface Designs
- 16.7. Design Issues
- 16.8. Routine Screening Examples
- 16.9. Design Strategies Glossary
- 17. Exploratory Modeling
- 18. Discriminant and Cluster Analysis
- 19. Statistical Quality Control
- 19.1. Overview
- 19.2. Control Charts and Shewhart Charts
- 19.3. The Control Chart Launch Dialog
- 19.3.1. Process Information
- 19.3.2. Chart Type Information
- 19.3.3. Limits Specification Panel
- 19.3.4. Using Known Statistics
- 19.3.5. Types of Control Charts for Variables
- 19.3.6. Types of Control Charts for Attributes
- 19.3.7. Moving Average Charts
- 19.3.8. Levey-Jennings Plots
- 19.3.9. Tailoring the Horizontal Axis
- 19.3.10. Tests for Special Causes
- 19.3.11. Westgard Rules
- 19.4. Multivariate Control Charts
- 20. Time Series
- 20.1. Overview
- 20.2. Introduction
- 20.3. Lagged Values
- 20.4. White Noise
- 20.5. Autoregressive Processes
- 20.6. Estimating the Parameters of an Autoregressive Process
- 20.7. Moving Average Processes
- 20.8. Example of Diagnosing a Time Series
- 20.9. ARMA Models and the Model Comparison Table
- 20.10. Stationarity and Differencing
- 20.11. Seasonal Models
- 20.12. Spectral Density
- 20.13. Forecasting
- 20.14. Exercises
- 21. Machines of Fit
- 21.1. Overview
- 21.2. Springs for Continuous Responses
- 21.2.1. Fitting a Mean
- 21.2.2. Testing a Hypothesis
- 21.2.3. One-Way Layout
- 21.2.4. Effect of Sample Size on Significance
- 21.2.5. Effect of Error Variance on Significance
- 21.2.6. Experimental Design's Effect on Significance
- 21.2.7. Simple Regression
- 21.2.8. Leverage
- 21.2.9. Multiple Regression
- 21.2.10. Summary: Significance and Power
- 21.3. Machine of Fit for Categorical Responses
- References and Data Sources
- Answers to Selected Exercises
- Chapter 4, "Formula Editor Adventures"
- Chapter 7, "Univariate Distributions: One Variable, One Sample"
- Chapter 8, "The Difference between Two Means"
- Chapter 9, "Comparing Many Means: One-Way Analysis of Variance"
- Chapter 10, "Fitting Curves through Points: Regression"
- Chapter 11, "Categorical Distributions"
- Chapter 12, "Categorical Models"
- Chapter 13, "Multiple Regression"
- Chapter 14, "Fitting Linear Models"
- Chapter 15, "Bivariate and Multivariate Relationships"
- Chapter 17, "Exploratory Modeling"
- Chapter 18, "Discriminant and Cluster Analysis"
- Chapter 20, "Time Series"
- Technology License Notices
Product information
- Title: JMP® Start Statistics: A Guide to Statistics and Data Analysis Using JMP®, 4th Edition
- Author(s): John Sall, Lee Creighton, and Ann Lehman
- Release date: September 2007
- Publisher(s): SAS Institute
- ISBN: 9781599945729