JMP Example 5

Presentation transcript:

JMP Example 5 Three samples of size 10 are taken from an API (Active Pharmaceutical Ingredient) plant. The first one was taken at a batch reactor pressure of 3 bar, the second at 3.5 bar, and the final at 4 bar. Use regression analysis to build a model describing the effect of pressure on the yield of the API, using a squared term if necessary.
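The slides enter these values directly into a JMP data table. For readers following along outside JMP, here is a minimal sketch of the same 3 × 10 layout in Python (pandas and numpy are my assumption, not part of the original tutorial; the column names Pressure_bar and Yield are hypothetical, and the yield values are placeholders because the individual measurements are not listed in the slides):

```python
# Hypothetical layout of the 30 observations (3 pressures x 10 replicates).
# The yield values are placeholders only -- substitute the actual measurements
# from the API plant; the original slides do not list them.
import numpy as np
import pandas as pd

pressures = np.repeat([3.0, 3.5, 4.0], 10)   # batch reactor pressure, bar
yields = np.full(30, np.nan)                 # replace with the measured API yields

data = pd.DataFrame({"Pressure_bar": pressures, "Yield": yields})
print(data.head())
```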

As has been shown in previous examples, open JMP and enter the data. Note: if you would like to see more of the data table, you can maximize the window by left-clicking the maximize button (the middle box) in the upper right-hand corner of any window.

This is what the maximized window looks like in JMP. Although it is useful to see all the data in the data table, sometimes it is easier to keep the window at its original size, for easy access to other operations.

Again, we will be using the “Fit Model” operation on this data.

As was shown in the previous example, make the model specifications: assign the yield column as the response (Y) and "Pressure (bar)" as a model effect.

And, as before, we run the model.

The primary purpose of this example is to show how a regression model sometimes needs to be altered to account for the data. Varying the pressure only from 3 bar to 4 bar would have produced a very simple straight-line regression model. However, by including a 'centre-point' such as 3.5 bar, we can see that this straight-line model is definitely not appropriate across the whole range of pressure. This concept will crop up again in the Experimental Design section of the course. Note the very small "R-Square" value as well; this indicates that the fit is wholly unsatisfactory.

As can be seen from the error terms in the ANOVA table, this regression fit is unacceptable; there is something more that needs to be accounted for in the model. The simplest way to address a lack of fit is to add or remove model terms. Let's try adding a squared term.
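As a rough cross-check of this step outside JMP, the straight-line fit and its ANOVA table can be reproduced with statsmodels (an assumed library, continuing the data sketch above; this is not the workflow shown in the slides):

```python
# Straight-line fit: Yield ~ Pressure. Mirrors JMP's "Fit Model" with a single
# continuous effect (a statsmodels sketch; run after replacing the placeholder
# yields in `data` with the real measurements).
import statsmodels.api as sm
import statsmodels.formula.api as smf

linear_model = smf.ols("Yield ~ Pressure_bar", data=data).fit()

print(linear_model.rsquared)            # a small R-square flags the poor fit
print(sm.stats.anova_lm(linear_model))  # ANOVA table: large error sum of squares
```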

We return to the "Fit Model" window. To add a squared term, left-click on "Pressure (bar)" in the "Select Columns" box. Then left-click on "Pressure (bar)" under the "Construct Model Effects" box.

These two items should now be highlighted, as shown. Now, left-click on the "Cross" button under "Construct Model Effects".

Crossing a term with itself adds what is known as a squared term: the equivalent of [Pressure (bar)]², a second-order polynomial term. We then run the model.
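In the assumed statsmodels sketch, the equivalent of this crossed (squared) term is simply adding a quadratic term to the formula (JMP may treat polynomial terms slightly differently, for example by centring them, so this is only an illustrative parallel):

```python
# Quadratic fit: Yield ~ Pressure + Pressure^2, the sketch's stand-in for
# crossing Pressure (bar) with itself in JMP's Fit Model dialog.
quadratic_model = smf.ols("Yield ~ Pressure_bar + I(Pressure_bar ** 2)",
                          data=data).fit()

print(quadratic_model.rsquared)             # should rise sharply versus the linear fit
print(sm.stats.anova_lm(quadratic_model))   # error sum of squares drops accordingly
```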

It is easy to see how the model has changed. The regression curve fits the data much better (note the much larger "R-Square" value), and the actual-by-predicted plot also gives a much tighter confidence interval.
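The actual-by-predicted plot itself can be approximated outside JMP; a minimal sketch using matplotlib (an assumption on my part, continuing the statsmodels sketches above):

```python
# Rough stand-in for JMP's actual-by-predicted plot (matplotlib is assumed;
# quadratic_model and data come from the sketches above).
import matplotlib.pyplot as plt

plt.scatter(quadratic_model.fittedvalues, data["Yield"])
plt.axline((0, 0), slope=1, linestyle="--")   # perfect-prediction reference line
plt.xlabel("Predicted yield")
plt.ylabel("Actual yield")
plt.show()
```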

A good way of confirming a real increase in the accuracy of the fit is to look at the "Error" row of the ANOVA table. The "Error Sum of Squares" value has been reduced by almost 1300, and the "Error Mean Square" value has also decreased significantly.

A comparison of the two plots shows even more starkly the greater accuracy of the fit that includes the squared term. Note: the "R-Square" value (a.k.a. the coefficient of determination) should never be used on its own to judge the appropriateness of a model, because it increases whenever extra terms, such as higher-order polynomial terms, are added. It is, however, a good indicator, and the conclusion can be confirmed with the ANOVA table data discussed earlier.
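The same comparison can be made formally with an extra-sum-of-squares F-test; continuing the assumed statsmodels sketches above:

```python
# Formal comparison of the two fits: an F-test on the extra squared term
# (the slides make the same point informally from the two ANOVA tables).
import statsmodels.api as sm

comparison = sm.stats.anova_lm(linear_model, quadratic_model)
print(comparison)   # shows the drop in residual sum of squares and its p-value
```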