North Group / Quiz 3
Thamer AbuDiak, Reynald Benoit, Jose Lopez, Rosele Lynn, Dave Neal, Deyanira Pena
Professor Lawrence, MIS 680

Table of Contents
Ragsdale book:
- Deyanira Pena: 7-8, 8-22
- Rosele Lynn: 7-13, 8-12
- Jose Lopez: 7-19, 8-4
Dielman book:
- Dave Neal: 6-2
- Thamer AbuDiak: 7-2
- Reynald Benoit: 8-1

Ragsdale 7-8 by Deyanira Pena

MIN: Q
Subject to:
12X1 + 4X2 >= 48 } high-grade coal required
4X1 + 4X2 >= 28 } medium-grade coal required
10X1 + 20X2 >= … } low-grade coal required
W1((40X1 + 32X2 - 244)/244) <= Q } goal 1 MINIMAX constraint
W2((800X1 + 1250X2 - 6950)/6950) <= Q } goal 2 MINIMAX constraint
W3((0.20X1 + 0.45X2 - 2)/2) <= Q } goal 3 MINIMAX constraint
X1, X2 >= 0 } nonnegativity conditions
W1, W2, W3 are positive weights
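For a fixed set of weights, this MINIMAX model is linear in X1, X2, and Q, so it can be checked outside of Excel. Below is a minimal sketch using scipy.optimize.linprog; the equal weights and the low-grade RHS (missing from the slide) are placeholder assumptions, not values from the problem.

from scipy.optimize import linprog

W1 = W2 = W3 = 1.0        # assumed equal goal weights
LOW_GRADE_REQ = 100.0     # hypothetical; the RHS is missing from the slide

# Variables: [X1, X2, Q]; minimize the MINIMAX variable Q.
c = [0.0, 0.0, 1.0]

# linprog wants A_ub @ x <= b_ub, so the >= rows are negated.
A_ub = [
    [-12.0,  -4.0,  0.0],                       # high-grade coal required
    [ -4.0,  -4.0,  0.0],                       # medium-grade coal required
    [-10.0, -20.0,  0.0],                       # low-grade coal required
    [W1 * 40 / 244,   W1 * 32 / 244,    -1.0],  # goal 1 MINIMAX constraint
    [W2 * 800 / 6950, W2 * 1250 / 6950, -1.0],  # goal 2 MINIMAX constraint
    [W3 * 0.20 / 2,   W3 * 0.45 / 2,    -1.0],  # goal 3 MINIMAX constraint
]
b_ub = [-48.0, -28.0, -LOW_GRADE_REQ, W1, W2, W3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # [X1, X2, Q] and the optimal Q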

Ragsdale 7-8 by Deyanira Pena

Ragsdale 7-13 by Rosele Lynn

Problem: Which combination of the three types of coal should be used to meet the EPA's requirements for sulfur and coal dust levels?

Decision variables (which combination of coal should be used?):
X1 = coal type 1
X2 = coal type 2
X3 = coal type 3

Ragsdale 7-13 by Rosele Lynn

Objective functions:
MAX: 24,000X1 + 36,000X2 + 28,000X3 } maximize steam produced
MIN: 1,100X1 + 3,500X2 + 1,300X3 } minimize sulfur emissions
MIN: 1.7X1 + …X2 + …X3 } minimize coal dust emissions

Constraints:
X1, X2, X3 >= 0 } nonnegativity constraint
(X1 + X2 + X3)/3 <= 2,500 } for each ton of coal burned, less than 2,500 ppm sulfur
(X1 + X2 + X3)/3 <= 2.8 } for each ton of coal burned, less than 2.8 kg coal dust
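Because the X2 and X3 coal dust coefficients are garbled in the transcript, any executable version of this model needs placeholders. A hedged weighted-sum sketch with scipy's linprog follows, with the missing dust coefficients, the weights, and the total-tonnage requirement all assumed:

from scipy.optimize import linprog

steam  = [24_000.0, 36_000.0, 28_000.0]   # lbs of steam; maximize
sulfur = [1_100.0, 3_500.0, 1_300.0]      # ppm sulfur; minimize
dust   = [1.7, 2.0, 2.2]                  # kg dust; X2, X3 values assumed

w1, w2, w3 = 1.0, 1.0, 1.0   # assumed equal weights

# Weighted sum of the three objectives, each scaled by its largest
# per-ton coefficient so no objective dominates on units alone.
c = [-w1 * s / max(steam) + w2 * su / max(sulfur) + w3 * d / max(dust)
     for s, su, d in zip(steam, sulfur, dust)]

# Placeholder requirement so the LP is bounded: burn exactly 3 tons total.
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [3.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)   # tons of each coal type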

Ragsdale 7-19 by Jose F. Lopez (A & B)

OBJECTIVES
Maximize: 11X1 + 8X2 + 8.5X3 + 10X4 + 9X5 } average yield on funds
Minimize: 8X1 + 1X2 + 7X3 + 6X4 + 2X5 } weighted average maturity
Minimize: 5X1 + 2X2 + 1X3 + 5X4 + 3X5 } weighted average risk

CONSTRAINTS
Subject to:
11X1 >= 0, 8X2 >= 0, 8.5X3 >= 0, 10X4 >= 0, 9X5 >= 0
11X1 + 8X2 + 8.5X3 + 10X4 + 9X5 = 1

Ragsdale 7-19 by Jose F. Lopez (A & B)

Solver settings:
Minimize: C16
By Changing: B5:B9, C16
Subject To: C14:D14 <= C16; B10 = 1; B5:B9 >= 0
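In Solver terms, C16 plays the role of the MINIMAX variable Q and B5:B9 are the five allocations X1..X5. A minimal sketch of the same kind of model with scipy's linprog follows; the three goal targets are hypothetical placeholders, since the spreadsheet's target values are not reproduced in the transcript.

from scipy.optimize import linprog

yield_c    = [11, 8, 8.5, 10, 9]   # maximize
maturity_c = [8, 1, 7, 6, 2]       # minimize
risk_c     = [5, 2, 1, 5, 3]       # minimize

T_y, T_m, T_r = 10.0, 3.0, 2.0     # assumed goal targets

# Variables: [X1..X5, Q]; minimize Q (Solver's C16).
c = [0, 0, 0, 0, 0, 1]

A_ub = [
    [-v / T_y for v in yield_c] + [-1.0],      # (T_y - yield)/T_y <= Q
    [v / T_m for v in maturity_c] + [-1.0],    # (maturity - T_m)/T_m <= Q
    [v / T_r for v in risk_c] + [-1.0],        # (risk - T_r)/T_r <= Q
]
b_ub = [-1.0, 1.0, 1.0]
A_eq = [[1, 1, 1, 1, 1, 0]]   # allocations sum to 1 (Solver's B10 = 1)
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x[:5], res.x[5])    # allocations and the optimal Q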


Ragsdale 8-12 by Rosele Lynn

Problem: How can Thom Pearman increase his life insurance coverage while keeping $6,000 on hand for emergencies? What is the minimum amount he must invest so that the after-tax earnings cover his planned premium payments?
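The investment question has a simple one-year core: if an investment I earns annual rate r and the earnings are taxed at rate t, covering an annual premium P requires I * r * (1 - t) >= P. A sketch with hypothetical premium and tax values (the 15% rate comes from a later slide); the full multi-year version is what Solver flags as nonlinear in part b:

P = 1_500.0                 # hypothetical annual premium
r = 0.15                    # annual rate of return (from the later slide)
t = 0.28                    # hypothetical tax rate
I_min = P / (r * (1 - t))   # smallest I with I * r * (1 - t) >= P
print(round(I_min, 2))      # 13888.89 under these assumed values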

Ragsdale 8-12 by Rosele Lynn: Spreadsheet before Solver

Ragsdale 8-12 by Rosele Lynn: Solve for Annual Return

Ragsdale 8-12 by Rosele Lynn: Minimum Investment with 15% Annual Rate

Ragsdale 8-12 by Rosele Lynn
b. Solver tells us that this is a nonlinear model.

Ragsdale 8-22 by Deyanira Pena

X1 = location of new plant on the x-axis
Y1 = location of new plant on the y-axis

MIN: √((9-X1)^2 + (45-Y1)^2) + √((2-X1)^2 + (28-Y1)^2) + √((51-X1)^2 + (36-Y1)^2) + √((19-X1)^2 + (4-Y1)^2)

Subject to:
√((9-X1)^2 + (45-Y1)^2) } Dalton distance constraint
√((2-X1)^2 + (28-Y1)^2) } Rome distance constraint
√((51-X1)^2 + (36-Y1)^2) } Canton distance constraint
√((19-X1)^2 + (4-Y1)^2) } Kennesaw distance constraint
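This nonlinear location model can be sketched with scipy.optimize.minimize. The city coordinates come from the slide; the distance cap is a hypothetical placeholder, since the "Maximum Allowed" values live in the spreadsheet and are not reproduced here.

import numpy as np
from scipy.optimize import minimize

cities = {"Dalton": (9.0, 45.0), "Rome": (2.0, 28.0),
          "Canton": (51.0, 36.0), "Kennesaw": (19.0, 4.0)}
MAX_ALLOWED = 30.0   # hypothetical cap on each plant-to-city distance

def total_distance(p):
    # Sum of straight-line distances from the plant (p[0], p[1]) to each city.
    return sum(np.hypot(cx - p[0], cy - p[1]) for cx, cy in cities.values())

# One inequality constraint per city: cap - distance >= 0.
cons = [{"type": "ineq",
         "fun": lambda p, cx=cx, cy=cy: MAX_ALLOWED - np.hypot(cx - p[0], cy - p[1])}
        for cx, cy in cities.values()]

res = minimize(total_distance, x0=np.array([20.0, 28.0]), constraints=cons)
print(res.x, res.fun)   # plant location (X1, Y1) and total distance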

Ragsdale 8-22 by Deyanira Pena

Solver settings:
Minimize: C16
By Changing: B5:B9, C16
Subject To: C14:D14 <= C16; B10 = 1; B5:B9 >= 0

Rugger Corporation
[Spreadsheet table: for each plant-to-city pair (Dalton, Rome, Canton, Kennesaw) the coordinates x and y, the plant-to-plant distance, and the maximum allowed distance, plus a Total row; the numeric values did not survive the transcript.]

Dielman 6-2 Dave Neal

RESEARCH AND DEVELOPMENT: A company is interested in the relationship between the profit (PROFIT) on a number of projects and two explanatory variables: the expenditure on research and development (RD) and a measure of risk assigned at the outset of the project (RISK). PROFIT is measured in thousands of dollars and RD in hundreds of dollars.

Dielman 6-2 Dave Neal RESEARCH AND DEVELOPMENT (cont.)

1. Using any of the given outputs, does the linearity assumption appear to be violated? Justify your answer.
- PROFIT vs. RD appears to be linear; R² is 95.6%.
- PROFIT vs. RD and RISK appears to be linear; R² is 99.2%.
- PROFIT vs. RISK appears to violate the linearity assumption; R² is only 50.6%.
2. If you answered yes, state how the violation might be corrected.
- The PROFIT vs. RISK model can be corrected by trying quadratic and cubic polynomial regressions to see whether R² improves.
3. Then try your correction using a computer regression routine.
- See the attached quadratic and cubic polynomial regression output and plots; a code sketch of the quadratic fit follows below.
4. Does your model appear to be an improvement over the original model? Justify your answer.
- Yes, the polynomial regression appears to be an improvement over the original model: R² improved from 50.6% to 71.0% (at the 95% confidence level).
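As a sketch of step 3, the quadratic correction can be fit with statsmodels. The data below are hypothetical placeholders, since the Dielman data set itself is not reproduced in the transcript; a cubic fit would simply add a risk**3 column.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
risk = rng.uniform(1, 10, size=30)                      # placeholder data
profit = 5 + 2 * risk - 0.15 * risk**2 + rng.normal(0, 1, 30)

# Design matrix [1, RISK, RISK^2] for the quadratic model.
X = sm.add_constant(np.column_stack([risk, risk**2]))
fit = sm.OLS(profit, X).fit()
print(fit.rsquared, fit.params)   # compare R^2 against the linear fit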

Dielman 6-2 Dave Neal RESEARCH AND DEVELOPMENT (cont.)

Regression Analysis: PROFIT versus RD
The regression equation is PROFIT = b0 + b1(RD)
R-Sq = 95.6%, R-Sq(adj) = 95.3%
[Coefficient table, S, and ANOVA values did not survive the transcript.]

Regression Analysis: PROFIT versus RISK
The regression equation is PROFIT = b0 + b1(RISK)
R-Sq = 50.6%, R-Sq(adj) = 47.5%
[Coefficient table, S, and ANOVA values did not survive the transcript.]

Dielman 6-2 Dave Neal RESEARCH AND DEVELOPMENT (cont.)

Regression Analysis: PROFIT versus RD, RISK
The regression equation is PROFIT = b0 + b1(RD) + b2(RISK)
R-Sq = 99.2%, R-Sq(adj) = 99.0%
[Coefficient table, S, ANOVA, sequential SS, and the unusual-observations table did not survive the transcript. R denotes an observation with a large standardized residual.]

Dielman 6-2 Dave Neal RESEARCH AND DEVELOPMENT (cont.)

Dielman 7-2 Thamer AbuDiak: Graduation Rate

Variables:
- y: percentage of students who earned a bachelor's degree in 4 years (GRADRATE4)
- x1: admission rate, expressed as a percentage (ADMINRATE)
- x2: indicator variable coded 1 for private schools and 0 for public schools

The regression equation is: y = b0 + b1(x1) + b2(x2)
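A minimal sketch of this dummy-variable model with statsmodels; the data are hypothetical placeholders, and only the variable roles (GRADRATE4, ADMINRATE, and the private/public indicator) come from the slide.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
adminrate = rng.uniform(20, 90, size=40)    # admission rate, %
private = rng.integers(0, 2, size=40)       # 1 = private, 0 = public
gradrate4 = 80 - 0.4 * adminrate + 8 * private + rng.normal(0, 3, 40)

X = sm.add_constant(np.column_stack([adminrate, private]))
fit = sm.OLS(gradrate4, X).fit()
print(fit.params)   # [b0, b1, b2]; b2 is the private-school intercept shift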

Dielman 7-2 Thamer AbuDiak: Graduation Rate

a. F-test:
F = [(SSE_R - SSE_F)/(K - L)] / MSE_F = (SSE_R - SSE_F)/(2 * 0.0195) = 86
Decision rule:
- Reject H0 if F > 3.49
- Do not reject H0 if F <= 3.49
Since 86 > 3.49, the null hypothesis is rejected.

b. There is a difference in graduation rates between public and private schools.
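The partial F computation is easy to check in code. The SSE values below are hypothetical placeholders chosen so that F comes out near the slide's value of 86; only (K - L) = 2, MSE_F = 0.0195, and the critical value 3.49 appear on the slide.

# Partial F-test: compare reduced and full models.
SSE_R, SSE_F = 5.0, 1.65    # assumed reduced/full-model SSEs
K_minus_L = 2               # extra parameters in the full model
MSE_F = 0.0195              # full-model mean squared error (from the slide)

F = (SSE_R - SSE_F) / (K_minus_L * MSE_F)
F_crit = 3.49               # critical value per the slide
print(F, "reject H0" if F > F_crit else "fail to reject H0")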

Dielman 7-2 Thamer AbuDiak: Graduation Rate

c. Difference in graduation rates between public and private schools:
Public school: y = b0 + b1(x1)
Private school: y = (b0 + b2) + b1(x1)
Private schools have a higher graduation rate than public schools.

Dielman 7-2 Thamer AbuDiak: Graduation Rate

d. Sample graduation rate prediction.

Dielman 7-2 Thamer AbuDiak: Graduation Rate

Regression without x2 as a factor vs. regression with x2 as a factor.

Dielman 8-1 Reynald Benoit

Backward elimination. Alpha-to-Remove: 0.1
Response is COST on 4 predictors, with N = 27
[Minitab step table: Step, Constant, then the coefficient, T-value, and P-value for each of Paper, Machine, Overhead, and Labor, with S, R-Sq, R-Sq(adj), and Mallows C-p reported at each step; the numeric values did not survive the transcript.]

Dielman 8-1 Reynald Benoit (cont.)

A) What is the equation?
COST = b0 + b1(PAPER) + b2(MACHINE)
B) What is the R²?
99.87%
C) What is the adjusted R²?
99.86%
D) What is the standard error?
11.0
E) What variables were omitted? Are they related to cost?
Overhead and Labor. They are related to cost, but Paper and Machine alone explain over 99% of the variation in cost.
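A sketch of the backward-elimination loop behind the Minitab output above; the data are hypothetical placeholders, and only the predictor names, N = 27, and alpha-to-remove = 0.1 come from the slides.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 27
df = pd.DataFrame({
    "PAPER":    rng.uniform(10, 50, n),
    "MACHINE":  rng.uniform(100, 300, n),
    "OVERHEAD": rng.uniform(5, 20, n),
    "LABOR":    rng.uniform(20, 60, n),
})
# Placeholder response: only PAPER and MACHINE actually drive COST.
df["COST"] = 30 + 4 * df.PAPER + 2 * df.MACHINE + rng.normal(0, 10, n)

predictors = list(df.columns[:-1])
while predictors:
    fit = sm.OLS(df.COST, sm.add_constant(df[predictors])).fit()
    worst = fit.pvalues.drop("const").idxmax()   # least significant term
    if fit.pvalues[worst] <= 0.10:               # alpha-to-remove
        break
    predictors.remove(worst)                     # drop it and refit

print(predictors, fit.rsquared)   # surviving predictors and final R^2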