Cat Aeration Task Force Additional Prove Out Testing Proposal
PC-11 Statisticians Task Force
July 1, 2014


Slide 2: Testing

1. Demonstrate repeatability, reproducibility and discrimination in the Cat Aeration test.
   - Demonstration does not imply acceptance.
   - The level of rigor used to investigate precision and discrimination is up to the task force.
   - The task force (and ultimately API, ACC and EMA) will have to agree whether the test results support adequate precision and discrimination.
2. Assess the engine hours effect, even though the effect is not expected to be observed over ~100 to ~250 engine hours (see the sketch after this list).
3. Prior to the start of testing:
   - Labs will have broken in their engines using the same procedure.
   - Engines will have been used for approximately the same number of hours (~75 to 100 hrs; TBD).
4. The same test procedure will be used to conduct the testing.
5. The task force indicated there is enough oil for:
   - Four 1005 tests
   - Five High Aeration tests
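Illustrative sketch (not part of the original proposal): once results are in, the engine-hours effect in point 2 could be screened for with a simple regression that includes oil as a factor, so the hours trend is not confounded with the oil difference. Lab letters, column names and all numbers below are made-up placeholders.

```python
# Illustrative only: screen for an engine-hours trend in the aeration results.
# Lab letters, oil codes, column names and values are made-up placeholders.
import pandas as pd
import statsmodels.formula.api as smf

results = pd.DataFrame({
    "lab":          ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "oil":          ["LA", "HA", "HA", "LA", "HA", "LA", "LA", "HA", "LA"],
    "engine_hours": [100, 150, 200, 110, 160, 210, 105, 155, 205],
    "aeration":     [4.1, 8.3, 8.0, 4.4, 8.6, 4.0, 4.2, 8.1, 4.3],
})

# Oil enters as a factor so the hours slope is not confounded with the
# LA-vs-HA difference; a clearly non-zero engine_hours coefficient would
# flag a trend over the ~100 to ~250 hour window.
fit = smf.ols("aeration ~ C(oil) + engine_hours", data=results).fit()
print(fit.summary().tables[1])
```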

Slide 3: Testing

Two of the MOA requirements for test acceptance:
1. “Each oil used to demonstrate discrimination has a minimum of two valid test results in the most current test procedure. The Test Development Task Force must approve these results.”
2. “Each Matrix Lab has run at least two operationally valid tests (shakedown runs are eligible) using the Test Matrix procedure. Shakedown runs are full-length, operationally valid runs on oils such as potential candidate or research oils. The Test Development Task Force will determine if these test results are satisfactory in terms of precision, Lubricant Test Monitoring System (LTMS) impact, and reasonable agreement among the results from each lab.”
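To make the two MOA minimums above concrete, here is a small hypothetical check, not taken from the MOA itself, that counts operationally valid results per oil and per lab in a proposed matrix. The record layout, lab letters and oil codes are assumptions for illustration only.

```python
# Hypothetical helper: check that a proposed matrix meets the two MOA minimums
# quoted above (at least two valid results per discrimination oil, at least two
# operationally valid tests per matrix lab). Records are placeholders.
from collections import Counter

runs = [  # (lab, oil, operationally_valid)
    ("A", "LA", True), ("A", "HA", True),
    ("B", "HA", True), ("B", "HA", True), ("B", "LA", True),
    ("C", "LA", True), ("C", "HA", False),
]

valid = [run for run in runs if run[2]]
per_oil = Counter(oil for _, oil, _ in valid)
per_lab = Counter(lab for lab, _, _ in valid)

oils_ok = all(per_oil.get(oil, 0) >= 2 for oil in ("LA", "HA"))
labs_ok = all(per_lab.get(lab, 0) >= 2 for lab in ("A", "B", "C"))
print("Each oil has >= 2 valid results:", oils_ok)   # True for this example
print("Each lab has >= 2 valid tests: ", labs_ok)    # False: lab C has only 1
```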

Slide 4: Proposed Testing Protocol

1. Each lab tests one of the oils twice for repeatability.
2. Each lab tests both oils for discrimination.
3. Each oil is tested at all three labs to evaluate reproducibility.
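One way to lay the proposed protocol out as a run sheet is sketched below. Lab letters, oil codes (LA/HA) and the choice of which oil each lab repeats are assumptions, not the task force's actual assignments; the assumed assignment happens to use one oil five times and the other four, in line with the oil quantities noted on slide 2.

```python
# Illustrative run sheet for the proposed protocol: each lab runs both oils
# and repeats one of them (3 runs per lab, 9 runs total). Which oil each lab
# repeats is an assumed assignment, not the task force's.
import pandas as pd

repeat_oil = {"A": "HA", "B": "LA", "C": "HA"}   # assumed repeat assignment
rows = []
for lab, repeated in repeat_oil.items():
    other = "LA" if repeated == "HA" else "HA"
    rows += [
        {"lab": lab, "run": 1, "oil": repeated, "purpose": "discrimination + repeatability (1st)"},
        {"lab": lab, "run": 2, "oil": repeated, "purpose": "repeatability (2nd)"},
        {"lab": lab, "run": 3, "oil": other,    "purpose": "discrimination / reproducibility"},
    ]

plan = pd.DataFrame(rows)
print(plan)
# Both oils appear at all three labs (reproducibility), each lab repeats one
# oil (repeatability), and each lab runs both oils (discrimination).
```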

Slide 5: Comments

1. Oil selection:
   1. Using the low aeration (LA) oil and the high aeration (HA) oil:
      - Better chance of supporting test discrimination.
      - This is based on one test result per oil, which may not be a true indication of each oil's performance.
      - Evaluates test repeatability over a wider range of aeration.
   2. Given that this set of testing is likely to be the last, using oils we have more experience with (like 1005) may provide less risk.
   3. Three oils (HA, 1005 and LA) could be used to balance the tradeoffs, but four tests per stand would then be preferred.

Slide 6: More Comments

1. Ultimately, it is up to the task force (and subsequently ACC, EMA and API) to decide whether or not the test results demonstrate a satisfactory level of repeatability, reproducibility and discrimination.
2. Even though this testing has been put together as well as possible to assess repeatability, reproducibility, discrimination and an engine hours effect, a statistical analysis of the data will need to be used with caution:
   - The discrimination oil is run only once at each lab.
   - Repeatability is based on two tests at each lab.
   - Within a lab, repeatability is affected by any engine hours effect.
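As a sketch of the kind of cautious analysis this slide anticipates, a mixed model with lab as a random effect can separate within-lab (repeatability) spread from lab-to-lab spread while estimating the oil difference; with only two or three runs per lab the variance estimates will be rough and the fit may warn about convergence, which is exactly the caution above. All values, labels and column names are illustrative.

```python
# Illustrative only: variance-component view of repeatability vs. reproducibility.
# All values are made up; with so few runs per lab the estimates are rough and
# the optimizer may emit convergence warnings.
import pandas as pd
import statsmodels.formula.api as smf

results = pd.DataFrame({
    "lab":      ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "oil":      ["HA", "HA", "LA", "LA", "LA", "HA", "HA", "HA", "LA"],
    "aeration": [8.2,  8.0,  4.1,  4.5,  4.3,  8.6,  8.1,  7.9,  4.2],
})

# Random intercept per lab: the group variance reflects lab-to-lab spread,
# the residual variance reflects within-lab (repeatability) spread.
model = smf.mixedlm("aeration ~ C(oil)", data=results, groups=results["lab"])
fit = model.fit()
print(fit.summary())
print("within-lab (residual) variance:", round(fit.scale, 4))
print("lab-to-lab variance:           ", round(float(fit.cov_re.iloc[0, 0]), 4))
```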

Slide 7: More Comments

1. Less testing would meet the bare minimum requirements, with less statistical rigor.
2. More testing will improve the statistical analysis (a rough power sketch follows after this list), but it is a balance of:
   - Timing
   - Oil quantities
   - Level of data needed to accept the test into the Matrix vs. the rigor of the Matrix
   - Amount of hours on engines going into the Matrix
3. A benefit of the proposed design is the ability to augment it if needed:
   - A 4th and even a 5th test per lab will help the statistical analysis if desired.
   - Additional testing can be decided upon review of the first three tests per lab.
   - Example: STOP, review results and decide if more testing is desired.
   - A third oil could be included in test 4 or 5 depending on the results of prior data.
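A rough, illustrative power calculation for the testing-versus-timing balance in point 2: it treats runs as independent two-sample data and ignores the lab structure, so it is only a guide. The significance level and target power are assumptions.

```python
# Rough guide only: what standardized oil difference (in units of the test's
# repeatability s.d.) is detectable at 80% power with n runs per oil?
# Treats runs as independent two-sample data and ignores the lab structure.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for n_per_oil in (3, 4, 5):
    detectable = power_calc.solve_power(
        effect_size=None, nobs1=n_per_oil, alpha=0.05, power=0.8,
        ratio=1.0, alternative="two-sided",
    )
    print(f"{n_per_oil} runs per oil -> detectable difference of about {detectable:.1f} s.d.")
```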

Slide 8: Questions? Comments?

Slide 9: Appendix

Slide 10: Repeatability
Each lab tests one of the oils twice.
[Chart: made-up aeration data plotted against approximate engine hours at the start of each test]
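A minimal sketch, with made-up values, of how the repeat pairs shown on a chart like this could be turned into a single repeatability estimate by pooling the within-pair variances across labs (this assumes repeatability is similar for the two oils).

```python
# Illustrative: pooled repeatability s.d. from one repeat pair per lab.
# Aeration values are made-up placeholders; pairs at different labs may be
# on different oils, so this pools across the aeration range.
import math

repeat_pairs = {"A": (8.2, 8.0), "B": (4.5, 4.3), "C": (8.1, 7.9)}

# Each pair contributes one degree of freedom: its sample variance is
# (x1 - x2)^2 / 2. Pool by averaging across labs, then take the square root.
pooled_var = sum((a - b) ** 2 / 2 for a, b in repeat_pairs.values()) / len(repeat_pairs)
print("pooled repeatability s.d. is about", round(math.sqrt(pooled_var), 3))
```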

Slide 11: Reproducibility
Each oil is tested across labs.
[Chart: made-up aeration data plotted against approximate engine hours at the start of each test]

Slide 12: Discrimination
Each oil is tested at each lab.
[Chart: made-up aeration data plotted against approximate engine hours at the start of each test]
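A sketch of one simple way to gauge discrimination from data like this: compare the HA-minus-LA difference at each lab against the within-lab spread with a paired t-test. All aeration values are made up.

```python
# Illustrative discrimination check: HA-vs-LA difference per lab, tested
# against zero with a paired (by lab) t-test. Aeration values are made up.
from statistics import mean
from scipy.stats import ttest_rel

ha = [8.1, 8.6, 8.0]   # HA result at labs A, B, C (placeholders)
la = [4.1, 4.5, 4.2]   # LA result at the same labs

t_stat, p_value = ttest_rel(ha, la)
print("mean HA - LA difference:", round(mean(h - l for h, l in zip(ha, la)), 2))
print("t =", round(float(t_stat), 2), " p =", round(float(p_value), 4))
```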