LTMS Task Force Statistics Subgroup Report to Joint LTMS Open Forum San Antonio, TX May 11, 2010.


Outline

- Statistics Subgroup
- Expectations
- Concepts and Goals
- What's new in LTMS Version 2
- Formulae
- High level LTMS Version 2 flowchart
- Examples
- Hot issues for discussion
- Reference intervals and spacing

Statistics Subgroup

- Arthur Andrews, ExxonMobil
- Doyle Boese, Infineum
- Janet Buckingham, SwRI
- Martin Chadwick, Intertek
- Jeff Clark, TMC
- Todd Dvorak, Afton
- Jo Martinez, Chevron Oronite
- Bob Mason, SwRI
- Allison Rajakumar, Lubrizol
- Jim Rutherford, Chevron Oronite
- Phil Scinto, Lubrizol
- Dan Worcester, SwRI

Not Unanimous

Expectations

Today:
- Sharing with industry
- Understanding of our goals and approach
- Exploring implications and practical outcomes
- Gathering reactions, feedback, and suggestions

Next steps?
- In the following two days: PC Surveillance Panels discuss application of Version 2?
- At the next HD Surveillance Panel face-to-face meetings (5/25 & 26?): HD Surveillance Panels discuss application of Version 2?
- Beyond: extension to gear tests, bench tests?

Concepts and Goals

- Encourage consistency across test types
- Reduce the need for industry corrections based on limited information
- Be more adaptive to parts and other uncontrolled test changes
- An improved LTMS should lead to fewer lost reference tests
- The goal is a more efficient and useful reference testing system, in both testing and other industry efforts
- The greatest benefit of an improved LTMS is in the precision and accuracy of candidate testing

What's New in LTMS Version 2?

- Models more closely reflect the real world by recognizing that laboratories might not operate at the same severity level and that tests change over time
- Focus on knowing where the laboratory is relative to target through the use of e_i: if we can reasonably adjust non-reference results, we don't need more references
- Trigger additional tests not when the lab is "off target", but when we don't know where the lab is relative to target
- Provide incentives, in the form of reduced reference frequency, when a lab is consistent and close to target

What's New in LTMS Version 2? (continued)

- A procedure for limiting the impact of suspicious reference results through undue-influence analysis
- A tool for surveillance panels to better ensure that labs are measuring the same performance mechanism as each other, and as when the test was used in category definition
- A consistent definition of primary and secondary parameters

Formulae

For each severity adjustment entity:

- T_i = ith test result in appropriate units
- Y_i = ith standardized test result, where the target and standard deviation are as currently defined for the reference oil used in the reference test:

  Y_i = (T_i - target) / standard deviation

Formulae (continued)

For each severity adjustment entity:

- Z_i = EWMA of the standardized results:

  Z_i = λ * Y_i + (1 - λ) * Z_{i-1}

  For default LTMS, λ = 0.2. A fast start is used, i.e., Z_0 = average of Y_1, Y_2, and Y_3.

- e_i = prediction error from the EWMA:

  e_i = Y_i - Z_{i-1}
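The calculations above can be sketched in a few lines of Python. This is an illustrative implementation only: the function names are invented, and the exact handling of the first three results in the official LTMS charts may differ from this generic fast-start EWMA.

```python
# Sketch of the LTMS Version 2 severity calculations: standardization,
# EWMA with fast start (Z_0 = mean of first three Y values), and the
# one-step-ahead prediction error e_i = Y_i - Z_{i-1}.
# Function names are illustrative, not taken from the LTMS document.

def standardize(results, target, std_dev):
    """Y_i: standardized test results for one severity adjustment entity."""
    return [(t - target) / std_dev for t in results]

def ewma_with_fast_start(y, lam=0.2):
    """Return (Z_0, [Z_i], [e_i]) for standardized results y.

    lam is the EWMA weight (0.2 for default LTMS).
    """
    z0 = sum(y[:3]) / 3.0          # fast start: average of Y_1, Y_2, Y_3
    z, errors, prev = [], [], z0
    for yi in y:
        errors.append(yi - prev)   # e_i: observed minus predicted severity
        prev = lam * yi + (1 - lam) * prev  # EWMA update
        z.append(prev)
    return z0, z, errors
```

For example, a lab whose standardized results drift away from zero produces growing prediction errors e_i, which is what triggers scrutiny under Version 2 rather than the raw level alone.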

High Level LTMS Version 2 Flowchart

[Flowchart graphic not reproduced in this transcript.]

Examples

Industry may best understand the LTMS proposals by using historical data from an existing test to demonstrate how the system works and what happens. But we should be very careful in how we interpret this exercise: there is no way that historical data from the previous system can be manipulated to determine what would have happened if the revised LTMS system had been in place.

- Sequence VIII - Jo
- Sequence IVA - Doyle
- Sequence IIIG - Todd
- Sequence VG - Phil
- Sequence VID - Janet

Hot Issues for Discussion

- Should the chance of extending and the chance of reducing the reference interval be equal, or should we just drop Level 2? Versus: your test is only as good as your worst (primary) parameter.
- Are we allowing people to not move toward target?
- Should we just use the Sequence III type LTMS for everything?
- K values => limits
- Reference intervals and spacing

Reference Intervals and Spacing

Old:

In order to remain qualified for non-reference testing, a test stand shall begin a reference oil test after no more than 10 test starts in the stand, or no later than 18 months following the completion of the stand's previous qualifying reference oil test, whichever comes first. In order to avoid clustering at the end of the 18-month period, a test stand will begin a reference oil test after no more than 5 test starts commencing after 9 months following the stand's previous qualifying reference oil test. The time limits could be modified if appropriate by the Surveillance Panel. These intervals might be reduced or increased as a function of monitoring.

New:

In order to remain qualified for non-reference testing, a test stand shall begin a reference oil test after no more than 18 non-reference test starts in the stand, or no later than 15 months following the completion of the stand's previous qualifying reference oil test, whichever comes first. If more than 15 non-reference test starts or more than 12 months are allowed, then the laboratory is required to run 1 acceptable reference per six-month interval. The time limits could be decreased if appropriate by the Surveillance Panel. These intervals might be reduced or increased as a function of monitoring.
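The "New" rule above reduces to a simple whichever-comes-first check. The sketch below encodes only the headline thresholds (18 non-reference starts, 15 months) from the slide; the function name and interface are assumptions for illustration, and the six-month follow-up clause for extended intervals is not modeled.

```python
# Illustrative check of the proposed "New" reference-interval rule:
# a stand must begin a reference oil test after no more than 18
# non-reference test starts, or no later than 15 months after its
# previous qualifying reference test, whichever comes first.
# Thresholds from the slide; everything else is an assumption.

def reference_due(non_ref_starts: int, months_since_ref: float) -> bool:
    """True when the stand has reached either limit and must
    begin its next reference oil test to stay qualified."""
    return non_ref_starts >= 18 or months_since_ref >= 15
```

Surveillance panels could tighten either threshold, per the slide, without changing the whichever-comes-first structure.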