LifeFlow Case Study: Comparing Traffic Agencies' Performance from Traffic Incident Logs

This case study was conducted by John Alexis Guerra Gómez and Krist Wongsuphasawat under the supervision of Ben Shneiderman and Catherine Plaisant. We appreciated the feedback from Nikola Ivanov, Michael L. Pack, and Michael VanDaniker.

Last updated: Feb 12, 2012
Human-Computer Interaction Lab, University of Maryland

Introduction

Vehicle crashes remain the leading cause of death for people between the ages of four and thirty-four. In 2008, approximately 6 million traffic accidents occurred in the United States, resulting in nearly 40,000 deaths, 2.5 million injuries, and losses estimated at $237 billion. Traditional safety and incident analysis has mostly focused on incident attribute data, such as the location and time of the incident, but other aspects of incident response are temporal in nature and are more difficult to analyze.

We used LifeFlow to examine a dataset from the National Cooperative Highway Research Program (NCHRP) that includes 203,214 traffic incidents from 8 agencies. Each incident record includes a sequence of incident management events:

Incident Notification -- when the agency is first notified of the incident
Incident Arrival -- when the emergency team arrives at the scene
Lane Clearance -- when the lanes are opened, although the incident scene may not be completely cleared
Incident cleared, Incident clearance, and Return to normal -- all denote the end of an incident. For ease of analysis, we aggregated all three into the new event type Return to normal (aggregated).

A typical sequence starts with Incident Notification and finishes with Return to normal (aggregated), possibly with Incident Arrival and Lane Clearance in between. In addition, each traffic incident includes two attributes:

Agency -- represented with a letter from A to H for anonymity
Type of incident -- e.g. "Disabled Vehicle", "Fatal Accident", etc.
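To make the record structure concrete, the sketch below shows one way such an incident record could be represented and how the three end-of-incident events might be folded into Return to normal (aggregated). The class and field names are our own illustration, not the actual NCHRP schema or LifeFlow's internal format.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

# Hypothetical record layout; field names are illustrative only.
@dataclass
class Incident:
    agency: str                          # "A" .. "H"
    incident_type: str                   # e.g. "Disabled Vehicle"
    events: List[Tuple[str, datetime]]   # ordered (event name, timestamp) pairs

# The three end-of-incident event types are merged into one aggregated type.
TERMINAL_EVENTS = {"Incident cleared", "Incident clearance", "Return to normal"}

def aggregate_events(incident: Incident) -> Incident:
    """Rename any terminal event to 'Return to normal (aggregated)'."""
    merged = [
        ("Return to normal (aggregated)" if name in TERMINAL_EVENTS else name, ts)
        for name, ts in incident.events
    ]
    return Incident(incident.agency, incident.incident_type, merged)
```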

This figure shows 203,214 traffic incidents in LifeFlow. A long pattern (more than 100 years long) at the bottom stands out. If any incident could really last more than a hundred years, we probably should not be driving any more. Investigating further, we found that the Incident Arrival time of those incidents was January 1, 1900, a common default date in computer systems. This suggested that the system might have used this default when no date was specified for an incident.

Clean the data!
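As a minimal sketch of this cleaning step, the snippet below filters out incidents that carry the January 1, 1900 default date. It assumes the events have been exported to a CSV file with one row per event; the file name and column names (incident_id, event, timestamp) are assumptions made for illustration, not part of the original dataset.

```python
import pandas as pd

# Assumed export: one row per event with columns incident_id, event, timestamp.
events = pd.read_csv("incident_events.csv", parse_dates=["timestamp"])  # hypothetical file

# January 1, 1900 is a common default date; treat it as "no date recorded".
SENTINEL = pd.Timestamp("1900-01-01")
bad_ids = events.loc[events["timestamp"] == SENTINEL, "incident_id"].unique()

# Drop every incident that contains at least one default timestamp.
clean_events = events[~events["incident_id"].isin(bad_ids)]
print(f"Removed {len(bad_ids)} incidents carrying the 1900-01-01 default date")
```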

After cleaning the data, we used the time from when an agency was notified to the final clearance of the incident as a performance measure. The time when the agency was notified is indicated by the Incident Notification event, and the final clearance is indicated by the Return to normal (aggregated) event.

Incidents are grouped by agency. The horizontal length of each agency's path represents the average time from incident notification to final clearance, which is the performance measure for that agency. We then sorted the agencies by the length of their paths, so the fastest agency (shortest path) appears at the top and the slowest agency (longest path) at the bottom. Agency C was the fastest to clear its incidents, taking about 5 minutes on average, while the slowest was Agency G, with an average of about 2 hours 27 minutes.
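Outside of LifeFlow, the same ranking could be approximated with a small script like the one below: compute each incident's notification-to-clearance time and average it per agency. It builds on the same assumed event export as the earlier cleaning sketch; the file and column names are hypothetical.

```python
import pandas as pd

# Assumed export: one row per event with columns incident_id, agency, event, timestamp.
events = pd.read_csv("incident_events_clean.csv", parse_dates=["timestamp"])

# Keep the two events that bound the performance measure.
start = events[events["event"] == "Incident Notification"]
end = events[events["event"] == "Return to normal (aggregated)"]

merged = start.merge(end, on=["incident_id", "agency"], suffixes=("_start", "_end"))
merged["clearance_min"] = (
    merged["timestamp_end"] - merged["timestamp_start"]
).dt.total_seconds() / 60

# Average clearance time per agency, fastest first.
ranking = merged.groupby("agency")["clearance_min"].mean().sort_values()
print(ranking)
```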

To investigate further, we removed the data from all agencies except the slowest (Agency G) and the fastest (Agency C).

Looking at the distribution, we can see that Agency C has many incidents that were cleared almost immediately after notification.

On the other hand, Agency G has few incidents with very short clearance times.

We looked into the different incident types reported and found that most of the incidents Agency C reported were Disabled Vehicles, with an average clearance time of about 1 minute. A large number of these incidents reported clearance immediately after Incident Notification. This observation made us wonder whether there is an explanation for these immediate clearances, and it encouraged further analysis.

In a similar fashion, we investigated Agency G, which seemed to be the slowest agency. Agency G classified its incidents into only two types: Non-ATMS Route Incident and simply Incident. The incidents of type Incident had an average duration of about 38 minutes, which is very fast compared to the other agencies. However, the incidents of type Non-ATMS Route Incident took 5 hours 14 minutes on average to clear.
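The per-type breakdown behind these two observations could be reproduced with a short groupby, as sketched below. It assumes a table with one row per incident and a precomputed clearance_min column (for example, the output of the earlier sketch); the file name is hypothetical.

```python
import pandas as pd

# Assumed layout: one row per incident with agency, incident_type and a
# precomputed clearance_min column (see the earlier sketch).
incidents = pd.read_csv("incident_durations.csv")  # hypothetical file

# Break clearance time down by agency and incident type, e.g. Agency C's
# "Disabled Vehicle" incidents versus Agency G's "Non-ATMS Route Incident" ones.
by_type = (
    incidents[incidents["agency"].isin(["C", "G"])]
    .groupby(["agency", "incident_type"])["clearance_min"]
    .agg(["count", "mean"])
    .sort_values("mean")
)
print(by_type)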

Conclusions

This evidence showed that the agencies reported their incident management timing data in inconsistent ways, e.g., Agency C's incidents with 1 minute clearance time. Besides timing information, the same problem occurred with incident types, as the terminology was not consistent across agencies or even local areas. Differences might stem from data entry practices, terminology variations, or missing data. Policies are needed to decide if, and how, to normalize the data, and tools will be needed to manage this normalization process and provide audit trails documenting the transformations. At the time of this analysis, these inconsistencies among agencies prevented further meaningful performance comparison.

Although our data analysis in this case study was limited and preliminary, domain experts from the CATT Lab were conducting a more formal analysis of the data. They reviewed our work and stated that they wished LifeFlow had been available when they started their own analysis. They confirmed the existence of the anomalies we had found in the data, and noted that eliminating them was non-trivial in SQL because they had to anticipate the errors and carefully exclude them from their analysis; excluding all possible erroneous sequences in a SQL query would be very difficult. In the end, they needed to review the results of their SQL queries to confirm that no errors remained. Without LifeFlow, this kind of review and identification of unexpected sequences would be almost impossible. Finally, they mentioned that LifeFlow would allow them to ask more questions faster, and probably richer questions about the data. LifeFlow also revealed unexpected sequences that might otherwise have been overlooked, while suggesting that their prevalence is limited.

We believe that LifeFlow can help analysts explore large datasets, such as the NCHRP traffic incident data, in ways that would be very difficult with traditional tools, and might allow them to find richer results in less time.
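To illustrate the kind of "unexpected sequence" check the experts describe, here is a small sketch of a validity test for a single incident's event sequence. The expected ordering and the function are our own simplification for illustration; they are not LifeFlow's algorithm or the CATT Lab's SQL.

```python
from datetime import datetime
from typing import List, Tuple

# Expected ordering of the (aggregated) event types within one incident.
EXPECTED_ORDER = [
    "Incident Notification",
    "Incident Arrival",
    "Lane Clearance",
    "Return to normal (aggregated)",
]

def is_expected_sequence(events: List[Tuple[str, datetime]]) -> bool:
    """True if the incident starts with a notification, ends with the aggregated
    clearance event, and the events in between appear in the expected order."""
    names = [name for name, _ in sorted(events, key=lambda e: e[1])]
    if not names or names[0] != "Incident Notification":
        return False
    if names[-1] != "Return to normal (aggregated)":
        return False
    ranks = [EXPECTED_ORDER.index(n) for n in names if n in EXPECTED_ORDER]
    return ranks == sorted(ranks)
```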