1
ViDA Data Management/Quality Assurance
2
Outline – ViDA Data Management/Quality Assurance
QA review before the coding process
QA review during the coding process
QA review after the coding process
Pre-processing of data prior to analysis
3
Road rating training
usRAP inspection manual
How to use software
Practice road length
Peer reviews
Supervisor reviews
4
By the end of the training, the coders must:
understand what usRAP is and what it aims to achieve, including the purpose of Star Ratings and Safer Roads Investment Plans
be familiar with the road attributes and categories that must be recorded
know how to use the software to code roads
be able to consistently code roads with a high level of accuracy
Duration: 7-14 hours (intro, manual, software)
5
Target Level of Accuracy
6
Before the coding …
All coders rate the same section of road – up to 8 hours
Practice comparison (peer, trainer) – up to 8 hours
7
QA review during the coding process
Fatigue
Quality review
Interruptions
Data management
8
Road rating/coding… human factors
Perception (volume, expectation, contrast)
Interpretation (workload, motivation, mistakes)
Response (coding, saving)
4-6 coders per 2000 miles
2-7 mph range
9
Example: slope
10
Recommended reviews: Peer to Peer, Progress Reviews, External Reviews (optional)
Peer to Peer: encourage discussion; cross-check each other
Progress Reviews: supervisor checks; target 10% review
External Reviews (optional): independent

Peer to peer reviews
One of the best ways for new raters to learn is to ask questions, of both their supervisor and other raters. Often, this will be the main way to learn some of the more difficult issues to grasp. To ensure that all raters share exactly the same views on rating, peer to peer reviews are used. Whenever a difficult question is raised, the answer needs to be shared with all rating staff in the team, for example which guardrail end conditions are considered safe and where they are considered a hazard (see photos on slide). It is also helpful to cross-check rated roads. To do this, one rater reviews the ratings made by his or her peer; the reviewer identifies each attribute where there is no agreement, the two discuss each such item, review the inspection manual and reach a conclusion as to how each attribute should be rated. Peer to peer reviews should be undertaken and documented regularly throughout the rating process. As an example, a peer to peer review can be undertaken on a short section of each road or major section that is completed. It is recommended that the rating supervisor sit in on these peer to peer reviews, initially taking a proactive role and, with time, simply listening and intervening only in the case of an improper conclusion.

Progress reviews
Data should be checked by the rating supervisor as soon as possible after rating to identify any issues of concern quickly. Initially, detailed checks should be undertaken at the end of each day's rating until it is clear that the rater is providing outputs in accordance with the rating guidelines. A general rule is that 10% of the roads rated should be reviewed by the rating supervisor. Rather than waiting until an entire network of roads has been rated to find an error, it is best to review constantly, which eventually adds up to the final 10% check of all roads. In addition to checking the ratings against the specifications, the following issues can be checked. Typical scenarios that can cause errors include: duplicate records; missing 100 m sections not rated (gaps between rated sections); missing data in some attributes only. Some general checks of the data may include: has the correct length of road been rated? have the correct segments been rated? has horizontal alignment been rated, or is it based on inertial measurement data (e.g. Gipsi-trac)? where there is a dual carriageway, is there data for the reverse direction? If major errors are found in the sample, complete re-rating of the entire work by that rater may be required. Minor and isolated errors can be rectified by the rating supervisor. It is important to discuss any issues with raters straight away to minimise the need for re-rating. If all questions as to how to rate a particular item or situation are resolved by the rating supervisor, it will be possible to reduce both discrepancies and errors. The rating supervisor should prepare a weekly report of completed surveys, the peer to peer evaluation, and the discrepancies and errors found on each survey. These should be shared with the usRAP team to promote uniformity.

External reviews
External review of the survey should occur after completion of 25%, 50% and 100% of the rating. The external review should cover approximately 10% of the network, and be based on the specifications for accuracy.
These external reviews should be undertaken by someone qualified to be a rating supervisor who was not directly involved in the rating task.
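As a rough illustration of the progress-review workflow, the sketch below (Python/pandas, with hypothetical file and column names such as coded_sections.csv, rater and road) pulls roughly 10% of each rater's sections for supervisor review and tallies the counts for the weekly report:

```python
import pandas as pd

# Hypothetical export of coded 100 m sections; the file and column names are assumptions.
sections = pd.read_csv("coded_sections.csv")  # e.g. columns: rater, road, section_id, ...

# Draw roughly 10% of each rater's sections for the supervisor's progress review.
to_review = sections.groupby("rater").sample(frac=0.10, random_state=1)

# Weekly summary for the report: sections coded vs. sections pulled for review, per rater.
weekly = pd.DataFrame({
    "coded": sections.groupby("rater").size(),
    "selected_for_review": to_review.groupby("rater").size(),
}).fillna(0).astype(int)

to_review.to_csv("review_sample.csv", index=False)
print(weekly)
```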
11
Typical errors
duplicate records
missing sections not coded (gaps between coded sections)
missing data in some attributes only
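A minimal sketch of automated checks for these three error types, assuming the coded sections are exported to a CSV with hypothetical road and distance columns plus the attribute columns:

```python
import pandas as pd

# Hypothetical export of coded sections; file and column names are assumptions.
df = pd.read_csv("coded_sections.csv")  # e.g. columns: road, distance, speed_limit, ...
SECTION_LENGTH = 0.1  # assumed section length, in the same units as `distance`

# 1. Duplicate records: the same road/distance coded more than once.
duplicates = df[df.duplicated(subset=["road", "distance"], keep=False)]

# 2. Gaps: a section whose distance jumps more than one section length from the previous one.
df = df.sort_values(["road", "distance"])
gap_before = df.groupby("road")["distance"].diff() > SECTION_LENGTH * 1.5
gaps = df[gap_before.fillna(False)]

# 3. Missing data in some attributes only.
attribute_cols = [c for c in df.columns if c not in ("road", "distance")]
missing = df[df[attribute_cols].isna().any(axis=1)]

print(f"{len(duplicates)} duplicate rows, {len(gaps)} sections preceded by a gap, "
      f"{len(missing)} sections with missing attribute values")
```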
12
Documentation
Errors, by coder
Results of peer reviews
Weekly progress report – share
13
General Checks
Has the correct length of road been coded?
Have the correct segments been coded?
Has horizontal alignment been coded?
Where there is a divided highway, are there data for the reverse direction?
Were the correct photos used?
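The length and reverse-direction checks can also be scripted; the sketch below assumes a CSV of coded sections with hypothetical road, distance and carriageway columns, plus a small reference table of expected lengths:

```python
import pandas as pd

# Hypothetical inputs; file and column names are assumptions.
coded = pd.read_csv("coded_sections.csv")       # road, distance, carriageway ("A"/"B"), ...
expected = pd.read_csv("expected_lengths.csv")  # road, expected_miles, divided (True/False)
SECTION_LENGTH = 0.1                            # assumed coded section length, miles

# Has the correct length of road been coded?
coded_len = (coded.groupby("road").size() * SECTION_LENGTH).rename("coded_miles")
check = expected.set_index("road").join(coded_len)
check["length_ok"] = (check["coded_miles"] - check["expected_miles"]).abs() < SECTION_LENGTH

# Where there is a divided highway, are there data for the reverse direction?
directions = coded.groupby("road")["carriageway"].nunique()
check["reverse_ok"] = ~check["divided"] | (directions.reindex(check.index) >= 2)

print(check[~(check["length_ok"] & check["reverse_ok"])])
```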
14
Speed limit signs are not always easy to see in Street View
15
Notes: Non-standard sign
16
Notes

Location | Attribute | Recorded item | Correct item | Notes
0.1 to 0.7 | Speed limit | Unknown | 60 mph | Speed sign at 0.7m is a repeater sign (small diameter), therefore the previous section has the same speed limit.
0.1 | Roadside left | 0-3 m | 3-7 m | Only hazardous item present is a telegraph pole, around 7 m. See Image 1 below.
 | Intersection type | 3-leg (unsignalised), no turn lane | 4-leg (unsignalised), no turn lane | This is a staggered 4-leg intersection. See Image 2.
0.1 to | Lane width | Medium | Wide | Compare the lane width to the car width.
0.3 | Curvature | Moderate | Straight or gentle | There is no curve in this 100 m (further on there is).
 | Median type | Central hatching | Centre line | The hatching stopped in the previous 100 m.

Image 1. A9, distance 0.1m
Image 2. A9, distance 0.1m (looking back from the end of the section)
17
QA review after the coding process
Debrief with raters
Consistency checks
Produce and review maps
Spreadsheet computations: summaries, pivot tables
18
Simple Consistency Checks
See Chapter 4, p. 17 of the Producer Manual
19
Results Review
Check extreme risk values – are the coded attributes correct?
Check a random sample – are the coded attributes correct?
Review the distribution of items for each category using the detailed condition report (road specific)
Do risk scores follow the logical road structure/classification?
Are countermeasures in the correct places? Cross-reference with road inspection data.
Are the maps accurately plotted on the road network?
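A minimal sketch of pulling the sections to review, assuming the ViDA results have been exported to a CSV with a hypothetical risk-score column:

```python
import pandas as pd

# Hypothetical export of results; file and column names are assumptions.
results = pd.read_csv("star_rating_results.csv")  # e.g. road, distance, vehicle_occupant_risk, ...

# Sections with the most extreme risk scores: check that the coded attributes are correct.
extreme = results.nlargest(20, "vehicle_occupant_risk")

# A random sample for the same check, so the review is not biased toward outliers only.
sample = results.sample(n=20, random_state=1)

pd.concat([extreme, sample]).drop_duplicates().to_csv("sections_to_review.csv", index=False)
```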
20
Spreadsheet summary statistics
21
Pivot Tables – comparing coders
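The same comparison can be reproduced outside a spreadsheet with a pandas pivot; the file and column names below are assumptions:

```python
import pandas as pd

# Hypothetical export of coded sections; file and column names are assumptions.
df = pd.read_csv("coded_sections.csv")  # e.g. coder, road, roadside_severity_left, ...

# Count how often each coder used each roadside severity code. Large differences
# between coders working on comparable roads are worth a follow-up discussion.
pivot = pd.pivot_table(
    df,
    index="roadside_severity_left",
    columns="coder",
    values="road",
    aggfunc="count",
    fill_value=0,
)
print(pivot)
```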
22
Pivot Tables – summary by codes
23
Mapping Checks
24
Example 1 – Roadside Severity Review Using ArcGIS and Google Maps
1. After rating, plot points and attributes in ArcGIS.
2. Filter on the attributes that you want to review. In this example, roadside severities: deep drainage ditch (9) and downwards slope (> -15). Based on some preliminary descriptive statistics on roadside severity, I felt that these had been reported too frequently by one rater. The spatial distribution of locations coded in this manner is presented.
3. Convert the records of interest to KMZ and display them in Google Earth.
4. In Google Earth, review the Street View images and re-assess the attributes.
5. Simultaneously, but independently, update the appropriate attributes in ArcGIS by selecting the corresponding records.
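A minimal sketch of step 3, assuming the filtered records sit in a CSV with latitude/longitude columns and that the simplekml package is available (file, column and attribute names are assumptions):

```python
import pandas as pd
import simplekml  # assumed available: pip install simplekml

# Hypothetical export of the flagged roadside-severity records.
points = pd.read_csv("roadside_severity_flagged.csv")  # road, distance, lat, lon, severity

kml = simplekml.Kml()
for _, row in points.iterrows():
    # One placemark per flagged section, named so it is easy to locate in Google Earth.
    kml.newpoint(
        name=f"{row['road']} @ {row['distance']}",
        description=f"Coded roadside severity: {row['severity']}",
        coords=[(row["lon"], row["lat"])],  # simplekml expects (lon, lat)
    )
kml.savekmz("roadside_severity_review.kmz")  # open in Google Earth and review with Street View
```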
25
GIS on base map
26
Street View review
27
Example 2 – Intersection Attribute Review Using ArcGIS (thematic maps)
1. After rating, plot points and attributes in ArcGIS.
2. Filter on the attributes that you want to review. In this example, intersection locations and recorded volumes.
3. Compare intersection locations to an underlying roadway network, intersection database or aerial image.
4. Where available, compare traffic volumes to the recorded/assumed traffic volumes.
5. Thematically map the intersection and traffic (roadway) data in the same manner.
6. Where available, compare provided traffic signal locations to recorded traffic signal locations.
7. Identify locations where an intersection was recorded but traffic volume was not (this could be done in Excel as well).
8. Update any pertinent attributes from steps 3-6 in ArcGIS by selecting the corresponding records.
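Step 7 is straightforward to script; the sketch below assumes a CSV of coded sections with hypothetical intersection_type and intersecting_volume columns:

```python
import pandas as pd

# Hypothetical export of coded sections; file and column names are assumptions.
df = pd.read_csv("coded_sections.csv")  # road, distance, intersection_type, intersecting_volume

# Step 7: locations where an intersection was recorded but no traffic volume was.
has_intersection = df["intersection_type"].notna() & (df["intersection_type"] != "None")
no_volume = df["intersecting_volume"].isna()
flagged = df[has_intersection & no_volume]

flagged.to_csv("intersections_missing_volume.csv", index=False)
print(f"{len(flagged)} intersections recorded without an intersecting road volume")
```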
29
Example 3 – Attribute Review Using Street View and Preprocessor
31
Pre-processing of data prior to analysis
Although data for the majority of these fields is collected during the road inspection and coding phases, some fields are difficult to rate, or require further investigation or pre-calculation data processing (pre-processing).
32
Fields covered (updated: see page 7 of the Producer Manual):
road name
section
traffic volume
motorcycle volume
pedestrian and bicycle flows
speed
overtaking demand
horizontal alignment
vertical alignment
intersecting road volume
33
Use “local knowledge”. Document this to ensure there is a high level of transparency and quality.
34
Motorcycle flow – usually default in the US
35
Ped/Bike Flow estimation (general approach, being revised for ViDA)
Method: area type, sidewalk, ped crossing facility, land use
3 stages of preprocessing: global, local, smoothing
Engineer/planner with local knowledge may check/adjust
Plotting useful
36
Pedestrian Crossing – Global Stage
Direct observation (coded flow)
37
Ped Crossing – Global Stage
Estimate flow
38
Ped Crossing – Global Stage
39
Ped Crossing – Local Stage
Example adjustments:
If no flow observed but there is a crossing facility, adjust to low
If no flow observed but there is an intersection, adjust to low
The Global Stage may produce excessive flows for US conditions – consider reducing by one unit
Do not smooth
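A minimal sketch of these local-stage adjustments, assuming the sections are in a pandas DataFrame with hypothetical ped_crossing_flow, crossing_facility and intersection_type columns and an ordered flow scale of none/low/medium/high:

```python
import pandas as pd

FLOW_SCALE = ["none", "low", "medium", "high"]  # assumed ordered flow categories

def adjust_ped_crossing_flow(df: pd.DataFrame) -> pd.DataFrame:
    """Local-stage adjustments to the global-stage ped crossing flow estimate (sketch)."""
    out = df.copy()

    # The global stage may produce excessive flows for US conditions:
    # reduce each estimate by one unit first.
    idx = out["ped_crossing_flow"].map(FLOW_SCALE.index)
    out["ped_crossing_flow"] = [FLOW_SCALE[max(i - 1, 0)] for i in idx]

    # No flow, but a crossing facility or an intersection is present: adjust to low.
    no_flow = out["ped_crossing_flow"] == "none"
    facility = out["crossing_facility"] != "none"
    intersection = out["intersection_type"] != "none"
    out.loc[no_flow & (facility | intersection), "ped_crossing_flow"] = "low"

    return out  # ped crossing flows are not smoothed
```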
40
Ped Flow Along Road – Global Stage
41
Ped Flow Along Road - Global
42
Ped Flow Along Road - Global
May produce excessive flows for US conditions – consider reducing by one unit
43
Ped Flow Along Road – Local Stage/Smoothing
Example adjustments:
If no flow observed but there is a sidewalk, adjust to low
If no flow observed but the area is semi-urban/urban, adjust to low
The Global Stage may produce excessive flows for US conditions – consider reducing by one category/unit
Smoothing: +/- 1.5 mi (do not carry into rural)
Observed rural flows carry over to the nearest semi-urban/urban
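A rough sketch of the smoothing step for a single road, assuming a DataFrame with hypothetical distance (miles), ped_flow_along_road and area_type columns:

```python
import pandas as pd

FLOW_SCALE = ["none", "low", "medium", "high"]  # assumed ordered flow categories
SMOOTH_WINDOW_MI = 1.5                          # smoothing reach on either side

def smooth_ped_flow(df: pd.DataFrame) -> pd.DataFrame:
    """Spread observed ped flow +/- 1.5 mi, without carrying it into rural sections (sketch)."""
    out = df.sort_values("distance").copy()
    flows = out["ped_flow_along_road"].map(FLOW_SCALE.index).to_numpy()
    rural = (out["area_type"] == "rural").to_numpy()
    dist = out["distance"].to_numpy()

    smoothed = flows.copy()
    for i in range(len(out)):
        # Take the highest flow observed within 1.5 mi of this section ...
        window = abs(dist - dist[i]) <= SMOOTH_WINDOW_MI
        # ... but do not carry flow into rural sections where none was observed.
        if not (rural[i] and flows[i] == 0):
            smoothed[i] = max(smoothed[i], flows[window].max())

    out["ped_flow_along_road"] = [FLOW_SCALE[i] for i in smoothed]
    return out
```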
44
Bike Flow – Global Stage
Similar to ped flow along road
45
Bike Flow – Local Stage/Smoothing
Example adjustments:
If no flow observed but there is a bike facility, adjust to low
If rural and an isolated medium/high flow is observed, reduce by one
The Global Stage may produce excessive flows for US conditions – consider reducing by one category/unit
Smoothing: +/- 1.5 mi (do not carry into rural)
Observed rural flows carry over to the nearest semi-urban/urban
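The bike-flow adjustments differ only in the facility rule and the rural spike rule; a sketch under the same assumed column conventions (bike_flow, bike_facility, area_type):

```python
import pandas as pd

FLOW_SCALE = ["none", "low", "medium", "high"]  # assumed ordered flow categories

def adjust_bike_flow(df: pd.DataFrame) -> pd.DataFrame:
    """Local-stage bike flow adjustments (sketch); smoothing is the same as for ped flow."""
    out = df.copy()
    idx = out["bike_flow"].map(FLOW_SCALE.index)

    # No flow observed, but a bicycle facility is present: adjust to low.
    out.loc[(idx == 0) & (out["bike_facility"] != "none"), "bike_flow"] = "low"

    # Rural section with an isolated medium/high observation: reduce by one category.
    isolated = idx.diff().abs().fillna(0) >= 2   # crude proxy for "isolated"; an assumption
    rural_spike = (out["area_type"] == "rural") & (idx >= 2) & isolated
    out.loc[rural_spike, "bike_flow"] = [FLOW_SCALE[i - 1] for i in idx[rural_spike]]

    return out
```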
46
Speed – three categories:
Posted speed (coded by the coders)
85th percentile speed (from surveys, similar sites, or set to 5 mph over posted) – post-processing
Mean speed (from surveys, similar sites, or set to posted) – post-processing
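If neither surveys nor similar sites are available, these defaults are a one-liner each; the sketch assumes posted speed in mph and hypothetical posted_speed, speed_85th and speed_mean columns:

```python
import pandas as pd

# Hypothetical coded sections with posted speed in mph; file and column names are assumptions.
df = pd.read_csv("coded_sections.csv")

# Default post-processing when no survey or similar-site data are available:
# 85th percentile speed = posted + 5 mph, mean speed = posted.
df["speed_85th"] = df["speed_85th"].fillna(df["posted_speed"] + 5)
df["speed_mean"] = df["speed_mean"].fillna(df["posted_speed"])
```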
47
Horizontal Curvature Add it if you’ve got it (e.g., from an inventory)
48
Vertical Alignment Variation
Add it if you’ve got it
49
Passing Demand (inside the program)
Physical median – none
1 lane with median rumble, central hatching, <2000 ADT, non-physical median – low
1 lane with centerline median and <4000 ADT – medium
1 lane with centerline median and >=4000 ADT – high
Document the method
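A minimal sketch of these rules as a lookup function (the field names and category labels are assumptions; the thresholds come from the slide):

```python
def passing_demand(median_type: str, lanes_per_direction: int, adt: float) -> str:
    """Assign overtaking (passing) demand from median type, lanes and ADT (sketch of the slide's rules)."""
    if median_type == "physical":
        return "none"
    if lanes_per_direction == 1:
        if median_type in ("median rumble", "central hatching") and adt < 2000:
            return "low"
        if median_type == "centerline" and adt < 4000:
            return "medium"
        if median_type == "centerline" and adt >= 4000:
            return "high"
    return "undetermined"  # document the method used for any other combination
```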
50
Intersecting Road Volume
Detailed volume data is best
Post-process if you have some
If you have it all, don't code it
Better to estimate than to use the default (high)
51
Questions? Thank you