
1 WELCOME TO THE WEBINAR This Webinar is being presented from a computer with a resolution of 1280 X 1024. Using a similar setting on your machine may improve your viewing experience.

2 FY 2019 Enhancements Released with Version 2.5.5
FY 2019 Enhancements Released with Version 2.5.5 and Introduction to the Calibration Assistance Tool 22 October 2019

3 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Presenters: Wouter Brink, ARA; Chad Becker, ARA; Harold Von Quintus, ARA. Moderator: John Donahue, Missouri Department of Transportation, Chair. Presentation will be available for viewing on the ME-Design Resource website: Moderator: Introduce the presenters and announce where the presentation can be downloaded for future reference.

4 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Phones are being muted. Please post your questions in the Q&A box, which can be accessed by clicking the Webex Q&A button. The presenters will answer all questions at the end of the webinar/demonstration as time permits. Questions not answered because of time will be responded to separately.

5 Process for Enhancing the Software:
Suggested revisions are received from users and AASHTO members. The Pavement ME Design Task Force monitors ongoing/completed NCHRP, FHWA, and pooled fund projects applicable to the MEPDG. Task Force members review and prioritize all potential suggested revisions and enhancements. Moderator: Reviews the Task Force's process for deciding which enhancements are implemented and turns the presentation back to Chad.

6 Pavement ME Task Force Members
John Donahue, P.E., Missouri DOT, Chairperson; Vicki Schofield, AASHTO Project Manager; Clark Morrison, P.E., North Carolina DOT, Vice-Chair; Felix Doucet, Quebec Transportation, Vice-Chair; Robert Shugart, P.E., Alabama DOT; David Holmgren, P.E., Utah DOT; Patrick Bierl, P.E., Ohio DOT; Jeff Neal, P.E., Kansas DOT, SCOA Liaison; Tara Liske, TAC Liaison; Tom Yu, P.E., FHWA Liaison; Travis Tackett, Florida DOT, T&AA Liaison

7 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Future webinars on FY 2019 enhancements: Webinar #3 (date TBD): Top-Down Cracking Model Integration

8 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Before we get started: Poll 1: Questions 1 and 2 Chad: Brings up the poll question and summarizes the results.

9 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Outline of today's webinar: Calibration Assistance Tool – Part 1 Review Calibration Assistance Tool – Part 2

10 Calibration Assistance Tool

11 CAT Overview - What was developed
Full-featured web application Calibration database with both LTPP and user-defined pavement sections Step-by-step Calibration Guide CAT software user manual Available for public use October 23, 2019

12 Calibration Assistance Tool – Part 1 Review
Introduction Requirements, Assumptions, and Limitations CAT Calibration Process Software Walkthrough

13 Walkthrough Items from Part 1
Login Screen Home Screen Navigation Pane Upload Required Data Start New Calibration Parts 1 through 3

14 Calibration Assistance Tool - Walkthrough

15 Walkthrough Items for Part 2
Manage Calibration Projects Initial Verification Review – from Part 1 Detailed Verification Review – Filters and Data Review Optimization Validation and Data Export

16 Manage Calibration Projects

17 Manage Calibration Projects

18 Calibration Process – Parts 3 and 4
Parts of Process Manual Steps in Process Automated Steps Step 9: Calculate and compare predicted & measured distress; assess bias and standard error, hypothesis testing Step 10: Review hypothesis test results to determine if calibration is needed. If not needed: Stop If needed: Select calibration coefficients to be modified and range of values Part 4 Data Analysis and Interpretation of Distress Predictions From Part 3

19 Part 3 –Initial Verification Results
As an example, the next step is to review the predicted and measured distress data for the initial verification run. The main screen summarizes the projects included in the initial verification, the filters based on the experimental design matrix, the predicted versus measured distress data, the model fit statistics, and the hypothesis test results. To evaluate the results and determine whether calibration is needed: First, review the measured versus predicted plot to see how the two data sets compare. Next, assess the model fit statistics, which include the bias, standard error of the estimate, R-squared, linear fit intercept and slope, number of data points, and number of pavement sections included in the verification. Lastly, review the hypothesis test results to determine whether local calibration is necessary. If any of the p-values are less than 0.05, the null hypotheses are rejected and local calibration is recommended. For this example, I would conclude that calibration is recommended to eliminate bias and reduce the standard error. The process for model optimization will be covered in the next webinar.
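The model fit statistics described above (bias, SEE, R-squared) and the t-statistic behind the bias hypothesis test can be computed directly from paired measured/predicted data. A minimal sketch with hypothetical rutting values, not the CAT's actual implementation (the tool also runs intercept and slope hypothesis tests not shown here):

```python
import math

def fit_statistics(measured, predicted):
    """Bias, standard error of the estimate (SEE), R^2, and the paired
    t-statistic for H0: bias = 0, from measured vs. predicted distress."""
    n = len(measured)
    residuals = [p - m for m, p in zip(measured, predicted)]
    bias = sum(residuals) / n
    see = math.sqrt(sum(r * r for r in residuals) / (n - 1))
    mean_m = sum(measured) / n
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    r_sq = 1 - sum(r * r for r in residuals) / ss_tot
    sd = math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))
    t_stat = bias / (sd / math.sqrt(n))  # compare against a t critical value
    return bias, see, r_sq, t_stat

# Hypothetical rutting data (inches), for illustration only
measured = [0.10, 0.18, 0.25, 0.33, 0.40]
predicted = [0.12, 0.16, 0.27, 0.30, 0.42]
bias, see, r_sq, t_stat = fit_statistics(measured, predicted)
```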

20 Part 4 – Detailed Review Use the CAT to gain knowledge about pavement sections. Identify outliers and anomalies. Investigate why certain sections perform the way they do. Use the data filters to update MvP (measured versus predicted) and timeseries plots to: Identify trends Make decisions about your pavement sections

21 Part 4 – Detailed Review – All Sections
Questions to ask: Why are these points so different from the rest? Do they have structural differences? Are they in different regions? Do they have any features that are unique to them? Did they exhibit any material-related defects? What can you conclude from the timeseries data? 1 2 3 We want to look at all the Florida, Alabama, and Arizona SPS-1 sections together. Based on the figure, I want to focus on three different areas and ask: Why are these points different from the rest? Are there structural differences between them (thickness, asphalt-treated base, etc.)? Can different geographical locations explain why some are performing this way? Any material-related defects?

22 Part 4 – Detailed Review – All Sections
Questions to ask: If there are differences, do those differences significantly impact the residual error? Should these sections be excluded? Can you explain their performance? 1 2 3 Some additional questions: If you know that there are differences, then what? Do the differences between the projects affect the residual error? Do they explain the performance trends? We can't tell directly from this figure, but these are the questions you need to ask when going through calibration. You can separate the sections out to further refine your questioning.

23 Part 4 – Detailed Review – All Sections
Questions to ask: Increase in measured rutting while predicted stays constant? How can I adjust the coefficients to remedy this? For example, we observe that for this particular section, the measured data increases while the predicted data stays fairly constant. What can we adjust in the rutting model to account for this behavior? We should also ask whether this behavior is observed in the other sections. If yes, then we know that we need to address it. If not, this section could be an outlier.

24 Part 4 – Detailed Review – All Sections
Questions to ask: Overprediction Why does the residual error look like this? Over-predicts for low measured rutting and under-predicts for high measured rutting. Is the residual error randomly scattered around zero? How can I correct this? Underprediction Let's look at the residual error: We see a trend in the predicted-minus-measured data, with consistent over- and under-prediction over time. The permanent deformation equation exponents impact the slope of the predicted rutting; adjusting them will help improve the trend we are seeing. Here the residual error depends on the predicted value, and we do not want that.
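The "is the residual error randomly scattered around zero?" question above can be quantified by regressing the residuals against the measured values. A hedged sketch with made-up data: a slope near zero suggests random scatter, while a strongly negative slope indicates over-prediction at low values and under-prediction at high values, the pattern described on this slide.

```python
def residual_trend_slope(measured, predicted):
    """Least-squares slope of the residual (predicted - measured)
    plotted against the measured distress values."""
    n = len(measured)
    residuals = [p - m for m, p in zip(measured, predicted)]
    mean_x = sum(measured) / n
    mean_y = sum(residuals) / n
    num = sum((x - mean_x) * (r - mean_y) for x, r in zip(measured, residuals))
    den = sum((x - mean_x) ** 2 for x in measured)
    return num / den

# Hypothetical data: predictions compressed toward the middle of the range
measured = [0.05, 0.15, 0.25, 0.35, 0.45]
predicted = [0.12, 0.18, 0.25, 0.32, 0.38]
slope = residual_trend_slope(measured, predicted)  # strongly negative here
```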

25 Part 4 – Detailed Review – Separate sections
Let's split the sections based on their locations (for demonstration purposes): Alabama Arizona Florida Can we identify why some sections perform the way they do? Should they be calibrated separately?

26 Part 4 – Detailed Review – Alabama
Clear difference What is different in this section? Material defects during construction? Loss of bond between AC lifts? Is this an outlier? Can you explain the difference? If yes, then delete it; if not, it should remain in the data set. 1 2 These are all LTPP SPS-1 sections in close proximity to one another. They have the same traffic, climate, subgrade materials, etc. They do, however, have different thicknesses. Based on these facts, I conclude that this is an outlier and does not represent typical performance.

27 Part 4 – Detailed Review – Alabama
What about timeseries data? Increase in slope over time? Recommendation: Exclude These are all LTPP SPS-1 sections in close proximity to one another. They have the same traffic, climate, subgrade materials, etc. Let's look at the timeseries data. We can clearly see that this section performs differently than the rest. It has an increasing slope with time, which does not follow typical rut patterns or the predicted model; we should see a constant slope with time. Based on this assessment, I conclude that this is most probably an outlier and should be removed from the dataset.

28 Part 4 – Detailed Review – Alabama
After excluding the section, now what? Looks like a slight under-prediction as measured rutting increases. Do these sections have different thicknesses, and can that explain some of the bias? Once we remove the section, we repeat the same process: ask the same questions and try to explain the behavior before adjusting the coefficients. We can further use the filters in the detailed review to make decisions. For this data I asked: what are the different thicknesses? Do they perform differently?

29 Part 4 – Detailed Review – Alabama
AC thickness > 10 inches Difference in magnitude AC thickness between 5 and 10 inches Minimal difference in slope; mostly magnitude. Let's look at the sections greater than 10 inches and the sections between 5 and 10 inches separately. For the sections greater than 10 inches, the model under-predicts the measured performance, while it over-predicts for the sections between 5 and 10 inches. We can use the timeseries data to understand if there are differences in slope. I did a visual inspection and added some hand-drawn trends. The >10-inch sections did show a different slope between the measured and predicted data, which would suggest that either the br2 or br3 coefficient may need to be adjusted. The sections between 5 and 10 inches do not show the slope differences but rather a magnitude difference; the br1 or bs1/bsg1 coefficients may be a better option to improve those predictions. These examples show the types of questions to ask, how to use the tool to gain knowledge about your sections, and how to use that information to adjust the calibration coefficients.

30 Detailed Review – Florida
Looks like there are clusters of data for each section. Any outliers? What is causing this difference? Do these sections have different base types (asphalt-treated base, perhaps)? As another example, I separated out the Florida sections. I can see that there seems to be a distinct difference in magnitude for the sections included. The same questions should be asked: you need to try to explain why this is happening and then proceed.

31 Part 4 – Detailed Review – Florida
AC thickness between 5 and 10 inches AC thickness < 5 inches AC thickness > 10 inches I looked at the pavement sections by thickness because their climate, traffic, and other inputs are similar. I am highlighting a few points to look at and explaining what you are seeing. There is a thickness effect; some of the findings are similar to what was found in NCHRP 9-30a regarding the confinement factor. The built-in confinement factor overestimates; one way to account for it is in the lab-derived k1 value. This can be done external to the calibration tool.

32 Part 4 - Recap User Decisions
Carefully review measured and predicted data to gain knowledge about what you are seeing Use filters to make decisions Explain why you are seeing what you are seeing Iterative process Automated processes Populate all data tables and plots Calculate descriptive and model fit statistics Populate Results Filter data and update figures and tables

33 Calibration Process – Parts 5 and 6
Parts of Process Manual Steps in Process Automated Steps From Part 4 Step 11: Execute optimization runs to optimize coefficients to eliminate bias and minimize standard error Step 12: Review results and select calibration coefficient set with lowest bias and standard error or select additional adjustments Part 5 Optimization of Calibration Coefficients to Eliminate Bias Part 6 Validation and Accepting the Final Results Step 13: Review validation results, develop standard deviation of residual error relationship and accept final coefficients for report generation Step 14: Export final calibration coefficients and standard deviation results in XML format compatible with pavement ME design

34 Part 5 – Optimize Coefficients

35 Part 5 – Optimize Coefficients: Alabama

36 Part 5 – Optimize Coefficients: Alabama - ANOVA
Only AC thickness was selected because that is the main difference between these sections. You can look at each factor individually, but there must be differing data values; binder type, MAAT, traffic, and subgrade type are all the same in this instance.

37 Part 5 – Optimize Coefficients: Alabama
Adjust Br3 to increase the slope of the AC rut model – This should increase the predictions for the thicker sections without impacting the sections with thickness between 5 and 10.

38 Part 5 – Optimize Coefficients: Alabama Iteration 1
The tool visually shows you the number of calibration coefficient sets that you selected. You can then individually click on each one to view the measured versus predicted and residual error data as well as the descriptive statistics. The table data is also graphically represented to show the bias and SEE trends for each coefficient set. As you can see here, the bias and SEE increased with an increase in Br3 from 1.36 to 1.5.
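The selection step described above (reviewing bias and SEE trends across coefficient sets and choosing the best one) can be sketched as a simple ranking. The equal weighting of |bias| and SEE below is an assumption, and the br3 labels and statistics are hypothetical, not results from the tool:

```python
def best_coefficient_set(results):
    """Return the label of the coefficient set minimizing |bias| + SEE.
    Equal weighting is an assumption; agencies may weight differently
    or inspect the residual plots before choosing."""
    return min(results, key=lambda label: abs(results[label][0]) + results[label][1])

# Hypothetical iteration results: label -> (bias, SEE)
results = {
    "br3=1.36": (0.012, 0.081),
    "br3=1.43": (0.025, 0.089),
    "br3=1.50": (0.041, 0.097),
}
best = best_coefficient_set(results)
```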

39 Part 5 – Optimize Coefficients: Alabama Iteration 1
The data c

40 Part 5 – Optimize Coefficients: Alabama Iteration 1
Initial Verification After Adjusting Coefficients How does the updated model compare? The predictions improve but still need to be adjusted. Further refinement is needed to improve the residual error trend. The systematic difference needs to be removed, which will require a combination of the coefficients br1, br2, and br3.

41 Part 5 – Optimize Coefficients: Florida
Let's look at the Florida sections.

42 Part 5 – Optimize Coefficients: Florida - ANOVA
Only AC thickness was selected because that is the main difference between these sections. You can look at each factor individually, but there must be differing data values; binder type, MAAT, traffic, and subgrade type are all the same in this instance.

43 Part 5 – Optimize Coefficients: Florida
Adjust Br3 to increase the slope of the AC rut model – This should increase the predictions for the thicker sections without impacting the sections with thickness between 5 and 10.

44 Part 5 – Optimize Coefficients: Florida Iteration 1

45 After Adjusting Coefficients
Initial Verification After Adjusting Coefficients The possible thickness effect cannot be directly accounted for and should be handled in the dgpx file by adjusting the k1 value.

46 Part 5 - Recap User Decisions
Select which coefficients to adjust based on the data Review results and determine if further adjustments are needed Iterative process Automated processes Send optimization sets and list of projects to analysis server for execution Run all optimization sets Notify user when each optimization run is complete Populate Results Set up one matrix at a time – different factors may be important for different distresses Set factor limits based on local design practices and conditions Iterate and adjust values to try and obtain a balanced design Balanced design means that an equal number of projects are within each populated cell. If 10 projects are within one cell and only 1 project in others, it may bias your results. Review the minimum number of sections based on agency values and tolerances.
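The balanced-design note above (flagging cells that hold far more projects than others, since that can bias the results) can be sketched as a cell count over the sampling matrix. The 3:1 ratio threshold is a placeholder, not an agency standard; each agency sets its own minimums and tolerances:

```python
from collections import Counter

def check_balance(cells, max_ratio=3.0):
    """Count projects per sampling-matrix cell and flag imbalance.
    'cells' holds one factorial-cell tag per project; the ratio
    threshold is a placeholder for an agency-defined tolerance."""
    counts = Counter(cells)
    balanced = max(counts.values()) / min(counts.values()) <= max_ratio
    return balanced, counts

# Hypothetical cells: (AC thickness class, subgrade type) per project
cells = [("thick", "clay")] * 10 + [("thin", "clay")]
balanced, counts = check_balance(cells)  # 10 vs. 1 -> not balanced
```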

47 Calibration Process – Parts 6
Parts of Process Manual Steps in Process Automated Steps From Part 5 Part 6 Validation and Accepting the Final Results Step 13: Review validation results, develop standard deviation of residual error relationship and accept final coefficients for report generation Step 14: Export final calibration coefficients and standard deviation results in XML format compatible with pavement ME design

48 Part 6 – Final Selection Just as an example, once you have finalized the optimization, you then select the final set and proceed to the next section.

49 Part 6 – Final Selection The CAT then summarizes your verification, calibration, and validation results. Validation only occurs when you have more than 30 total pavement sections included in your analysis. This allows you to compare your model statistics. You can still go back and adjust if needed.

50 Part 6 – Standard Deviation Derivation
The final step
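Step 13 develops a standard deviation of residual error relationship, used for the reliability calculation. A minimal sketch assuming a power-form model sd = a * predicted^b fit by least squares in log-log space; the model form and the binned data are assumptions for illustration, not the CAT's exact procedure:

```python
import math

def fit_stdev_model(predicted_bins, stdevs):
    """Fit sd = a * predicted**b by least squares in log-log space."""
    xs = [math.log(p) for p in predicted_bins]
    ys = [math.log(s) for s in stdevs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical residual-error standard deviations binned by predicted rutting
pred_bins = [0.1, 0.2, 0.4]
sds = [0.020, 0.028, 0.040]
a, b = fit_stdev_model(pred_bins, sds)  # b is close to 0.5 for these data
```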

51 Part 6 – Data Export

52 Part 6 - Recap User Decisions Review final coefficient selections
Compare verification, optimization, and validation results. Was bias eliminated? Did the SEE reduce? Did all hypothesis tests have a p-value greater than 0.05? Are these results acceptable? Automated processes Populate all data tables and plots Calculate descriptive and model fit statistics Derive standard deviation of residual error equation coefficients Export results

53 As a review: Parts 1 and 2 Webinar 1 Webinar 2

54 Calibration Process – Parts 1 and 2
Step 4: Extract & review distress data Step 5: Calculate distress data statistics & identify outliers Step 6: Decision on adequate number of test sections in matrix Step 1: Establish sampling matrix Manual Steps in Process Automated Steps Step 3: Populate matrix from calibration database Step 2: Select test sections for matrix Part 2 Review Distress Data Part 1 Getting Ready for Calibration Parts of Process To Part 3

55 Calibration Process – Parts 3 and 4
Parts of Process Manual Steps in Process Automated Steps Step 8: Perform initial verification by executing batch file runs & extract predicted distress Step 7: Review project files and select a set of calibration coefficients (global or local) Step 9: Calculate and compare predicted & measured distress; assess bias and standard error, hypothesis testing Step 10: Review hypothesis test results to determine if calibration is needed. If not needed: Stop If needed: Select calibration coefficients to be modified and range of values Part 3 Set-Up Project Files and Execute ME Design Part 4 Data Analysis and Interpretation of Distress Predictions From Part 2 To Part 5

56 Calibration Process – Parts 5 and 6
Parts of Process Manual Steps in Process Automated Steps From Part 4 Step 11: Execute optimization runs to optimize coefficients to eliminate bias and minimize standard error Step 12: Review results and select calibration coefficient set with lowest bias and standard error or select additional adjustments Part 5 Optimization of Calibration Coefficients to Eliminate Bias Part 6 Validation and Accepting the Final Results Step 13: Review validation results, develop standard deviation of residual error relationship and accept final coefficients for report generation Step 14: Export final calibration coefficients and standard deviation results in XML format compatible with pavement ME design

57 CAT Overview – What does it NOT do?
NOT a hands-off approach: significant user interaction and decisions are required. NOT a one-click solution: the tool was created to guide users through the decision-making process while automating the PMED analysis runs and data extraction. As discussed throughout today's webinar, a lot of user interaction and decisions are required. Does NOT identify and select sections: data quality and accuracy must be checked by the user.

58 Live demonstration at the ME User Group meeting
November 6 and 7, 2019 in New Orleans, LA.

59 Before we conclude:
FY 2019 Enhancements Released with Version 2.5.5 and CAT Before we conclude: Poll 2: Question 3 Chad: Brings up the poll question and summarizes the results.

60 Question and answer Session
AASHTOWare Pavement ME Design FY 2019 Enhancements and the Calibration Assistance Tool Question and Answer Session We welcome comments & suggestions for future webinars. Harold: Moderates the question and answer period.

61 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Remember: Pavement ME Design Users Group Meeting scheduled for November 6 and 7, 2019 in New Orleans, LA. Future webinar on Top Down Cracking

62 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Remember to visit tions/tagged/pavement to ask questions and participate in the Pavement ME design community.

63 FY 2019 Enhancements Released with Version 2.5.5 and CAT
Enhancements for FY 2020: Top-Down Cracking Model Integration Continue to prepare a web technology version of the software – a high priority for AASHTO.

64 Thank you for Attending the Webinar!
AASHTOWare Pavement ME-Design Contacts: Vicki Schofield, AASHTO Phone: (202) John Donahue, MoDOT ME Design Resource Website Pavement ME Design Users Group Contact: Christopher Wagner, FHWA Phone: (404) Help Desk, Customer Support: PREFERRED Pavement ME Design Help Desk Phone: (217) Other ARA Staff: Chad Becker Wouter Brink, Harold Von Quintus, P.E. Harold: Identify the AASHTOWare contacts and thanks everyone for participating. Marta: Thanks everyone for participating in the webinar.

65 Examples:

66 Detailed Review – Arizona
No increase in predictions with increase in measured Do these sections have different base types?

67 Detailed Review – Arizona
No increase in predictions with increase in measured

68 Part 5 – Optimize Coefficients: Bs1

69 Part 5 – Optimize Coefficients: Bs1 Results
Set 1 Set 2 Set 3

70 Part 5 – Optimize Coefficients: Bs1– Results
Set 3 Results Even changing coefficients randomly will not remove the consistent bias that can be seen here.

71 Part 5 – Optimize Coefficients: Second Run Br2

72 Part 5 – Optimize Coefficients: Br2 – Results
Set 3 Set 2 Set 1

73 Part 5 – Optimize Coefficients: Br2 – Results
Set 2 Results

74 Part 5 – Optimize Coefficients: Br2 – Results
Notice that zero bias occurs somewhere between Set 1 and Set 2. Set the third-run coefficients between Set 1 and Set 2.
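Centering the refinement run between two sets whose biases have opposite signs is a linear interpolation. A sketch with hypothetical br2 values and biases; the bias-versus-coefficient relationship is only locally linear, so the interpolated value should be verified with another run:

```python
def zero_bias_coefficient(coef1, bias1, coef2, bias2):
    """Linearly interpolate the coefficient value where bias crosses zero,
    given two optimization runs with opposite-sign bias."""
    return coef1 - bias1 * (coef2 - coef1) / (bias2 - bias1)

# Hypothetical br2 runs: Set 1 under-predicts, Set 2 over-predicts
next_guess = zero_bias_coefficient(1.00, -0.015, 1.10, 0.025)
```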

75 Part 5 – Optimize Coefficients: Br2 Refinement

76 Part 5 – Optimize Coefficients: Br2 Refinement – Results
Set 1 Set 2 Set 3 Set 4

77 Part 5 – Optimize Coefficients: Br2 Refinement – Results
Set 2 Results

