Computing Performance Recommendations #10, #11, #12, #15, #16, #17.

Recommendation 10
We recommend extending the scope of the computing professionals to review and optimize all Geant4 code.

Periodic meetings for reviewing Geant4 computing performance, organised at CERN over the past two to three years, have been replaced by internal Geant4 efforts carried out in communication with external (experiment) users, inviting expert representatives from the LHC experiments. Such meetings proved useful for exchanging feedback and discussing issues arising from studies made within the experiments' frameworks.
In 2007, software experts from FNAL joined the Geant4 Collaboration with an explicit mandate to contribute to code reviews and performance studies of Geant4. Selected classes from particular domains were identified and code reviews organised (*). Fixes suggested as a result of these code reviews have been promptly applied to the code and made available in the most recent releases.
(*) See also notes attached to this slide

Recommendation 11
We recommend that Geant4 encourage users to monitor their applications, and provide feedback so additional "hot spots" can be identified.

This has been communicated to users. Issues identified by profiling were addressed by Geant4 FNAL experts, GATE developers and CMS contributors in collaboration with Geant4 developers. Participation of computing experts from the experiments has been encouraged; fixes suggested by CMS in the CMS performance task force have been evaluated, in most cases applied to the code, and released.
- Fixes gave a 12-15% boost (CMS) in QGSP_BERT
- About 15% (GATE) in the low-energy physics tables
- Monitoring by ATLAS provided feedback useful for fixing issues as they occurred

Recommendation 12
We recommend the creation of a performance optimization guide. It is likely that such information already exists and just needs to be collected into one document.

Presentations at the Geant4 Workshop 2007 summarised many options available for improving application performance: improving the use of Geant4, writing better user classes and, for appropriate applications, using event biasing.
A first draft web page with tips on improving performance is available as a Twiki document. A link will be added in the User Documentation (Feb 2009).
Once reasonably complete, it is planned to include the information as a separate document or a dedicated chapter in the Geant4 Users' Guide.
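Some of the simplest tips of this kind can be applied from a Geant4 UI macro alone, with no code changes; a minimal sketch (the cut value is illustrative only and must be validated against the physics of the use case):

```
# Suppress per-run/per-event/per-track printout, which can cost
# noticeable CPU time in long production runs
/control/verbose 0
/run/verbose 0
/event/verbose 0
/tracking/verbose 0

# Raise production cuts where fine-grained secondary production
# is irrelevant to the observables of interest
/run/setCut 0.7 mm
```

More substantial gains, such as event biasing, require changes in the user classes and are the natural material for the planned guide.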

Recommendation 15
We recommend systematic tracking of code performance for each part of the code, and for each physics model. Comparisons with previous versions should be an integral part of the release notes.

CPU performance of the Geant4 code is already systematically controlled and verified at every public release and/or patch. Benchmark tests have been implemented and grouped into a benchmark suite, to verify CPU performance at different levels (pure geometry and tracking, tracking with magnetic field, EM and hadronic processes at integration level).
Results are compared against previous releases taken as reference, taking particular care that the tests are built and executed on the same system, so that identical conditions apply; when this is not possible, the same tests are re-run on the older releases as well.
(… continued on next slide)

Recommendation 15 (cont.)
In addition, experts from the EM working group and from physics validation execute validation tests based on well-defined physics observables, to assess the correctness of the physics results and the overall performance of the various physics models.
The results of the tests cited above (in particular the physics validation ones) are partially available from the web sites of the EM and hadronic working groups.
They are NOT available through the release notes (although any performance issue or relevant improvement is mentioned in a dedicated section of the release notes).
Formatting a large number (or all) of the results, plots and summaries for timely publication in the release notes would require a significant effort; it cannot be realised with the current manpower without seriously delaying the release schedule.
A web spreadsheet page is being put in place, summarising the results of the CPU benchmarks run at each release.

Recommendation 16
We recommend that Geant4 keep itself abreast of developments in the area of multi cores and advanced instructions, so it can take advantage of them when there is sufficient infrastructure and support to do so.

A multi-core version of Geant4 is under development, as part of the PhD thesis project of Xin Dong, under the supervision of Prof. Gene Cooperman (Northeastern Univ.), who developed the existing event-level parallel version of Geant4.
Prototypes have been created with successive refinements: starting from a fork multi-process version (sharing via Linux copy-on-write) which shared nothing explicitly, then enabling reuse of the parts of Geant4 which consume significant memory, by separating out the read-only parts from those which change during event simulation, starting with the geometry and progressively adding the physics tables of key electromagnetic processes.
A presentation of the status was made at the 2008 Workshop.
A first beta release of a multi-core enabled revised version of Geant4 9.1 is proposed for April 2009, with draft documentation for identifying potential parallelisation problems. Within 2009 a second version, based on Geant4 9.2, is planned.

Recommendation 17
We recommend that Geant4 publish a plan regarding the expected computing performance of the toolkit over the next five years.

It is very hard to predict the computing performance of future releases of Geant4. The only way we can make an approximate estimate is to use the experience of the last 4 years. On this basis we forecast that, on a single core with constant hardware, there will be a reduction of around 4-6% per year in CPU time. This would come from code improvements, principally as a result of code reviews of key classes and the corresponding implementation and interface improvements. The uncertainty in this forecast is significant, and we estimate that the resulting reduction over five years could range between 5 and 30%.
An additional one-time improvement of order 15% is expected in one area (the low-energy Livermore EM processes), where an ongoing assessment is addressing hot spots identified in collaboration with Geant4 users (GATE developers).
(… continued on next slide)

Recommendation 17 (cont.)
In general we expect that over the next 5 years, the throughput (simulated events/minute) for a typical application of the Geant4 toolkit will follow the growth curve of available improvements in CPU performance, potentially by a factor between 10 and 30. This will be due to more efficient and faster CPUs, the ability to run separate jobs in parallel on multi-core CPUs, and the development of a multi-thread capable variant of the Geant4 code.
Our performance benchmarking will be used to identify and address any new bottlenecks arising from the changed environment of multi-core machines and from the increasing importance of memory access for performance. The development of a multi-core capable variant of Geant4 is an essential part of these plans.