2 Overview
- Extremely short summary of the physics part of the conference (I am not a physicist, I will do my best)
- Overview of the Grid session, focused on distributed analysis
3 The Standard Model of Fundamental Interactions
Something one may be proud of (D. Kazakov)
4 Physics beyond the SM
- Low-energy supersymmetry
- Extra gauge bosons
- Axions
- Extra dimensions
- Deviation from the unitarity triangle
- Modification of Newton's law
- Free quarks
- New forces / particles
- Violation of baryon number
- Violation of lepton number
- Monopoles
- Violation of Lorentz invariance
- Compositeness
Not found so far …
D. Kazakov
5 The SM Higgs Boson
If it is there, we may see it soon:
- Indirect limit from radiative corrections
- Direct limit from Higgs non-observation at LEP II (CERN)
D. Kazakov
6 Some new results
7 But they are consistent with the Standard Model
8 Conference Summary (Valery Rubakov)
9 Instead of Conclusion
"Blessed are those who believe and yet have not seen" (St. John, XX, 29)
Dmitry Kazakov
10 Grid session
- For the first time, a Grid distributed analysis session was included in the ICHEP scientific program
- Talks on the status of distributed analysis on the Grid from the LHC experiments (ATLAS, ALICE and CMS); still more plans and intentions than status reports
- LCG status talk, network report, common software talk
11 Distributed Analysis Challenges
- Distributed production is now routinely done in HENP
  - For MC production and reprocessing of data - not yet at LHC scale
  - Scale: a few TBs of data generated/processed daily in ATLAS
  - Scope: organized activity, managed by experts
- Lessons learned from production
  - Robust software systems to automatically recover from grid failures (a recovery sketch follows this slide)
  - Robust site services - with hundreds of sites, there are daily failures
  - Robust data management - pre-location of data, cataloguing, transfers
- Distributed analysis is in its early stages of testing
  - Moving from the Regional Analysis Center model (e.g. D0) to a fully distributed analysis model - computing on demand
  - Presents new challenges, in addition to those faced in production
  - Chaotic by nature - hundreds of users, random fluctuations in demand
  - Robustness becomes even more critical - software, sites, services
Kaushik De
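A minimal sketch, in Python, of the kind of automatic recovery from grid failures mentioned above: a failed job is resubmitted, cycling through candidate sites. submit_job() and get_status() are hypothetical stand-ins for an experiment's workload-management calls, not a real API.

    import random
    import time

    def submit_job(site, payload):
        # Hypothetical submission call; returns an opaque job identifier.
        return f"{site}-{random.randint(0, 10**6)}"

    def get_status(job_id):
        # Hypothetical status query; randomized here purely for illustration.
        return random.choice(["done", "failed", "running"])

    def run_with_retries(payload, sites, max_attempts=3, poll_seconds=5):
        """Resubmit a failed job automatically, cycling through candidate sites."""
        for attempt in range(max_attempts):
            site = sites[attempt % len(sites)]
            job_id = submit_job(site, payload)
            while True:
                status = get_status(job_id)
                if status == "done":
                    return job_id
                if status == "failed":
                    break            # this attempt failed; resubmit elsewhere
                time.sleep(poll_seconds)
        raise RuntimeError("job failed on all attempts")

    if __name__ == "__main__":
        print(run_with_retries({"dataset": "some_dataset"},
                               ["SITE_A", "SITE_B"],
                               max_attempts=10, poll_seconds=0))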
12 Divide and Conquer
Experiments optimize/factorize both data and resources.
- Data factorization
  - Successive processing steps lead to compressed physics objects
  - The end user does physics analysis using physics objects only
  - Limited access to detailed data for code development and calibration
  - Periodic centralized reprocessing to improve analysis objects
- Resource factorization (see the sketch after this slide)
  - Tiered model of data location and processors
  - Higher tiers hold archival data and perform centralized processing
  - Middle tiers for MC generation and some (re)processing
  - Middle and lower tiers play an important role in distributed analysis
  - Regional centers are often used to aggregate nearby resources
Kaushik De
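An illustrative sketch of the tiered resource model: each tier is assigned a set of roles, and an activity is matched to the tiers that can run it. The tier names and role assignments are generic assumptions for the example, not any experiment's actual computing model.

    TIER_ROLES = {
        "Tier-0": {"archival", "first-pass processing"},
        "Tier-1": {"archival", "centralized reprocessing"},
        "Tier-2": {"MC generation", "distributed analysis"},
        "Tier-3": {"distributed analysis"},
    }

    def tiers_for(activity):
        """Return the tiers where a given activity may run."""
        return [tier for tier, roles in TIER_ROLES.items() if activity in roles]

    print(tiers_for("distributed analysis"))      # ['Tier-2', 'Tier-3']
    print(tiers_for("centralized reprocessing"))  # ['Tier-1']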
13 Example from D0 (from A. Boehnlein)
Kaushik De
14 Common features
Becoming pragmatic:
- The organization of the analysis workflow is more and more driven by the data management organization
  - Sending analysis jobs close to the input data sets
  - Merging MC and analysis output into file blocks, allowing data access to be organized in a more optimal way (ATLAS, CMS)
  - Aiming to decrease the load on the central data management services (catalogue) and workload management services (RB)
- Common system for distributed analysis and production (AliEn, PanDA)
- A sort of central queue for the Grid (AliEn, DIRAC, the ATLAS Production System tried for analysis), though this approach is not shared by all experiments (a sketch follows this slide)
- Develop job submission tools which provide the user with a simple interface to the Grid (GANGA, CRAB, PanDA)
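A minimal sketch of the "central queue for the Grid" idea mentioned above: production and analysis tasks share a single queue, and sites pull work matching the data they host. The class and field names are illustrative only; this is not AliEn, PanDA or DIRAC code.

    from collections import deque

    class CentralTaskQueue:
        """Toy central queue shared by production and analysis tasks."""

        def __init__(self):
            self._tasks = deque()

        def add_task(self, task):
            # Production and analysis tasks go into the same queue.
            self._tasks.append(task)

        def pull(self, site, free_slots):
            """A site (or pilot job) pulls tasks whose input data it hosts."""
            matched = []
            for task in list(self._tasks):
                if len(matched) == free_slots:
                    break
                if task["input_data_at"] == site:
                    matched.append(task)
                    self._tasks.remove(task)
            return matched

    queue = CentralTaskQueue()
    queue.add_task({"type": "production", "input_data_at": "SITE_A"})
    queue.add_task({"type": "analysis", "input_data_at": "SITE_B"})
    print(queue.pull("SITE_B", free_slots=4))   # returns the analysis task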
15 Job submission tools
[Diagram: User Interface, WMS, Information System, VOMS, Logging and bookkeeping, File Catalogue, Experiment Data Management, Experiment software, computing sites; steps: getting proxy, submitting a job, register, checking job status; components marked as VO-specific, Grid-flavour-specific or LHC-specific applications]
The Job Submission tool (its workflow is sketched after this slide):
- Talks to the experiment Data Management to find out where the data is and how to split the user task
- Implements task splitting
- Packages the user code and libraries
- Generates the executable shell script
- Generates the Grid submission instructions
- Submits all jobs belonging to a task
- Checks the status of the jobs belonging to a given task and retrieves the job output
- Error recovery
Job submission tools should hide from the user the complexity of dealing with a distributed computing facility, providing a simple and user-friendly interface. Experiments develop different solutions. Examples: GANGA for ATLAS and LHCb, CRAB and ASAP for CMS.
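A compact sketch of the generic workflow listed above: query the data management for the data location, split the task into jobs, and package each job for submission. Every name here (the catalogue contents, file names, helper functions) is a hypothetical stand-in, not the real GANGA/CRAB/ASAP internals.

    # Hypothetical, hard-coded stand-in for the experiment data management
    # catalogue: dataset name -> hosting site and constituent files.
    CATALOGUE = {"user_dataset": {"site": "SITE_A",
                                  "files": [f"file_{i}.root" for i in range(10)]}}

    def locate_and_split(dataset, files_per_job):
        """Find where the dataset lives and split its files into job-sized chunks."""
        entry = CATALOGUE[dataset]
        files = entry["files"]
        chunks = [files[i:i + files_per_job]
                  for i in range(0, len(files), files_per_job)]
        return entry["site"], chunks

    def make_job(site, input_files):
        # Package the user code and generate the executable wrapper and the
        # grid submission instructions (represented here as a plain dict).
        return {"site": site,
                "input_files": input_files,
                "sandbox": "user_code.tar.gz",
                "executable": "run_analysis.sh"}

    def prepare_task(dataset, files_per_job=3):
        site, chunks = locate_and_split(dataset, files_per_job)
        # A real tool would now submit each job, monitor its status and
        # retrieve the output; here we only return the prepared jobs.
        return [make_job(site, chunk) for chunk in chunks]

    for job in prepare_task("user_dataset"):
        print(job["site"], job["input_files"])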
16 Job Management: Productions (ATLAS)
- Once data are distributed in the correct way, we can rework the distributed production system to optimise job distribution by sending jobs to the data (or as close as possible to them)
  - This was not the case previously, as jobs were sent to free CPUs and had to copy the input file(s) to the local WN from wherever in the world the data happened to be
- Next: make better use of the task and dataset concepts
  - A "task" acts on a dataset and produces more datasets
  - Use the bulk submission functionality to send all jobs of a given task to the location of their input datasets (sketched after this slide)
  - Minimize the dependence on file transfers and the waiting time before execution
  - Collect output files belonging to the same dataset at the same SE and transfer them asynchronously to their final locations
David Constanzo
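A small sketch of "sending jobs to the data" with bulk submission: the jobs of a task are grouped by the site holding their input dataset, and each group is submitted in bulk to that site. The replica catalogue and the submit_bulk() call are assumptions for the example, not the ATLAS production system's interface.

    from collections import defaultdict

    # Hypothetical replica catalogue: dataset -> site holding it.
    REPLICAS = {"dataset_A": "SITE_1", "dataset_B": "SITE_2"}

    def submit_bulk(site, jobs):
        # Placeholder for a bulk-submission call to the grid.
        print(f"bulk-submitting {len(jobs)} job(s) to {site}")

    def schedule_task(jobs):
        """Group the jobs of a task by input-data location, then bulk-submit."""
        by_site = defaultdict(list)
        for job in jobs:
            by_site[REPLICAS[job["dataset"]]].append(job)
        for site, site_jobs in by_site.items():
            submit_bulk(site, site_jobs)

    schedule_task([{"dataset": "dataset_A"},
                   {"dataset": "dataset_A"},
                   {"dataset": "dataset_B"}])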
18 Analysis statistics for CMS
- Widely used by the CMS physics community
- The plot shows CMS analysis jobs submitted via CRAB for the period 01.06.06-20.07.06, distributed by site: ~83K jobs and 50 users over 85 sites
19 Interactive analysis in ALICE
- The user starts a ROOT session on a laptop
- The analysis macros are started from the ROOT command line
- The data files on the Grid are accessed using the ROOT (AliEn) UI (via xrootd)
- The results are stored locally or can be registered on the Grid (AliEn file catalogue)
- If the data files are stored on a cluster, the interactive analysis is done in parallel using PROOF
I. Belikov
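For illustration, a minimal PyROOT sketch of the idea: open a data file over xrootd and loop over a tree. The slide itself describes C++ ROOT macros and the AliEn interface; the URL, file name and tree name below are invented for the example.

    import ROOT  # PyROOT bindings shipped with ROOT

    # Open a remote file through xrootd (the URL is a made-up example).
    f = ROOT.TFile.Open("root://some.storage.element//alice/sim/AliESDs.root")
    tree = f.Get("esdTree")        # assumed tree name inside the file
    for event in tree:             # PyROOT allows iterating over a TTree
        pass                       # per-event analysis code would go here
    f.Close()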
20 Common concerns
- Robustness of data management is critically important (Kaushik De, ATLAS)
- Problem diagnosis and debugging (Ian Fisk, CMS)
- Activities are not yet distinguished for prioritization (ATLAS and CMS)
- Need for increased reliability of the Grid and experiment infrastructure (ATLAS and CMS)
21 Conclusions
- The scale of resources and users is unprecedented at the LHC; enabling distributed analysis is a serious challenge for the LHC experiments
- Big progress has already been made in this direction: the experiments have defined their computing models, and the experiment data management and workload management systems are mostly in place
- Still, a lot of work has to be done to ensure that both the Grid and the experiment-specific services and infrastructures provide the required level of scalability, reliability and performance