1
ATLAS Applications

ATLAS is a general-purpose particle physics experiment that will study topics including the origin of mass, the processes that allowed an excess of matter over antimatter in the universe, and evidence for Supersymmetry and other new physics, up to and including micro black hole production. The experiment is being constructed by some 1600 scientists in ~150 institutes on six continents, and will be located at the 27 km circumference Large Hadron Collider at CERN in Geneva.

Despite highly efficient filters acting on the raw data read from the detector, the 'good' events will still amount to several petabytes of data per year, requiring millions of SpecInt2k of computing power to process and analyse (see the rough estimate at the end of this slide). Even now, many millions of simulated events have to be produced to design the detector and to understand the physics. Only a Grid can satisfy these requirements.

ATLAS is a global collaboration with Grid testbeds already deployed worldwide. While building on generic middleware, we are required to develop several components, which may be reusable, as well as tools that can run tasks coherently across several Grid deployments. These are being exercised and developed in Data Challenges of increasing size and complexity, which have now been performed using three Grid deployments across 85 sites on six continents; they are a proof of principle for Grid-based production. We are merging these activities with a series of Service Challenges to establish the required system.

[Figure: a simulated micro black hole decay in the ATLAS detector]
[Figure: the ATLAS detector]
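As a rough, illustrative estimate of where the "several petabytes" figure comes from — assuming an event rate of order 100 Hz after the trigger, a raw event size of about 1.6 MB, and roughly 10^7 seconds of data-taking per year (ballpark design figures of the era, not quoted on this slide):

\[
100\ \text{Hz} \times 1.6\ \text{MB/event} \times 10^{7}\ \text{s/yr} \approx 1.6 \times 10^{15}\ \text{bytes/yr} \approx 1.6\ \text{PB/yr}
\]

before any derived or simulated data are added on top of the raw stream.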
2
The GANGA/GRAPPA project is working to produce an interface between the user, the Grid middleware and the experiment's software framework. It is being developed jointly with the LHCb experiment and, being built from component technologies, will allow reuse elsewhere (a sketch of the underlying job abstraction is given at the end of this slide).

The large number of Grid sites requires automated and scalable installation tools. Coherent rpms and tar files are created from CMT, exposing the package dependencies as PACMAN cache files; PACMAN can then pull or push complete installations to remote sites. Scripts have been developed that make the process semi-automatic (see the second sketch below).

ATLAS UK integrates the EGEE/LCG middleware. The most recent Data Challenge ran over 570,000 production jobs at 84 sites using Grid tools; even analysis can now be run this way. Job submission rates are being improved to avoid bottlenecks, the production system is being redesigned to increase performance, and a new data handling system is being written.

The ATLAS Distributed Analysis system supports distributed users, data and processing, including Monte Carlo production, data reconstruction and the extraction of summary data. A prototype has been created based on the GANGA user interface and the ATLAS production and job management systems; it will incorporate ARDA middleware when available. Another important element of the Distributed Analysis system is access to metadata describing the events. GridPP is providing effort on the metadata description of ATLAS data, and also provides an interface to the ATLAS Metadata Interface, AMI, for applications such as GANGA.

[Diagram: GANGA/Grappa architecture — the GUI connects the Athena/GAUDI application and Grid services, exchanging JobOptions, virtual data, algorithms, histograms, monitoring and results]
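As a minimal, hypothetical sketch of the job abstraction the GANGA/GRAPPA interface is built around: the user describes what to run (the application) separately from where to run it (the backend), so the same job can be sent to a local machine or to a Grid site. The class and method names below are illustrative assumptions, not GANGA's actual API.

```python
# Hypothetical sketch of a GANGA-style job abstraction (not GANGA's real API).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Application:
    """What to run: an executable plus its job options (cf. Athena JobOptions)."""
    executable: str
    options: List[str] = field(default_factory=list)


class Backend:
    """Where to run: the same Job can target different middleware."""
    def submit(self, app: Application) -> str:
        raise NotImplementedError


class LocalBackend(Backend):
    def submit(self, app: Application) -> str:
        # A real backend would fork the process; this stub only reports it.
        return f"local: would run {app.executable} {' '.join(app.options)}"


class GridBackend(Backend):
    def __init__(self, site: str):
        self.site = site

    def submit(self, app: Application) -> str:
        # A real backend would talk to the Grid middleware (e.g. EGEE/LCG);
        # this stub only records the intended destination.
        return f"grid[{self.site}]: would submit {app.executable}"


@dataclass
class Job:
    application: Application
    backend: Backend

    def submit(self) -> str:
        return self.backend.submit(self.application)


if __name__ == "__main__":
    app = Application("athena", ["MyAnalysis_jobOptions.py"])
    # The same application description runs unchanged on either backend.
    print(Job(app, LocalBackend()).submit())
    print(Job(app, GridBackend(site="RAL")).submit())
```

The point of the split is the one the slide makes: the user-facing description of a job stays constant while the destination (local test, Grid production) is swapped underneath it.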
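And a hedged sketch of the semi-automatic installation step: looping over remote sites and pulling the release from a PACMAN cache. The cache URL, package name, host names and the ssh-based invocation are all illustrative assumptions; the `pacman -get cache:package` form follows the Pacman tool of the era, but the actual ATLAS scripts may differ.

```python
# Illustrative sketch only: pull an ATLAS release onto remote sites via PACMAN.
import subprocess

CACHE = "http://example.org/pacman/cache"              # hypothetical cache URL
PACKAGE = "AtlasRelease"                               # hypothetical package name
SITES = ["ce01.example.ac.uk", "ce02.example.ac.uk"]   # placeholder hosts


def install(site: str) -> bool:
    # 'pacman -get <cache>:<package>' pulls the package together with the
    # dependencies exposed in the cache file (assumed syntax).
    cmd = ["ssh", site, f"pacman -get {CACHE}:{PACKAGE}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{site}: FAILED\n{result.stderr}")
        return False
    print(f"{site}: installed {PACKAGE}")
    return True


if __name__ == "__main__":
    ok = [s for s in SITES if install(s)]
    print(f"{len(ok)}/{len(SITES)} sites installed")
```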