1
HIGH ENERGY PHYSICS DATA
WHERE DATA COME FROM, WHERE THEY NEED TO GO

In their quest to understand the fundamental nature of matter and energy, Fermilab scientists study elementary particles. These particles come from two different sources, powerful accelerators and the cosmos, and their interactions with each other and with surrounding matter are captured in multi-component particle detectors with up to millions of read-out channels.

Fermilab's four-mile particle accelerator ring, the Tevatron, circulates protons and antiprotons at high speeds, and high energies, in opposite directions. When the two beams collide head-on at the centers of designated detector zones, the debris (fast-moving elementary particles) leaves tracks and deposits energy in specially designed materials as the particles exhaust their brief existence under the detector's watchful eye.

Neutrino experiments also gather data from particle tracks. Scientists study changes in these particles' properties after the neutrinos exit Fermilab's accelerator at high energies and travel hundreds of miles to reach a detector.

Astrophysics experiments at Fermilab investigate dark matter, dark energy and high-energy cosmic rays from the sun, and map the skies in 3-D.

Fermilab scientists pursue data-intensive scientific endeavors involving collaborators and experimental facilities at CERN and other institutions worldwide. Fermilab's Computing Division develops and supports innovative and cutting-edge computing, storage and data transfer solutions that allow the global high-energy physics community to access data and contribute to discoveries.

SOPHISTICATED SOFTWARE MUST QUICKLY PROCESS SIGNALS FROM UP TO MILLIONS OF READ-OUT CHANNELS ON A DETECTOR, EACH RECEIVING MILLIONS OF SIGNALS A SECOND.
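To put the callout above in perspective, here is a rough order-of-magnitude sketch. The channel count, signal rate and signal size are illustrative assumptions taken only from the slide's "millions of channels, millions of signals a second," not measured detector parameters; the recorded rate for comparison comes from the analysis slide later in this deck.

```python
# Order-of-magnitude sketch of raw detector readout (all inputs are assumptions
# based on the slide's "millions of channels, millions of signals a second").
channels = 1_000_000               # assumed number of read-out channels
signals_per_channel_hz = 1_000_000 # assumed signals per channel per second
bytes_per_signal = 2               # assumed size of one digitized signal

raw_rate_bytes_per_s = channels * signals_per_channel_hz * bytes_per_signal
print(f"Raw readout: ~{raw_rate_bytes_per_s / 1e12:.0f} TB/s")

# Compare with the recorded rate quoted on the analysis slide (20 MB/s):
recorded_rate_bytes_per_s = 20e6
print(f"Reduction needed: ~{raw_rate_bytes_per_s / recorded_rate_bytes_per_s:,.0f}x")
```

Even with generous rounding, the raw stream is orders of magnitude larger than what can be stored, which is why fast filtering in software and trigger hardware is essential.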
2
COLLABORATIVE COMPUTING
CONNECTING VIA THE GRID

High-energy physicists from all over the world collaborate to analyze data from Fermilab experiments. Fermilab's Computing Division supports their research by providing computing and data storage resources and contributing to nationally and globally distributed grids.

FermiGrid – a campus grid
This grid provides a uniform interface to on-site Fermilab computing and storage resources used by scientists across the laboratory. FermiGrid also supports off-site users through its membership in the Open Science Grid.

Open Science Grid – a national grid
The OSG integrates distributed resources across more than 60 labs and universities in the U.S. into a reliable, coherent, shared cyberinfrastructure. It provides the U.S. contribution to the Large Hadron Collider's Worldwide LHC Computing Grid (WLCG). Initially driven by the needs of the CMS and ATLAS experiments at the LHC, the OSG has expanded to support data-intensive research in many fields such as genetics and meteorology.

The Worldwide LHC Computing Grid
The experiments at the Large Hadron Collider have built a globally distributed, tiered grid infrastructure to support their data distribution and analysis. Data collected at CERN, the top tier (Tier-0), are distributed to a set of seven Tier-1 sites in as many countries. The U.S. CMS Tier-1 site at Fermilab further processes and distributes the data across the seven U.S. CMS Tier-2 sites, which in turn send datasets to tens of universities (Tier-3 sites) for physicists to access and analyze from anywhere in the world.

IN A DAY THE WORLDWIDE LHC COMPUTING GRID MOVES OVER 20 TERABYTES OF DATA AND RUNS MORE THAN 600,000 JOBS.
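A quick back-of-envelope conversion of the WLCG figures in the callout above, assuming activity spread evenly over a day (in reality transfers and job starts are bursty):

```python
# Average rates implied by "over 20 terabytes of data and more than 600,000 jobs" per day.
data_per_day_tb = 20
jobs_per_day = 600_000
seconds_per_day = 86_400

avg_transfer_mb_s = data_per_day_tb * 1e6 / seconds_per_day   # TB/day -> MB/s
avg_job_starts_per_s = jobs_per_day / seconds_per_day

print(f"Average transfer rate: ~{avg_transfer_mb_s:.0f} MB/s")      # ~231 MB/s
print(f"Average job start rate: ~{avg_job_starts_per_s:.1f} jobs/s") # ~6.9 jobs/s
```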
3
ANALYSIS
PROCESSING MASSIVE DATASETS

Current Fermilab collider detectors record data at a rate of 20 megabytes per second and 300 terabytes per year. Experiments at CERN's Large Hadron Collider will record ten times as much data. Trigger systems, which act as filters, select only the most interesting collision events for further study, roughly 100 per second, with each raw event about 200 kB in size. Physicists at Fermilab and at the collaborating institutions use complex software with thousands of lines of code to reassemble the data into a form the human brain can grasp and analyze.

[Screen shot: "Newly single and playing hard to get," http://www.isgtw.org/?pid=1001699]

TEVATRON EXPERIMENTS RECORD ABOUT ONE TERABYTE PER DAY FOR LATER ANALYSIS.
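A consistency check of the rates quoted on this slide: the post-trigger event rate times the event size reproduces the 20 MB/s record rate, and the ~1 TB/day figure then implies the detector records live data for only part of each day.

```python
# Rates taken directly from this slide.
events_per_second = 100   # events kept by the trigger
event_size_kb = 200       # raw event size

record_rate_mb_s = events_per_second * event_size_kb / 1000
print(f"Record rate: {record_rate_mb_s:.0f} MB/s")   # matches the quoted 20 MB/s

# The quoted ~1 TB/day then corresponds to roughly 14 hours of live
# data-taking per day at this rate.
live_seconds = 1e12 / (record_rate_mb_s * 1e6)
print(f"Implied live time per day: ~{live_seconds / 3600:.0f} hours")
```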
4
SIMULATION
MODELING PARTICLE ACCELERATORS AND DETECTORS AND ADVANCING PARTICLE PHYSICS THEORY

Detector simulations
Physicists at Fermilab develop computer simulations of particle collisions expected to occur in their detectors and run them millions of times. In this way they gather enough statistics to confidently identify signals in the real data that they can map to particular particles and phenomena. The bulk of these simulations run on Open Science Grid resources.

[Screen shot: "The CMS 'Top 100'," http://www.isgtw.org/?pid=1000322]

Theoretical physics simulations
The theory of quantum chromodynamics (QCD) describes how quarks and gluons bind together to form other particles, such as protons and neutrons, and in turn, atomic nuclei. Uncovering the most important predictions of QCD from the comprehensive Standard Model theory of particle physics requires large-scale numerical simulations. Fermilab participates in the DOE/SciDAC-2 computational infrastructure project for lattice QCD. The laboratory operates four high-performance clusters with an aggregate sustained performance of 12 TeraFLOPS for lattice QCD simulations, with peak performance about five times higher.

Accelerator simulations
Fermilab accelerator scientists run modeling codes on parallel clusters at Fermilab and other national labs that range from 20 to 20,000 processors. Through the Advanced Accelerator Modeling project, Fermilab developed Synergia, a parallel 3D modeling code for multi-particle effects. The scientists have improved the performance of their colliding beam accelerator, the Tevatron, and are already studying potential upgrades for the Large Hadron Collider at the European laboratory, CERN.

IN A COLLABORATIVE, COMPETITIVE AND ULTIMATELY SUCCESSFUL PURSUIT OF THE "SINGLE TOP QUARK," THE TWO TEVATRON EXPERIMENTS TOGETHER USED ABOUT 400 CPU YEARS TO SIMULATE NEARLY 200 MILLION PARTICLE COLLISIONS.
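The single-top figures in the callout above imply a rough per-event simulation cost; the arithmetic below uses only those two numbers plus a standard 365-day year and ignores any overhead outside event generation.

```python
# Rough cost per simulated collision from the single-top campaign quoted above.
cpu_years = 400
simulated_events = 200_000_000
seconds_per_year = 365 * 24 * 3600

cpu_seconds_per_event = cpu_years * seconds_per_year / simulated_events
print(f"~{cpu_seconds_per_event:.0f} CPU seconds per simulated collision")  # ~63 s
```

At roughly a CPU minute per event, generating hundreds of millions of simulated collisions is only practical by spreading the work across grid resources such as the Open Science Grid.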
5
DATA STORAGE
PROVIDING DATA ON TAPE AND DISK

Fermilab provides long-term custodial storage of tens of petabytes of scientific data. Scientists on-site can directly access files on tape through Fermilab's Enstore Mass Storage System. From on- or off-site, high-rate access to files is through dCache, a disk-cache front end to the tape storage that can also be used alone. Fermilab has served data at 45 TB/hr on-site and 2 TB/hr off-site.

dCache, developed jointly by DESY, NDGF and Fermilab, provides high-performance disk storage, either as a high-speed front end to a tape system or as a stand-alone system. It supports a variety of transport protocols and authorization schemes. The Open Science Grid supports and distributes dCache as part of its Virtual Data Toolkit. Fermilab supports dCache installations in North America and beyond for the LHC experiments and Open Science Grid sites, including one of the largest in the world at its US CMS Tier-1 site.

The DZero experiment at Fermilab, an Open Science Grid user, typically submits 60,000-100,000 simulation jobs per week. OSG worked with member institutions to allow DZero to use opportunistic storage, that is, idle storage on shared machines, at several sites. With allocations of up to 1 TB at processing sites, DZero increased its job success rate from roughly 30% to upwards of 85%.

[Photo: tape robot, http://www.isgtw.org/?pid=1001435]

TEVATRON EXPERIMENTS CURRENTLY STORE MORE THAN THREE PETABYTES (THREE MILLION GIGABYTES) PER YEAR. THE DATASETS ARE EXPECTED TO DOUBLE BY 2011.
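Combining the serving rates and the annual dataset size quoted on this slide gives a feel for why high-rate, cached access matters; the sketch below assumes a single sustained stream at the quoted rates, with no contention or tape-mount overhead.

```python
# How long would it take to stream back one year of Tevatron data
# (about three petabytes) at the access rates quoted on this slide?
dataset_tb = 3_000           # ~3 PB per year
onsite_tb_per_hr = 45        # quoted on-site dCache serving rate
offsite_tb_per_hr = 2        # quoted off-site serving rate

print(f"On-site:  ~{dataset_tb / onsite_tb_per_hr:.0f} hours")        # ~67 hours
print(f"Off-site: ~{dataset_tb / offsite_tb_per_hr / 24:.0f} days")   # ~62 days
```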
6
NETWORKING
BUILDING AND MONITORING NETWORKS AND OPTIMIZING DATA FLOW

Lambda Station
This joint Fermilab-Caltech project enables the dynamic rerouting of designated traffic through site LAN infrastructure onto so-called "high-impact" wide-area networks. For the large, sustained flows of high-energy physics data, Lambda Station works in an application-independent reactive mode, in which its action is triggered by changes in existing network traffic.

PerfSONAR
Fermilab is a major collaborator in the development of this network monitoring middleware, which is designed to facilitate troubleshooting across multiple network management domains. Fermilab has deployed perfSONAR to monitor two major LHC-related end-to-end WAN infrastructures: the LHC Optical Private Network (LHCOPN) between Fermilab and CERN in Switzerland, and a heterogeneous set of network links between Fermilab and about a dozen U.S. university sites for the CMS experiment.

Metropolitan Area Network (MAN)
Built and operated jointly with Argonne National Laboratory, Fermilab's MAN is expected to reach over 600 Gb/s of potential capacity, with 80 Gb/s active now. This network, together with redundant on-site equipment, has kept Fermilab free of internet outages for over two years.

THE HIGH-CAPACITY NETWORK CAN MOVE 100 PETABYTES OF DATA PER YEAR AND ENABLE REAL-TIME COLLABORATION.
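The callout's 100 petabytes per year can be translated into a sustained throughput and compared with the MAN capacity quoted above; the sketch assumes perfectly even use over a 365-day year, so real peak demands would be higher.

```python
# Sustained throughput implied by "100 petabytes of data per year",
# compared with the currently active MAN capacity quoted on this slide.
petabytes_per_year = 100
seconds_per_year = 365 * 24 * 3600
active_capacity_gbit_s = 80   # active MAN capacity, gigabits per second

avg_gbit_s = petabytes_per_year * 1e15 * 8 / seconds_per_year / 1e9
print(f"Average rate: ~{avg_gbit_s:.0f} Gb/s")                                  # ~25 Gb/s
print(f"Fraction of active capacity: ~{avg_gbit_s / active_capacity_gbit_s:.0%}")  # ~32%
```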