
1 Grid Computing at Fermilab

2 Grids for Science
There is no doubt that computing resources for Particle Physics are going to be distributed globally and shared between “Virtual Organizations”. This is already happening with Run II, BaBar, RHIC and the LHC experiments.
Vicky White

3 The Fermilab Experimental Program Statistics
“Active” experiments from the 2003 Research Program Workbook: of the 213 institutions involved, 114 are non-US; of 1916 physicists, 753 are non-US; of 699 students, 234 are non-US.
Vicky White

4 Computing Division Supports
Run II (CDF, D0, Accelerator Upgrades) ****
MINOS
MiniBooNE
Sloan Digital Sky Survey **
CMS *****
Lattice QCD Theory with distributed computational facilities **
CDMS (DAQ) and Pierre Auger
Test Beams
BTeV ****
SNAP (JDEM), DES investigations
Linear Collider
Future neutrino experiment investigations
Fixed Target Experiments’ Analysis (mainly KTeV now)
Services and Facilities used by all of the above and the lab in general
* represents Large Global Collaboration / Petabyte Datasets / Massive distributed computational power needed
Vicky White

5 Common Services
Central Storage and Data Handling Systems
OS support (Linux and Windows), including central updates and patch management
Databases, Helpdesk, Networking, Desktop and Server support, Mail, etc.
Engineering and Equipment Support for experiments
R&D on DAQ and Trigger systems
Scientific Libraries and Tools
Videoconferencing support
And more…
Vicky White

6 Inclusive Worldwide Collaboration is essential for Particle Physics
What are we doing at Fermilab to help this?
Networks and network research
Grids (lots on this; important strategically)
Guest Scientist program
Education and Outreach program
Experiment sociology and leadership changes
Videoconferencing
Virtual control rooms
Physics Analysis Center for CMS
Public relations
Vicky White

7 LHC: Key Driver for Grids
Complexity: millions of individual detector channels
Scale: PetaOps (CPU), 100s of Petabytes (data)
Distribution: global distribution of people & resources (CMS Collaboration: 2000+ physicists, 159 institutes, 36 countries)
Vicky White

8 Fermilab Grid Strategy
Strategy (from a Fermi-centric view): common approaches and technologies across our entire portfolio of experiments and projects.
Strategy (from a global HEP view): work towards common standards and interoperability.
At Fermilab we are working aggressively to put all of our computational and storage fabric on “the Grid” and to contribute constructively to interoperability of Grid middleware.
Vicky White

9 Satellite Computing Facilities to FCC
Run II meets Lattice QCD: building re-use. High Density Computing Facility under construction; delivery in the fall. KVA facility.
Vicky White

10 Grid Projects and Coordinating Bodies
Fermilab is involved in numerous Grid projects and coordination bodies:
PPDG, GriPhyN and iVDGL (the Trillium US Grid projects)
LHC Computing Grid bodies and working groups on Security, Operations Center, Grid Deployment, etc.
SRM (storage systems standards)
Global Grid Forum: Physics Research Area
Global Grid Forum: Security Research Area
Open Science Grid
Interoperability of Grids
And probably some I forgot…
Vicky White

11 SAM-Grid
SAM-Grid is a fully functional distributed computing infrastructure in use by D0, CDF and (almost) MINOS. ~30 SAM stations worldwide are active for D0; ~20 SAM stations worldwide are active for CDF. D0 successfully carried out reprocessing of data at 6 sites worldwide (FermiNews article, February). SAM-Grid is evolving towards common (also evolving?) Grid standards and will run on top of the LCG computing environment.
Vicky White

12 SAM-Grid System at D0
Summary of Resources (DØ):
  Registered Users: 600
  Number of Stations: 56
  Registered Nodes: 900
  Total Disk Cache: 40 TB
  Number of Files: 1.5M
[Plots: Integrated Files Consumed vs Month (DØ), 3/02 to 3/03, reaching 4.0 M files consumed; Integrated GB Consumed vs Month (DØ), 3/02 to 3/03, reaching 1.2 PB consumed. Map legend: Regional Center, Analysis site.]
CDF and D0 are both highly distributed collaborations, with many physicists and computing resources. Making use of that potential has been an issue from the start for D0 (Monte Carlo generation) and more recently for CDF. These efforts are going to grow. The SAM data handling system, used from the beginning by D0 and recently by CDF, is being used/modified for grid/distributed computing.
Vicky White
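To get a feel for the DØ numbers above, a rough back-of-the-envelope calculation follows, assuming the plotted totals (4.0 M files, 1.2 PB) accumulated over the roughly one-year window shown (3/02 to 3/03):

```python
# Rough scale of SAM data consumption at DØ, assuming the plotted totals
# (4.0 M files, 1.2 PB) accumulated over the ~12-month window shown (3/02-3/03).
files_consumed = 4.0e6
data_consumed_tb = 1.2e3            # 1.2 PB expressed in TB
days = 365

files_per_day = files_consumed / days
tb_per_day = data_consumed_tb / days
avg_gbit_per_s = tb_per_day * 1e12 * 8 / 86400 / 1e9   # sustained average rate

print(f"~{files_per_day:,.0f} files/day, ~{tb_per_day:.1f} TB/day, "
      f"~{avg_gbit_per_s:.2f} Gb/s sustained average")
```

That works out to roughly 11,000 files and 3 TB consumed per day, averaged over the year.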

13 SAM at a Glance – D0 and CDF
Includes stations in the U.S., Czech Republic, India, France, UK, Netherlands, Germany, Canada and many other countries worldwide. Stations are monitored; one can drill down to display jobs running and files being delivered.
Vicky White

14 Successes - US-CMS Vicky White

15 Grid3
Grid2003: An Operational Grid - Huge Success!
Grid3: demonstrator for a multi-organizational Grid environment, together with US ATLAS, iVDGL, GriPhyN, PPDG, SDSS and LIGO.
Fermilab and US LHC facilities available through the shared Grid.
Massive CMS production of simulated events and data movements.
Hugely increased CPU resources available to CMS through opportunistic use, running on Grid3!
28 sites ( CPUs), concurrent jobs, 10 applications. Running since October 2003.
Vicky White
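The following is only a toy sketch of the opportunistic-use idea described above, not Grid3's actual middleware or configuration; the site names, slot counts and scheduling policy are invented for illustration. A VO's jobs go first to its own sites, then spill over onto idle slots that other organizations have made available on the shared grid.

```python
# Toy illustration of opportunistic scheduling across a shared multi-VO grid.
# Site names, slot counts and the policy are illustrative only; this is not
# Grid3's real configuration or middleware.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    owner_vo: str       # VO that owns the site
    free_slots: int     # idle CPU slots currently available

def schedule(jobs: int, vo: str, sites: list[Site]) -> dict[str, int]:
    """Place 'jobs' for 'vo': own sites first, then idle slots elsewhere."""
    placement: dict[str, int] = {}
    for site in sorted(sites, key=lambda s: s.owner_vo != vo):
        if jobs == 0:
            break
        n = min(jobs, site.free_slots)
        if n:
            placement[site.name] = n
            site.free_slots -= n
            jobs -= n
    return placement

sites = [Site("FNAL_CMS", "cms", 300), Site("BNL_ATLAS", "atlas", 250),
         Site("UW_LIGO", "ligo", 150)]
print(schedule(500, "cms", sites))   # CMS overflows onto idle non-CMS slots
```

In this example 300 of the 500 CMS jobs land on the CMS-owned site and the remaining 200 run opportunistically on idle slots elsewhere.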

16 CMS Data Challenge (DC04) and Grid
Vicky White

17 Grid2003: 3 Months Usage Vicky White

18 ATLAS using Grid2003 now Vicky White

19 CMS Physics Analysis Center(s) in the U.S.
Fermilab is the Host Lab for U.S. CMS and a Tier 1 Computing Center for the LHC. But how is analysis going to be done? Serious thought is going into this now. What is a PAC? We can't all go to CERN, all the time, to do physics; we must empower physicists to do analysis away from CERN. Technology is important (networks, videoconferencing, access to data), but there is more: what do we need at Fermilab to have a vibrant center for physics where people come to work together both physically and virtually? The effort we put into this may have far-reaching effects on the “democratization of science”.
Vicky White

20 Open Science Grid (OSG)
Goals and White Paper at . Join all of the LHC computing resources at labs and universities in the U.S. Add, over time, computing resources of other high energy and nuclear physics experiments and other scientific partners from Grid projects. The idea is to federate, over time, all of these computers, storage systems and services into a Grid that serves the needs of all of these physics and related disciplines; but more importantly, to invite other sciences to join this Open Science Grid, and to invite educators, students and others to work with us as we evolve.
Vicky White

21 US OSG + European EGEE Vicky White

22 Federating Grids
As the US representative to the LHC Computing Grid's Grid Deployment Board (LCG GDB), I have repeatedly pushed the case for the LHC Computing Grid to be a “Grid of Grids”, not a distributed computing center. Resources must be able to belong to more than one Grid, e.g. NorduGrid, UK Grid, Open Science Grid, TeraGrid, Region XYZ Grid, University ABC Grid and Fermilab Grid! Grids must learn to interoperate. Governance, policies, security, resource usage and service level agreements must be considered at many levels, including at the level of a Federation of Grids.
Vicky White
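As a toy illustration of the point that a resource must be able to belong to more than one Grid (this is not any real grid information system or schema; the grid names come from the slide, while the VO names and shares are invented):

```python
# Toy model: one compute resource registered in several federated grids, each
# membership carrying its own access list and usage share. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Membership:
    grid: str
    allowed_vos: set[str]
    max_share: float      # fraction of the resource this grid may use (policy only)

@dataclass
class Resource:
    name: str
    memberships: list[Membership] = field(default_factory=list)

    def can_run(self, grid: str, vo: str) -> bool:
        """A VO may use the resource only via a grid the resource belongs to."""
        return any(m.grid == grid and vo in m.allowed_vos for m in self.memberships)

farm = Resource("fnal-farm", [
    Membership("Open Science Grid", {"cms", "dzero", "sdss"}, max_share=0.6),
    Membership("Fermilab Grid", {"cdf", "dzero", "minos"}, max_share=1.0),
])
print(farm.can_run("Open Science Grid", "cms"))   # True
print(farm.can_run("Open Science Grid", "cdf"))   # False: only via Fermilab Grid
```

The same farm is reachable through either federation, but each membership carries its own access list and share, which is where the governance, policy and service-level questions above come in.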

23 “FermiLight”: we are connected to Starlight with dark fiber
In early use with 1 wavelength lit (1 Gb/s). Will light up 3 wavelengths this year, with potential for up to 10 Gb/s each. Potential for several partners to connect to FermiLight. Potential for network R&D; 1 proposal funded. (Starlight is an optical networking interconnection point in downtown Chicago, owned by Northwestern University.)
Vicky White
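To put the FermiLight bandwidth figures in perspective, a quick calculation using only the numbers on the slide (the 1 TB example dataset is illustrative and link efficiencies are ignored):

```python
# Back-of-the-envelope transfer times over FermiLight, using the slide's figures.
# The 1 TB example dataset is illustrative; protocol overheads are ignored.
def transfer_hours(terabytes: float, gbit_per_s: float) -> float:
    return terabytes * 1e12 * 8 / (gbit_per_s * 1e9) / 3600

dataset_tb = 1.0                  # hypothetical 1 TB dataset
today = 1.0                       # 1 wavelength lit at 1 Gb/s
future = 3 * 10.0                 # 3 wavelengths at up to 10 Gb/s each

print(f"Now   (1 Gb/s):  {transfer_hours(dataset_tb, today):.1f} h per TB")
print(f"Later (30 Gb/s): {transfer_hours(dataset_tb, future) * 60:.1f} min per TB")
```

At 1 Gb/s a terabyte takes a bit over two hours; with three 10 Gb/s wavelengths lit it drops to a few minutes.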

24 Network Research
Also in the past year, Fermilab chose to become a beginning player in the network research arena. Raw bandwidth is not enough: we need to learn how to use it, monitor the network, and look for research partners and opportunities. We are joining with several partners on a proposal for UltraNet (DOE research network) exploitation, and with Caltech et al. on the UltraLight program of work/proposal. There are some international opportunities for broader collaborations (UK, CA, Czech, Netherlands?). The research agenda pushes provisioning of the network.
Vicky White

25 State-of-the-art Central Storage
CDF dCache data read often exceeds 40 TB/day, up to 60 TB/day.
Data to tape (TB by experiment):
  CDF: 919
  D0: 729
  CMS, MINOS, MiniBooNE, SDSS, Theory, legacy: 145
Total data on tape: Petabytes
Vicky White
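A quick sanity check on the tape and dCache figures above (simple arithmetic only):

```python
# Simple arithmetic on the storage figures quoted above.
tape_tb = {"CDF": 919, "D0": 729, "CMS/MINOS/MiniBooNE/SDSS/Theory/legacy": 145}
total_tb = sum(tape_tb.values())
print(f"Total on tape: {total_tb} TB  (~{total_tb / 1000:.1f} PB)")

peak_read_tb_per_day = 60
peak_gbit_per_s = peak_read_tb_per_day * 1e12 * 8 / 86400 / 1e9
print(f"60 TB/day sustained is ~{peak_gbit_per_s:.1f} Gb/s")
```

The per-experiment figures sum to roughly 1.8 PB on tape, and a 60 TB/day read rate corresponds to a sustained average of about 5.6 Gb/s out of dCache.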

26 Scientific Linux – a Fermilab initiative
Vicky White

27 Education: QuarkNet meets “Grid”
The traditional K-12 Education Program at Fermilab is working with the Grid projects and the Computing Division on the educational opportunities afforded by the Grid. The Open Science Grid will have a significant educational component.

28 Public Relations
Judy Jackson, head of Public Affairs at Fermilab, works closely with many other public affairs groups at CERN, SLAC and elsewhere (the Interactions.org website). A Strategic Grid Communications group is being formed; a plan for this was presented at a recent Open Science Grid meeting.
Vicky White

29 Role of Science in the Information Society (RSIS)
Fermilab/SLAC exhibit from SC2003 on Networking, Grids, Storage and Simulations went to the RSIS exhibition in Geneva last December as part of the World Summit on the Information Society event. Vicky White

30 Conclusions
We are working on Grid computing to help us do more and better science and to enable greater participation by scientists worldwide. There are many social and collaborative aspects to the work, as well as technical ones.
Vicky White

