2001 Summer Student Lectures Computing at CERN Lecture 1 — Looking Around Tony Cass —

2 Tony Cass Acknowledgements  The choice of material presented here is entirely my own. However, I could not have prepared these lectures without the help of –Charles Granieri, Hans Grote, Mats Lindroos, Franck Di Maio, Olivier Martin, Pere Mato, Bernd Panzer-Steindel, Les Robertson, Stephan Russenschuck, Frank Schmidt, Archana Sharma and Chip Watson who spent time discussing their work with me, generously provided material they had prepared, or both.  For their general advice, help, and reviews of the slides and lecture notes, I would also like to thank –Marco Cattaneo, Mark Dönszelmann, Dirk Düllmann, Steve Hancock, Vincenzo Innocente, Alessandro Miotto, Les Robertson, Tim Smith and David Stickland.

3 Tony Cass Some Definitions General  Computing Power –CERN Unit –MIPS –SPECint92, SPECint95  Networks –Ethernet »Normal (10baseT, 10Mb/s) »Fast (100baseT, 100Mb/s) »Gigabit (1000Mb/s) –FDDI –HiPPI  bits and Bytes –1MB/s = 8Mb/s  Factors –K=1024 or K=1000? CERN  Interactive Systems –Unix: WGS & PLUS »CUTE, SUE, DIANE –NICE  Batch Systems –Unix: SHIFT, CSF »CORE –PCSF Other  Data Storage, Data Access & Filesystems –AFS, NFS, RFIO, HPSS, Objectivity[/DB]  CPUs –Alpha, MIPS, PA-RISC, PowerPC, Sparc –Pentium, Pentium II, Merced
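The unit conventions above are a frequent source of confusion, so here is a minimal sketch (not CERN code; the function names are invented for this illustration) of the two points that matter in practice: network rates are quoted in bits, storage in bytes, and "K" can mean either 1000 or 1024.

```python
# Illustration of the unit conventions listed above (not CERN code):
# 1 MB/s = 8 Mb/s, and "K" may mean 1000 (decimal) or 1024 (binary).

def mbit_to_mbyte(megabits_per_s: float) -> float:
    """Convert a network rate in Mb/s to MB/s (8 bits per byte)."""
    return megabits_per_s / 8.0

def kilo(value: float, binary: bool = False) -> float:
    """Scale a value by 'K': 1000 by default, 1024 if binary=True."""
    return value * (1024 if binary else 1000)

if __name__ == "__main__":
    print(mbit_to_mbyte(100))             # Fast Ethernet: 100 Mb/s = 12.5 MB/s
    print(kilo(1), kilo(1, binary=True))  # 1000 vs 1024
```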

4 Tony Cass How to start?  Computing is everywhere at CERN! –experiment computing facilities, administrative computing, central computing, many private clusters.  How should this lecture course be organised? –From a rigorous academic standpoint? –From a historical standpoint? –... –From a physics-based viewpoint?

5 Tony Cass Weekly use of Interactive Platforms [Chart: number of users each week, by platform — Windows 95, Windows NT, WGS and PLUS, CERNVM, VXCERN]

6 Tony Cass Computer Usage at IN2P3

7 Tony Cass Computing at CERN  Computing “purely for (experimental) physics” will be the focus of the second two lectures of this series. Leaving this area aside, other activities at CERN can be considered as falling into one of three areas: –administration, –technical and engineering activities, and –theoretical physics.  We will take a brief look at some of the ways in which computing is used in these areas in the rest of this first lecture.

8 Tony Cass Administrative Computing  Like any organisation, CERN has all the usual Administrative Data Processing activities such as –salaries, human resource tracking, planning...  Interesting aspects of this work at CERN are –the extent to which many tasks are automated –the heterogeneous nature of the platforms used when performing administration-related tasks.  The Web is, as in many other cases at CERN, becoming the standard interface.

9 Tony Cass Technical and Engineering Computing  Engineers and physicists working at CERN must –design, –build, and –operate both accelerators and detectors for experimental physicists to be able to collect the data that they need.  As in many other areas of engineering design, computer-aided techniques are essential for the construction of today’s advanced accelerators and detectors.

10 Tony Cass Accelerator design issues  Oliver Brüning’s lectures will tell you more about accelerators. For the moment, all we need to know is that –particles travelling in bunches around an accelerator are bent by dipole magnets and must be kept in orbit. »Of course, they must be accelerated as well(!), but we don’t consider that here.  Important studies for LHC are –magnet design »how can we build the (superconducting) dipole magnets that are needed? –transverse studies »will any particles leave orbit? (and hit the magnets!) –longitudinal studies »how can we build the right particle bunches for LHC?

11 Tony Cass LHC Magnet Design [Figures: 2D field picture for an LHC dipole coil; 3D representation of a dipole coil end with magnetic field vectors. Pictures generated with ROXIE.]

12 Tony Cass Genetic Algorithms for Magnet Design Original coil design. New coil design found using a genetic algorithm. This was further developed using deterministic methods and replaced the original design. Genetic Algorithm convergence plot. The algorithm is designed to come up with a number of alternative solutions which can then be further investigated.
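The real coil optimisation shown here was done with ROXIE; purely to illustrate the genetic-algorithm idea behind it (a population of candidate designs, selection of the best, crossover and mutation), here is a toy sketch. The "field_error" fitness function and every parameter in it are invented for this illustration.

```python
# Toy genetic algorithm, only to illustrate the idea used for coil optimisation.
# The "field_error" fitness and all parameters are invented for this sketch; the
# real ROXIE optimisation works on actual coil geometries and field harmonics.
import random

def field_error(genes):
    # Hypothetical quality measure: pretend the ideal coil has all parameters at 0.5.
    return sum((g - 0.5) ** 2 for g in genes)

def evolve(n_genes=6, pop_size=40, generations=50, mutation_rate=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=field_error)                # selection: best designs first
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)   # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # occasional random mutation
                child[random.randrange(n_genes)] = random.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=field_error)

if __name__ == "__main__":
    best = evolve()
    print("best design:", [round(g, 3) for g in best],
          "error:", round(field_error(best), 5))
```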

13 Tony Cass Transverse Studies Particles that move like this in phase space stay in the accelerator. Those that move like this don’t! These images show how particles in a circulating bunch move about in a 4-dimensional phase space: X position & angle, Y position & angle. Particles with chaotic trajectories in this phase space have orbits that are unbounded and so will hit the walls of the accelerator eventually. Transverse studies of particle motion attempt to understand how these instabilities arise—and how they can be reduced by changes to the magnets.
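As a flavour of what "bounded versus chaotic" means here, the toy one-turn map below (a Hénon-style rotation plus quadratic kick, a textbook caricature of transverse motion with one nonlinear magnet) tracks a particle for many turns and reports whether it stays inside an arbitrary aperture. The tune, aperture and amplitudes are invented; real transverse studies track particles through the full accelerator lattice.

```python
# Toy 2D "one-turn" map (Hénon map: phase-space rotation plus a quadratic kick).
# Only an illustration of the bounded-vs-chaotic behaviour described above;
# real transverse studies track particles through the full LHC lattice.
import math

def one_turn(x, xp, tune=0.205):
    """Rotate (x, x') in phase space by the betatron tune after a quadratic kick."""
    c, s = math.cos(2 * math.pi * tune), math.sin(2 * math.pi * tune)
    kick = xp + x * x                 # nonlinear (sextupole-like) kick
    return c * x + s * kick, -s * x + c * kick

def survives(x0, xp0, turns=10000, aperture=10.0):
    """Return True if the particle stays within the 'aperture' for all turns."""
    x, xp = x0, xp0
    for _ in range(turns):
        x, xp = one_turn(x, xp)
        if abs(x) > aperture or abs(xp) > aperture:
            return False              # unbounded trajectory: the particle is lost
    return True

if __name__ == "__main__":
    for amplitude in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(amplitude, "stays in the machine" if survives(amplitude, 0.0) else "is lost")
```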

14 Tony Cass Longitudinal Studies  Not all particles in a bunch have the same energy. Studying the energy distribution reveals aspects of the bunch shape. –The energy of a particle affects its arrival time at the accelerating cavity… which in turn affects the energy.  Need to measure both energy and arrival time, but can’t measure energy directly. Measuring arrival times is easy –but it is difficult to interpret successive slices.  Tomography techniques lead to a complete picture –like putting together X-ray slices through a person.
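The feedback loop described in the first bullet can be caricatured with a two-line map: the energy offset shifts the arrival phase at the RF cavity, and the arrival phase determines the next energy kick. The sketch below is only that caricature, with arbitrary constants; the actual longitudinal studies and the tomography use measured bunch profiles.

```python
# Toy longitudinal map: an energy offset shifts the arrival phase at the RF cavity,
# and the arrival phase determines the next energy kick.  All constants are
# arbitrary; real studies (and the tomography mentioned above) use measured profiles.
import math

def track(phase, denergy, turns=2000, slip=0.05, volt=0.01):
    """Iterate the simplified (phase, dE) map and return the trajectory."""
    trajectory = []
    for _ in range(turns):
        phase += slip * denergy            # energy error changes the arrival phase
        denergy -= volt * math.sin(phase)  # arrival phase changes the energy gain
        trajectory.append((phase, denergy))
    return trajectory

if __name__ == "__main__":
    # A particle slightly off the synchronous phase oscillates in (phase, dE) space:
    traj = track(phase=0.3, denergy=0.0)
    print("maximum |dE| seen over the run:", round(max(abs(de) for _, de in traj), 4))
```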

15 Tony Cass Bunch splitting at the PS

16 Tony Cass Accelerator Controls [Figures: PS operator control windows; magnet current trace showing some of the many beam types the PS can handle for different users.]

17 Tony Cass Detector Design Issues Detector designs also benefit from computer simulations.

18 Tony Cass Detector Design Issues II [Figures: NA45 TPC with field cage; electric field near the field cage.]

19 Tony Cass Computing for Theory Feynman diagrams for some LHC processes… …and some at LEP Theoretical physicists could not calculate probabilities for processes represented by Feynman diagrams like these without using symbolic algebra packages—e.g. Maple or Mathematica. These calculations are essential for two reasons: 1 As collision energies increase, and as the precision of experimental measurements increases with increasing data volume, more Standard Model processes contribute to the data that is collected. 2 Theorists need to calculate how theories beyond the Standard Model, e.g. SUSY, could affect the data that is collected today.
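Just to give a flavour of what a symbolic algebra package does, here is a tiny example using the open-source SymPy library (an assumption for this sketch; it is not one of the packages named above, and real matrix-element calculations are vastly larger): it integrates a Breit-Wigner-shaped propagator term and expands it for a narrow resonance.

```python
# A small taste of symbolic algebra (using SymPy rather than Maple/Mathematica,
# the packages mentioned on the slide).  Real matrix-element calculations are
# vastly larger than this toy manipulation of a Breit-Wigner-shaped term.
import sympy as sp

s = sp.symbols('s', real=True)
M, Gamma = sp.symbols('M Gamma', positive=True)

# |propagator|^2 for a resonance of mass M and width Gamma (schematic form).
bw = 1 / ((s - M**2)**2 + M**2 * Gamma**2)

# Integrating over s gives the familiar normalisation (expected: pi/(M*Gamma)).
print(sp.integrate(bw, (s, -sp.oo, sp.oo)))

# Expansion for a narrow resonance (small Gamma).
print(sp.series(bw, Gamma, 0, 2))
```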

20 Tony Cass CERN and the World Wide Web  The World Wide Web started as a project to make information more accessible, in particular, to help improve information dissemination within an experiment. –These aspects of the Web are widely used at CERN today. All experiments have their own web pages and there are now web pages dedicated to explaining Particle Physics to the general public. –In a wider sense, the web is being used to distribute graphical information on system, accelerator and detector status. The release of Java has given a big push to these uses.  Web browsers are also used to provide a common interface, e.g. »currently to the administrative applications, and »possibly in future as a batch job submission interface for PCs.

21 Tony Cass 1998 → 1999: What has changed?  Hardware –PC hardware has replaced RISC workstations for general purpose computing.  Software –Future operating system developments are clearly concentrated on Linux and Windows »Linux success & use of PCs is a positive feedback loop! –Java is coming up fast on the inside lane. »but C++ investment is large and C++/Java interoperability poor.  Systems Management –Understand costs—one PC is cheap, but managing 200 is not!

22 Tony Cass 1999 → 2000: What has changed?  Hardware –PC hardware has replaced RISC workstations.  Software –Future operating system developments are clearly concentrated on Linux. Windows 2000 will be deployed at CERN but is now a distant 3rd choice for physics »Linux success & use of PCs is a positive feedback loop! –Java is still coming up fast on the inside lane. »C++ investment is still large and C++/Java interoperability is still poor.  Systems Management –Understand costs—one PC is cheap, but managing 2000 is not! –And do we have enough space, power and cooling for the LHC equipment?

23 Tony Cass 2000 → 2001: What has changed? I  Windows 2000 has arrived and Wireless Ethernet is arriving. –Portable PCs replacing desktops. –Integration of home directory, web files, working offline makes things easier—just like AFS and IMAP revolutionised my life 8 years ago.  I now have ADSL at home rather than ISDN. –I am now outside the CERN firewall when connected from home but this doesn’t matter so much with all my files cached on my portable. »I just need to bolt on a wireless home network so I can work in the garden! –The number of people connecting from outside the firewall will grow »CERN will probably have to support Virtual Private Networks for privileged access »And users will have to worry about securing their home network against hackers…

24 Tony Cass Looking Around—Summary  Computing extends to all areas of work at CERN.  In terms of CERN’s “job”, producing particle physics results, computing is essential for –the design, construction and operation of accelerators and detectors, and –theoretical studies, as well as –the data reconstruction and analysis phases.  The major computing facilities at CERN, though, are provided for particle physics work and these will be the subject of the next two lectures.

2001 Summer Student Lectures Computing at CERN Lecture 2 — Looking at Data Tony Cass —

26 Tony Cass Data and Computation for Physics Analysis [Data-flow diagram: the detector feeds an event filter (selection & reconstruction), which writes raw data; event reconstruction turns raw data into event summary data; event simulation also produces event data; batch physics analysis extracts analysis objects (by physics topic) from the event summary data; interactive physics analysis works on these analysis objects; the event summary data and analysis objects together form the processed data.]

27 Tony Cass Central Data Recording  CDR marks the boundary between the experiment and the central computing facilities.  It is a loose boundary which depends on an experiment’s approach to data collection and analysis.  CDR developments are also affected by –network developments, and –event complexity. [Diagram stage: detector → event filter (selection & reconstruction) → raw data]

28 Tony Cass Monte Carlo Simulation  From a physics standpoint, simulation is needed to study –detector response –signal vs. background –sensitivity to physics parameter variations.  From a computing standpoint, simulation –is CPU intensive, but –has low I/O requirements. Simulation farms are therefore good testbeds for new technology: –CSF for Unix and now PCSF for PCs and Windows/NT. [Diagram stage: event simulation]
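The CPU-intensive, low-I/O pattern can be seen in a toy job like the one below: many random events are generated and processed, and only a tiny summary is written out. This is just an illustration with invented numbers, not the full detector simulation the experiments actually run.

```python
# Toy "simulation farm" job: lots of CPU work per event, very little output.
# Only an illustration of the CPU-intensive / low-I/O pattern described above,
# not the detector simulation actually used by the experiments.
import random

def simulate_event(true_energy=50.0, resolution=0.1):
    """Generate one fake 'event': a particle energy smeared by detector resolution."""
    return random.gauss(true_energy, resolution * true_energy)

def run_job(n_events=100_000):
    measured = [simulate_event() for _ in range(n_events)]   # CPU-intensive part
    mean = sum(measured) / len(measured)
    spread = (sum((e - mean) ** 2 for e in measured) / len(measured)) ** 0.5
    return mean, spread                                      # tiny summary output

if __name__ == "__main__":
    mean, spread = run_job()
    print(f"simulated mean energy {mean:.2f} GeV, resolution {spread:.2f} GeV")
```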

29 Tony Cass Data Reconstruction  The event reconstruction stage turns detector information into physics information about events. This involves –complex processing »i.e. lots of CPU capacity –reading all raw data »i.e. lots of input, possibly read from tape –writing processed events »i.e. lots of output which must be written to permanent storage. [Diagram stage: raw data → event reconstruction → event summary data]

30 Tony Cass Batch Physics Analysis  Physics analysis teams scan over all events to find those that are interesting to them. –Potentially enormous input »at least data from current year. –CPU requirements are high. –Output is “small” »O(10²) MB –but there are many different teams and the output must be stored for future studies »large disk pools needed. [Diagram stage: event summary data → batch physics analysis → analysis objects (extracted by physics topic)]
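The scan-everything, keep-little pattern looks roughly like the sketch below: loop over all event summary records, apply one team's selection, and write a small file of analysis objects. The event fields, the cut and the file name are all invented for this illustration.

```python
# Toy batch-analysis "skim": read every event summary record, keep only the few
# that pass this team's selection, and write out small analysis objects.
# The event dictionary fields and the cut are invented for this sketch.
import json
import random

def fake_event_summaries(n):
    """Stand-in for reading event summary data from mass storage."""
    for i in range(n):
        yield {"run": 1, "event": i,
               "missing_et": random.expovariate(1 / 20.0),   # GeV, made-up distribution
               "n_leptons": random.randint(0, 3)}

def select(event):
    """This analysis team's (invented) selection."""
    return event["n_leptons"] >= 2 and event["missing_et"] > 50.0

if __name__ == "__main__":
    kept = [e for e in fake_event_summaries(100_000) if select(e)]
    with open("analysis_objects.json", "w") as out:          # small output file
        json.dump(kept, out)
    print(f"kept {len(kept)} of 100000 events")
```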

31 Tony Cass Symmetric MultiProcessor Model [Diagram: a single large SMP serving the experiment, with tape storage and TeraBytes of disks attached.]

32 Tony Cass Scalable model—SP2/CS2 [Diagram: an SP2/CS2-style scalable system serving the experiment, with tape storage and TeraBytes of disks attached.]

33 Tony Cass Distributed Computing Model [Diagram: the experiment, tape storage, disk servers and CPU servers interconnected by a network switch.]

34 Tony Cass Today’s CORE Computing Systems

35 Tony Cass Today’s CORE Computing Systems [Diagram: CORE Physics Services on the CERN network — SHIFT data intensive services: 200 computers, 550 processors (DEC, H-P, IBM, SGI, SUN, PC), 25 TeraBytes embedded disk; PC Farms (Data Recording, Event Filter and CPU Farms for NA45, NA48, COMPASS): 60 dual processor PCs; Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 70 systems (HP, SUN, IBM, DEC, Linux); RSBATCH Public Batch Service: 32 PowerPC 604; Simulation Facility: CSF RISC servers (25 H-P PA-RISC) and PCSF PCs & NT (10 PentiumPro, 25 Pentium II); NAP accelerator simulation service: 10-CPU DEC, DEC workstations; PaRC Engineering Cluster: 13 DEC workstations, 3 IBM workstations. Central Data Services — Shared Disk Servers: 2 TeraByte disk, 10 SGI, DEC, IBM servers; Shared Tape Servers: 4 tape robots, 90 tape drives (Redwood, 9840, DLT, IBM 3590, 3490, 3480, EXABYTE, DAT, Sony D1); Home directories & registry: 32 IBM, DEC, SUN servers; consoles & monitors.]

36 Tony Cass Today’s CORE Computing Systems [Diagram: CORE Physics Services on the CERN network — SHIFT data intensive services: 350 computers, 850 processors (DEC, H-P, IBM, SGI, SUN, PC), 40 TeraBytes embedded disk; PC Farms (Data Recording, Event Filter and CPU Farms for NA45, NA48, COMPASS): 250 dual processor PCs; LXBATCH Public Batch Service: 25 dual processor PCs; Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 80 systems (HP, SUN, IBM, DEC, Linux); RSBATCH Public Batch Service: 32 PowerPC 604; Simulation Facility: CSF RISC servers (25 H-P PA-RISC) and PCSF PCs & NT (10 PentiumPro, 25 Pentium II); NAP accelerator simulation service: 10-CPU DEC, DEC workstations; PaRC Engineering Cluster: 13 DEC workstations, 5 dual processor PCs. Central Data Services — 25 PC servers, 4 others for HPSS; Shared Disk Servers: 1 TeraByte disk, 3 Sun servers; Shared Tape Servers: 6 tape robots, 90 tape drives (Redwood, 9840, DLT, IBM 3590, 3490, 3480, EXABYTE, DAT, Sony D1); Home directories & registry; consoles & monitors.]

37 Tony Cass Today’s CORE Computing Systems [Diagram: CORE Physics Services on the CERN network — Dedicated RISC clusters: 300 computers, 750 processors (DEC, HP, SGI, SUN); Dedicated Linux clusters: 250 dual processor PCs; “Queue shared” Linux Batch Service: 350 dual processor PCs; “Timeshared” Linux cluster: 200 dual processor PCs; Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 120 systems (HP, SUN, IBM, DEC, Linux); RISC Simulation Facility: maintained for LEP only; NAP accelerator simulation service: 10-CPU DEC, DEC workstations, 20 dual processor PCs; PaRC Engineering Cluster: 13 DEC workstations, 5 dual processor PCs, 5 Sun workstations. Central Data Services — Shared Disk Servers: 5 TeraByte disk, 3 Sun servers, 6 PC based servers; PC & EIDE based disk servers: 40TB mirrored disk (80TB raw capacity), 25 PC servers; Shared Tape Servers: 10 tape robots, 100 tape drives (9940, Redwood, 9840, DLT, IBM 3590E, 3490, 3480, EXABYTE, DAT, Sony D1); Home directories & registry; consoles & monitors.]

38 Tony Cass Hardware Evolution at CERN,

39 Tony Cass Interactive Physics Analysis  Interactive systems are needed to enable physicists to develop and test programs before running lengthy batch jobs. –Physicists also »visualise event data and histograms »prepare papers, and »send e-mail.  Most physicists use workstations—either private systems or central systems accessed via an X terminal or PC.  We need an environment that provides access to specialist physics facilities as well as to general interactive services. [Diagram stage: analysis objects (extracted by physics topic)]

40 Tony Cass Unix based Interactive Architecture [Diagram: X terminals, PCs and private workstations connect over the CERN internal network to WorkGroup Server clusters and the PLUS clusters, which have optimized access to CORE services; supporting services include backup & archive, reference environments, central services (mail, news, ccdb, etc.), ASIS replicated AFS binary servers, AFS home directory services, a general staged data pool and X-terminal support.]

41 Tony Cass PC based Interactive Architecture

42 Tony Cass Event Displays Event displays, such as this ALEPH display, help physicists to understand what is happening in a detector. A Web based event display, WIRED, was developed for DELPHI and is now used elsewhere. Clever processing of events can also highlight certain features—such as in the V-plot views of ALEPH TPC data. [Figures: standard X-Y view; V-plot view.]

43 Tony Cass Data Analysis Work By selecting a dE/dx vs. p region on this scatter plot, a physicist can choose tracks created by a particular type of particle. Most of the time, though, physicists will study event distributions rather than individual events. RICH detectors provide better particle identification, however. This plot shows that the LHCb RICH detectors can distinguish pions from kaons efficiently over a wide momentum range. Using RICH information greatly improves the signal/noise ratio in invariant mass plots.
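The graphical cut described above amounts to keeping tracks whose (momentum, dE/dx) point lies inside a chosen region of the scatter plot. The sketch below shows that idea with invented band edges and track values; a real analysis would read the tracks from reconstructed data.

```python
# Toy version of the dE/dx vs p selection described above: keep tracks whose
# (momentum, dE/dx) point falls inside a chosen band.  The band edges and the
# track tuples are invented; a real analysis would read reconstructed tracks.

def in_band(p_gev, dedx, p_range=(0.3, 1.0), dedx_range=(1.5, 2.5)):
    """Return True if the track lies inside the selected scatter-plot region."""
    return p_range[0] <= p_gev <= p_range[1] and dedx_range[0] <= dedx <= dedx_range[1]

tracks = [  # (momentum in GeV/c, dE/dx in arbitrary units)
    (0.45, 1.9), (0.80, 2.2), (1.40, 1.1), (0.35, 3.0), (0.60, 1.7),
]

selected = [t for t in tracks if in_band(*t)]
print(f"{len(selected)} of {len(tracks)} tracks selected:", selected)
```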

44 Tony Cass Looking at Data—Summary  Physics experiments generate data! –and physicists need to simulate real data to model physics processes and to understand their detectors.  Physics data must be processed, stored and manipulated.  [Central] computing facilities for physicists must be designed to take into account the needs of the data processing stages –from generation through reconstruction to analysis  Physicists also need to –communicate with outside laboratories and institutes, and to –have access to general interactive services.