2001 Summer Student Lectures Computing at CERN Lecture 1 — Looking Around Tony Cass — Tony.Cass@cern.ch
2 Tony Cass Acknowledgements The choice of material presented here is entirely my own. However, I could not have prepared these lectures without the help of –Charles Granieri, Hans Grote, Mats Lindroos, Franck Di Maio, Olivier Martin, Pere Mato, Bernd Panzer-Steindel, Les Robertson, Stephan Russenschuck, Frank Schmidt, Archana Sharma and Chip Watson, who spent time discussing their work with me, generously provided material they had prepared, or both. For their general advice, help, and reviews of the slides and lecture notes, I would also like to thank –Marco Cattaneo, Mark Dönszelmann, Dirk Düllmann, Steve Hancock, Vincenzo Innocente, Alessandro Miotto, Les Robertson, Tim Smith and David Stickland.
3 Tony Cass Some Definitions
General Computing Power
–CERN Unit
–MIPS
–SPECint92, SPECint95
Networks
–Ethernet
»Normal (10baseT, 10Mb/s)
»Fast (100baseT, 100Mb/s)
»Gigabit (1000Mb/s)
–FDDI
–HiPPI
bits and Bytes
–1MB/s = 8Mb/s
Factors
–K=1024, K=1000
CERN Interactive Systems
–Unix: WGS & PLUS
»CUTE, SUE, DIANE
–NICE
Batch Systems
–Unix: SHIFT, CSF
»CORE
–PCSF
Other
Data Storage, Data Access & Filesystems
–AFS, NFS, RFIO, HPSS, Objectivity[/DB]
CPUs
–Alpha, MIPS, PA-Risc, PowerPC, Sparc
–Pentium, Pentium II, Merced
4 Tony Cass How to start? Computing is everywhere at CERN! –experiment computing facilities, administrative computing, central computing, many private clusters. How should this lecture course be organised? –From a rigorous academic standpoint? –From a historical standpoint? –... –From a physics-based viewpoint?
5 Tony Cass Weekly use of Interactive Platforms 1987-2001 [Chart: number of users each week on Windows 95, Windows NT, WGS and PLUS, CERNVM and VXCERN.]
6 Tony Cass Computer Usage at IN2P3
7 Tony Cass Computing at CERN Computing “purely for (experimental) physics” will be the focus of the remaining two lectures of this series. Leaving this area aside, other activities at CERN can be considered as falling into one of three areas: –administration, –technical and engineering activities, and –theoretical physics. We will take a brief look at some of the ways in which computing is used in these areas in the rest of this first lecture.
8 Tony Cass Administrative Computing Like any organisation, CERN has all the usual Administrative Data Processing activities such as –salaries, human resource tracking, planning... Interesting aspects of this work at CERN are –the extent to which many tasks are automated –the heterogeneous nature of the platforms used when performing administration-related tasks. The Web is, as in many other cases at CERN, becoming the standard interface.
9 Tony Cass Technical and Engineering Computing Engineers and physicists working at CERN must –design, –build, and –operate both accelerators and detectors for experimental physicists to be able to collect the data that they need. As in many other areas of engineering design, computer-aided techniques are essential for the construction of today’s advanced accelerators and detectors.
10 Tony Cass Accelerator design issues Oliver Brüning’s lectures will tell you more about accelerators. For the moment, all we need to know is that –particles travelling in bunches around an accelerator are bent by dipole magnets and must be kept in orbit. »Of course, they must be accelerated as well(!), but we don’t consider that here. Important studies for LHC are –magnet design »how can we build the (superconducting) dipole magnets that are needed? –transverse studies »will any particles leave orbit? (and hit the magnets!) –longitudinal studies »how can we build the right particle bunches for LHC?
11 Tony Cass LHC Magnet Design [Figures: 2D field picture for the LHC dipole coil; 3D representation of a dipole coil end with magnetic field vectors. Pictures generated with ROXIE.]
12 Tony Cass Genetic Algorithms for Magnet Design [Figures: original coil design; new coil design found using a genetic algorithm, which was further developed using deterministic methods and replaced the original design; genetic algorithm convergence plot.] The algorithm is designed to come up with a number of alternative solutions which can then be further investigated.
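To give a feel for the technique, here is a minimal genetic-algorithm sketch in Python. The "field_error" objective, the block-angle encoding and all of the population, mutation and generation parameters are invented for illustration; the real optimisation runs inside ROXIE against a proper field model.

```python
import random

def field_error(angles):
    """Toy 'field quality' objective: pretend the ideal coil has evenly
    spaced block angles. Real designs use field codes such as ROXIE."""
    ideal = [10.0 * (i + 1) for i in range(len(angles))]
    return sum((a - b) ** 2 for a, b in zip(angles, ideal))

def mutate(angles, rate=0.3, step=2.0):
    """Randomly perturb some of the block angles."""
    return [a + random.uniform(-step, step) if random.random() < rate else a
            for a in angles]

def crossover(parent1, parent2):
    """Combine two candidate designs at a random cut point."""
    cut = random.randrange(1, len(parent1))
    return parent1[:cut] + parent2[cut:]

def genetic_search(n_blocks=6, pop_size=40, generations=200):
    population = [[random.uniform(0.0, 90.0) for _ in range(n_blocks)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=field_error)
        survivors = population[: pop_size // 2]          # keep the best half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    population.sort(key=field_error)
    return population[:5]    # several alternative designs, as on the slide

if __name__ == "__main__":
    for design in genetic_search():
        print([round(a, 1) for a in design], round(field_error(design), 3))
```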
13 Tony Cass Transverse Studies Particles that move like this in phase space stay in the accelerator. Those that move like this don’t! These images show how particles in a circulating bunch move about in a 4 dimensional phase space: X position & angle, Y position and angle. Particles with chaotic trajectories in this phase space have orbits that are unbounded and so will hit the walls of the accelerator eventually. Transverse studies of particle motion attempt to understand how these instabilities arise—and how they can be reduced by changes to the magnets.
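The flavour of such a tracking study can be seen in this small Python sketch. It uses the area-preserving Hénon map, a rotation plus a quadratic, sextupole-like kick, as a stand-in for a real lattice and simply tracks particles until they either survive a fixed number of turns or exceed a toy aperture. The tune, aperture and starting amplitudes are arbitrary choices, not LHC parameters.

```python
import math

def henon_turn(x, p, mu=0.2):
    """One 'turn' of the Henon map: a sextupole-like kick p -> p + x**2
    followed by a rotation by the tune angle mu in (x, p) phase space."""
    p = p + x * x
    return (x * math.cos(mu) + p * math.sin(mu),
            -x * math.sin(mu) + p * math.cos(mu))

def survives(x0, p0, turns=10_000, aperture=10.0):
    """Track one particle; report whether it stays inside a toy aperture."""
    x, p = x0, p0
    for _ in range(turns):
        x, p = henon_turn(x, p)
        if x * x + p * p > aperture * aperture:
            return False          # unbounded motion: would hit the magnets
    return True                   # bounded motion: stays in the accelerator

if __name__ == "__main__":
    for amplitude in (0.05, 0.10, 0.15, 0.20, 0.30):
        print(amplitude, "stable" if survives(amplitude, 0.0) else "lost")
```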
14 Tony Cass Longitudinal Studies Not all particles in a bunch have the same energy. Studies of energy distribution show aspects of bunch shape. –The energy of a particle affects its arrival time at the accelerating cavity… which in turn affects the energy. Need to measure both energy and arrival time, but can’t measure energy directly. Measuring arrival times is easy –but it is difficult to interpret successive slices. Tomography techniques lead to a complete picture –like putting together X-ray slices through a person.
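As a rough illustration of the tomography idea, the following numpy sketch generates a toy bunch in (time, energy) phase space, "measures" only its time projections as it rotates, and then recovers an image by unfiltered back-projection. The real PS tomography uses an iterative reconstruction and a proper model of synchrotron motion; everything here, including the blob position and the plain-rotation approximation, is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bunch: one Gaussian blob, deliberately off-centre in normalised
# (time, energy) phase space so the reconstruction has something to find.
bunch = rng.normal([1.0, 0.5], 0.3, size=(50000, 2))

edges = np.linspace(-3, 3, 61)                       # time-profile binning
centres = 0.5 * (edges[:-1] + edges[1:])
angles = np.linspace(0, np.pi, 36, endpoint=False)   # sampled synchrotron phases

# "Measurements": only the time projection is observable; synchrotron motion
# is approximated here as a plain rotation in phase space.
profiles = []
for a in angles:
    t = bunch[:, 0] * np.cos(a) + bunch[:, 1] * np.sin(a)
    profiles.append(np.histogram(t, bins=edges)[0])

# Unfiltered back-projection onto a (time, energy) grid: smear each measured
# profile back along the direction it was taken from and average.
T, E = np.meshgrid(centres, centres)
image = np.zeros_like(T)
for a, prof in zip(angles, profiles):
    t_rot = T * np.cos(a) + E * np.sin(a)            # where each cell projects to
    idx = np.clip(np.digitize(t_rot, edges) - 1, 0, len(centres) - 1)
    image += prof[idx]
image /= len(angles)

i, j = np.unravel_index(np.argmax(image), image.shape)
print(f"reconstructed peak near time={centres[j]:.2f}, energy={centres[i]:.2f}")
```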
15 Tony Cass Bunch splitting at the PS
16 Tony Cass Accelerator Controls [Figures: PS operator control windows; magnet current trace showing some of the many beam types the PS can handle for different users.]
17 Tony Cass Detector Design Issues Detector designs also benefit from computer simulations.
18 Tony Cass Detector Design Issues II [Figures: NA45 TPC with field cage; electric field near the field cage.]
19 Tony Cass Computing for Theory [Figures: Feynman diagrams for some LHC processes, and some at LEP.] Theoretical physicists could not calculate probabilities for processes represented by Feynman diagrams like these without using symbolic algebra packages—e.g. Maple or Mathematica. These calculations are essential for two reasons: 1 As collision energies increase, and as the precision of experimental measurements increases with increasing data volume, more Standard Model processes contribute to the data that is collected. 2 Theorists need to calculate how the effects of theories beyond the Standard Model, e.g. SUSY, could affect the data that is collected today.
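A toy example of what such a symbolic calculation looks like, using the open-source sympy package rather than Maple or Mathematica, and a textbook leading-order angular distribution rather than a real multi-loop amplitude:

```python
import sympy as sp

theta, phi, alpha, s = sp.symbols('theta phi alpha s', positive=True)

# Textbook leading-order e+e- -> mu+mu- angular distribution, standing in
# for the far larger expressions produced when evaluating Feynman diagrams.
dsigma_dOmega = alpha**2 / (4 * s) * (1 + sp.cos(theta)**2)

# Integrate over the full solid angle to get the total cross-section.
sigma = sp.integrate(dsigma_dOmega * sp.sin(theta),
                     (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(sp.simplify(sigma))    # expected: 4*pi*alpha**2/(3*s)
```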
20 Tony Cass CERN and the World Wide Web The World Wide Web started as a project to make information more accessible, in particular, to help improve information dissemination within an experiment. –These aspects of the Web are widely used at CERN today. All experiments have their own web pages and there are now web pages dedicated to explaining Particle Physics to the general public. –In a wider sense, the web is being used to distribute graphical information on system, accelerator and detector status. The release of Java has given a big push to these uses. Web browsers are also used to provide a common interface, e.g. »currently to the administrative applications, and »possibly in future as a batch job submission interface for PCs.
21 Tony Cass 1998 → 1999: What has changed? Hardware –PC hardware has replaced RISC workstations for general purpose computing. Software –Future operating system developments clearly concentrated on Linux and Windows »Linux success & use of PCs is a positive feedback loop! –Java is coming up fast on the inside lane. »but C++ investment is large and C++/Java interoperability poor. Systems Management –Understand costs—one PC is cheap, but managing 200 is not!
22 Tony Cass 1999 → 2000: What has changed? Hardware –PC hardware has replaced RISC workstations. Software –Future operating system developments are clearly concentrated on Linux. Windows 2000 will be deployed at CERN but is now a distant 3rd choice for physics »Linux success & use of PCs is a positive feedback loop! –Java is still coming up fast on the inside lane. »C++ investment is still large and C++/Java interoperability is still poor. Systems Management –Understand costs—one PC is cheap, but managing 2000 is not! –And do we have enough space, power and cooling for the LHC equipment?
23 Tony Cass 2000 → 2001: What has changed? I Windows 2000 has arrived and Wireless Ethernet is arriving. –Portable PCs replacing desktops. –Integration of home directory, web files, working offline makes things easier—just like AFS and IMAP revolutionised my life 8 years ago. I now have ADSL at home rather than ISDN. –I am now outside the CERN firewall when connected from home but this doesn’t matter so much with all my files cached on my portable. »I just need to bolt on a wireless home network so I can work in the garden! –The number of people connecting from outside the firewall will grow »CERN will probably have to support Virtual Private Networks for privileged access »And users will have to worry about securing their home network against hackers…
24 Tony Cass Looking Around—Summary Computing extends to all areas of work at CERN. In terms of CERN’s “job”, producing particle physics results, computing is essential for –the design, construction and operation of accelerators and detectors, and –theoretical studies, as well as –the data reconstruction and analysis phases. The major computing facilities at CERN, though, are provided for particle physics work and these will be the subject of the next two lectures.
2001 Summer Student Lectures Computing at CERN Lecture 2 — Looking at Data Tony Cass — Tony.Cass@cern.ch
26 Tony Cass Data and Computation for Physics Analysis [Diagram: data flows from the detector through the event filter (selection & reconstruction) to raw data; event reconstruction turns raw data into event summary data; batch physics analysis extracts analysis objects (by physics topic) from the event summary data for interactive physics analysis; event simulation provides the equivalent processed data for simulated events.]
27 Tony Cass Central Data Recording CDR marks the boundary between the experiment and the central computing facilities. It is a loose boundary which depends on an experiment’s approach to data collection and analysis. CDR developments are also affected by –network developments, and –event complexity. [Diagram: detector → event filter (selection & reconstruction) → raw data.]
28 Tony Cass Monte Carlo Simulation From a physics standpoint, simulation is needed to study –detector response –signal vs. background –sensitivity to physics parameter variations. From a computing standpoint, simulation –is CPU intensive, but –has low I/O requirements. Simulation farms are therefore good testbeds for new technology: –CSF for Unix and now PCSF for PCs and Windows/NT.
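The "CPU intensive, low I/O" point can be illustrated with a toy Monte Carlo: many particles are stepped through many detector layers, but only a few summary numbers per event are written out. The geometry, energy spectrum and output file name below are invented for illustration.

```python
import random

def simulate_event(n_particles=1000, n_layers=50):
    """Toy event: many particles stepped through many detector layers
    (lots of CPU), reduced to a handful of summary numbers (little I/O)."""
    total_deposit, hits = 0.0, 0
    for _ in range(n_particles):
        energy = random.expovariate(1.0)              # GeV, made-up spectrum
        for _ in range(n_layers):
            deposit = min(energy, random.uniform(0.0, 0.05))
            energy -= deposit
            total_deposit += deposit
            hits += deposit > 0.001
            if energy <= 0:
                break
    return total_deposit, hits

if __name__ == "__main__":
    with open("summary.csv", "w") as out:             # the output stays tiny
        out.write("event,energy_deposit,hits\n")
        for event in range(100):
            e, h = simulate_event()
            out.write(f"{event},{e:.3f},{h}\n")
```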
29 Tony Cass Data Reconstruction The event reconstruction stage turns detector information into physics information about events. This involves –complex processing »i.e. lots of CPU capacity –reading all raw data »i.e. lots of input, possibly read from tape –writing processed events »i.e. lots of output which must be written to permanent storage.
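A sketch of the reconstruction pattern described here, assuming a made-up binary raw-data format (each event is a hit count followed by (layer, position) pairs): every raw event is read once, a simple straight-line "track fit" stands in for the real reconstruction, and a much smaller summary record is written out.

```python
import struct

def reconstruct(raw_hits):
    """Turn raw detector hits (layer, position) into a physics quantity:
    here, the slope and intercept of a least-squares straight-line 'track'."""
    n = len(raw_hits)
    sx = sum(layer for layer, _ in raw_hits)
    sy = sum(pos for _, pos in raw_hits)
    sxx = sum(layer * layer for layer, _ in raw_hits)
    sxy = sum(layer * pos for layer, pos in raw_hits)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def reconstruction_pass(raw_file, esd_file):
    """Read every raw event once (the big sequential input, often from tape)
    and write a much smaller event-summary record for each."""
    with open(raw_file, "rb") as raw, open(esd_file, "w") as esd:
        while True:
            header = raw.read(4)
            if not header:
                break
            (n_hits,) = struct.unpack("<I", header)
            hits = [struct.unpack("<If", raw.read(8)) for _ in range(n_hits)]
            slope, intercept = reconstruct(hits)
            esd.write(f"{slope:.4f},{intercept:.4f}\n")

if __name__ == "__main__":
    # Write two fake raw events so the sketch can run end to end.
    with open("raw.dat", "wb") as f:
        for hits in ([(0, 1.0), (1, 1.2), (2, 1.4)],
                     [(0, 2.0), (1, 1.9), (2, 1.8)]):
            f.write(struct.pack("<I", len(hits)))
            for layer, pos in hits:
                f.write(struct.pack("<If", layer, pos))
    reconstruction_pass("raw.dat", "esd.csv")
```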
30 Tony Cass Batch Physics Analysis Physics analysis teams scan over all events to find those that are interesting to them. –Potentially enormous input »at least data from current year. –CPU requirements are high. –Output is “small” »O(10²) MB –but there are many different teams and the output must be stored for future studies »large disk pools needed.
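A minimal sketch of such a scan, with an invented CSV event-summary format and an arbitrary selection cut standing in for a real analysis: the input is read in full, but only the selected events are written to the small output that is kept on disk.

```python
import csv

def interesting(event):
    """Hypothetical selection: each analysis team applies its own cuts."""
    return event["n_tracks"] >= 4 and event["missing_energy"] > 20.0

def analysis_scan(esd_files, output_file):
    """Scan every event-summary record (huge input) and keep only the
    selected events (small output stored for later study)."""
    kept = 0
    with open(output_file, "w", newline="") as out:
        writer = None
        for path in esd_files:
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    event = {k: float(v) for k, v in row.items()}
                    if interesting(event):
                        if writer is None:
                            writer = csv.DictWriter(out, fieldnames=row.keys())
                            writer.writeheader()
                        writer.writerow(row)
                        kept += 1
    return kept

if __name__ == "__main__":
    # Create a tiny fake event-summary file so the scan can run end to end.
    with open("esd_2001.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["n_tracks", "missing_energy"])
        w.writerows([[2, 5.0], [6, 42.0], [4, 25.5]])
    print(analysis_scan(["esd_2001.csv"], "selected.csv"), "events kept")
```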
31 Tony Cass Symmetric MultiProcessor Model [Diagram: experiment, tape storage, TeraBytes of disks.]
32 Tony Cass Scalable model—SP2/CS2 [Diagram: experiment, tape storage, TeraBytes of disks.]
33 Tony Cass Distributed Computing Model [Diagram: experiment, tape storage, disk servers and CPU servers connected by a switch.]
34 Tony Cass Today’s CORE Computing Systems
35 Tony Cass Today’s CORE Computing Systems [Diagram of the services on the CERN network:]
–CORE Physics Services (CERN)
»32 IBM, DEC, SUN servers
»SHIFT Data intensive services: 200 computers, 550 processors (DEC, H-P, IBM, SGI, SUN, PC), 25 TeraBytes embedded disk
»Data Recording, Event Filter and CPU Farms for NA45, NA48, COMPASS
»Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 70 systems (HP, SUN, IBM, DEC, Linux)
»RSBATCH Public Batch Service: 32 PowerPC 604
»NAP - accelerator simulation service: 10-CPU DEC 8400, 10 DEC workstations
»Simulation Facility: CSF - RISC servers (25 H-P PA-RISC); PCSF - PCs & NT (10 PentiumPro, 25 Pentium II)
»PC Farms: 60 dual processor PCs
–Central Data Services
»Shared Disk Servers: 2 TeraByte disk, 10 SGI, DEC, IBM servers
»Shared Tape Servers: 4 tape robots, 90 tape drives (Redwood, 9840, DLT, IBM 3590, 3490, 3480, EXABYTE, DAT, Sony D1)
–Home directories & registry; consoles & monitors
–PaRC Engineering Cluster: 13 DEC workstations, 3 IBM workstations
36 Tony Cass Today’s CORE Computing Systems [Diagram of the services on the CERN network:]
–CORE Physics Services (CERN)
»25 PC servers, 4 others for HPSS
»SHIFT Data intensive services: 350 computers, 850 processors (DEC, H-P, IBM, SGI, SUN, PC), 40 TeraBytes embedded disk
»Data Recording, Event Filter and CPU Farms for NA45, NA48, COMPASS
»Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 80 systems (HP, SUN, IBM, DEC, Linux)
»RSBATCH Public Batch Service: 32 PowerPC 604
»LXBATCH Public Batch Service: 25 dual processor PCs
»NAP - accelerator simulation service: 10-CPU DEC 8400, 12 DEC workstations
»Simulation Facility: CSF - RISC servers (25 H-P PA-RISC); PCSF - PCs & NT (10 PentiumPro, 25 Pentium II)
»PC Farms: 250 dual processor PCs
–Central Data Services
»Shared Disk Servers: 1 TeraByte disk, 3 Sun servers
»Shared Tape Servers: 6 tape robots, 90 tape drives (Redwood, 9840, DLT, IBM 3590, 3490, 3480, EXABYTE, DAT, Sony D1)
–Home directories & registry; consoles & monitors
–PaRC Engineering Cluster: 13 DEC workstations, 5 dual processor PCs
37 Tony Cass Today’s CORE Computing Systems [Diagram of the services on the CERN network:]
–CORE Physics Services (CERN)
»Dedicated RISC clusters: 300 computers, 750 processors (DEC, HP, SGI, SUN)
»Dedicated Linux clusters: 250 dual processor PCs
»“Queue shared” Linux Batch Service: 350 dual processor PCs
»“Timeshared” Linux cluster: 200 dual processor PCs
»RISC Simulation Facility: maintained for LEP only
»Interactive Services (DXPLUS, HPPLUS, RSPLUS, LXPLUS, WGS): 120 systems (HP, SUN, IBM, DEC, Linux)
–Central Data Services
»Shared Disk Servers: 5 TeraByte disk, 3 Sun servers, 6 PC based servers
»PC & EIDE based disk Servers: 40TB mirrored disk (80TB raw capacity), 25 PC servers
»Shared Tape Servers: 10 tape robots, 100 tape drives (9940, Redwood, 9840, DLT, IBM 3590E, 3490, 3480, EXABYTE, DAT, Sony D1)
–Home directories & registry; consoles & monitors
–NAP - accelerator simulation service: 10-CPU DEC 8400, 12 DEC workstations, 20 dual processor PCs
–PaRC Engineering Cluster: 13 DEC workstations, 5 dual processor PCs, 5 Sun workstations
38 Tony Cass Hardware Evolution at CERN, 1989-2001
39 Tony Cass Interactive Physics Analysis Interactive systems are needed to enable physicists to develop and test programs before running lengthy batch jobs. –Physicists also »visualise event data and histograms »prepare papers, and »send Email Most physicists use workstations—either private systems or central systems accessed via an X terminal or PC. We need an environment that provides access to specialist physics facilities as well as to general interactive services.
40 Tony Cass Unix based Interactive Architecture [Architecture diagram showing: X terminals, PCs and private workstations; WorkGroup Server clusters (PLUS clusters); X-terminal support; the CERN internal network; optimized access to CORE services; AFS Home Directory Services; ASIS replicated AFS binary servers; central services (mail, news, ccdb, etc.); a general staged data pool; backup & archive; reference environments.]
41 Tony Cass PC based Interactive Architecture
42 Tony Cass Event Displays Event displays, such as this ALEPH display, help physicists to understand what is happening in a detector. A Web based event display, WIRED, was developed for DELPHI and is now used elsewhere. Clever processing of events can also highlight certain features—such as in the V-plot views of ALEPH TPC data. [Figures: standard X-Y view and V-plot view.]
43 Tony Cass Data Analysis Work By selecting a dE/dx vs. p region on this scatter plot, a physicist can choose tracks created by a particular type of particle. Most of the time, though, physicists will study event distributions rather than individual events. RICH detectors provide better particle identification, however. This plot shows that the LHCb RICH detectors can distinguish pions from kaons efficiently over a wide momentum range. Using RICH information greatly improves the signal/noise ratio in invariant mass plots.
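A small numpy sketch of this kind of selection: tracks are kept if their measured dE/dx lies in a band around the value expected for one particle hypothesis at each momentum. The band parametrisation, resolution and cut width are invented for illustration, not a real ALEPH or LHCb calibration.

```python
import numpy as np

def select_band(p, dedx, expected, half_width):
    """Keep tracks whose measured dE/dx lies within a band around the
    expected value for one particle hypothesis at each momentum."""
    return np.abs(dedx - expected(p)) < half_width

def kaon_band(p, mass=0.494):
    """Invented Bethe-Bloch-like parametrisation of the expected kaon band."""
    beta_gamma = p / mass
    beta2 = beta_gamma**2 / (1.0 + beta_gamma**2)
    return 1.5 / beta2 * (1.0 + 0.1 * np.log(beta_gamma))

rng = np.random.default_rng(0)
p = rng.uniform(0.5, 5.0, size=10000)                     # GeV/c
dedx = kaon_band(p) + rng.normal(0.0, 0.2, size=p.size)   # smeared "measurement"

selected = select_band(p, dedx, kaon_band, half_width=0.3)
print(f"selected {selected.sum()} of {p.size} tracks as kaon candidates")
```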
44 Tony Cass Looking at Data—Summary Physics experiments generate data! –and physicists need to simulate real data to model physics processes and to understand their detectors. Physics data must be processed, stored and manipulated. [Central] computing facilities for physicists must be designed to take into account the needs of the data processing stages –from generation through reconstruction to analysis Physicists also need to –communicate with outside laboratories and institutes, and to –have access to general interactive services.