Trip Report: SC’04, Pittsburgh, Nov 6-12
Fons Rademakers
Super Computing Conferences
Large conference, with more than 6000 attendees.
Very large exhibit: all major and not-so-major vendors showing their latest high-performance products.
All major US and many European and Asian labs had stands on the exhibit floor: Fermilab, SLAC, BNL, LBL, Argonne, many universities, etc. No explicit CERN presence.
We have now attended the last three SC conferences; at the last two we demoed the ALICE/PROOF/AliEn/gLite analysis system on the HP stand.
We did not attend the technical program, but we gave many demos, talked to a lot of people, got many good ideas and feedback, and wandered around the large exhibit.
Timeline
Nov 5: Visit to the Pittsburgh Supercomputing Center
Nov 6-7: Sightseeing and arrival of the rest of the team
Nov 8: Setting up the demo at the HP booth
Nov 9-11: Running demos
The HP Grid Team
Derek Feichtinger
Fons Rademakers
Andreas Peters
Federico Carminati
Matevz Tadel
Predrag Buncic
Other Incognito Appearances
Wolfgang von Rueden - Fermilab
Sverre Jarp - Fermilab
David Foster - Caltech
Nov 5: Visit to PSC
On the steps of the Mellon Institute, which houses the PSC.
Nov 5: Visit to PSC
Federico Carminati in discussion with Mike Levine (left), scientific director of the PSC, and his associates.
Nov 5: Visit to PSC
Mike Levine and Federico Carminati in front of “Ben”, PSC’s 64-node Compaq AlphaServer system; each node is a 4-processor SMP with 4 GB of memory.
Summary PSC
Large amount of expensive hardware:
–Ben: 64 Compaq AlphaServer nodes; each node is a 4-CPU SMP with 4 GB RAM.
–Lemieux: 750 Compaq AlphaServer ES45 nodes and two separate front-end nodes. Each computational node is a 4-CPU SMP with 4 GB RAM running Tru64. A Quadrics interconnection network connects the nodes.
–Red Storm from Cray (2005): 2000 AMD Opteron processors with a peak performance of 10 TFlops, running Linux. [A 10000-node, 40 TFlops Red Storm has already been installed at Sandia.]
–LambdaRail: next-generation 30 Gb/s WAN. Multiple networks exist side by side in the same fiber-optic cable but are independent, each carried by its own lightwave (lambda).
Summary PSC
Most jobs running at PSC use a large number of CPUs in parallel. On Lemieux more than 50% of all jobs used more than 1024 CPUs in parallel (mainly biomedical and meteorological codes). PROOF would be an interesting candidate to run on more than 1024 nodes. But there is little interest in providing cycles for many single-CPU batch reconstruction or analysis jobs.
Sightseeing
Derby between the Pittsburgh Steelers and the Philadelphia Eagles.
The SC’04 Exhibit
Small part of the SC’04 exhibit floor.
Nov 8: Setting Up the Demo
Checking the demo machine.
Nov 9-11: Running Demos
Predrag and Derek giving a demo.
Trends Seen at the Exhibit
Sun showing their nice new AMD Opteron machines running Solaris 10.
Cray showing Opteron-based Red Storm components.
SGI is betting the shop on Itanium; very nice graphics demos.
AMD showed the first running multi-core CPU. Opteron is clearly the CPU of the year.
Linux is clearly the SC OS of choice; Windows was only seen on the Microsoft booth.
Lots of management and monitoring software.
SLAC showed xrootd as a fail-safe file server.
Fermilab showed PEAC, the PROOF Enabled Analysis Cluster.
Of course, everybody had a Grid solution (even a dual-CPU machine is now a problem asking for a Grid solution).
Our Demo
Demonstrated the feasibility of globally distributed parallel interactive data analysis.
Used 14 sites, each running 4 PROOF workers, i.e. 52 CPUs in parallel.
Used ALICE MC data that had been produced at these sites during our PDC’04.
Performed a realistic analysis using the ALICE ESD objects.
Used the AliRoot, ROOT, PROOF, and gLite technologies.
New Elements: Grid-Middleware-Independent PROOF Setup
[Architecture diagram. The PROOF client authenticates via Grid/ROOT authentication against a Grid access-control service, retrieves a list of logical files (LFN + MSN) from the Grid file/metadata catalogue, and sends a booking request with the logical file names through the TGrid UI/queue UI. The PROOF master uses a slave registration/booking DB to start PROOF slave servers at sites A and B (rootd/proofd); slave ports are mirrored on the master host, optionally through a site gateway or forward proxy, after which a “standard” PROOF session runs. The setup is Grid-middleware independent and requires only outgoing connectivity from the sites.]