A UK Computing Facility
John Gordon, RAL
HEPiX Fall ’99, October 1999

Data Size
Event rate: 10^9 events/year
Storage requirement (real & simulated data): ~300 TByte/year
UK physicists want access to data for analysis: 2 TB in 1999, then 4 TB/year from 2000 onwards, for data and simulation
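
As a rough cross-check of these figures (a back-of-the-envelope sketch; the per-event size is inferred here, not quoted on the slide):

    # Sizing implied by the slide: ~1e9 events/year and ~300 TByte/year
    # of real plus simulated data give an average of ~300 kB per event.
    events_per_year = 1e9
    total_tb_per_year = 300                                      # real + simulated
    avg_event_kb = total_tb_per_year * 1e9 / events_per_year     # 1 TB = 1e9 kB
    print(f"~{avg_event_kb:.0f} kB/event")                       # ~300 kB/event

    # The UK analysis copy is a small slice: 2 TB in 1999, 4 TB/year after.
    print(f"UK share from 2000: ~{4 / total_tb_per_year:.1%} of the yearly volume")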

Why don’t UK Physicists use SLAC?
…and SLAC is already heavily used.

Existing UK Facilities
Shared facilities in the UK are HP, Intel/Linux and Intel/NT.
BaBar mainly uses Suns
–Historically, the UK has been lacking in Suns in HEP departments
BaBar has a Sun E3500 (4 CPUs, 2 GB of memory) at RAL, bought for program development, with several hundred GB of disk
Plus a few desktop machines in universities

BaBar bid for more
BaBar went to a UK Government research fund and bid for $1.8M for UK BaBar facilities.
They were awarded ~$1.2M at the start of this year for:
–A central server at RAL with several TB, which will receive data from SLAC
–Server and disk in 10 UK universities
–Co-operating databases across the UK
–One extra staff member to achieve this

Actual Equipment
Sun vs Compaq: Sun won.
RAL: E4500 server (6x 400 MHz CPUs, 4 GB memory), 5 TB of formatted disk in 27 A1000 RAID arrays on 6 UWSCSI busses, DLT7000 stacker, 7 fast Ethernet adaptors
5 universities (Bristol, Edinburgh, Imperial, Liverpool, Manchester) and 4 universities (Birmingham, Brunel, QMW, RHBNC): each with a smaller server, either an E250 (2x 400 MHz CPUs, 1 GB, 3x A1000 = 0.5 TB) or an E450 (3x 400 MHz CPUs, 2 GB, 5x A1000 = 1 TB)

Setup at RAL (early experience)
Equipment delivered and installed
Filesystems limited to 1 TB
–used 4x A1000 => 720 GB, striped(?)
5.5 million events brought from SLAC
The E3500 acts as a front-end, the E4500 holds the data; both run batch jobs
The E4500 is also the AMS server to other systems
LSF cluster on the 2 Suns
Who else is running large data on Suns?

OOSS
Andy Hanushevsky visited in September and installed his OOFS and OOSS
This provides a layer which interfaces Objectivity to the Atlas Datastore (cf. HPSS at SLAC)
All the disk space runs under the control of OOSS, which acts as a cache manager
The current level of Objectivity/AMS doesn’t allow OOSS to retrieve data transparently from the robot, but data can easily be brought on-line by prestaging
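
A minimal sketch of the cache-manager idea described above, assuming a disk pool in front of the tape robot with explicit prestaging; the paths and function names are illustrative, not the actual OOFS/OOSS interfaces:

    import os
    import shutil

    DISK_CACHE = "/babar/objy/cache"     # hypothetical disk-pool path
    TAPE_STORE = "/datastore/babar"      # hypothetical Atlas Datastore staging area

    def prestage(database_files):
        """Bring Objectivity database files on-line before a job runs.

        The current Objectivity/AMS level cannot fault data in from the
        robot transparently, so anything missing from the disk cache is
        copied back from the tape store ahead of time.
        """
        for name in database_files:
            cached = os.path.join(DISK_CACHE, name)
            if not os.path.exists(cached):
                # Stand-in for the site's real tape-store retrieval command.
                shutil.copy(os.path.join(TAPE_STORE, name), cached)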

Network Plans
A single server in a university on fast Ethernet can pull data from RAL at rates which will be unpopular with others sharing the institute’s connection to the WAN.
Pilot to establish tunnels over JANET using spare ATM capacity

Diagram: pilot network linking Manchester, RAL and IC at 2MB.

Purpose of Trial
Since the bandwidth will be small, the trial will not necessarily give better throughput
Establish whether an end-to-end connection over PVCs works
Establish whether the different management domains can reach a common, working solution
Check that the routing works
Should be simple to increase the bandwidth later
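
Some illustrative arithmetic on why managed bandwidth matters (figures assumed rather than taken from the talk, and reading the diagram’s "2MB" as 2 Mbit/s):

    # A single fast-Ethernet server can in principle pull close to
    # 100 Mbit/s from RAL, which would swamp a shared site WAN link;
    # the ATM pilot confines the traffic to its own PVC instead.
    def transfer_days(size_tb, rate_mbit_per_s):
        bits = size_tb * 8e12                    # decimal TB -> bits
        return bits / (rate_mbit_per_s * 1e6) / 86400

    print(f"{transfer_days(2, 100):.1f} days")   # 2 TB at 100 Mbit/s: ~1.9 days
    print(f"{transfer_days(2, 2):.0f} days")     # 2 TB at 2 Mbit/s:  ~93 days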

Original Data Model
Data to RAL by tape
Model I: all TAG data at the other sites; pull detailed data from RAL
Model II: frequently-accessed events stored in full at the other sites; replication from RAL
Investigate methods of copying, updating and replicating databases over the WAN
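
A sketch of what Model I looks like in practice, assuming TAG (summary) data held locally and detailed events pulled from RAL on demand; all names and types below are invented for illustration:

    from dataclasses import dataclass
    from typing import Callable, Iterable, Iterator, Tuple

    @dataclass
    class Tag:                    # small per-event summary held at every site
        event_id: int
        n_tracks: int

    def fetch_from_ral(event_id: int) -> bytes:
        """Stand-in for pulling one detailed event record from RAL over
        the WAN (in reality via Objectivity/AMS)."""
        return b"\x00" * 300_000  # pretend ~300 kB of detailed data

    def analyse(tags: Iterable[Tag],
                selection: Callable[[Tag], bool]) -> Iterator[Tuple[int, int]]:
        # Model I: scan the local TAG data, pull only selected events from RAL.
        for tag in tags:
            if selection(tag):
                event = fetch_from_ral(tag.event_id)
                yield tag.event_id, len(event)

    # Example: analyse high-multiplicity events without a local full copy.
    tags = [Tag(i, n) for i, n in enumerate([3, 12, 7, 15])]
    for event_id, nbytes in analyse(tags, lambda t: t.n_tracks > 10):
        print(event_id, nbytes)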

New Data Model(?)
BaBar currently has performance limitations
Working on a non-Objectivity solution
NOTMA DST using ROOT I/O: 10^9 events = 280 GB
Likely that universities will want all events locally
Detailed events stored at RAL in Objectivity
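
A quick check of the numbers above (decimal units assumed; the per-event figure is derived, not quoted):

    # 1e9 events in a ~280 GB ROOT-format DST is roughly 0.3 kB per event,
    # small enough that a 0.5-1 TB university server could hold the whole
    # event sample locally, as the slide suggests.
    events = 1e9
    dst_gb = 280
    print(f"{dst_gb * 1e6 / events:.2f} kB/event")   # ~0.28 kB/event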

Conclusion
RAL is moving in the opposite direction from the HEPCCC proposal: more flavours of Unix on a new hardware platform
BaBar will be using Linux soon for simulation, though
A bigger scale of disk data handling for one experiment
Data synchronisation over the WAN
–(SLAC - RAL - UK universities)