SLAC Update
Les Cottrell & Richard Mount
ESnet International Meeting, Kyoto, July 24-25, 2000

BaBar
Very successful turn-on!
–Already reached design luminosity
Computing:
–1,000,000 lines of C++
–Object-oriented database: Objectivity
–Online data rate 10-20 MBytes/sec
–Regional computing centers: IN2P3 (France/Lyon), RAL (UK), INFN (Italy/Rome)

SLAC-BaBar Data Analysis System
50/400 simultaneous/total physicists, 300 TBytes per year

BaBar Offline Computing at SLAC: Costs Other than Personnel
–Does not include "per physicist" costs such as desktop support, help desk, telephone, and the general site network
–Does not include tapes

BaBar's Need for the Grid
Early 2000:
–The Grid? "Don't bother me now, I'm working on the CP-violation result for Osaka"
–Data transfer? "Something the French BaBarians do to justify the existence of their computer center"
More recently:
–"The Grid? Yes, we need it and it has to work"

BaBar's Need for the Grid
What happened?
–PEP-II/BaBar has reached design (integrated) luminosity (131 TB in the database by end of June 2000)
–We all believe the plans to increase luminosity by a factor of 8 by 2003
–BaBar data will grow faster than "Moore's Law"
Options:
–Pour French, British, and Italian money into the SLAC Computer Center
–Make the Grid work, while simultaneously improving storage management
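To make the "faster than Moore's Law" point concrete, here is a minimal back-of-the-envelope sketch in Python. The factor-of-8 luminosity increase by 2003 comes from the slide; the 18-month "Moore's Law" doubling time is an assumed round number, so this is an illustration rather than a projection from the talk.

```python
# Back-of-the-envelope comparison (my arithmetic). The factor-of-8 increase
# by 2003 is from the slide; the 18-month capacity doubling is an assumption.
import math

rate_factor = 8.0    # planned data-taking-rate increase, mid-2000 -> 2003
ramp_years  = 3.0
moore_years = 1.5    # assumed capacity-per-dollar doubling time

rate_doubling = ramp_years / math.log2(rate_factor)   # = 1.0 year
print(f"BaBar data rate doubles every ~{rate_doubling:.1f} yr; "
      f"storage/CPU per dollar doubles every ~{moore_years:.1f} yr")

# After 3 years the data rate is 8x, while a fixed budget buys only ~4x the
# capacity -- the gap that distributed (Grid) resources are meant to close.
print(f"rate growth: {rate_factor:.0f}x, "
      f"capacity growth: {2 ** (ramp_years / moore_years):.0f}x")
```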

SSRL
Collaboration with SDSC (UC San Diego)
–130 MBytes/minute/beamline at full operation
–374 GBytes per 12-hour day for 4 beamlines
–~90 TBytes per year (8 months of operation)
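As a quick sanity check, the three SSRL rates above are mutually consistent. The snippet below reproduces them in decimal units; the reading of "8 months of operation" as roughly 240 operating days is my assumption.

```python
# Consistency check of the SSRL data rates quoted above (decimal units).
mb_per_min_per_beamline = 130
hours_per_day = 12
beamlines = 4
operating_days = 8 * 30          # assumed ~240 operating days per year

gb_per_day = mb_per_min_per_beamline * 60 * hours_per_day * beamlines / 1e3
tb_per_year = gb_per_day * operating_days / 1e3
print(f"~{gb_per_day:.0f} GB/day, ~{tb_per_year:.0f} TB/year")
# -> ~374 GB/day and ~90 TB/year, matching the figures above.
```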

Connectivity: NTON
[Diagram: SLAC NTON test configuration – NTON ATM cloud, Nortel MUX, Cisco GSR; HP Exemplar on 8 x OC-3 (155M) ATM; 2 x OC-12 (622M) ATM; Catalyst 6509 with Gigabit Ethernet to Sun E450/E420 servers, a dual PIII 533 Linux host, and a dual PIII 833 Windows NT host; NSTOR Fibre Channel disk array]
Demonstrated 57 MBytes/second disk-to-application between SLAC & LBNL a year ago.
Now aiming for 100 MBytes/sec between SLAC & Caltech.
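Application-level throughput figures like the 57 MBytes/s above are typically measured with an end-to-end transfer between two hosts. The following generic TCP throughput probe is a minimal sketch for illustration only; it is not the tool used for the SLAC measurements, and the port number is arbitrary.

```python
# Minimal TCP throughput probe (illustrative only; not the actual SLAC tool).
# Usage: "python probe.py server" on one host,
#        "python probe.py client <server-host>" on the other.
import socket
import sys
import time

PORT = 5009          # arbitrary test port (assumption)
CHUNK = 1 << 20      # 1 MiB per send/recv
DURATION = 10        # seconds the client transmits

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.time() - start
            print(f"{total / 1e6:.0f} MB in {elapsed:.1f} s "
                  f"= {total / elapsed / 1e6:.1f} MBytes/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        deadline = time.time() + DURATION
        while time.time() < deadline:
            s.sendall(payload)

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```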

Connectivity: Stanford U / Abilene
OC3 for the last 9 months
Will upgrade to OC12 soon
Currently only used between SLAC and the Stanford and UC campuses
–Important for SSRL
–Proposal to use it for QoS tests between SLAC and Daresbury Lab in the UK

Connectivity: ESnet (1 of 2)
ESnet: 43 Mbps, being upgraded to 155 Mbps
–Upgrade requested 18 months ago, approved 15 months ago; hope it happens soon...
–Link is now saturated for long periods

Connectivity: ESnet (2 of 2)
Heavy use to transfer data from SLAC to IN2P3 (17-26 Mbps continuous, 100 GBytes/day; need TBytes/day)
No longer send tapes (latency drops from weeks to hours, there is less jitter, and labor is reduced, especially for handling errors)
In the past, the weakness of international links protected ESnet from significant traffic
HEP can now saturate links between major regional centers
Other major BaBar centers include: Oxford (RAL), Rome (INFN), Caltech, Colorado, LBNL, LLNL
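A short arithmetic note on why the jump from 100 GBytes/day to TBytes/day matters for link sizing: averaged over a full day and ignoring protocol overhead, 1 TByte/day already corresponds to roughly 93 Mbit/s of sustained traffic, i.e. most of the planned 155 Mbps ESnet capacity. The calculation below is my own arithmetic, not a figure from the slides.

```python
# Sustained bandwidth needed to move a given volume per day
# (decimal units, averaged over 24 h, ignoring protocol overhead).
def sustained_mbps(tbytes_per_day):
    bits_per_day = tbytes_per_day * 1e12 * 8
    return bits_per_day / 86_400 / 1e6

for tb in (0.1, 1.0, 2.0):
    print(f"{tb:4.1f} TBytes/day -> ~{sustained_mbps(tb):5.1f} Mbit/s sustained")
# 0.1 TB/day ~ 9 Mbit/s, 1 TB/day ~ 93 Mbit/s, 2 TB/day ~ 185 Mbit/s
```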

Requirements
Higher-speed end-to-end links
Site-to-Site Replication Service
–100 MBytes/s goal possible through NTON (SLAC and Caltech are working on this)
Multi-site Cached File Access System
–Will use OC12, OC3, even T3 as available (even 20 Mbit/s international links)
–Need a "Bulk Transfer" service:
  Latency unimportant; TBytes/day throughput important (need a prioritized service to achieve this on international links)
  Coexistence with other network users important (this is the main PPDG need for differentiated services on ESnet)
–Can run in the background; don't want onerous scheduling
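One way a bulk-transfer application can ask for this kind of "background, coexisting" service is to mark its traffic with a low-priority DiffServ code point and let a DiffServ-capable network schedule it into spare capacity. The sketch below (Python, Linux-style socket options) is purely illustrative and is not the PPDG or ESnet mechanism; the CS1 "scavenger" code point, the host name, and the port are assumptions.

```python
# Illustrative only: open a TCP connection whose packets are marked as
# low-priority ("scavenger"-style) traffic so a DiffServ-capable network
# can let it use spare capacity without displacing other users.
# The CS1 code point and the endpoint below are assumptions.
import socket

CS1_DSCP = 8                  # class selector 1, commonly used for scavenger
TOS_BYTE = CS1_DSCP << 2      # DSCP sits in the upper 6 bits of the TOS byte

def open_background_transfer(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # IP_TOS is available on Linux; other platforms may differ.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
    s.connect((host, port))
    return s

# Hypothetical usage:
# sock = open_background_transfer("datamover.example.org", 5010)
# sock.sendall(open("bulk_file.dat", "rb").read())
```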