TRIUMF SITE REPORT – Corrie Kost
HEPiX, SLAC, October 10-14, 2005
Update since HEPiX Spring 2005.



Google Mini comes to TRIUMF
- $2995 US with 1 year of support
- Indexes up to 100,000 docs, 220 different file formats
- Two 10/100 Ethernet ports: 1st for normal operation, 2nd for setup using a cross-over cable
- 120 GB Seagate drive, 2 GB memory
- Maintenance via special Google dial-up modem
- Read a complete in-depth review at


The TRIUMF-CERN 1GbE Lightpath(s)
(Diagram: lightpath from TRIUMF via BCNET, CANARIE and SURFnet to CERN)
- 1st GbE circuit established April 18th 2005
- 2nd GbE circuit established July 19th 2005
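When bringing up circuits like these, a quick memory-to-memory throughput check is a common first test. Below is a minimal illustrative sketch of such a check in Python; in practice a dedicated tool such as iperf would normally be used, and the port, duration and buffer size here are arbitrary placeholder values, not settings from the actual TRIUMF-CERN tests.

```python
#!/usr/bin/env python3
# Minimal memory-to-memory TCP throughput check (illustration only).
# Run with no arguments on the receiving host, or with the receiver's
# hostname as the single argument on the sending host.
import socket
import sys
import time

PORT = 5001              # placeholder port
DURATION = 10            # seconds to transmit
BUF = b"\0" * (1 << 20)  # 1 MiB send buffer

def serve():
    with socket.create_server(("", PORT)) as srv:
        conn, peer = srv.accept()
        total, t0 = 0, time.time()
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            total += len(data)
        rate = total * 8 / (time.time() - t0) / 1e6
        print(f"received {total/1e6:.0f} MB from {peer[0]} at {rate:.0f} Mbit/s")

def blast(host):
    t0 = time.time()
    total = 0
    with socket.create_connection((host, PORT)) as conn:
        while time.time() - t0 < DURATION:
            conn.sendall(BUF)
            total += len(BUF)
    rate = total * 8 / (time.time() - t0) / 1e6
    print(f"sent {total/1e6:.0f} MB at {rate:.0f} Mbit/s")

if __name__ == "__main__":
    serve() if len(sys.argv) == 1 else blast(sys.argv[1])
```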

ATLAS Service Challenge
Servers
- 3 EM64T systems, each with:
  - 2 GB memory
  - hardware RAID: 3ware 9xxx SATA RAID controller
  - Seagate Barracuda 250 GB drives in hardware RAID
- 1 dual Opteron 246 server with:
  - 2 GB memory
  - 3ware 9xxx SATA RAID controller
  - WD Caviar SE 250 GB drives in hardware RAID
- SLX IBM tape libraries (currently each with only 1 SDLT 320 tape drive)
- 1 borrowed EM64T system used temporarily as an FTS server, with:
  - 1 GB memory
  - 2 SATA 80 GB drives for the OS and for Oracle's needs
Storage
- 5.5+ TB disk
- 8+ TB tape
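A hedged sketch of how hardware RAID units like these 3ware controllers can be watched from cron: it shells out to 3ware's tw_cli utility and flags any unit whose status is not OK. The controller ID c0 and the column positions parsed below are assumptions; the exact output layout depends on the tw_cli version, so treat this as an illustration rather than a drop-in monitor.

```python
#!/usr/bin/env python3
# Sketch: poll a 3ware 9xxx controller and report degraded RAID units.
# Assumes the vendor's tw_cli utility is installed and the controller is c0.
import subprocess
import sys

def check_units(controller="c0"):
    out = subprocess.run(["tw_cli", "info", controller],
                         capture_output=True, text=True, check=True).stdout
    bad = []
    for line in out.splitlines():
        fields = line.split()
        # Unit lines start with "u0", "u1", ...; status is usually column 3.
        if fields and fields[0].startswith("u"):
            status = fields[2] if len(fields) > 2 else "UNKNOWN"
            if status != "OK":
                bad.append((fields[0], status))
    return bad

if __name__ == "__main__":
    problems = check_units()
    for unit, status in problems:
        print(f"WARNING: unit {unit} is {status}")
    sys.exit(1 if problems else 0)
```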


10 GbE Lightpath to CERN
(Diagram: the TRIUMF-CERN path including the Atlantic crossing, with most segments checked off and one still outstanding)

10 GbE Lightpath to CERN
- Permanent 10GbE TRIUMF-CERN lightpath ~ year-end 2005
- Foundry BigIron RX-4's at TRIUMF & BCnet


TRIUMF WAN CWDM
(Diagram: a single pair of fiber, 22 km, between TRIUMF and BCNET carries four 1GbE channels through MRV CWDM 4-port optical muxes at 1550, 1570, 1590 and 1610 nm, with the potential to add 2 more 1GbE channels; other elements shown include a 10GbE Foundry switch (CERN / Ottawa), a Passport 8600 (ORAN, WESTGRID), 2x CERN SFP and 2x GbE TDM)
PROBLEM: the MRV mux needs 1550 +/- 3 nm, but the Foundry optics are 1550 +/- 15 nm

RAID5: Puzzling I/O Results
- Setup: 8 SATA disks on each of a pair of RAID5 RocketRAID 1820A controllers
- Repeated reads of the same set of files (at 600 MB/sec) – one or more files will "degrade", typically after the set of 16 8 GB files has been read 1000 times
- Positive: read ~2 PB over 50 days, averaging about 600 MB/sec
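For readers who want to reproduce this kind of soak test, here is a minimal sketch in Python that re-reads a fixed set of large files, reports the aggregate rate for each pass, and compares per-file checksums against the first pass so that a file that "degrades" is flagged. The test directory, file pattern and pass count are placeholders, not the actual TRIUMF setup.

```python
#!/usr/bin/env python3
# Sketch: repeatedly read a fixed set of large files, report throughput per
# pass, and flag files whose checksum differs from the first pass.
import glob
import hashlib
import time

FILES = sorted(glob.glob("/raid5/testset/file*.dat"))  # hypothetical test files
CHUNK = 64 * 1024 * 1024                               # read in 64 MB chunks

def read_and_hash(path):
    h = hashlib.md5()
    nbytes = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            h.update(buf)
            nbytes += len(buf)
    return h.hexdigest(), nbytes

reference = {}  # checksum of each file from the first pass
for iteration in range(1000):
    t0 = time.time()
    total = 0
    for path in FILES:
        digest, nbytes = read_and_hash(path)
        total += nbytes
        if path not in reference:
            reference[path] = digest
        elif digest != reference[path]:
            print(f"pass {iteration}: checksum changed for {path}")
    rate = total / (time.time() - t0) / 1e6
    print(f"pass {iteration}: {total/1e9:.1f} GB read at {rate:.0f} MB/s")
```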

Unix Backups at TRIUMF
Amanda system:
– Dual Opteron, 2 GB memory
– 16 x 400 GB WD disks, ~6 TB (1.5 TB on the present system, ~10-day cycle)
– 2 LSI MegaRAID 8-disk controllers
Disk-based, ~1 month of backups:
– At least 2 full backups with daily incrementals
26-slot Overland DLT tape library:
– SDLT 600 drive, 300 GB native capacity per tape
150 Linux machines (users: home dir; servers: full)
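As a small illustration of how such a setup can be watched from cron, the sketch below wraps Amanda's amcheck pre-flight check and stays quiet unless it finds a problem, letting cron's own mail deliver the report. The configuration name "daily" is a placeholder; the real Amanda configuration name and the exact wording of amcheck's output will differ.

```python
#!/usr/bin/env python3
# Sketch: cron wrapper around Amanda's amcheck pre-flight check.
# Silent when all is well; prints the report (which cron then mails)
# when amcheck exits non-zero or mentions an ERROR.
import subprocess
import sys

CONFIG = "daily"  # hypothetical Amanda configuration name

result = subprocess.run(["amcheck", CONFIG], capture_output=True, text=True)
report = (result.stdout + result.stderr).strip()

if result.returncode != 0 or "ERROR" in report:
    print(report)
    sys.exit(1)
```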

Cheap Hot-Swap Backup
- Promise SuperSwap 1100 enclosures
- Four 400 GB Seagate SATA drives
- Promise FastTrak S150 SX4 SATA controller, RAID 5
- Linux Red Hat 9
- A disk can be removed and replaced at any time; rebuilds in the background
- Used to keep multiple live (daily) rsync (via dirvish) copies of critical servers (for ~1 month)
- See
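The space efficiency of such daily copies comes from hard-linking unchanged files between snapshots, which is what dirvish does on top of rsync's --link-dest option. The following is only a sketch of that idea, not the dirvish configuration actually used at TRIUMF; the source host, vault path and retention period are placeholders.

```python
#!/usr/bin/env python3
# Sketch of dirvish-style daily snapshots: each day gets its own directory,
# with unchanged files hard-linked against the previous snapshot so a month
# of dailies costs little extra space.
import datetime
import pathlib
import shutil
import subprocess

SOURCE = "root@critical-server:/etc/"             # hypothetical source
VAULT = pathlib.Path("/backup/critical-server")   # hypothetical vault
KEEP_DAYS = 30

VAULT.mkdir(parents=True, exist_ok=True)
today = VAULT / datetime.date.today().isoformat()
snapshots = sorted(p for p in VAULT.iterdir() if p.is_dir())

cmd = ["rsync", "-a", "--delete"]
if snapshots:
    # Hard-link unchanged files against the most recent snapshot.
    cmd.append(f"--link-dest={snapshots[-1]}")
cmd += [SOURCE, str(today)]
subprocess.run(cmd, check=True)

# Drop snapshots older than the retention window.
cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)
for snap in snapshots:
    try:
        if datetime.date.fromisoformat(snap.name) < cutoff:
            shutil.rmtree(snap)
    except ValueError:
        pass  # ignore directories that are not date-named
```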

VoIP coming to TRIUMF

TRIUMF Ticketing System (Request Tracker)
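Request Tracker can create tickets from incoming email via its mail gateway, so the simplest programmatic client is an ordinary mail message to a queue's address. A minimal sketch follows; the queue address, sender and local mail relay are assumptions, not the actual TRIUMF RT setup.

```python
#!/usr/bin/env python3
# Sketch: open an RT ticket by mailing the queue's address through RT's
# mail gateway.  Addresses and the relay host are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.triumf.ca"      # hypothetical requestor
msg["To"] = "helpdesk@example.triumf.ca"    # hypothetical RT queue address
msg["Subject"] = "Printer in room 123 is jammed"
msg.set_content("Paper jam light is on; please take a look.")

with smtplib.SMTP("localhost") as smtp:     # assumes a local MTA relay
    smtp.send_message(msg)
```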



Conclusions / Observations
- Site services (Web, Mail, Batch, Windows) all much more stable – new hardware, more memory (typically 4-8 GB) in servers
- Quad Opteron SUN I/O – using external SATA – still limited to below 1 GB/sec
- Read 16 8 GB files repeatedly – averaging over 600 MB/sec for ~2 PB
- Site "Backup" services still problematic
  - tape media capacity (will be outgrown in 2 years)
  - reliability (is SDLT robust?)
- Permanent 10GbE TRIUMF-CERN service by year-end
- ATLAS Service Challenge targets being met for TRIUMF as a Tier-1
- Started using Plone as the content management system for the TRIUMF web server
- Moving some phones to voice-over-IP
- Scientific Linux (3 & 4) still the preferred Linux OS at TRIUMF
- Moving away from distributed printing to print/scan-to-email/copy stations

TRIUMF Servers – May/2005
(Diagram of server roles: STORM1, STORM2, SUN1, Foundry switch, LCG storage, worker nodes, GPS time, MSR, web, name, documents, CondorG, web share, mail, file, IBM cluster, Fedora / SL mirror, IBM / share storage, Amanda backup via disks)

TRIUMF Servers – October/2005