IDE disk servers at CERN
Helge Meinhard / CERN-IT
CERN OpenLab workshop, 17 March 2003

Introduction
- HEP computing in the past: mostly reading from, processing, and writing to (tape) files sequentially (see the sketch below)
- Mainframe era (until ~1995): single machine with CPUs, tape drives, little disk space
- In response to the scaling problem: development of the SHIFT architecture (early 1990s)
  - Scalable farm built out of 'commodity' components
  - RISC CPUs (PowerPC, MIPS, Alpha, PA-RISC, Sparc)
  - SCSI disks
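A minimal sketch (not CERN code) of the sequential read-process-write pattern referred to above; the file names, record size and the process() step are placeholders.

```python
def process(record: bytes) -> bytes:
    """Placeholder for the per-event physics processing."""
    return record

# Typical past HEP production job: read one staged (tape) file front to
# back, transform each fixed-size record, append the result to an output file.
RECORD_SIZE = 32 * 1024   # hypothetical record size

with open("input.dat", "rb") as src, open("output.dat", "wb") as dst:
    while True:
        record = src.read(RECORD_SIZE)
        if not record:
            break
        dst.write(process(record))
```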

SHIFT architecture
[Diagram: disk servers, tape servers, batch servers, interactive servers and combined batch-and-disk SMPs, interconnected by a backbone network (FDDI, HiPPI, Myrinet, Ethernet) and an Ethernet client network]

PC batch nodes
- 1995: First studies at CERN of PCs as batch nodes (Windows NT)
- 1997 onwards: Rapidly growing interest in Linux (on IA32 only)
- 1998/99: First production farms for interactive and batch services running Linux on PC hardware at CERN

PC disk servers
- 1997/98: Prototypes with SCSI disks
- 1998/99: Prototypes with EIDE disks
  - Different IDE adapters
  - Not RAIDed
- 1999/2000: First 'Jumbo' servers (20 x 75 GB) put into production
- 2001: First rack-mounted systems
- 2002: 97 new servers (54 TB usable)
- 2003: 1.3 TB usable in one server at 13 kCHF (about 10 CHF per usable GB)
- Total usable capacity today: ~200 TB

1997: ~700 GB SCSI/Sparc

2000/2001: 750 GB PC/EIDE (1)

2000/2001: 750 GB PC/EIDE (2)

2002: 670 GB PC/EIDE (2 systems)

[Chart comparing SCSI/RISC and EIDE/PC: disks only vs. complete systems]

[Chart: gross vs. usable capacity]

Today’s servers: specifications
- 19” rackmount, IA32 processor(s), 1 GB memory, 2 x 80 GB system disks, Gigabit Ethernet (1000BaseT), redundant power supplies
- >500 GB usable space on data disks
- Hardware RAID offering redundancy
- Hot-swap disk trays
- Performance requirements, network to/from disk (see the benchmark sketch below):
  - 50 MB/s reading from 500 GB
  - 40 MB/s writing to 500 GB
- 5 years on-site warranty
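A minimal sketch of how such sequential-throughput requirements could be checked, assuming a Linux server with the data filesystem mounted at /data; this is an illustrative acceptance test, not the procedure actually used at CERN, and the test file should be much larger than the machine's RAM so the page cache does not dominate the read phase.

```python
#!/usr/bin/env python3
"""Sequential disk throughput check (illustrative sketch only)."""
import os
import sys
import time

PATH = sys.argv[1] if len(sys.argv) > 1 else "/data/throughput.test"  # assumed mount point
SIZE_MB = 4096           # amount of data to move; keep well above RAM size
BLOCK = 1024 * 1024      # 1 MB per I/O call
buf = b"\0" * BLOCK

# Write phase: stream SIZE_MB megabytes and force them out to the disks.
t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_rate = SIZE_MB / (time.time() - t0)

# Read phase: stream the same file back sequentially.
t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_rate = SIZE_MB / (time.time() - t0)

os.remove(PATH)
print(f"write: {write_rate:.1f} MB/s  (target: 40 MB/s)")
print(f"read:  {read_rate:.1f} MB/s  (target: 50 MB/s)")
```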

Lessons learnt
- Capacity is not everything; good performance also needs
  - CPU, memory and RAID cards
  - Good OS and application software
  - Network connectivity
  - A large number of spindles
- Firmware of RAID controllers and disks is critical
- Redundancy (RAID) is a must; so far the required performance has only been achievable with mirroring (RAID 1) (see the sketch below)
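A minimal sketch of the capacity/throughput trade-off behind the RAID 1 choice; the disk count, disk size and per-disk streaming rate are made-up example values, and the throughput figures are a crude first-order model (RAID 5 writes are penalised for parity updates), not measurements.

```python
def raid_tradeoff(n_disks: int, disk_gb: int, stream_mb_s: float):
    """Rough usable capacity and sequential-write estimate for RAID 1 vs RAID 5.

    First-order model only: mirroring (RAID 1) halves the usable space but
    lets each mirror pair absorb a full write stream; RAID 5 loses one disk
    to parity and its writes are slowed by parity updates (approximated
    here as a flat factor of 4).
    """
    raid1 = {
        "usable_gb": n_disks // 2 * disk_gb,
        "write_mb_s": n_disks // 2 * stream_mb_s,
    }
    raid5 = {
        "usable_gb": (n_disks - 1) * disk_gb,
        "write_mb_s": (n_disks - 1) * stream_mb_s / 4,
    }
    return raid1, raid5

# Hypothetical 2002-era EIDE configuration
r1, r5 = raid_tradeoff(n_disks=12, disk_gb=80, stream_mb_s=25.0)
print("RAID 1:", r1)   # less usable space, better write behaviour
print("RAID 5:", r5)   # more usable space, writes limited by parity updates
```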

Outlook
- Good price/performance has raised interest in other application domains at CERN:
  - AFS and MS DFS servers
  - Web servers, mail servers
  - Software servers (Linux installation)
  - Database servers (Oracle, Objectivity/DB)
- Access pattern of physics analysis likely to change
- Investigating different file systems (XFS), RAID 5 (in software), ...
- Architecture constantly being reviewed
  - Alternatives investigated: data disks scattered over a large number of batch nodes; SAN

Conclusion
- Architecture of the early 1990s still valid
  - May even carry us into the LHC era...
- Important improvements made
  - Price/performance
  - Reliability (RAID)
- Will review the architecture soon (2003)
  - New application areas
  - New access patterns for physics analysis