Presentation transcript:

Slide 1: CDF Farms Transition (3/16/2005, CSS/SCS/Farms and Clustered System)
- Old Production farm (fbs/dfarm/mysql)
- SAM-farm (Condor/CAF/SAM)

Slide 2: CDF Old Production Farm
- Headnode: cdffarm0, Dell 6650, 4-way Xeon HT, 4GB RAM, 0.5TB usable RAID5+0, SLF303
- 2 servers: fnpcc (FPS), dual-Xeon 2U, IDE disks, FL731; (cdffarm2), dual-Athlon 4U, SCSI
- 98 worker nodes: fncdf75-90, ..., of which 8 are readers and 8 are concatenators

Slide 3: Experimental SAM-Farm
- Condor/CAF headnode: fncdf171 (fncdf)
- 2 SAM stations: fcdfdata057 & 055 (cdf-farm1 & 2), Polywell, 2TB each, IDE RAID
- 29 Condor workers in FCC2: fncdf...
- New workers in GCC: fncdf... (testing SLF303)

Slide 4: SAM-Farm Functional Requirements
1. dCAF/Condor headnode(s)
2. Monitoring and web services
3. Many Condor worker nodes
4. SAM: station(s); storage (cache and store); concatenation

Slide 5: SAM-Farm Hardware Mapping
- cdffarm0 will replace fncdf171 as the headnode, and will also host monitoring and web services
- All worker nodes will be converted to Condor and SLF303, using Rocks (a rough conversion-audit sketch follows this slide)
- SAM stations need new machines, since fcdfdata055 and 057 will be out of warranty soon
- Add a 2nd headnode?
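Since the worker pool will be a mix of old and converted nodes for a while, something like the following could report which machines are already on SLF303 and running a Condor master. This is only a minimal sketch, assuming SSH access from the headnode and a recent Python; the host range, the /etc/redhat-release check, and the condor_master process name are illustrative assumptions, not details taken from the slide.

```python
#!/usr/bin/env python3
# Minimal conversion-audit sketch (illustrative, not from the slides):
# print each worker's OS release string and whether a condor_master is running.
import subprocess

def remote(host, command):
    """Run a command on a node via ssh; return its stdout, or None on any failure."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, command],
            capture_output=True, text=True, timeout=30,
        )
    except subprocess.TimeoutExpired:
        return None
    return result.stdout.strip() if result.returncode == 0 else None

def audit(hosts):
    for host in hosts:
        release = remote(host, "cat /etc/redhat-release") or "unreachable"
        condor = "yes" if remote(host, "pgrep -x condor_master") else "no"
        print(f"{host:10}  condor={condor:3}  {release}")

if __name__ == "__main__":
    # Example range only; the real node lists are truncated in the transcript.
    audit([f"fncdf{n}" for n in range(75, 91)])
```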

Slide 6: SAM-Farm SAM Station Requirements
- >4TB/day (~50MB/s avg.) in/out; network: multiple Gb ports/links (see the worked numbers below)
- >2TB usable RAID cache, thus ~4TB raw
- >2TB usable RAID store, ~4TB raw
- Sufficient concatenation power
- Single or multiple machines?
- 24x7 or 8x5 support?
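The ~50 MB/s figure is just the 4 TB/day requirement averaged over a day. The quick check below (decimal terabytes, sustained average, no peak headroom) also shows why a single gigabit link would be tight once both directions are counted; the 125 MB/s wire-speed number is a standard assumption, not from the slide.

```python
# Back-of-the-envelope check of the SAM station I/O requirement.
TB = 1e12                      # decimal terabyte, as disk vendors quote capacity
daily_volume = 4 * TB          # >4 TB/day in/out of the station
seconds_per_day = 24 * 3600

avg_rate = daily_volume / seconds_per_day          # bytes/s, sustained average
print(f"average rate: {avg_rate / 1e6:.1f} MB/s")  # ~46 MB/s, i.e. the slide's ~50 MB/s

# A gigabit link carries at most ~125 MB/s raw wire speed (less after protocol
# overhead). ~50 MB/s in plus ~50 MB/s out already uses about three quarters of
# one link with no room for peaks, motivating the "multi Gb ports/links" bullet.
gigabit = 125e6
print(f"fraction of one Gb link (in + out, wire speed): {2 * avg_rate / gigabit:.2f}")
```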

Slide 7: SAM Station Hardware Options
1. SCSI RAID: 4x Dell 2850 ($9k each), ~$36k total; dual-Xeon 3GHz, 4x 300GB SCSI (and mirrored system drives)
2. IDE RAID: 4x Koi ($5k each), ~$20k total; dual-Xeon 2.8GHz, 4x 250GB IDE (and mirrored system drives)
3. LSI or EMC: 2x 2TB (~$40k each), ~$70k total; 4 worker nodes with Fibre Channel interfaces
4. Panasas: 4 worker nodes + Panasas (5TB), ~$35k
** D0 will need similar hardware too..... **
(A rough cost-per-TB comparison follows below.)
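To put the four options on a common footing, the sketch below computes cost per raw terabyte using only the capacities and quoted totals from the slide; RAID overhead, system disks, and the fact that options 3 and 4 bundle worker nodes into the price are ignored, so the figures are indicative only.

```python
# Rough cost per raw TB for the four SAM station options.
# Capacities are raw data-disk totals; prices are the quoted totals from the slide.
# RAID overhead and bundled worker nodes are ignored, so this is only indicative.
options = [
    # (option, quoted total in $k, raw data capacity in TB)
    ("1. SCSI RAID: 4x Dell 2850", 36, 4 * 4 * 0.300),  # 4 boxes x 4 x 300 GB
    ("2. IDE RAID: 4x Koi",        20, 4 * 4 * 0.250),  # 4 boxes x 4 x 250 GB
    ("3. LSI or EMC: 2x 2TB",      70, 2 * 2.0),
    ("4. Panasas: 4 WNs + 5TB",    35, 5.0),
]

for name, cost_k, capacity_tb in options:
    print(f"{name:30} {capacity_tb:4.1f} TB raw   ${cost_k}k   ~${cost_k / capacity_tb:.1f}k per raw TB")
```

On raw capacity alone the IDE option comes out cheapest per terabyte; reliability, support level, and the bundled worker nodes are exactly what such a table leaves out.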