News from Alberto et al.: the fibers document has been separated from the rest of the computing resources (https://edms.cern.ch/document/1155044/1, https://edms.cern.ch/document/1158953/1).

News from Alberto et al.
- Fibers document separated from the rest of the computing resources
- Documents finalized (end of August)
- Power requirements: 100 kW total
- Racks:
  - Jura side: 800 mm depth (standard CERN racks)
  - Saleve side: 1000 mm depth
  - Dimensions of the racks' base plates are needed (for the supports)

Power
- Computing nodes: 9 kW/rack, 2 to 5 racks
- Storage nodes: 5 kW/rack, 3 racks
- Support: 1/2 rack, 4 kW
- DCS/DSS: 4 kW, 2 racks
- Network: 4 kW
- GTK: 3 kW/rack, 3 racks
- Cooling doors: 1 kW/door, 10 kW
- Total: 68 kW (including 50% safety margin: 95 kW); a rough tally of these line items is sketched below
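To make the arithmetic behind the total explicit, here is a minimal Python sketch (an illustration, not part of the original slides) that tabulates the line items above. It assumes the DCS/DSS entry means 4 kW per rack over 2 racks; with that reading and the lower end of the computing range (2 racks), the items sum to the quoted 68 kW.

```python
# Rough tally of the power budget quoted above (all values in kW).
# Assumption: DCS/DSS is read as 4 kW per rack for 2 racks (8 kW total).

def power_budget(computing_racks):
    items = {
        "computing nodes": 9.0 * computing_racks,   # 9 kW/rack
        "storage nodes":   5.0 * 3,                 # 5 kW/rack, 3 racks
        "support":         4.0,                     # 1/2 rack
        "DCS/DSS":         4.0 * 2,                 # assumed 4 kW/rack, 2 racks
        "network":         4.0,
        "GTK":             3.0 * 3,                 # 3 kW/rack, 3 racks
        "cooling doors":   1.0 * 10,                # 1 kW/door, 10 doors
    }
    total = sum(items.values())
    for name, kw in items.items():
        print(f"{name:16s} {kw:5.1f} kW")
    print(f"{'total':16s} {total:5.1f} kW")
    return total

power_budget(computing_racks=2)   # -> 68 kW, the total quoted on the slide
power_budget(computing_racks=5)   # -> 95 kW with the full computing complement
```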

Computing nodes
- How much total computing power for 2012? Specify AMD or Intel? How many processors? How many cores?
- Intel Westmere architecture (32 nm): Xeon 5600, 6 cores
  - X series, up to 3.33 GHz
  - E series, up to 2.66 GHz
  - L series, low consumption, up to 2.26 GHz
- CERN Openlab tested system: ... GHz, 16 cores, 2×6 GB RAM, 450 W power load (full load), 238 HEPSPEC06 (24 processes) = 20/core
- In one 9 kW rack: 20 systems, 4760 HEPSPEC06, approximately 1250 kSI2k (see the arithmetic sketch below)
- How much RAM? How much local disk space? How many 10 Gb and 1 Gb ports per machine? (How many switches?)
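The per-rack figures follow from the power budget and the benchmark result; here is a minimal Python sketch of that arithmetic, an illustration using the numbers quoted above, with the 2 to 5 rack range taken from the Power slide.

```python
# Per-rack capacity arithmetic based on the figures quoted above.

RACK_POWER_W = 9000       # 9 kW power budget per computing rack
SYSTEM_POWER_W = 450      # full-load power of the CERN Openlab tested system
SYSTEM_HEPSPEC06 = 238    # HEPSPEC06 result for that system

systems_per_rack = RACK_POWER_W // SYSTEM_POWER_W        # -> 20 systems
hepspec_per_rack = systems_per_rack * SYSTEM_HEPSPEC06   # -> 4760 HEPSPEC06

print(f"{systems_per_rack} systems/rack, {hepspec_per_rack} HEPSPEC06/rack")

# Scaling to the 2 to 5 computing racks foreseen on the Power slide:
for racks in (2, 5):
    print(f"{racks} racks -> {racks * hepspec_per_rack} HEPSPEC06")
```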

Storage nodes
- How much disk for 2012?
- Availability of CMS hardware (a rough capacity estimate is sketched below):
  - 12 disk arrays with 12 disks each and redundant Fibre Channel interfaces:
    - 120 WD Raptor 300 GB disks (bought at the beginning of the year)
    - 24 disks of 1 TB
  - 2× 32-port Fibre Channel switches
  - 10 Dell 2950 servers with redundant Fibre Channel cards
- Questions:
  - More technical details requested from our CMS colleagues
  - Cost? Maintenance?
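As a rough sizing aid, here is a minimal Python sketch of the raw capacity of the hardware listed above. The RAID-6 usable fraction (10 data disks out of 12 per array) is an assumption for illustration only; the actual array configuration is not stated on the slide.

```python
# Raw-capacity estimate for the CMS hardware listed above (values in TB).

raptor_tb = 120 * 0.3    # 120 x 300 GB WD Raptor disks
large_tb = 24 * 1.0      # 24 x 1 TB disks
raw_tb = raptor_tb + large_tb

# Hypothetical RAID-6 layout: 10 data disks out of 12 per array.
usable_tb = raw_tb * 10 / 12

print(f"raw: {raw_tb:.0f} TB, usable (RAID-6 assumption): {usable_tb:.0f} TB")
```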

Procurement
- Racks and cooling doors: difficult/not possible to join the CERN-IT tender
- Network equipment: handled by CERN-IT; installation, management and maintenance included
- Computing and storage nodes:
  - How to handle the purchase/acquisition? (CERN, Mainz? ...)
  - Time plan for purchase and delivery?

Manpower
- Who is going to do/follow/check the following, and when:
  - Installation
  - Commissioning
  - Operations
- More:
  - What about UPS system purchase/installation?
  - Remote monitoring of cooling doors (discussion ongoing with the DCS central team)