HEPiX Spring 2012 Summary SITE:

Monitoring at GRIF (Frédéric Schaer)
– Stack: Nagios + NRPE + PNP + rrdcached
– Vendor hardware monitoring sensors are read via Nagios checks

Nagios tricks
– Compression of check output (see the sketch below):
  – on the monitored host: nrpe_check output | bzip2 | uuencode
  – on the Nagios server: check_nrpe | uudecode | bunzip2
– RRD cache daemon (rrdcached)
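A rough illustration of the compression trick above, as a Python sketch of the two ends of the pipeline (base64 is used here as a stand-in for uuencode; the function names and wrapper layout are assumptions for illustration, not GRIF's actual scripts):

    import bz2, base64, subprocess, sys

    def remote_wrapper(plugin_cmd):
        # run the real Nagios plugin on the monitored host and capture its output
        result = subprocess.run(plugin_cmd, capture_output=True)
        # compress, then ASCII-armour so the (long) output survives NRPE's text transport
        packed = base64.b64encode(bz2.compress(result.stdout))
        sys.stdout.buffer.write(packed + b"\n")
        return result.returncode

    def nagios_side(packed):
        # on the Nagios server: undo the armouring, then the compression
        return bz2.decompress(base64.b64decode(packed)).decode()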

CERN Infrastructure Projects (Wayne Salter)
– New computing rooms
– Remote hosting:
  – reliable hosting of CERN equipment in a separate area with controlled access
  – including all infrastructure support and maintenance
– Wigner Data Centre in Budapest: new facility due to be ready at the end of the year
  – 725 m² in an existing building, but new infrastructure
  – 3 blocks of 275 m² each, with six rows of 21 racks per block
  – average power density of 10 kW and A+B feeds to all racks
  – expected to go into full production in January 2014

Western Digital
– announced on March 8th, 2012 that it had completed the acquisition of Hitachi Global Storage Technologies
– SAS drives

CHroot OS, CHOS (L. Pezzaglia, NERSC)
– If we run one job per core on a 100-node cluster with 24 cores per node, we will have 2400 VMs to manage
– Each VM mounts and unmounts parallel filesystems
– Each VM will be joining and leaving shared services with each reboot
– Shared services (including filesystems) must maintain state for all these VMs
– CHOS fulfills most of the use cases for virtualization in HPC with minimal administrative overhead and negligible performance impact
– Users do not interact directly with the "base" OS; CHOS provides a seamless user experience
– Users manipulate only one file ($HOME/.chos), and the desired environment is automatically activated for all interactive and batch work (a sketch of the idea follows below)
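A minimal sketch of the single-file idea described above, assuming $HOME/.chos simply names the desired environment; the fallback value and the /chos/<name> activation path are illustrative, not the actual NERSC implementation:

    import os

    def selected_environment(home=None):
        # read the one file users manipulate: $HOME/.chos
        home = home or os.path.expanduser("~")
        try:
            with open(os.path.join(home, ".chos")) as f:
                name = f.read().strip()   # e.g. "sl6"
        except FileNotFoundError:
            name = "default"              # fall back to the base OS view
        return name

    # a login or batch hook would then activate e.g. /chos/<name> for the session
    print(selected_environment())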

IPv6 (David Kelsey, F. Prelz)
– Already found a few problems: OpenAFS, dCache, UberFTP, FTS & globus_url_copy
– Large sites (e.g. CERN and DESY) wish to manage the allocation of addresses and do not like autoconfiguration (SLAAC)
– World IPv6 Launch Day, 6 June 2012: "The Future is Forever" – permanently enable IPv6 by 6th June 2012

Data Storage at CERN (Jakub T. Moscicki) – Now:

Data Storage at CERN (Jakub T. Moscicki) – Evolution

CERN EOS
EOS comes with features required for a very large analysis facility:
– integrated with a protocol popular in the experiments (xrootd)
– fast metadata access (10x compared to Castor)
– data replicated at the file level across independent storage units (JBODs)
– lost replicas are automatically recreated by the system, with no loss of service for the client
– simplified operations: a disk or machine can be powered off at any time without loss of service (service availability)
– known issue: in-memory metadata
– EOS is freely available (GPL) but not packaged/supported off-the-shelf by CERN

EOS: Future
– m+n block replication: blocks stored on independent storage units (disks/servers)
– different algorithms possible: double parity, triple parity, LDPC, Reed-Solomon
– simple replication: 3 copies, 200% overhead, 3x streaming performance
– 10+3 block replication: can lose any 3 blocks, the remaining 10 are enough to reconstruct, 30% storage overhead (see the arithmetic sketch below)
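The overhead figures quoted above follow directly from the block counts; a small Python sketch of the arithmetic (the helper function is illustrative, the numbers are from the slide):

    def storage_overhead(data_blocks, parity_blocks):
        # extra raw storage as a fraction of the user data
        return parity_blocks / data_blocks

    print(storage_overhead(1, 2))    # plain 3-way replication: 2.0 -> 200% overhead
    print(storage_overhead(10, 3))   # 10+3 block code: 0.3 -> 30% overhead, survives loss of any 3 blocks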

Hardware evaluation 2012 (Jiří Horky)
Deduplication test on a Fujitsu Eternus CS800 S2:
– all zeros: dedup ratio 1:1045
– /dev/urandom, ATLAS data: 1:1.07
– backups: 1:2.8
– snapshots of VMs: 1:11.7
How many disks are needed for 64-core worker nodes?
– 3-4 SAS 2.5" 300 GB 10k RPM drives in RAID0

Cyber security eakers_2012.asp