Infrastructure availability and Hardware changes
Slides prepared by Niko Neufeld, presented by Rainer Schwemmer for the Online administrators

Hardware changes
- Hardware here means essentially PCs, storage and network (for the ECS interface hardware, SPECS and CAN, see Clara's talk)
- In principle, every PC older than 6 years will be thrown out
- On PCs between 3 and 5 years old, preemptive exchanges will be done (disks, memory)
- Old storage systems will be decommissioned
- Network devices will be kept
(a sketch of this age-based policy follows below)
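As a rough illustration of the age-based replacement policy above, here is a minimal Python sketch; the host names, purchase dates and the exact cut-off handling are assumptions for illustration, not the real inventory.

```python
from datetime import date

# Hypothetical inventory: host name -> purchase date (not the real asset list).
INVENTORY = {
    "ctrlpc01": date(2006, 5, 1),
    "ctrlpc02": date(2009, 11, 1),
    "hlta07": date(2011, 3, 1),
}

def age_in_years(purchased: date, today: date = date(2013, 1, 1)) -> float:
    """Approximate machine age in years."""
    return (today - purchased).days / 365.25

def ls1_action(purchased: date) -> str:
    """Age-based policy from the slide: retire PCs older than 6 years,
    preemptively exchange disks/memory on PCs between 3 and 5 years old."""
    age = age_in_years(purchased)
    if age > 6:
        return "decommission"
    if 3 <= age <= 5:
        return "preemptive exchange (disks, memory)"
    return "keep"

for host, bought in sorted(INVENTORY.items()):
    print(f"{host}: {ls1_action(bought)}")
```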

Operating system changes
Why?
- New hardware is not supported by the old OS
- Newer versions of application code do not support old OS versions (WinCC, etc.)
What do we run today?
- Linux: SLC4 (CreditCard PCs, some SPECS), SLC5 (farm, controls PCs, plus, hist, etc.), SLC6 ("hidden" services: DNS, admin servers, etc.)
- Windows: XP (consoles, some special cases: Velo Iseg, Rasnik, RICH gas monitoring, etc.), 2003 SP2 (everything else)

OS future
Linux:
- SLC5 32-bit: only for the CCPCs
- SLC6 for all control PCs
- SLC6 for gateways and hidden services
- SLC5 64-bit for the farm, until offline moves to SLC6
Windows:
- Ideally: Windows 2008 SP2 only
- Only under duress: Windows 7
- For testing: Windows 2012
(the target mapping is sketched below)
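The target mapping could be kept in a small table for tracking the migration; the sketch below simply restates the plan from this slide, and the role labels (ccpc, control-pc, ...) are illustrative, not official host classes.

```python
# Target OS per class of machine after LS1, restating the plan above.
LINUX_TARGETS = {
    "ccpc": "SLC5 32-bit",              # CreditCard PCs stay on SLC5
    "control-pc": "SLC6",
    "gateway": "SLC6",
    "hidden-service": "SLC6",           # DNS, admin servers, ...
    "farm-node": "SLC5 64-bit",         # until offline moves to SLC6
}

WINDOWS_TARGETS = {
    "windows-default": "Windows 2008 SP2",   # the ideal case
    "windows-fallback": "Windows 7",         # only under duress
    "windows-test": "Windows 2012",          # for testing
}

def target_os(role: str) -> str:
    """Look up the planned post-LS1 OS for a machine role."""
    if role in LINUX_TARGETS:
        return LINUX_TARGETS[role]
    if role in WINDOWS_TARGETS:
        return WINDOWS_TARGETS[role]
    raise KeyError(f"unknown role: {role}")

if __name__ == "__main__":
    for r in ("ccpc", "control-pc", "farm-node", "windows-default"):
        print(f"{r:16s} -> {target_os(r)}")
```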

Hardware changes
Baseline:
- All control PCs will be replaced by virtual machines
- The farm file-server (NFS) and PVSS/WinCC functions will be split (irrelevant for users); old farm blades (hlt[A-E]07-11) will be refurbished for this
Backup plan:
- Run the control PCs on refurbished farm servers

Hardware changes (2)
- We would like to get rid of most "corner" cases and limit the number of different hardware types. If you have a special need (old PCI card, USB device, etc.), please contact us about driver upgrades.
- After LS1 there will be no XP and no Linux below SLC6 (except on the CCPCs), so the hardware must be able to run these OS versions; this includes semi-private consoles.
- New farm nodes will be bought and installed during 2014, to be ready for LHC startup. (As usual this is a lot of work, but it has only a minor influence on day-to-day operations of the Online system.)

Core Online infrastructure work
- Change of the main storage system (DDN 9900 to DDN SFA10K): requires physically touching every disk, recreation of all disk-sets and recreation of all file-systems
- Upgrade of the SX controls network (improved redundancy)
- Change of the electrical distribution to a homogeneous, battery-backed, dual-feed power distribution
- Recabling of all servers in the SX server room
- Exchange of some old racks
- Redundancy / emergency procedure tests

Planning
- The Online system will be shut down from 25/02/13 to 15/03/13
- Web services (logbook, wiki, rundb) will be kept up
- No user logins, no access to data or WinCC systems ("no" here really means no)
- Detailed planning for these 3 weeks is in preparation (it will be discussed internally and agreed after the Christmas break)

Planning during LS1
- The system will be kept up and running
- The Manager on Duty will be at P8 Monday to Friday as usual and will answer direct requests and tickets
- The farm will be kept up and running (in particular for OnlineDirac)
- No interventions outside working hours (except for very serious incidents: SAN or network failure, loss of outbound connectivity), no piquet
- Two farms will be reserved for tests (SLC6 migration preparation, etc.)
- The "entire" farm will be needed for tests, upgrades and re-organisations for a few (4-5) one-week periods during 2013; the exact planning will be synced with Operations and Offline needs

More changes
- The password policy will be brought in line with the CERN policy:
  - Minimum complexity
  - Password history
  - Maximum validity of 1 year
- Accounts will be cleaned up:
  - Inactive accounts will be blocked, but the data will be kept
  - The data can be made available to the user or the respective project leader at any time
- Gateway machines (Windows and Linux) will be upgraded to the latest OS versions
(an illustrative sketch of these checks follows below)
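A hedged sketch of what such checks could look like; the one-year maximum validity and the blocking of inactive accounts follow the slide, while the complexity rule, the 180-day inactivity threshold and the record format are assumptions for illustration.

```python
import re
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=365)   # "maximum validity 1 year"
INACTIVITY_LIMIT = timedelta(days=180)   # assumed threshold for "inactive"

def password_complex_enough(password: str) -> bool:
    """Assumed minimum-complexity rule: length >= 8 and at least
    three of the four character classes (similar in spirit to the CERN policy)."""
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    hits = sum(bool(re.search(c, password)) for c in classes)
    return len(password) >= 8 and hits >= 3

def password_expired(last_changed: datetime, now: datetime) -> bool:
    """True if the password is older than the maximum validity."""
    return now - last_changed > MAX_PASSWORD_AGE

def should_block(last_login: datetime, now: datetime) -> bool:
    """Inactive accounts are blocked (their data is kept, not deleted)."""
    return now - last_login > INACTIVITY_LIMIT

if __name__ == "__main__":
    now = datetime(2013, 1, 15)
    print(password_complex_enough("OnlineLS1!"))        # True
    print(password_expired(datetime(2011, 6, 1), now))  # True
    print(should_block(datetime(2012, 12, 1), now))     # False
```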

Transparent changes
These do not affect users directly; they will be done in the background:
- Registration of all machines in LanDB (needed for easier sharing of IT resources: licenses, etc.)
- Pilot project to use CERN accounts instead of Online accounts (easier user management); this is really a test, not a commitment

External changes
Independently of the Online team, there are other changes going on which will affect us to some extent:
- Maintenance on the Technical Network (TN)
- Changes to the gas control system
- Maintenance on the database servers
- …

Summary
- A lot of replacement work to do during LS1
- The transition plan for the control PCs / PVSS/WinCC systems is in place
- Some infrastructure consolidation has already been done with a view to the upgraded experiment after LS2 (consolidation of the server room in SX8)
- 3 weeks of complete shutdown from 25/02/13 to 15/03/13
- Detailed planning for 2013 will be put up in January and announced to everybody; frequent updates are to be expected initially, since many (external) teams are involved