Site Report: CERN Helge Meinhard / CERN-IT HEPiX, Jefferson Lab 09 October 2006.

Communication Systems
- Checking VoIP and possible gateways
- New audio conference system with Web-based booking
- New LCG backbone in CERN CC fully operational
- Existing LCG services being migrated

Database and Engineering Support
- Oracle
  - Licence agreement for LHC covering 10 of the 11 Tier 1 sites as well
  - Very serious bug in : wrong cursor sharing
    - Took several weeks to provide a fix
  - Evaluating NAS storage for databases
- Twiki: migrated to version 4
  - See talk by Hege Hansbakk later
- CVS services going well
  - CMS moved their repositories

Fabric Infrastructure and Operations (1)
- Linux
  - SLC4 for LHC startup (SLC5 too late, probably not before 2Q or 3Q 2007)
  - Support for SLC3 to stop in October 2007
  - SLC4 capacity set up in lxplus and lxbatch
- Operations contract re-tendered
- SURE (alarm GUI for CC operators) replaced by LAS
- Lots of service interruptions
  - Power cuts, partly site-wide
  - Failures of air conditioning
  - Emergency shutdown of non-critical machines

Fabric Infrastructure and Operations (2)
- Machines coming in
  - 95 serial console concentrators
  - 30 dual-core AMD servers (yeah…)
  - 100 CPU servers (dual-core Woodcrest)
  - 200 midrange servers
  - 60 disk arrays
  - 50 small disk servers
  - Some need 100 cm deep racks
- In the pipeline: 650 CPU servers, 180 big disk servers
- Study cases: fat disk servers (SW RAID?), virtualisation
- Technical student working on IPMI functionality and deployment

Fabric Infrastructure and Operations (3)
- New tapes / robots from IBM and Sun tested extensively
  - Tender based on a service for petabytes, including all media costs
  - Final decision awaiting approval by the CERN Finance Committee
- More detailed planning for LHC accelerator start-up
  - "January 2007" delivery is probably the only one for 2007
- Service Level Status Overview
  - See talk by Sebastian Lopienski later

Fabric Infrastructure and Operations (4)
- Castor 2
  - Most migrations done
  - Positive conclusions from the Castor readiness review
- Backup
  - TSM-based, 35% annual growth rate
  - Working on replacing AIX servers with Linux
- Console manager software
  - Lots of improvements; common code repository SLAC/CERN
  - S.M.A.R.T. sensors now used to raise alarms
  - See talk by Tony Cass later
- Odds and ends
  - Faulty disk sleds replaced by cages on 60 older disk servers
  - Firmware upgraded on 1300 SATA disks
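The idea of raising alarms from S.M.A.R.T. sensors can be sketched as follows. This is a minimal, hypothetical illustration (not CERN's actual monitoring code): it parses output in the style of `smartctl` and flags a drive when the overall health check fails or reallocated sectors appear. The attribute name and threshold are illustrative assumptions.

```python
# Hypothetical sketch of S.M.A.R.T.-based alarming, not CERN's actual tooling.
# Parses text in the style of `smartctl -H -A` output.

def smart_alarm(smartctl_output: str, realloc_limit: int = 0) -> bool:
    """Return True if the drive should raise an alarm."""
    for line in smartctl_output.splitlines():
        # Overall self-assessment: anything other than PASSED is alarming.
        if "overall-health self-assessment" in line and "PASSED" not in line:
            return True
        # Reallocated sectors are a classic early-failure indicator.
        if "Reallocated_Sector_Ct" in line:
            raw_value = int(line.split()[-1])  # last column: raw value
            if raw_value > realloc_limit:
                return True
    return False

sample = """\
SMART overall-health self-assessment test result: PASSED
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
"""
print(smart_alarm(sample))  # True: 12 reallocated sectors trip the alarm
```

In a real deployment the output would come from running `smartctl` against each disk and the alarm would feed the console/operator system; thresholds per attribute are a site policy decision.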

Grid Deployment
- Upgrade to gLite 3 went very well
- Service Challenge 4 ongoing
  - Not yet production quality everywhere
- Workshop for Tier 2s very successful

Internet Services (1)
- Insecure mail protocols switched off
  - POP / IMAP without SSL
  - Anonymous SMTP
  - Anonymous LDAP
- Evaluating providers of real-time spam blocking lists
  - Collaborating labs whitelisted
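The access policy described above can be expressed compactly. The snippet below is an illustrative model of the rules (plaintext POP/IMAP refused, anonymous SMTP refused), not CERN's actual mail infrastructure; the function and its parameters are invented for the example.

```python
# Illustrative model of the mail-access policy described in the slide;
# not CERN's actual configuration or tooling.

BLOCKED = {
    ("pop3", False),   # POP without SSL: switched off
    ("imap", False),   # IMAP without SSL: switched off
}

def connection_allowed(protocol: str, uses_ssl: bool,
                       authenticated: bool = True) -> bool:
    if protocol == "smtp" and not authenticated:
        return False                       # anonymous SMTP refused
    return (protocol, uses_ssl) not in BLOCKED

print(connection_allowed("imap", uses_ssl=True))    # True: IMAPS stays available
print(connection_allowed("pop3", uses_ssl=False))   # False: plaintext POP refused
```

In practice such a policy is enforced in the mail servers themselves (e.g. by disabling the plaintext listeners), not in client-side code; the model just makes the rule set explicit.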

Internet Services (2)
- Printing: printers being moved from old Linux boxes to Windows servers
  - No server-side processing any more
- Windows on demand: virtual servers in production
  - See presentation at HEPiX Rome
- CERN CA production-ready, awaiting accreditation
- Users on XP without admin privileges by default

Internet Services (3)
- Computer Management Framework (CMF) put into production
  - Presented at HEPiX in Rome
  - Flexible choice of setup while maintaining strong management (patches can be forced)
  - Few initial hiccups, but now going well
- Odds and ends
  - 24 power supply modules failed: the servers had been switched off but remained connected to AC power; with the PSU fans not spinning, the modules overheated

Physics Services Support
- All physics database services migrated to Oracle RAC
- Two new RAC installations (30 servers, 30 disk arrays each) being set up

Computer Security
- Mac OS X: exploits have been seen
- CERN firewall: all TCP and UDP ports closed by default
- Some Linux boxes root-compromised due to weak passwords
- Serious security hole in a VNC product has accelerated the blocking of VNC ports
- Reviewing the need for externally visible ssh servers (number reduced by 80%)
- Compromised Web server (vulnerable PHP scripts)
- Insecure security products
  - Vulnerability in Symantec AntiVirus
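A default-closed firewall policy of the kind mentioned above can be sketched in iptables-save form. This is a minimal, hypothetical fragment for illustration only (not CERN's actual ruleset); port 22 stands in for any service that gets an explicit opening.

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# allow replies to connections initiated from inside
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# every externally visible service needs an explicit rule, e.g. ssh:
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

With the INPUT policy set to DROP, any TCP or UDP port without an explicit ACCEPT rule is closed by default, which matches the stance described in the slide.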

Miscellaneous
- Computer user registration: CCDB replaced by CRA
- Accounts blocked automatically
- CERN openlab phase 2: HP, Intel, Oracle as partners, some more contributors
- CERN active in the Open Access initiative