Public Batch and Interactive Services on Linux
FOCUS, July 1st 1999
Tony Cass

2 Tony Cass Linux Services: Background
- The Linux environment (SUE, HEPiX, ASIS, …) is now at the same level as for the other Unix flavours.
- Adding a central service (PLUS, WGS, SHIFT) is therefore simple: if we have the boxes, we can provide a service.
  - We provide services for ALICE, ATLAS, NA49 and NOMAD.
  - We will start a service for LHCb at the end of July.

3 Tony Cass A Public Linux Service
- DELPHI (and some others) asked for a basic interactive Linux service as a “reference platform”. The need is for a platform running the approved Linux installation which can be used to develop and test code, but with general interactive work remaining on, e.g., DXPLUS.
- LHCb also needs an initial front end to its future Linux batch facility.
- A basic service has been available since early June.

4 Tony Cass LXPLUS: Current Status
- Three dual-processor PCs provide two different services:
  - 2 run the current standard Linux environment, based on RedHat 5.1. These machines are accessible using the lxplus ISS alias.
  - 1 machine (lxp03) has the forthcoming environment, based on RedHat 6.0.
  - We intend to move the service to the RedHat 6 environment in late July or early August. (But we will keep one machine running RedHat 5.1 for some time.)
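Since two environments sit behind the same service during the transition, a user may want to confirm which release the node they landed on is running. A minimal sketch (Python used purely for illustration; it assumes only the stock /etc/redhat-release file, which is standard Red Hat convention and not something named on the slide):

    # Minimal sketch: report which Red Hat release this node runs.
    # Assumes the stock /etc/redhat-release file, whose single line
    # looks like "Red Hat Linux release 5.1 (Manhattan)".
    import re

    def redhat_release(path="/etc/redhat-release"):
        with open(path) as f:
            text = f.read().strip()
        match = re.search(r"release\s+([\d.]+)", text)
        return match.group(1) if match else text

    if __name__ == "__main__":
        # Expect "5.1" on the lxplus alias machines, "6.0" on lxp03.
        print(redhat_release())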

5 Tony Cass LXPLUS: A Sizing Comparison
- LXPLUS: 2 dual-processor PC nodes
- HPPLUS: 5 dual-processor HP nodes
- DXPLUS: 5 dual-processor DEC PWS nodes
- RSPLUS: 15 dual-processor RS6000 nodes
- LXPLUS can reasonably support … active users, compared to the … of HPPLUS and DXPLUS (which have, respectively, some 850 and 350 different users each week).
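The LXPLUS user figures did not survive in this transcript, so the scaling below is purely illustrative: it takes the weekly-user counts quoted for HPPLUS and DXPLUS, derives a per-node rate, and naively scales to LXPLUS's two nodes, assuming comparable per-node capacity across architectures (a crude assumption, since the PCs and the RISC boxes differ in speed):

    # Illustrative arithmetic only: derive users/node/week from the
    # figures quoted on the slide and naively scale to 2 LXPLUS nodes.
    clusters = {
        "HPPLUS": {"nodes": 5, "weekly_users": 850},
        "DXPLUS": {"nodes": 5, "weekly_users": 350},
    }
    LXPLUS_NODES = 2

    for name, c in clusters.items():
        per_node = c["weekly_users"] / c["nodes"]
        scaled = LXPLUS_NODES * per_node
        # e.g. HPPLUS: 170 users/node/week -> ~340/week on 2 nodes
        print(f"{name}: {per_node:.0f} users/node/week -> "
              f"~{scaled:.0f}/week on {LXPLUS_NODES} LXPLUS nodes")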

6 Tony Cass The Future
- For Linux:
  - We propose to:
    - significantly increase LXPLUS capacity (but general interactive work, e.g. mail and web related, should move to a local desktop where possible);
    - provide general-purpose Linux batch capacity.
  - The size of these services has to be determined, but should be such that these platforms are attractive and encourage people to move away from RISC capacity.
- For other platforms:
  - We suggest that the other public services (HPPLUS, DXPLUS, RSPLUS and RSBATCH) should be phased out by the end of 2001 at the latest.