Oxford Particle Physics Site Report
Pete Gronbech, Systems Manager
HEPSYSMAN, 2-3 April 2001

Goals and Objectives
- Flexibility on the desktop (low-cost seats)
  - Access to networked services (X, HTTP, IMAP)
  - Access to PC applications
  - Reduce management overheads
  - Reduce costs
- Servers provide compute power
  - Central servers for Unix, VMS, mail and web
  - Compatibility with CERN / DESY / Fermilab
  - Provide code development environments

The Server / Desktop Divide
[Diagram: desktops on one side, servers on the other. Desktops: NT PCs and Unix workstations. Servers: general-purpose Unix server, VMS server, mail server, web server and NT server.]

Status
- The general setup remains the same, with NT PCs on the desktop and various Unix/VMS/NT servers in the computer room.
- SuperJANET 4 connection: 622 Mb/s (limited by the firewall to 100 Mb/s). Campus backbone: Gigabit Ethernet. Department connection: 100 Mb/s. Internal physics network: 10/100 Mb/s switched.
- Server strategy: heavy use of remote compute farms; local CPU will be provided by Intel Linux systems, Digital Unix systems and legacy VMS systems.
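These figures imply the firewall, not the 622 Mb/s link, sets the effective off-site rate. A minimal sketch of how one might verify that, assuming a discard-style listener (e.g. netcat writing to /dev/null) on a host beyond the firewall; the hostname and port below are illustrative only, not part of the actual setup:

    import socket
    import time

    HOST = "far-end.example.ac.uk"  # hypothetical host beyond the firewall
    PORT = 5001                     # assumes a discard-style listener here
    CHUNK = b"\x00" * 65536         # 64 KB send buffer
    DURATION = 10.0                 # seconds to stream

    def measure():
        # Stream zeros for DURATION seconds and report the average rate;
        # a result near 100 Mb/s would confirm the firewall cap.
        sock = socket.create_connection((HOST, PORT))
        sent = 0
        start = time.time()
        while time.time() - start < DURATION:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
        elapsed = time.time() - start
        sock.close()
        print("%.0f MB in %.1f s -> %.0f Mb/s"
              % (sent / 1e6, elapsed, sent * 8 / elapsed / 1e6))

    if __name__ == "__main__":
        measure()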

Network Access
[Diagram: the SuperJANET 4 link, just upgraded to 622 Mb/s, enters via the campus backbone router and the OUCS firewall, which is to be upgraded to 1 Gb/s shortly. Backbone edge routers connect departments at 100 Mb/s over a 1 Gb/s campus backbone; Physics connects at 100 Mb/s through its own firewall to the Physics backbone switch.]

al1: a Digital 2100 server with 3 CPUs, 512 MB RAM and 216 GB of disk.

al17: a Digital Personal Workstation 500au with 9 × 50 GB disks in the Datasilo.

Status 2
- Additional CPU is provided by a Digital Alpha 500au; new CPU is provided by Linux. We have just ordered a dual 800 MHz system with 2 GB RAM for SNO analysis.
- Oxford is the lead site for the successful CDF JIF bid: a multi-CPU server plus a 1 TB store in each CDF institute, a larger 2 TB store at RAL, and a larger store still at Fermilab.
- Server strategy (IT in general): six NT servers for desktop file/print, Exchange 5.5 for mail, IIS 4 for web serving, and MS Terminal Server for NT 4 remote access.
- VMS: DAQ systems are still important, but the general-purpose service is running down (mail, word processing etc. moving to NT).
- Data acquisition: LabVIEW on NT for most laboratory DAQ and control, used by a wide range of research groups.
- Videoconferencing: PC-based Intel system, plus access to an ISDN-6 Tandberg. MS NetMeeting is used frequently with DESY.

Linux
[Diagram: Linux nodes alongside the NT PC and Unix workstation desktops and the servers. Roles: CDF Linux (dual 400 MHz PII), general use, RAL Linux farm porting machine, MINOS Linux/NT DAQ, SNO Linux CPU server. Machines: pplx1 (Fermi), pplx2 (RH 6.1), ppnt109 (RH 6.1), pplx3 (RH 6.1).]
We treat Linux as just another Unix, and hence as a server OS to be managed centrally. We wish to avoid badly managed desktop PCs running Linux.
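Managing Linux centrally starts with knowing what each box is running. A minimal sketch of a central audit, assuming passwordless ssh to the nodes named on the slide; this is illustrative, not the tooling actually deployed:

    import subprocess

    # Node names taken from the slide; ssh access is an assumption.
    NODES = ["pplx1", "pplx2", "pplx3", "ppnt109"]

    def report(node):
        # Ask each node for its Red Hat release and running kernel so
        # machines drifting from the supported build (RH 6.1) stand out.
        cmd = ["ssh", node, "cat /etc/redhat-release && uname -r"]
        try:
            done = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            return done.stdout.strip() or done.stderr.strip()
        except subprocess.TimeoutExpired:
            return "no response"

    if __name__ == "__main__":
        for node in NODES:
            print("%-10s %s" % (node, report(node).replace("\n", " | ")))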

Plans and Concerns
- Look to replace the local compute server with a RAID disk server and Linux CPU servers. The disk server could be Intel-based, running NT or Linux, but if performance is insufficient a proprietary solution may be used.
- Experience gained with the CDF distributed data store will help plan LHC requirements.
- Choice of OS/platform for computation: it is clear this will be Red Hat Linux.
- NT4 provides all the desktop functionality we need. We will look at Windows 2000, but there is no rush (at least for the desktop).
- Cost of software licensing.
- Manpower.