UCSD CMS 2009 T2 Site Report
Frank Wuerthwein, James Letts, Sanjay Padhi, Abhishek Rana, Haifeng Pi
Presented by Terrence Martin

What's New?
- UCSD site move from SDSC to Mayer Hall, along with hardware expansion
- T2_US PhEDEx downlink commissioning
- Xen deployments
- Examining Hadoop
- Glide-in WMS infrastructure
- DBS accounting at UCSD
- Cacti
- UAF RAM disk for users' analysis

What's Coming Up?
- Next hardware expansion
- Expansion of T2 center infrastructure
- Possible/probable transition from dCache/SRM to Hadoop/BeStMan
- Glide-in WMS rollout
- Network path upgrades to Starlight
- Possibly a 24/48-port 10 Gb switch in each rack

Site Move
- Summer 2009: moved the T2 from SDSC to Mayer Hall
- Using APC Hot Aisle Containment

UCSD T2_US PhEDEx Links
- Commissioned all of the T2_US downlinks to UCSD in PhEDEx this year

Xen Deployment
- Deployed a production Xen host
- Currently using the host for a variety of services
- Recently moved the GUMS server to Xen, replacing an older single-CPU installation
- Nagios running in Xen
- Looking for more services to move to Xen; will likely upgrade the RAM in the Xen system over the next few weeks (max 48 GB)
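
For a rough sense of what running services such as GUMS or Nagios in Xen guests involves, here is a minimal sketch using the classic Xen 3.x xm toolstack; the config file name, guest name, and memory figure are hypothetical, not the actual UCSD settings.

    # boot a paravirtualized guest from its (hypothetical) config file
    xm create /etc/xen/gums.cfg
    # list running domains with their memory and vCPU allocation
    xm list
    # raise the guest's memory target to 4 GB, if its configured maximum allows
    xm mem-set gums 4096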

Examining Hadoop
- Encouraged by success at UNL
- Haifeng Pi is heading the effort, with support from Terrence Martin
- Currently deployed a small Hadoop storage system using production nodes
- Next steps are SRM/BeStMan integration, along with analysis-side testing of Hadoop performance
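
As a sketch of what operating a small HDFS instance of that era looks like, the commands below use the Hadoop 0.x command line; the /store/user path and file name are illustrative only.

    # capacity, usage, and datanode status for the HDFS cluster
    hadoop dfsadmin -report
    # check block replication and overall filesystem health
    hadoop fsck /
    # stage a local file into HDFS and verify it arrived
    hadoop fs -mkdir /store/user
    hadoop fs -put ntuple.root /store/user/
    hadoop fs -ls /store/user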

Glidein-WMS: see separate slides.

Local PhEDEx Accounting
- Allows us to keep track of requests from different local groups and purposes

UAF Ramdisk for Users
- Uses tmpfs on user interactive nodes
- ~10 GB is a useful size for tmpfs
- Approximately 4x improvement in performance for ntuple analysis
- tmpfs will use swap if it needs to

Glidein factory stop command: su - gfactory; cd glideinsubmit/glidein_POSTCCRC_v2; ./factory_startup stop
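
A minimal sketch of how such a tmpfs scratch area might be set up; the mount point and size here are illustrative rather than the actual UCSD values.

    # one-off mount of a ~10 GB RAM-backed scratch area for ntuple analysis
    mkdir -p /scratch/ramdisk
    mount -t tmpfs -o size=10g tmpfs /scratch/ramdisk

    # equivalent /etc/fstab entry so the mount persists across reboots:
    # tmpfs  /scratch/ramdisk  tmpfs  size=10g  0 0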

Hardware Expansions in 2009
- Expanding T2 capacity from 60 kW to 140 kW of power/cooling
- Increasing rack space
- Purchasing multi-core nodes based on Intel's new CPU and memory architecture
- Continuing with the storage-in-nodes infrastructure, so the CPU upgrade will also include a disk capacity increase

Networking Improvements in 2009
- Spoke with campus in late 2008 regarding upgraded 10 Gbps paths from UCSD to Starlight, bypassing CENIC/Internet2
- Possibility for UCSD to get a direct connection (Layer 1 or Layer 2) to Starlight
- Examining Fulcrum 10 Gbps over copper for use in racks, to solve issues with multi-core nodes and CMS analysis data requirements
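
New wide-area paths like this are usually validated with memory-to-memory throughput tests before real transfers are attempted; below is a sketch using iperf with parallel TCP streams, where the host name, stream count, and duration are placeholders to be tuned per path.

    # on the far-end test host:
    iperf -s
    # on the UCSD side: 8 parallel TCP streams for 60 seconds
    iperf -c far-end.example.net -P 8 -t 60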

Monitoring
- Continuing to add and improve our Cacti-based RRD monitoring
- Added additional sensors for Condor
- User: guest, password: guest
- Redeploying Nagios, targeting it to monitor very specific components
- Expanding network monitoring to better understand network flows from external sites
- Deploy perfSONAR
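
As an illustration of targeted checks of this kind, the sketch below runs standard Nagios plugins by hand and an SNMP interface-counter query of the sort Cacti polls into RRD graphs; host names, the plugin path, and the SNMP community string are placeholders.

    # run Nagios plugins manually to see exactly what the scheduled check will report
    /usr/lib64/nagios/plugins/check_ssh uaf-1.example.edu
    /usr/lib64/nagios/plugins/check_http -H se.example.edu -u /status

    # 64-bit inbound interface counters, the raw data behind Cacti traffic graphs
    snmpwalk -v2c -c public rack-switch-1.example.edu IF-MIB::ifHCInOctets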