RHIC/USATLAS Grid Computing Facility Overview
Dantong Yu, Brookhaven National Lab

Overview
– US ATLAS Computing Facility overview
– STAR Computing Facility overview
– Future plans

[Diagram: BNL USATLAS Grid Configuration — Globus clients on the Internet send grid job requests to a gatekeeper/job manager, which dispatches to LSF (Condor) servers 1 and 2 with local disks; supporting services include the AFS server (aafs), the GridFtp node (aftpexp00), the information server (giis01), the Globus Replica Catalog, the GDMP server, and HPSS mass storage with movers (atlas00, amds; 17 TB disk, 70 MB/s).]
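
As a minimal sketch of the job flow in the diagram, the standard Globus 2.x client globus-job-run sends a command through the gatekeeper to a job manager. The gatekeeper contact string and the jobmanager-lsf suffix below are illustrative assumptions, not the site's published endpoint:

```python
# Hedged sketch: submit a command through a Globus 2.x gatekeeper.
# The contact string below is hypothetical; a real submission would use
# the site's actual gatekeeper and job manager name.
import subprocess

def submit_grid_job(contact: str, *command: str) -> str:
    """Run a command via globus-job-run and return its stdout."""
    cmd = ["globus-job-run", contact, *command]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Route the job to an LSF job manager behind the gatekeeper (assumed name).
    print(submit_grid_job("gremlin.usatlas.bnl.gov/jobmanager-lsf",
                          "/bin/hostname"))
```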

BNL USATLAS Site Status
Hardware and software configuration: list of principal and specialized gatekeepers
– gremlin.usatlas.bnl.gov:
  › Dual PII 550, 18 GB local disk, Fast Ethernet
  › Red Hat 7.2, Globus 2.0 Beta suite, Condor and LSF software, BU ATLAS packages
– spider.usatlas.bnl.gov:
  › Dual PIII 700, 36 GB RAID disk, Gigabit network connection
  › Red Hat 7.2, Globus 2.0 suite, Globus Replica Catalog, patched GridFtp that supports stable parallel data transfer (see the transfer sketch below)
  › GDMP
– aftpexp.bnl.gov:
  › PIII 800, 72 GB Cheetah SCSI, dual Gigabit network connection
  › Red Hat 7.2, Globus Replica Catalog, patched GridFtp
– giis01.usatlas.bnl.gov:
  › BNL MDS site organization server; backup USATLAS GIIS server
  › Participates in the iVDGL testbed
– BNL VO server
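
A sketch of the parallel GridFtp transfers enabled by the patched server above: globus-url-copy and its -p (parallelism) option are standard Globus 2.x; the source URL and local path here are hypothetical placeholders.

```python
# Sketch: fetch a file from a GridFtp server with parallel TCP streams.
# URLs and paths are illustrative, not actual site data.
import subprocess

def gridftp_fetch(src_url: str, dest_path: str, streams: int = 4) -> None:
    """Copy a file from a GridFtp server using parallel streams."""
    subprocess.run(
        ["globus-url-copy",
         "-p", str(streams),        # number of parallel TCP streams
         src_url,
         f"file://{dest_path}"],    # absolute local destination path
        check=True,
    )

if __name__ == "__main__":
    gridftp_fetch("gsiftp://aftpexp.bnl.gov/data/example.root",
                  "/tmp/example.root")
```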

Continued
Computational cluster (type, size, queues):
– LSF: two worker nodes (dual 700 MHz, 9 GB), scalable up to 50 worker nodes
– Condor: two worker nodes (dual 700 MHz, 9 GB)
User and account model: each user has an NIS account. Some grid users are mapped to the grid_a group account; others are mapped to their own local accounts (a grid-mapfile sketch follows).
Software environment:
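
The group-vs-local account mapping above is what Globus expresses in /etc/grid-security/grid-mapfile: each line pairs a quoted certificate DN with a local account. The DNs below are hypothetical examples, not real site entries.

```python
# Minimal sketch of the grid-mapfile format and how a tool might parse it.
# Both DNs below are invented for illustration.
GRID_MAPFILE = """\
"/O=Grid/O=Globus/OU=usatlas.bnl.gov/CN=Example User One" grid_a
"/O=Grid/O=Globus/OU=usatlas.bnl.gov/CN=Example User Two" localuser2
"""

def parse_grid_mapfile(text: str) -> dict[str, str]:
    """Return a {certificate DN: local account} mapping."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The DN is quoted (it contains spaces); the account follows it.
        dn, _, account = line.rpartition('" ')
        mapping[dn.lstrip('"')] = account.strip()
    return mapping

if __name__ == "__main__":
    for dn, account in parse_grid_mapfile(GRID_MAPFILE).items():
        print(f"{dn} -> {account}")
```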

[Diagram: STAR Grid Configuration — Globus clients on the Internet send grid job requests to the STAR gatekeeper/job manager (stargrid), which dispatches to LSF & Condor; supporting services include the NFS server, the AFS server, the GridFtp/HRM node (rftpexp00), the information server (giis01), the Globus Replica Catalog, the GDMP server, and HPSS mass storage with movers (rmds), disks, and cabinets (70 MB/s).]

BNL STAR Site Status
Hardware and software configuration: list of principal and specialized gatekeepers
– stargrid01.rcf.bnl.gov:
  › Dual PII 450, 62 GB local disk, Fast Ethernet
  › Red Hat 6.2, Globus 2.0 suite, Condor and LSF software
  › HRM software
– stargrid02.rcf.bnl.gov:
  › Dual PIII 1.4 GHz, 146 GB RAID disk, dual Gigabit network connection
  › Red Hat 7.3, Globus 2.0 suite, Globus Replica Catalog 2.1, GDMP
  › Patched GridFtp 1.0 that supports stable parallel data transfer
  › HRM software
– rftpexp.bnl.gov:
  › PIII 800, 68 GB SCSI, dual Gigabit network connection
  › Red Hat 7.1, patched GridFtp 2.1
– giis01.usatlas.bnl.gov:
  › BNL MDS site organization server
  › Participates in the iVDGL testbed
  › MDS 2.0, 2.1 (a query sketch follows)
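
A hedged sketch of querying the GIIS above with the Globus 2.x MDS client. It assumes grid-info-search from the Globus toolkit is on PATH and that the site uses the conventional MDS base DN "mds-vo-name=local, o=grid"; a real site's base DN may differ.

```python
# Sketch: anonymously search a GIIS (MDS is LDAP-based, default port 2135)
# and return the raw LDIF output. Base DN is an assumed MDS default.
import subprocess

def query_giis(host: str = "giis01.usatlas.bnl.gov", port: int = 2135) -> str:
    """Search the GIIS LDAP tree via grid-info-search."""
    cmd = [
        "grid-info-search",
        "-x",                       # anonymous (non-GSI) bind
        "-h", host,                 # GIIS host
        "-p", str(port),            # standard MDS port
        "-b", "mds-vo-name=local, o=grid",
        "(objectClass=*)",          # match every entry
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(query_giis())
```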

Current and Future Work
Grid-related R&D projects this site (and its people) participates in or supplies resources to:
– Deploy HRM & SRM (storage resource management), which can replicate data from one HPSS system to another (Dantong)
– HPSS-enabled GridFtp: enhance GridFtp to copy files from/to HPSS (Dantong)
– Facility (grid) monitoring: use home-grown monitoring tools and Ganglia to monitor the local fabric, and integrate them into MDS (Jason Smith and Dantong Yu; http:// /ganglia/)
– Network research: network performance monitoring and tuning; work with various network tools: iperf, netperf, Network Weather Service (an iperf sketch follows)
Near-future plans for upgrades and projects: deploy VDT 1.1.2, grow the grid LSF and Condor pools to include all available ATLAS compute nodes, and continue the grid monitoring project.
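
As an illustration of the network measurements mentioned above, the sketch below drives the iperf client from Python with parallel streams, mirroring the multi-stream behavior of GridFtp transfers. The server hostname is hypothetical, and a remote host must already be running `iperf -s`.

```python
# Sketch: parallel-stream TCP throughput test with the classic iperf client.
# The endpoint below is a placeholder, not a real measurement host.
import subprocess

def run_iperf(server: str, streams: int = 4, seconds: int = 10) -> str:
    """Run an iperf client test and return its throughput report."""
    cmd = [
        "iperf",
        "-c", server,          # connect to the iperf server
        "-P", str(streams),    # parallel TCP streams
        "-t", str(seconds),    # test duration in seconds
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(run_iperf("iperf.example.org"))
```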