CHEPREO Tier-3 Center Achievements

FIU Tier-3 Center

Tier-3 centers in the CMS computing model
– Primarily employed in support of the local CMS physics community
– But some also participate in CMS production activities

Hardware & manpower requirements are non-trivial
– Requires sufficient hardware to make the effort worthwhile
– A grid-enabled gatekeeper and other grid services are also required

The FIU Tier-3 is deployed to:
– Provide services and resources for local physicists
  CMS analysis & computing
  Education, Research and Outreach group
– Cyberinfrastructure: serve as a “reference implementation” of an operational grid-enabled resource for the FIU CI community at large
  FIU Grid community
  CHEPREO Computer Science groups

FIU Tier-3 at a Glance

A ROCKS-based meta-cluster consisting of:
– A grid-enabled computing cluster
  Approximately 20 dual-Xeon boxes
– Service nodes
  User login node, frontend, web servers, Frontier/squid server, development…
– Local CMS interactive analysis hardware, purchased with FIU startup funds
  A single 8-core server with 16 GB RAM
  A large 3ware-based 16 TB fileserver

A Computing Element site on the OSG production grid (a basic gatekeeper check is sketched after this slide)
– A production CE even before the OSG, dating back to Grid3
– Supports all of the OSG VOs
– Maintained with the latest version of the OSG software cache
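The Computing Element is fronted by a Globus GRAM gatekeeper, so the quickest sanity check of the site is to push a trivial job through the fork jobmanager. The Python sketch below is only an illustration under stated assumptions: the gatekeeper hostname is a placeholder, and it assumes the Globus client tools (globus-job-run) are on PATH and that a valid grid proxy has already been created.

```python
#!/usr/bin/env python
"""Minimal sketch: check that an OSG gatekeeper answers GRAM requests.

Assumes the Globus client tools are installed and a grid proxy exists.
The gatekeeper hostname below is hypothetical, not the actual FIU machine.
"""
import subprocess

GATEKEEPER = "gatekeeper.example.fiu.edu"  # hypothetical hostname


def check_gatekeeper(host: str) -> bool:
    """Run /bin/hostname through the fork jobmanager and report success."""
    cmd = ["globus-job-run", f"{host}/jobmanager-fork", "/bin/hostname"]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    except (OSError, subprocess.TimeoutExpired) as err:
        print(f"GRAM test could not run: {err}")
        return False
    if result.returncode == 0:
        print(f"Gatekeeper OK, remote node reports: {result.stdout.strip()}")
        return True
    print(f"GRAM test failed: {result.stderr.strip()}")
    return False


if __name__ == "__main__":
    check_gatekeeper(GATEKEEPER)
```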

FIU Tier-3 Usage at a Glance

Current usage, since November:
– About 40K hours logged through the OSG grid gatekeeper
  The CMS, LIGO, nanoHUB, OSGedu, … VOs have used the site over this period
– About 85% of that was used by cmsProd during CSA07
  We generated about 60K events out of the 60M-event worldwide effort
  We were one of only 2 or 3 Tier-3s that participated in the worldwide CSA07 effort!

Tier-3 Team at FIU

CHEPREO/CIARA funded positions
– Micha Niskin: lead systems administrator
– Ernesto Rubi: networking

CHEPREO undergraduate fellows, working on deploying CMS and storage services
– Ramona Valenzula
– David Puldon
– Ricardo Leante

CHEPREO Tier-3 Center Proposed Activities

Tier-3 Computing Facility

In support of the Open Science Grid
– We still plan to involve our group in OSG integration and validation activities
  Participate in the integration testbed
  Help carry out testing of new OSG releases
  Help debug and troubleshoot new grid middleware deployments and applications
– This work is currently manpower-limited
  Our systems administrator is busy with new hardware deployments, procurement, and training new CHEPREO fellows

In support of CMS computing: production activities
– Not yet involved in CCRC08 or CSA08
– Will require new worker nodes with 1.0+ GB of RAM per core to participate
  10 new dual quad-core nodes plus a new gatekeeper
  Specs have been defined and bids will be submitted; we expect quotes by the end of March
– Will require additional services for CMS production
  Storage Element: we eventually plan to deploy our own dCache-based SE, but for now we point our cmssoft installation to our local Tier-2 at UF
  Frontier/squid is now a must for participation: we will use existing Xeon hardware, and CHEPREO fellows are working on this task (a simple proxy check is sketched after this slide)
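Since Frontier/squid is now required, a first check after the fellows deploy the squid is simply to confirm that it forwards HTTP requests at all. The sketch below is a minimal illustration, assuming a hypothetical squid host/port and a placeholder test URL; it does not exercise the Frontier query protocol itself, only basic proxy forwarding.

```python
#!/usr/bin/env python
"""Minimal sketch: verify that a freshly deployed squid forwards HTTP requests.

The proxy endpoint and test URL below are placeholders, not the actual FIU or
CMS endpoints; any reachable HTTP URL will do for this basic check.
"""
import urllib.request

SQUID_PROXY = "http://squid.example.fiu.edu:3128"  # hypothetical squid endpoint
TEST_URL = "http://example.org/"                   # placeholder test URL


def check_squid(proxy: str, url: str) -> bool:
    """Fetch `url` through `proxy` and report whether the request succeeds."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy})
    )
    try:
        with opener.open(url, timeout=30) as resp:
            print(f"Proxy OK: HTTP {resp.getcode()} via {proxy}")
            return True
    except Exception as err:  # connection errors, HTTP errors, timeouts
        print(f"Proxy check failed: {err}")
        return False


if __name__ == "__main__":
    check_squid(SQUID_PROXY, TEST_URL)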

Tier-3 Computing Facility

In support of local CMS interactive analysis activities:

Issue: how to access CMS data from FIU. Several solutions are being investigated, all involving close coordination with our CHEPREO Tier-2 partners at UF and Caltech.

1. Download data to a Tier-2 and then transfer it to FIU via FDT (see the transfer sketch after this list)
   – Deployment, utilization, and benchmarking of the FDT tool within CMS’ Distributed Data Management system
2. With UF, download data to a WAN-enabled Lustre filesystem
   – Tests have already demonstrated line-speed access with Lustre over the LAN at UF, between UF’s Tier-2 and HPC center
   – Testing is now underway on an isolated testbed with UF’s HPC group
     Applied kernel patches and reconfigured the network route to the FIU test server
     Requires special access to the HPC facility, so this is unlikely to become a general CMS-wide solution
3. L-Store: access CMS data stored in L-Store depots in the Tennessee region and beyond
   – Vanderbilt is working on L-Store/CMSSW integration, and we would like to help with testing when it becomes available
   – We would also like to deploy our own L-Store depot on-site, using Vanderbilt-supplied hardware currently at FIU
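As an illustration of option 1, the sketch below wraps an FDT client push in Python. It is only a sketch under stated assumptions: the jar location, hostnames, and directories are hypothetical, an FDT server is assumed to be already listening on the destination host, and the exact command-line options should be checked against the FDT release actually deployed.

```python
#!/usr/bin/env python
"""Minimal sketch: drive an FDT client push from a Tier-2 staging area to FIU.

Assumes fdt.jar and a Java runtime are available locally and that an FDT
server is running on the destination host. All paths and hostnames below are
placeholders; consult the FDT documentation for the options of the release in use.
"""
import subprocess

FDT_JAR = "/opt/fdt/fdt.jar"                 # hypothetical install path
DEST_HOST = "t3-fileserver.example.fiu.edu"  # hypothetical FIU destination host
DEST_DIR = "/data/cms/staging"               # hypothetical destination directory


def fdt_push(files):
    """Push a list of local files to the destination FDT server; return exit code."""
    cmd = ["java", "-jar", FDT_JAR, "-c", DEST_HOST, "-d", DEST_DIR, *files]
    print("Running:", " ".join(cmd))
    return subprocess.call(cmd)


if __name__ == "__main__":
    # Example invocation with a placeholder file name.
    fdt_push(["/data/store/example_dataset_file.root"])
```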