Louisiana Tech Site Report
DOSAR Workshop V, September 27, 2007
Michael Bryant, Louisiana Tech University

COMPUTING IN LOUISIANA
Louisiana Tech University and LONI

Computing Locally at LTU

At the Center for Applied Physics Studies (CAPS):
▫ Small 8-node cluster with 28 processors (60 gigaflops)
  - Used by our local researchers and the Open Science Grid
  - Dedicated Condor pool of both 32-bit and 64-bit (w/ compat) machines running RHEL5
Additional resources at LTU through the Louisiana Optical Network Initiative (LONI):
▫ Intel Xeon 5 TF Linux cluster (not yet ready):
  - 128 nodes (512 CPUs), 512 GB RAM
  - … TF peak performance
▫ IBM Power5 AIX cluster:
  - 13 nodes (104 CPUs), 224 GB RAM
  - … TF peak performance
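Because the pool mixes 32-bit and 64-bit machines, Condor matches each job to nodes by their advertised architecture. A minimal sketch of a submit description targeting the 64-bit nodes (the executable name is hypothetical, not from the talk):

    universe     = vanilla
    executable   = run_analysis.sh
    # match only the pool's 64-bit Linux nodes
    requirements = (Arch == "X86_64") && (OpSys == "LINUX")
    output       = run.out
    error        = run.err
    log          = run.log
    queue

Dropping the Arch clause would let the same job run on the 32-bit Xeons as well.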

Louisiana Tech Researchers

Focused on High Energy Physics, High Availability (HA) and Grid computing, and Biomedical Data Mining
▫ High Energy Physics:
  - Fermilab (D0), CERN (ATLAS), and ILC:
    Dr. Lee Sawyer, Dr. Dick Greenwood (Institutional Rep.), Dr. Markus Wobisch
    » Joe Steele is now at TRIUMF in Vancouver
  - Jefferson Lab (G0, Qweak experiments):
    Dr. Kathleen Johnston, Dr. Neven Simicevic, Dr. Steve Wells, Dr. Klaus Grimm
▫ HA and Grid computing:
  - Dr. Box Leangsuksun
  - Vishal Rampure
  - Michael Bryant (me)

Louisiana Optical Network Initiative (LONI)

The Louisiana Optical Network Initiative (LONI) is a high-speed computing and networking resource supporting scientific research and the development of new technologies, protocols, and applications to positively impact higher education and economic development in Louisiana.
- Next-generation network for research
- 40 Gb/sec bandwidth state-wide
- Connected to the National LambdaRail (NLR, 10 Gb/sec) in Baton Rouge
- Spans 6 universities and 2 health centers

LONI Computing Resources

1 x Dell 50 TF Intel Linux cluster housed at the state's Information Systems Building (ISB)
▫ "Queen Bee", named after Governor Kathleen Blanco, who pledged $40 million over ten years for the development and support of LONI
▫ 680 nodes (5,440 CPUs), 5,440 GB RAM
  - Two quad-core 2.33 GHz Intel Xeon 64-bit processors per node
  - 8 GB RAM per node
▫ Measured 50.7 TF peak performance
▫ According to the June 2007 Top500 list, Queen Bee ranked as the 23rd fastest supercomputer in the world

6 x Dell 5 TF Intel Linux clusters housed at 6 LONI member institutions
▫ 128 nodes (512 CPUs), 512 GB RAM each
  - Two dual-core 2.33 GHz Xeon 64-bit processors per node
  - 4 GB RAM per node
▫ Measured … TF peak performance

5 x IBM Power5 575 AIX clusters housed at 5 LONI member institutions
▫ 13 nodes (104 CPUs), 224 GB RAM each
  - Eight 1.9 GHz IBM Power5 processors per node
  - 16 GB RAM per node
▫ Measured … TF peak performance

Combined total of 84 teraflops
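The quoted figures can be cross-checked with the standard peak-performance arithmetic (CPUs x clock x floating-point operations per cycle, assuming 4 FLOPs/cycle for both the Xeons and the Power5s); a quick sketch in Python:

    # Theoretical peak = CPUs x clock (GHz) x FLOPs per cycle, converted to TF.
    def peak_tf(cpus, ghz, flops_per_cycle=4):
        return cpus * ghz * flops_per_cycle / 1000.0

    print(peak_tf(5440, 2.33))  # Queen Bee: ~50.7 TF, matching the slide
    print(peak_tf(512, 2.33))   # one Dell "5 TF" cluster: ~4.8 TF
    print(peak_tf(104, 1.9))    # one IBM Power5 cluster: ~0.8 TF

One Queen Bee plus six Dell clusters plus five Power5 clusters then sums to roughly 83 TF, consistent with the combined 84 teraflops quoted above.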

LONI: The big picture… (by Chris Womack)
[Diagram: the National LambdaRail and the Louisiana Optical Network connecting the LONI member sites, with the IBM P5 supercomputers, the Dell 80 TF cluster, and "NEXT ???"]

PetaShare

Goal: enable domain scientists to focus on their primary research problem, assured that the underlying infrastructure will manage the low-level data handling issues.
Novel approach: treat data storage resources and the tasks related to data access as first-class entities, just like computational resources and compute tasks.
Key technologies being developed: data-aware storage systems, data-aware schedulers (i.e., Stork), and a cross-domain metadata scheme.
Provides an additional 200 TB of disk and 400 TB of tape storage.
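As a rough illustration of the data-aware scheduling idea, a Stork data-placement request is submitted much like a compute job, but describes a transfer rather than a computation. A minimal sketch in Stork's ClassAd-style submit format (the hostname and paths are hypothetical):

    [
      dap_type = "transfer";
      src_url  = "file:///data/run123/events.raw";
      dest_url = "gsiftp://petashare.example.edu/archive/run123/events.raw";
    ]

The scheduler, not the application, then owns queueing, retries, and failure handling for the transfer, which is what makes storage a first-class entity alongside compute tasks.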

[Map: participating institutions in the PetaShare project, connected through LONI: UNO, Tulane, LSU, ULL, and LaTech, with research areas including high energy physics, biomedical data mining, coastal modeling, petroleum engineering, synchrotron X-ray microtomography, computational fluid dynamics, biophysics, molecular biology, computational cardiac electrophysiology, and geology. Sample research of the participating researchers pictured (i.e., biomechanics by Kodiyalam & Wischusen, tangible interaction by Ullmer, coastal studies by Walker, and molecular biology by Bishop).]

ACCESSING RESOURCES ON THE GRID
LONI and the Open Science Grid

OSG Compute Element: LTU_OSG

Located here at Louisiana Tech University
OSG production site
Using our small 8-node Linux cluster
▫ Dedicated Condor pool using 20 of the 28 CPUs
▫ 8 nodes (28 CPUs), 36 GB RAM
  - 2 x dual 2.2 GHz Xeon 32-bit processors, 2 GB RAM per node
  - 2 x dual 2.8 GHz Xeon 32-bit processors, 2 GB RAM per node
  - 2 x dual 2.0 GHz Opteron 64-bit processors, 2 GB RAM per node
  - 1 x two quad-core 2.0 GHz Xeon 64-bit processors, 16 GB RAM
  - 1 x two quad-core 2.0 GHz Xeon 64-bit processors, 8 GB RAM
We would like to…
▫ Expand to a Windows coLinux Condor pool
▫ Combine with the IfM and CS clusters
Plan to move to the OSG ITB when the LONI 5 TF Linux cluster at LTU becomes available
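For context, OSG jobs of this era typically reached a compute element through Condor-G's grid universe, addressed at the site's Globus (GT2) gatekeeper. A minimal sketch, with a hypothetical gatekeeper hostname:

    universe      = grid
    # hypothetical gatekeeper address; jobmanager-condor hands the job to the local pool
    grid_resource = gt2 ce.example.latech.edu/jobmanager-condor
    executable    = mc_step.sh
    output        = mc_step.out
    error         = mc_step.err
    log           = mc_step.log
    queue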

OSG Compute Element: LTU_CCT

Located at the Center for Computation & Technology (CCT) at Louisiana State University (LSU) in Baton Rouge, La.
OSG production site
Using the LONI 5 TF Linux cluster at LSU (Eric)
▫ PBS opportunistic single-processor queue
▫ Only 64 CPUs (16 nodes) available out of the 512 CPUs total
  - 128 nodes, 512 GB RAM
  - Two dual-core 2.33 GHz Xeon 64-bit processors per node
  - 4 GB RAM per node
▫ The 16 nodes are shared with other PBS queues
Played a big role in the DZero reprocessing effort
▫ Dedicated access to the LONI cluster during reprocessing
▫ 384 CPUs total were used simultaneously
Continuing to run DZero MC production at both sites
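Work routed to Eric lands as ordinary single-processor PBS jobs in the opportunistic queue. A minimal sketch of such a submission script (the queue name and payload are assumptions, not from the talk):

    #!/bin/bash
    # "single" is a hypothetical name for the opportunistic single-processor queue
    #PBS -q single
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=12:00:00
    cd $PBS_O_WORKDIR
    ./mc_step.sh

Keeping requests to one processor per job is what lets grid work backfill the 16 shared nodes without blocking the cluster's parallel queues.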

Reprocessing at LTU_CCT
[Chart: LTU_CCT (LONI)]

Reprocessing at LTU_CCT (cont.)
[Chart: LTU_CCT (LONI)]

DZero MC Production for LTU*
[Charts: weekly production by site and cumulative production by site; 8.5 million events total]
* LTU_CCT and LTU_OSG are combined

CURRENT STATUS AND FUTURE PLANS
LONI OSG CEs and PanDA; Scalability + High Availability

Current Status of LTU_OSG

Upgraded to OSG …
Upgraded to RHEL5
Added two new Dell Precision workstations (16 CPUs total: two quad-core 2.0 GHz Xeon 64-bit processors each, with 16 GB and 8 GB of RAM)
Connected to the LONI 40 Gbps network in June (finally!)
▫ Allows us to run D0 MC again
Running DZero MC production jobs (sent using Joel's AutoMC daemon)
Installed standalone Athena on caps10 for testing ATLAS analysis

Current Status of LTU_CCT

Switched to the LONI 5 TF cluster (Eric) from SuperMike/Helix
Upgraded to OSG …
Running DZero MC production jobs (sent using Joel's AutoMC daemon)
Running ATLAS production test jobs
▫ Problems so far:
  - Pacman following symlinks! (/panasas/osg/app -> /panasas/osg/grid/app on the headnode)
  - Conflict with a 32-bit Python install on the 64-bit OS (… not supported)
  - OSG_APP Python path was wrong
  - Incorrect Tier2 DQ2 URL
▫ 3 successful tests; need a few more before running full production

What's next?

Create OSG CEs at each of the six LONI sites
Possibly create a LONI state-wide grid
▫ Tevfik Kosar is building a campus grid at LSU
Begin setting up PetaShare storage at each LONI site
PanDA scalability tests on Queen Bee
▫ Proposing to the PanDA team and the LONI allocation committee
Involving other non-HEP projects in DOSAR using PanDA (see talk tomorrow)
Applying HA techniques to PanDA and the Grid (see talk tomorrow)

QUESTIONS / COMMENTS?