
Holland Computing Center
David R. Swanson, Ph.D., Director

\( \sum_{i=1}^{N} \sqrt{i} \;\Rightarrow\; \int^{N} \sqrt{i}\,\delta i \)

Computational and Data-Sharing Core
- Store and share documents
- Store and share data and databases
- Computing resources
- Expertise

Who is HCC?
- HPC provider for the University of Nebraska
- System-wide entity, evolved over the last 11 years
- Support from the President, Chancellor, CIO, and VCRED
- 10 FTE, 6 students

HCC Resources
Lincoln:
- Tier-2 machine Red (1500 cores, 400 TB)
- Campus clusters PrairieFire and Sandhills (1500 cores, 25 TB)
Omaha:
- Large IB cluster Firefly (4000 cores, 150 TB)
10 Gb/s connection to Internet2 (DCN)

Staff
- Dr. Adam Caprez, Dr. Ashu Guru, Dr. Jun Wang
- Tom Harvill, Josh Samuelson, John Thiltges
- Dr. Brian Bockelman (OSG development, grid computing)
- Dr. Carl Lundstedt, Garhan Attebury (CMS)
- Derek Weitzel, Chen He, Kartik Vedelaveni (graduate RAs)
- Carson Cartwright, Kirk Miller, Shashank Reddy (undergraduates)

HCC -- Schorr Center
- 2200 sq. ft. machine room
- 10 full-time staff
- PrairieFire, Sandhills, Red, and Merritt
- 2100 TB storage
- 10 Gb/s network

Three Types of Machines
- ff.unl.edu ::: large capacity cluster... more coming soon
- prairiefire.unl.edu // sandhills.unl.edu ::: special purpose clusters
- merritt.unl.edu ::: shared memory machine
- red.unl.edu ::: grid-enabled cluster for US CMS (OSG)

PrairieFire
- 50 nodes from Sun
- 2-socket, quad-core Opterons (400 cores)
- 2 GB/core (800 GB)
- Ethernet and SDR InfiniBand
- SGE or Condor submission (a Condor sketch follows)
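For the Condor path, the minimal sketch below shows what creating and submitting a trivial test job could look like; the file names and the one-core request are placeholders, not PrairieFire policy.

```bash
#!/bin/bash
# Hypothetical sketch: write and submit a trivial HTCondor job.
# File names and the resource request are illustrative only.
cat > hello.submit <<'EOF'
universe     = vanilla
executable   = /bin/hostname
output       = hello.out
error        = hello.err
log          = hello.log
request_cpus = 1
queue
EOF
condor_submit hello.submit   # hand the description to the Condor scheduler
condor_q                     # watch the job in the queue
```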

Sandhills
- 46 fat nodes, 4-socket Opterons
- 32 cores/node (128 GB/node)
- 1504 cores total
- QDR InfiniBand
- Maui/Torque or Condor submission (a Torque sketch follows)
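For the Maui/Torque path, a minimal submit script might look like the sketch below; the job name, walltime, and core count are placeholders rather than Sandhills defaults.

```bash
#!/bin/bash
#PBS -N hello
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00
#PBS -o hello.out
#PBS -e hello.err
# The directives above request one core for one hour; names and limits are placeholders.
cd "$PBS_O_WORKDIR"   # Torque starts jobs in $HOME, so return to the submit directory
hostname              # stand-in for the real application
```

Submitted with `qsub hello.pbs`; `showq` (Maui) or `qstat` (Torque) reports its state.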

Merritt
- 64 Itanium processors
- 512 GB RAM, shared memory
- NFS storage (/home, /work)
- PBS only; interactive use for debugging only (see the sketch below)
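Since interactive use on Merritt is for debugging only, a short interactive PBS session is the typical pattern; the core count and time limit below are placeholders.

```bash
# Hypothetical interactive debugging session on a PBS/Torque system;
# adjust the resource request to local limits.
qsub -I -l nodes=1:ppn=4,walltime=00:30:00
```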

Red
- Open Science Grid machine, part of the US CMS project
- 240 TB storage (dCache)
- Over 1100 compute cores
- Certificates required; no login accounts (a grid-submission sketch follows)
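Because Red accepts only grid submissions, access goes through a certificate proxy and an OSG gatekeeper; in the sketch below the VO name and the gatekeeper contact string are assumptions for illustration.

```bash
# Hypothetical grid-style test against Red; the VO and contact string are assumptions.
voms-proxy-init -voms cms                                   # create a short-lived proxy from your grid certificate
globus-job-run red.unl.edu/jobmanager-condor /bin/hostname  # run a trivial job through the gatekeeper
```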

HCC -- PKI
- 1800 sq. ft. machine room (500 kVA UPS + generator)
- 2 full-time staff
- Firefly
- 150 TB Panasas storage
- 10 Gb/s network

Firefly
- 4000 Opteron cores
- 150 TB Panasas storage
- Login or grid submissions
- Maui (PBS)
- InfiniBand, Force10 GigE

TBD
- Opteron cores
- 400 TB Lustre storage
- Login or grid submissions
- Maui (PBS)
- QDR InfiniBand, GigE

First Delivery...

Last Year's Usage
- Approaching 1 million CPU hours/week

HCC Cloud Resources
- Built on OpenStack
- Able to run Windows code (a launch sketch follows)
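A minimal sketch of what launching a Windows instance on an OpenStack cloud of that era could look like with the nova client; the image and flavor names are placeholders, not HCC offerings.

```bash
# Hypothetical OpenStack (nova client) sketch; image and flavor names are placeholders.
nova image-list                                               # see which images the cloud publishes
nova boot --image win2008-server --flavor m1.medium win-test  # launch a Windows instance
nova list                                                     # wait for the instance to reach ACTIVE
```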

Resources & Expertise
- Storage of large data sets (2100 TB)
- High-performance storage (Panasas)
- High-bandwidth transfers (9 Gb/s, ~50 TB/day); a transfer sketch follows
- 20 Gb/s between sites, 10 Gb/s to Internet2
- High-performance computing: ~10,000 cores
- Grid computing and high-throughput computing
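Bulk transfers at these rates typically ride on GridFTP; the sketch below is illustrative, and the endpoint hostname and paths are assumptions.

```bash
# Hypothetical GridFTP transfer; the endpoint and paths are assumptions.
grid-proxy-init                     # authenticate with your grid certificate
# 8 parallel streams and larger TCP buffers help on a long, fat pipe
globus-url-copy -p 8 -tcp-bs 8M \
    file:///work/mygroup/run001.tar \
    gsiftp://transfer.unl.edu/panfs/mygroup/run001.tar
```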

Usage Options
Shared Access:
- Free
- Opportunistic
- Storage limited
- Shell or Grid deployment
Priority Access (see next slide)

Usage Options
Priority Access:
- Fee assessed
- Reserved queue
- Expandable storage
- Shell or Grid deployment

Computational and Data-Sharing Core
- Will meet computational demands with a combination of Priority Access, Shared, and Grid resources
- Storage will include a similar mixture, but likely consist of more dedicated resources
- Often a trade-off between hardware, personnel, and software
- Commercial software saves personnel time
- Dedicated hardware requires less development (grid protocols)

Computational and Data-Sharing Core
Resource organization at HCC:
- Per research group -- free to all NU faculty and staff
- Associate quotas, fairshare, or reserved portions of machines with these groups
- /home/swanson/acaprez/... -- accounting is straightforward (a one-line sketch follows)
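With the /home/&lt;group&gt;/&lt;user&gt; layout above, per-group accounting can be a one-liner; the group name below simply reuses the example path from the slide.

```bash
# Per-group disk accounting under the /home/<group>/<user> convention shown above.
du -sh /home/swanson/*   # usage per member of the 'swanson' group
du -sh /home/swanson     # total usage attributable to the group
```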

Computational and Data-Sharing Core
- Start now -- facilities and staff already in place
- It's free -- albeit shared
- Complaints currently encouraged (!)
- Iterations required

More Information
- David Swanson: (402)
- K Schorr Center /// 158H PKI /// Your Office
- Tours /// Short Courses

Sample Deployments
- CPASS site
- DaliLite, Rosetta, OMSSA
- LogicalDoc