What’s Coming? What are we Planning?

What's next for CHTC
› Better docs
› Goldilocks – this slot size is just right
› Storage
› New hardware

Better Documentation
› Why?
  • Because the CHTC web site sucks.
› How?
  • Improved web documentation on what resources we actually have, how to get started, and how to use our tools, such as Matlab and R.
› When?
  • Starting now and continuing until we're done!

Dynamic Slots
› Why?
  • We never have the right mix of small vs. big vs. whole-machine slots.
› How?
  • Condor enhancements let us 'watch' the job queue and re-arrange our slots to best fit the queued jobs.
  • You'll need to specify in your submit file how many cores and how much memory, disk, etc. your job needs (see the submit file sketch below).
  • Documented now in the Condor documentation; it will also be documented on the CHTC web site.
› When?
  • Beta: March 5
  • Production: March 23
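For reference, a minimal sketch of what such a submit file might look like, assuming the standard HTCondor request_cpus / request_memory / request_disk submit commands; the executable, file names, and values are placeholders, and the exact requirements for the CHTC pool will be in the forthcoming documentation:

    # Sketch of a submit file that declares its resource needs
    # so the dynamic slots can be carved to fit the job.
    universe       = vanilla
    executable     = my_analysis
    log            = job.log
    output         = job.out
    error          = job.err

    # Resource requests (placeholder values; size these to your job)
    request_cpus   = 1
    request_memory = 2GB
    request_disk   = 4GB

    queue

Submit it with condor_submit as usual; the requests are what the new slot logic uses to match your job to a right-sized slot.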

Storage
› Why?
  • We're getting requests from researchers for large amounts of storage for data processing.
› How?
  • We are deploying a Hadoop filesystem (HDFS).
  • We are adding 'edge' connectivity so you can read/write data remotely, for example from OSG nodes (see the sketch below).
› When?
  • Beta: March 15
  • Production: May 1
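To make the 'edge' access concrete, here is a rough sketch using the standard Hadoop command-line client; the host name, port, and paths are hypothetical placeholders, and the real endpoints will be announced with the beta:

    # Hypothetical example of staging data in and out of the CHTC HDFS
    # from a remote machine (host, port, and paths are placeholders).
    hadoop fs -put results.tar.gz hdfs://hdfs.chtc.wisc.edu:8020/user/$USER/
    hadoop fs -ls  hdfs://hdfs.chtc.wisc.edu:8020/user/$USER/
    hadoop fs -get hdfs://hdfs.chtc.wisc.edu:8020/user/$USER/results.tar.gz .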

New Hardware
› GPUs
  • We have two GPU systems you can test with today (8 cores, 16 GB RAM, 896 NVIDIA Tesla M2050 cores).
  • In partnership with IceCube, we are hosting their GTX9000 GPU cluster (288 CPU cores, 21k NVIDIA Tesla M2070 cores, 1,152 GB RAM, InfiniBand) that you may use opportunistically. Expected availability is May 1. (A sketch of a GPU submit file follows below.)
› Big Memory
  • No immediate plans to purchase, but...
  • Dynamic slots will help.
  • We can also use the IceCube cluster to opportunistically validate your large-memory code and go from there.
› Low-latency networking, MPI-capable nodes
  • No 'owned' resources, but we do have partnerships with groups who do have such machines and can introduce you to them, like the IceCube GTX9000.
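As with the dynamic-slot example, here is a hedged sketch of how a GPU job might be described, assuming an HTCondor pool that advertises its GPUs and supports the request_gpus submit command; the executable name and sizes are placeholders, and the supported recipe will be published once the hardware is in production:

    # Sketch of a GPU job submit file (assumes the pool supports
    # request_gpus; executable and values are placeholders).
    universe       = vanilla
    executable     = my_cuda_app
    request_cpus   = 1
    request_gpus   = 1
    request_memory = 4GB
    log            = gpu_job.log
    output         = gpu_job.out
    error          = gpu_job.err
    queue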