S&T IT Research Support – 11 March 2011 – ITCC

Fast Facts
– Team of 4 positions; 3 positions filled
– Focus on technical support of researchers
– Not “IT” for researchers

Mission Focus
– HPC operations
– Cluster consulting & hosting
– Electro-mechanical support and consulting
– Linux support and consulting
– Data acquisition consulting

Year’s Accomplishments – Cluster News
– 3 new clusters installed and operating
– Academic NIC facility substantially upgraded
– Supercomputing white paper in development

Year’s Accomplishments – Direct Support News
– 3 user groups launched
– Linux migration planning started – operational transition for Fall 2011

Year’s Accomplishments – Team Development
– One new staff member hired
– National Instruments Certified LabVIEW Associate Developer in training
– Junior Level Linux Professional in training

Challenges Ahead
– Hiring suitable staff
– Retaining existing staff
– Growing user communities
– Refining mission within resource constraints
– Continuing HPC upgrades

HPC – Running Processes vs. CPUs (chart)

HPC – Percent CPU Utilization (chart)
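
The two charts above report running processes versus CPUs and percent CPU utilization on the clusters. As a rough sketch of where a utilization figure like this can come from, the Python snippet below samples /proc/stat twice on a single Linux node and computes a busy percentage; the one-second sampling interval and the single-node scope are assumptions for illustration, not the team's actual monitoring setup.

# Hedged sketch: derive one "percent CPU utilization" sample on a Linux node
# by reading the aggregate "cpu" line of /proc/stat twice.
import time

def cpu_counters():
    # Returns (idle_jiffies, total_jiffies) from the first line of /proc/stat.
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)  # idle + iowait
    return idle, sum(fields)

idle_a, total_a = cpu_counters()
time.sleep(1.0)  # sampling interval (assumed)
idle_b, total_b = cpu_counters()

busy_pct = 100.0 * (1.0 - (idle_b - idle_a) / (total_b - total_a))
print(f"CPU utilization: {busy_pct:.1f}%")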

Dr. Wunsch Cluster
4 nodes – each node has:
– 2 quad-core Xeon X5620 CPUs
– 7 Tesla C2050 GPUs
– 24 GB RAM
Resulting in a total of:
– 32 processor cores
– 28 GPUs, providing 28 Tflops of processing power
– 96 GB of RAM
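
As a sanity check on the totals above, the short snippet below rolls the per-node figures up to the cluster totals; the eight-core split per node and the roughly 1 Tflop single-precision peak assumed per Tesla C2050 are approximations used only to show how the 28 Tflops figure is reached.

# Hedged roll-up of the Dr. Wunsch cluster totals from the per-node specs.
nodes = 4
cores_per_node = 8        # 2 quad-core Xeon X5620 CPUs per node (inferred from the 32-core total)
gpus_per_node = 7         # Tesla C2050
ram_per_node_gb = 24
tflops_per_gpu = 1.0      # approx. single-precision peak per C2050 (assumption)

print(nodes * cores_per_node)                   # 32 processor cores
print(nodes * gpus_per_node)                    # 28 GPUs
print(nodes * gpus_per_node * tflops_per_gpu)   # ~28 Tflops
print(nodes * ram_per_node_gb)                  # 96 GB of RAM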

Dr. Dawes Cluster
13 nodes – each node has:
– 2 six-core Xeon X5680 CPUs
– 96 GB RAM
– SATA drives in RAID 1 for 2.3 TB of high-speed scratch space
Resulting in a total of:
– 156 processor cores
– 1248 GB of RAM
– 29.9 TB of high-speed scratch space
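
The same roll-up for the Dr. Dawes cluster, written as a small helper so the node-to-total arithmetic is explicit; the twelve-core (two six-core X5680) split per node is inferred from the 156-core total.

# Hedged roll-up of the Dr. Dawes cluster totals from the per-node specs.
def cluster_totals(nodes, cores_per_node, ram_gb_per_node, scratch_tb_per_node):
    return (nodes * cores_per_node,
            nodes * ram_gb_per_node,
            round(nodes * scratch_tb_per_node, 1))

cores, ram_gb, scratch_tb = cluster_totals(13, 12, 96, 2.3)  # 12 cores = 2 x six-core X5680 (inferred)
print(cores, "cores,", ram_gb, "GB RAM,", scratch_tb, "TB scratch")  # 156 cores, 1248 GB RAM, 29.9 TB scratch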

User Group Info
– LabVIEW Users Group – website: grp/topics?hl=en
– HPC Users Group – website: grp/topics?hl=en
– Linux Users Group – website: grp/topics?hl=en