Future of High Performance Computing at UNB
Virendra Bhavsar & Chris MacPhee
Advanced Computational Research Laboratory (ACRL)
Faculty of Computer Science, University of New Brunswick, Fredericton, NB

ACRL Current Status
Hardware:
- Symphony: IBM SP3 (16 processors)
- Condor Cluster: PC cluster (82 processors)
Personnel:
- Virendra Bhavsar, Director
- Chris MacPhee, TASP
- Sean Seeley, System Support

Users
UNB Chemistry: Dr. Scott Brownridge, Dr. Larry Calhoun, Andrew Flight, Dr. Friedrich Grein, Sophia Kondratova, Andrew MacKay
UNB Computer Science: Dr. Eric Aubanel, Adam Amos-Binks, Lingke Bu, Weibin Gao, Sili Huang, Dmitry Korkin, Richard Xiao Lei, Chris MacPhee, Aihua Wang, Jie Zhang
UNB Mechanical Engineering: Dr. Mohammad Bagher Ayani, Dr. Mohammad Kermani, Dr. Faysal El Khettabi, Dr. Ilan Yaar
UNB Physics: Dr. Eugene K. Ho, Yue Li, Dr. Zong-Chao Yan, Jun-Yi Zhang
UNBSJ: Brian Robichaud (Computer Science), Mark Forrest (Computer Science), Shawn McGinn (Computer Science), Jason Mercer (SASE), Adam White (Computer Science)
UNB Other: Dr. Evelyn Richards (Forestry), Dr. Robert Tenzer (Geodesy)
MTA Chemistry: Erin Green, Mariana Di Laudo, Shenna LaPointe, Chester Weatherby, Dr. Stacey Wetmore, Sarah Whittleton
External: Dr. Jalal Almhana (UdeM), Dr. Michael Clayton (Acadia)

Usage for 2002 (chart)

Total Usage for 2000 – Present (chart: CPU hours by month)

Condor Usage
Condor Cluster
Hardware: 82 processors
Software: Amber 7, Autodock, Gaussian, MCNP, MPI / PVM (in progress)
Provided 10,000 CPU hours via MCNP in 2 weeks!
Note:
- Powerful for sequential jobs (3x more powerful than Symphony)
- Inefficient for parallel jobs
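The MPI support noted above as "in progress" refers to the message-passing model used by tightly coupled parallel jobs. For context, here is a minimal MPI program in C; this is an illustrative sketch, not code from the slides.

/* Minimal MPI "hello" in C (illustrative sketch, not from the slides).
 * Each process reports its rank; on a Condor-style cluster the
 * processes would be spread across the 82 PC nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    printf("Process %d of %d reporting\n", rank, size);

    MPI_Finalize();                         /* shut down cleanly */
    return 0;
}

Such a program is typically compiled with mpicc and launched with, e.g., mpirun -np 8. The slide's note follows from this model: tightly coupled MPI jobs communicate constantly and so run inefficiently over the Condor cluster's loosely coupled PCs, while independent sequential jobs, such as the MCNP runs, scale well.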

Immediate Future
- Symphony: IBM SP3 (16 processors)
- ME Cluster: ??? (64 processors)
- TAPoR Cluster: ??? (~32 processors)
- ACRL network: 1 Gbps to ITS / CA*net 3

ACRL's NOI: Advanced Computational Research Centre (ACRC)
Possible partners: Mount Allison University, National Research Council, New Brunswick Community Colleges & companies, Université de Moncton
Configuration:
- 512 processors / 730 GB disk space
- 24 visualization workstations
- $250,000 in software (Gaussian, Portland Group compiler, etc.)
Budget: ~$3,000,000

ACEnet Combined NOI
Atlantic Computational Excellence Network (ACEnet): combined CFI proposal
Partners: MUN (Mark Whitmore), St. Francis Xavier University (Peter Poole), Saint Mary's University (David Clarke), UNB (Virendra Bhavsar)
Configuration: TBA
Budget: ~$20,000,000

Future
1. Future hardware/software requirements
   - Hardware: Clusters? SMPs? High-speed interconnect
   - Software: Gaussian with Linda? Portland compiler?
2. ACRL → Advanced Computational Research Centre (ACRC)
   - CFI proposal: scientific content, equipment content, budget, ...
   - Management structure
   - Sustainable model: personnel, maintenance costs, travel, phones, etc.
3. Other items
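The clusters-versus-SMPs question in item 1 is largely a programming-model question: SMP nodes are usually programmed with shared-memory threads (e.g., OpenMP, which compilers such as the Portland Group's support), whereas clusters rely on message passing as in the MPI sketch earlier. A minimal, illustrative OpenMP example in C (again, not from the slides):

/* Illustrative sketch of the shared-memory (SMP) model with OpenMP.
 * Compile with an OpenMP-capable compiler, e.g. pgcc -mp or gcc -fopenmp. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;
    int i;

    /* Loop iterations are divided among the threads of one SMP node;
     * the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("Harmonic sum of %d terms = %f using up to %d threads\n",
           n, sum, omp_get_max_threads());
    return 0;
}

On a cluster, the same reduction would require explicit messages between nodes, which is why the choice of hardware and of the interconnect between nodes drives the software requirements listed in item 1.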