Slide 1 — Auburn University Computer Science and Software Engineering
Scientific Computing in Computer Science and Software Engineering
Kai H. Chang, Professor and Chair
December 5, 2014

Slide 2 — Outline
High Performance Computing
• Architecture and I/O Optimization – Dr. Weikuan Yu
• Computation Software – Dr. Tony Skjellum
• Scientific Software Development Tools – Dr. Jeff Overbey

Slide 3 — HPC Capabilities – Dr. Weikuan Yu
Software: Unstructured Data Accelerator (UDA)
• Accelerator for Big Data analytics
• Transferred to Mellanox
In-house supercomputer: Eagles
• 108 nodes; InfiniBand, 10GigE, and SSDs
• GPGPUs (Fermi and Kepler)
• Donations from Mellanox, Solarflare, and NVIDIA
On-campus network: TigerSphere
Software and algorithm development
• 2 postdoctoral researchers
• 11 graduate students
• 3 undergraduate students
• Alumni in prestigious labs (IBM, ORNL, Microsoft)

Slide 4 — EPIO: An Elastic Parallel I/O Framework for Computational Climate Modeling
Objectives
• Explore and leverage computing technologies such as parallel I/O and cloud computing for NASA's big data.
• Design new I/O approaches to managing the enormous data volumes produced by NASA climate modeling codes.
• Introduce an elastic, non-intrusive data management plug-in (EPIO) into the ESMF I/O framework.
• Integrate EPIO into representative NASA codes (GEOS-5, ModelE, and GCE) to realize these benefits.
Approach
• Evaluate and optimize data communication and I/O in NASA climate codes (GEOS-5 and ModelE).
• Aggregate and parallelize data for elasticity and efficiency.
[Figure: initial results on the Sith cluster (1,024 nodes) at ORNL, shown for vertical levels 1 and 10.]
[Diagram: EPIO added as a new I/O component between the GEOS-5 global climate model (AGCM/OGCM history output) and NetCDF4/HDF5 atmosphere analysis.]
Impact
• Provide new processing and analytics approaches for climate data management.
• Enable streamlined data movement through workflows of climate simulation, analytics, and visualization systems.
• Make NASA climate codes portable and efficient on petascale national leadership computing facilities, and prepare them for future exascale systems.
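EPIO's aggregation strategy rests on collective parallel I/O, in which many MPI ranks funnel their pieces of a global field into one shared file through a single coordinated operation, giving the I/O layer room to reorganize data for the file system without touching the model code. The sketch below is a minimal illustration of that building block, not EPIO's actual interface; the file name and chunk size are hypothetical.

    /* Minimal sketch of the collective parallel I/O that frameworks
     * like EPIO build on (illustrative only, not EPIO's interface).
     * Each rank owns a contiguous chunk of a global field and all
     * ranks write to one shared file in a single collective call.
     * Compile: mpicc -o epio_sketch epio_sketch.c */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int local_n = 1 << 20;          /* hypothetical chunk size */
        double *field = malloc(local_n * sizeof(double));
        for (int i = 0; i < local_n; i++)     /* stand-in for model data */
            field[i] = rank + i * 1.0e-6;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "history.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Each rank writes at its own offset; the _all variant lets the
         * MPI-IO layer aggregate requests across ranks before they hit
         * disk, which is the essence of I/O aggregation. */
        MPI_Offset offset = (MPI_Offset)rank * local_n * sizeof(double);
        MPI_File_write_at_all(fh, offset, field, local_n, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(field);
        MPI_Finalize();
        return 0;
    }

Run under mpirun with any rank count; the output file layout is identical regardless of how many ranks participated, which is what lets an aggregation layer remain non-intrusive.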

Slide 5 — Areas for Potential R&D Collaboration – Dr. Tony Skjellum
• Over 30 years of experience formulating and parallelizing programs
• Long-term R&D in scalable parallel mathematical libraries
• Strong experience with simulation codes (e.g., based on PETSc, integration engines, and homegrown codes)
• Leading R&D in MPI standardization, implementation, and utilization
• Additional R&D and research skills in CUDA, OpenCL, OpenMP, SSE/AVX, and cluster computing
• Strengths in algorithmic optimization and systems programming
• Ongoing research in fault-tolerant algorithms and parallel programming
• Mixed-language programming expertise (C, C++, Fortran)
• Experience with cache-friendly and data-distribution-independent algorithms
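To give the library and distribution-independence bullets a concrete flavor, the fragment below (an illustrative sketch, not code from any Auburn library) shows the kind of kernel a scalable parallel mathematical library is built from: a distributed dot product whose result is the same no matter how the vectors are partitioned across ranks.

    /* Illustrative sketch: a data-distribution-independent dot product.
     * Each rank holds some local chunk of x and y; one collective
     * combines the partial sums, so the global result does not depend
     * on how the vectors were split. */
    #include <mpi.h>

    double pdot(const double *x, const double *y, int local_n,
                MPI_Comm comm)
    {
        double local = 0.0, global = 0.0;
        for (int i = 0; i < local_n; i++)
            local += x[i] * y[i];
        /* Communication cost grows with log(nprocs), not with the
         * vector length, which is what makes the kernel scalable. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
        return global;
    }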

Slide 6 — Software Development Tools for Scientific Computing – Dr. Jeff Overbey
Improving the Eclipse integrated development environment to support scientific software development
– Project lead: Photran (Fortran Development Tools for Eclipse)
– Committer: PTP (Eclipse Parallel Tools Platform)
– June release: 184,846 downloads
– Commercially: IBM Parallel Environment Developer Edition
– Collaboration with IBM, UNLP, NCSA, LSU, Oregon
Refactorings for Fortran legacy code migration
Refactorings for GPU computing (see the sketch below)
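As an illustration of the last bullet: a GPU-computing refactoring mechanically rewrites a hot serial loop into an offloadable one after verifying that its iterations are independent. Photran performs this class of transformation on Fortran source; the hypothetical before/after below is sketched in C with OpenACC purely for brevity and is not Photran output.

    /* Hypothetical before/after for a GPU-offload refactoring
     * (Photran applies this kind of transformation to Fortran;
     * C is used here only for illustration). */

    /* Before: a hot serial loop, a daxpy-style update. */
    void scale_add(double *y, const double *x, double a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* After: the tool checks that no iteration reads another's
     * write, then annotates the loop for GPU offload. */
    void scale_add_acc(double *restrict y, const double *restrict x,
                       double a, int n)
    {
        #pragma acc parallel loop copy(y[0:n]) copyin(x[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

Automating this by hand-verified pattern is error-prone at scale, which is the motivation for doing it as an IDE refactoring with a built-in dependence check.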

Slide 7 — Other Related Expertise
• Modeling & Simulation – Dr. Levent Yilmaz
• Operating Systems – Dr. Xiao Qin
• Databases – Dr. Jeff Ku

Slide 8 — Questions?