ITS Training and Awareness Session: Research Support. Jeremy Maris & Tom Armour, ITS

Research Support
- Our Remit
- Research Group Meetings
- Work So Far
- High Performance Computing
- Plans for the Future
- Questions?

Our Remit
- Meet with research groups on a regular basis to ensure the university's core infrastructure provides appropriate support for researchers' computing requirements
- Simplify and minimise the work needed by research groups to use the University's core infrastructure
- Provide advice and support to researchers about using IT systems within their research (including HPC)
- Arrange training events and seminars (via the HPC support team) to help research teams make appropriate and optimal use of HPC systems

Research Group Meetings
- Astronomy
- Particle Physics
- Informatics
- Geography
- Sussex Research Hive
- Engineering Thermo-Fluids
- Life Sciences
- Economics
- Sussex Doctoral School

Work So Far
- Made good contacts with the research community
- Specification, installation and commissioning of HPC clusters for IT Services and Physics (ATLAS analysis)
- Moved the new HPC facility from the Chichester Machine Room to the new Shawcross Data Centre
- Continuing support for existing HPC facilities in Maths & Physics, Engineering and Life Sciences

High Performance Computing (HPC)
- About HPC
- Cluster configuration
- Software available
- Users

What is High Performance Computing?
- High performance computing: maximising the number of cycles per second
- High throughput computing: maximising the number of cycles per year
- Facilitating the storage, access and processing of data: coping with the massive growth in data

High Performance Computing
- Tasks must run quickly
- A single problem split across many processors
- Task parallel: MPI or SMP
- Examples: simulations, Markov models (finding patterns over time in complex systems), theoretical chemistry, computational fluid dynamics, image processing (3D image reconstruction, 4D visualisation), sequence assembly, whole genome analysis
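
As an illustration of the task-parallel, MPI style of working mentioned above, the sketch below splits a simple sum over an index range across MPI ranks and combines the partial results with MPI_Reduce. It is a minimal example only; the programme name, problem size and compile/run commands are assumptions for illustration, not a description of any specific Sussex code.

/* sum_mpi.c - minimal MPI sketch: one problem split across many processes.
 * Assumes an MPI implementation is available on the cluster and is compiled
 * with the usual wrapper, e.g.:  mpicc -O2 sum_mpi.c -o sum_mpi
 * Run with, for example:         mpirun -np 12 ./sum_mpi
 */
#include <mpi.h>
#include <stdio.h>

#define N 1200000  /* illustrative problem size, chosen arbitrarily */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sums its own contiguous slice of the index range 0..N-1. */
    long chunk = N / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? N : start + chunk;

    double local = 0.0;
    for (long i = start; i < end; i++)
        local += (double)i;

    /* Combine the partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of 0..%d-1 = %.0f\n", N, total);

    MPI_Finalize();
    return 0;
}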

High throughput computing
- A lot of work done over a long time frame
- One program run many times, e.g. searching a large data set
- Loosely coupled
- Data parallel (embarrassingly parallel)
- Examples: ATLAS experiment analysis, computational linguistics, parameter exploration (simulations), algebraic geometry, genomics (sequence alignment, BLAST etc.), virtual screening (e.g. in drug discovery), statistical analysis (e.g. bootstrap analysis)
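
High-throughput work of this kind is typically run as an array of independent jobs, each instance processing a different input. The C sketch below shows the pattern: a single worker program picks its share of the work from a task index supplied by the batch scheduler (Grid Engine exposes SGE_TASK_ID; other schedulers use different variable names). The file names and the environment variable are illustrative assumptions, not details of a specific Sussex workflow.

/* worker.c - one instance of an embarrassingly parallel (array) job.
 * The scheduler runs many copies; each copy reads its task number from the
 * environment (SGE_TASK_ID on Grid Engine; other schedulers differ) and
 * processes the corresponding input file, e.g. input_7.dat -> output_7.dat.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Which slice of the data set does this instance own? */
    const char *task = getenv("SGE_TASK_ID");
    int id = task ? atoi(task) : 1;   /* default to 1 for a standalone test run */

    char in_name[64], out_name[64];
    snprintf(in_name,  sizeof in_name,  "input_%d.dat",  id);
    snprintf(out_name, sizeof out_name, "output_%d.dat", id);

    FILE *in = fopen(in_name, "r");
    if (!in) {
        fprintf(stderr, "task %d: cannot open %s\n", id, in_name);
        return 1;
    }

    /* Trivial "analysis": count the lines in this task's input file. */
    long lines = 0;
    for (int c; (c = fgetc(in)) != EOF; )
        if (c == '\n')
            lines++;
    fclose(in);

    FILE *out = fopen(out_name, "w");
    if (!out)
        return 1;
    fprintf(out, "task %d processed %s: %ld lines\n", id, in_name, lines);
    fclose(out);
    return 0;
}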

Growth in data
- Explosion of data to store, locate and process
  - Expanding 3 times faster than Moore's law
  - ~1 TB per instrument per day from sequencers
  - ~15 PB per year from the CERN LHC
  - Imaging data, e.g. MRI, CT and microscopy, together with metadata
  - Gene expression data from high-density genomic microarrays
- Research data now added to and accessed from repositories
  - Challenges for data warehousing

New ways to process data
- New ways to process, explore and model data
  - Genome-wide association studies (GWAS): analysis of the genomes of multiple individuals, e.g. the genetic contribution to cancer
  - Tumour expression data: comparing tumours
  - Image processing techniques for faster research or diagnosis/treatment (microscopy, MRI, CT)
  - Simulations at all scales: climate (Geography), systems biology (modelling simple organisms), Sackler Centre for Consciousness Science
- The computational power required grows 4-10 times faster than the data

Interdisciplinary research
- New techniques: collaborations with other sciences to give new understanding
- Sussex Research Themes:
  - Mind and Brain
  - Digital and Social Media
  - Culture and Heritage
  - Citizenship and Democratisation
  - Global Transformations
  - Environment and Health

Computational fluid dynamics
Modelling the flow of cancer cells in the blood system to characterise the dynamic forces and biochemistry at work during in vitro cell adhesion (Hoskins, Kunz, Dong; Penn State).

New HPC facilities
- Feynman
  - 8 x 12-core nodes (2.67 GHz, 4 GB/core)
  - 108 cores, 439 GB RAM
  - 20 TB NFS home file system
- Apollo
  - 10 x 12-core nodes
  - 2 x 48-core nodes (2.2 GHz, 256 GB)
  - 228 cores, 1 TB RAM
  - 4 x 12-core nodes donated by Dell
  - 20 TB NFS home file system
- 81 TB high-performance Lustre parallel file system
- QDR InfiniBand interconnect (40 Gb/s)
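
The 48-core, 256 GB Apollo nodes suit shared-memory (SMP) jobs as well as MPI work. The sketch below is a minimal OpenMP example in C of that style of parallelism; the file name, array size and compile line are illustrative assumptions rather than site-specific instructions.

/* smp_mean.c - minimal OpenMP (SMP) sketch for a single large-memory node.
 * Compile, for example, with:  gcc -fopenmp smp_mean.c -o smp_mean
 * (the Intel compilers provide an equivalent OpenMP flag).
 * Thread count is controlled with OMP_NUM_THREADS, e.g. OMP_NUM_THREADS=48 ./smp_mean
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 100000000L  /* illustrative array size; a real run would be sized to the node's RAM */

int main(void)
{
    double *x = malloc(N * sizeof *x);
    if (!x) { perror("malloc"); return 1; }

    /* Fill and reduce the array in parallel across the node's cores. */
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        x[i] = (double)i;
        sum += x[i];
    }

    /* Report the configured thread count and the mean of the array. */
    printf("max threads: %d, mean = %f\n", omp_get_max_threads(), sum / N);
    free(x);
    return 0;
}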

Other HPC systems
- Zeus: 16 x 8-core nodes (2.4 GHz, 1.5 GB/core), 96 cores, InfiniBand
- Archimedes: 20 x 4-core nodes (3 GHz, 2 GB/core), 80 cores, QsNet
- Informatics: 7 x 8-core nodes (2.3 GHz, >=2 GB/core), 56 cores, GigE
- Thermofluids: 11 x 8-core nodes, ~100 cores, GigE
- Legacy:
  - Dirac: 56 x 2-core nodes (1 GB/core), 112 cores, GigE
  - Boston: 8 x 2-core nodes (1 GB/core), 16 cores, GigE
  - Informatics: 80 cores (1.8 GHz, 2 GB/core), replaced with an R815
  - CCNR: 80 cores (1.8 GHz, 512k/core), GigE

Software
- Intel Parallel Studio
  - Compilers (Fortran, C)
  - Profiling
  - Debugging
- High-performance libraries
  - MKL etc.
  - NAG
- Matlab
- Stata
- AIMPRO, ADF, Gaussian, Amber (Chemistry)
- GAP (Maths)
- ATHENA (LHC ATLAS software)
- Researchers' own software
- Software built and installed as requested
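
As an example of using the high-performance libraries listed above, the C sketch below calls an MKL BLAS routine (cblas_dgemm) to multiply two small matrices. The matrix sizes and the compile line are assumptions for illustration; in practice the module system or Intel's link-line advice would supply the correct flags for the installed MKL version.

/* mkl_gemm.c - minimal sketch of calling an MKL BLAS routine from C.
 * An illustrative compile line with the Intel compiler might be:
 *   icc mkl_gemm.c -o mkl_gemm -mkl
 * (exact flags depend on the MKL version and how modules are set up).
 */
#include <stdio.h>
#include <mkl.h>

int main(void)
{
    /* C = alpha * A * B + beta * C, with small 2x3 and 3x2 matrices. */
    double A[2 * 3] = { 1, 2, 3,
                        4, 5, 6 };
    double B[3 * 2] = { 7,  8,
                        9, 10,
                       11, 12 };
    double C[2 * 2] = { 0, 0,
                        0, 0 };

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,          /* M, N, K       */
                1.0, A, 3,        /* alpha, A, lda */
                B, 2,             /* B, ldb        */
                0.0, C, 2);       /* beta, C, ldc  */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}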

Trial Users
- Maths
- Physics
- Chemistry
- Economics
- Informatics

Plans for the Future
- Integration of legacy HPC systems
- Involve non-traditional users, especially Humanities and Social Sciences
- Evaluate use of GPU technology (Sackler Centre for Consciousness Science, Physics)
- Assist Physics with GridPP integration
- Access to external facilities, e.g. the National Grid Service and others
- Continuing support for CISC (DICOM archive + fMRI analysis)
- Condor pool for Windows programs, e.g. Matlab
- Integrating with Linux Support

Questions?