ITS Training and Awareness Session: Research Support
Jeremy Maris & Tom Armour, ITS
researchsupport@its.sussex.ac.uk
Research Support
– Our Remit
– Research Group Meetings
– Work So Far
– High Performance Computing
– Plans for the Future
– Questions?
Our Remit
– Meet with research groups on a regular basis to ensure the university's core infrastructure provides appropriate support for researchers' computing requirements
– Simplify and minimise the work needed by research groups to use the University's core infrastructure
– Provide advice and support to researchers about using IT systems within their research (including HPC)
– Arrange, through the HPC support team, appropriate training events and seminars to help research teams make optimal use of HPC systems
Research Group Meetings
– Astronomy
– Particle Physics
– Informatics
– Geography
– Sussex Research Hive
– Engineering Thermo-Fluids
– Life Sciences
– Economics
– Sussex Doctoral School
Work So Far
– Already made good contacts with the research community
– Specification, installation and commissioning of HPC clusters for IT Services and Physics (ATLAS analysis)
– Moved the new HPC facility from the Chichester Machine Room to the new Shawcross Data Centre
– Continuing support for existing HPC facilities in Maths & Physics, Engineering and Life Sciences
High Performance Computing (HPC)
– About HPC …
– Cluster configuration
– Software available
– Users
What is High Performance Computing?
– High performance computing: maximising the number of cycles per second
– High throughput computing: maximising the number of cycles per year
– Facilitating the storage, access and processing of data: coping with the massive growth in data
High Performance Computing
– Tasks must run quickly
– A single problem is split across many processors
– Task parallel: MPI or SMP (a minimal sketch follows this slide)
Typical applications:
– Simulations
– Markov models (finding patterns over time in complex systems)
– Theoretical chemistry
– Computational fluid dynamics
– Image processing (3D image reconstruction, 4D visualisation)
– Sequence assembly
– Whole genome analysis
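To make the task-parallel bullet above concrete, here is a minimal sketch only, assuming Python with the mpi4py package is available on a cluster node (an assumption; the applications listed above are usually large Fortran/C MPI codes). Every rank works on part of a single calculation and a reduction combines the partial results.

    # Minimal MPI sketch (assumes mpi4py): estimate pi by splitting the
    # integration of 4/(1+x^2) over [0,1] across all MPI ranks.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of this process
    size = comm.Get_size()   # total number of processes

    n = 10_000_000           # rectangles in the midpoint rule
    h = 1.0 / n

    # Each rank sums a strided subset of the rectangles.
    local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                for i in range(rank, n, size))

    # Combine the partial sums on rank 0.
    pi = comm.reduce(local * h, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"pi is approximately {pi:.10f} using {size} processes")

Launched with something like mpirun -np 12 python pi_mpi.py (the exact launcher and environment setup depend on the local MPI installation), the work divides across the twelve cores of a node, which is the pattern behind the MPI/SMP bullet.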
High throughput computing
– A lot of work done over a long time frame
– One program run many times, e.g. searching a large data set
– Loosely coupled
– Data parallel (embarrassingly parallel; see the sketch after this slide)
Typical applications:
– ATLAS experiment analysis
– Computational linguistics
– Parameter exploration (simulations)
– Algebraic geometry
– Genomics (sequence alignment, BLAST etc.)
– Virtual screening (e.g. in drug discovery)
– Statistical analysis (e.g. bootstrap analysis)
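The data-parallel bullet above can be illustrated with a short sketch; this is an illustration only, not code from the presentation, and the simulate() function and parameter values are hypothetical placeholders. The same function is applied independently to many inputs, so the individual runs need no communication with each other.

    # "Embarrassingly parallel" sketch: run the same function over many
    # independent inputs using Python's standard library process pool.
    from concurrent.futures import ProcessPoolExecutor

    def simulate(parameter):
        # Placeholder for one independent piece of work, e.g. one BLAST
        # query, one bootstrap replicate or one simulation run.
        return parameter, parameter ** 2

    if __name__ == "__main__":
        parameters = [0.1 * i for i in range(100)]   # hypothetical sweep
        with ProcessPoolExecutor() as pool:          # one worker per core by default
            for parameter, result in pool.map(simulate, parameters):
                print(parameter, result)

On a cluster the same pattern is normally expressed as many independent batch jobs, one per parameter value or input file, rather than a single script, which is what makes this style "loosely coupled".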
Growth in data
Explosion of data to store, locate and process:
– Expanding three times faster than Moore's law
– 1 TB per instrument per day from sequencers
– 15 PB per year from the CERN LHC
– Imaging data (e.g. MRI, CT, microscopy) together with metadata
– Gene expression data from high-density genomic microarrays
Research data is now added to and accessed from repositories, bringing data warehousing challenges.
New ways to process data
New ways to process, explore and model:
– Genome-wide association studies (GWAS): analysing the genomes of multiple individuals, e.g. the genetic contribution to cancer
– Tumour expression data: comparing tumours
– Image processing techniques for faster research or diagnosis/treatment (microscopy, MRI, CT)
– Simulations at all scales: climate (Geography), systems biology (modelling simple organisms), Sackler Centre for Consciousness Science
The computational power required: 4–10 times the increase in data.
Interdisciplinary research
New techniques and collaborations with other sciences give new understanding.
Sussex Research Themes:
– Mind and Brain
– Digital and Social Media
– Culture and Heritage
– Citizenship and Democratisation
– Global Transformations
– Environment and Health
Computational fluid dynamics
Modelling the flow of cancer cells in the blood system to characterise the dynamic forces and biochemistry at work during in vitro cell adhesion (Hoskins, Kunz and Dong, Penn State).
New HPC facilities
Feynman
– 8 x 12-core nodes (2.67 GHz, 4 GB/core)
– 108 cores, 439 GB RAM
– 20 TB NFS home file system
Apollo
– 10 x 12-core nodes
– 2 x 48-core nodes (2.2 GHz, 256 GB)
– 4 x 12-core nodes donated by Dell
– 228 cores, 1 TB RAM
– 20 TB NFS home file system
– 81 TB high-performance Lustre parallel file system
– QDR InfiniBand interconnect (40 Gb/s)
Other HPC systems
– Zeus: 16 x 8-core nodes (2.4 GHz, 1.5 GB/core), 96 cores, InfiniBand
– Archimedes: 20 x 4-core nodes (3 GHz, 2 GB/core), 80 cores, QsNet
– Informatics: 7 x 8-core nodes (2.3 GHz, >=2 GB/core), 56 cores, GigE
– Thermofluids: 11 x 8-core nodes, ~100 cores, GigE
Legacy
– Dirac: 56 x 2-core nodes (1 GB/core), 112 cores, GigE
– Boston: 8 x 2-core nodes (1 GB/core), 16 cores, GigE
– Informatics: 80 cores @ 1.8 GHz (2 GB/core), replaced with an R815
– CCNR: 80 cores @ 1.8 GHz (512K/core), GigE
Software
– Intel Parallel Studio: compilers (Fortran, C), profiling, debugging
– High-performance libraries: MKL etc., NAG
– Matlab, STATA
– AIMPRO, ADF, Gaussian, Amber (Chemistry)
– GAP (Maths)
– ATHENA (LHC ATLAS software)
– Researchers' own software
– Software built and installed as requested
Trial Users
– Maths
– Physics
– Chemistry
– Economics
– Informatics
Plans for the Future
– Integration of legacy HPC systems
– Involve non-traditional users, especially Humanities and Social Sciences
– Evaluate use of GPU technology (Sackler Centre for Consciousness Science, Physics)
– Assist Physics with GridPP integration
– Access to external facilities, e.g. the National Grid Service and others
– Continuing support for CISC (DICOM archive + fMRI analysis)
– Condor pool for Windows programs, e.g. Matlab
– Integrating with Linux Support
Questions?