1
DaaS and Kubernetes at PSI
Stephan Egli :: Paul Scherrer Institut :: Photon Science Department :: CALIPSOplus JRA2 Meeting, May 23rd 2018
2
Experiences gained at PSI
Purpose of this talk:
- What existing solutions do we have so far?
- What new options might we explore further in the future?
Intention: provide input for discussion:
- How do we merge the best ideas and experiences gained at our different sites?
- Which building blocks should be part of a new blueprint?
- Illustrate the need for extensive tests and exploration of options in order to make the right decisions in time; it is therefore important to compare each other's experiences.
Disclaimer: I just summarize the situation. All errors and omissions are my fault. The results achieved are all due to the long-term commitment and the tremendous efforts invested by the colleagues from the Science IT department of PSI and the colleagues from the ESS / Data Archive project!
3
Data Analysis as a Service Project
See webpage:
Main goal: make offline data analysis of large datasets easier for researchers. This needs a sophisticated, high-performance storage infrastructure with:
- good connectivity to the online systems
- good software environments and support
Current typical usage: between … and … CPU hours per group and month for the main user groups cSAXS, TOMCAT, MX and SwissFEL.
4
DaaS Infrastructure Overview
5
Online-Offline connectivity
(Diagram: online-offline connectivity from the SLS)
6
Spectrum Scale GPFS Active File Management
7
Software Environment, Expert Support
- Provide standard software for interactive analysis and visualization (Matlab, Mathematica, IPython environments, etc.) in different versions, as well as domain-specific packages.
- Extended environment module system ("p-modules") to mitigate the problem of providing different software versions and development environments to different researchers and for different architectures (a usage sketch follows this list).
- Provide ready-to-use scientific software packages, e.g. for MX: solving protein structures from SLS and FEL data, collected using both conventional methods (rotating sample) and serial crystallography methods.
- Provide software development environments that allow researchers to build, develop and refine their scientific codes.
- Provide support for different compiler chains (gcc, Intel, OpenMP, MPI, CUDA).
- Provide help to scientists in tuning and optimizing their algorithms; this often gives the largest overall performance boost, but needs local experts knowledgeable both in the science and in IT, algorithms and code optimization, e.g. for running parallelized ptychographic reconstruction codes.
- Jupyter Notebooks for web-based interactive work.
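As a minimal sketch of how a specific software version could be selected through such a module system from a wrapper script (the module names below are illustrative placeholders, not actual p-module names):

```python
# Hypothetical sketch: loading environment modules before running a tool.
# 'module' is a shell function, so we go through a login shell; the
# module names below are illustrative, not actual PSI p-module names.
import subprocess

def run_with_modules(modules, command):
    script = "\n".join([f"module load {m}" for m in modules] + [command])
    return subprocess.run(["bash", "-lc", script], check=True)

run_with_modules(["gcc/7.3.0", "python/3.6"], "python analysis.py")
```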
8
Interactive analysis with Jupyter Notebooks
(Screenshot: notebook spawner options for Environment, Cluster and Queues)
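As an illustration (not from the slides), a notebook cell in such an interactive session might look like the following; the file path and HDF5 layout are invented for the example:

```python
# Illustrative notebook cell: interactive inspection of a detector frame.
# The file path and HDF5 dataset layout below are invented for this sketch.
import h5py
import matplotlib.pyplot as plt

with h5py.File("/das/work/p12345/scan_0001.h5", "r") as f:
    frame = f["entry/data/data"][0]   # first frame of an assumed dataset

plt.imshow(frame, cmap="viridis")
plt.title("scan_0001, frame 0")
plt.show()
```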
9
Further components in use
- Batch system: SLURM as batch scheduler; resource management is done by integrating with Linux cgroups (a submission sketch follows below).
- Remote access via NoMachine, a classical GUI login; all users see the same environment and then either work interactively or submit batch jobs.
- Remote data transfer: Globus Online (GridFTP based); rsync for special use cases.
- Integration with the data catalog and archive system (see later).
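A hedged sketch of how a batch job could be submitted to SLURM from Python; the partition, module and path names are placeholders, not the actual PSI configuration:

```python
# Sketch: submit a SLURM batch job by piping a job script to sbatch.
# Partition, module and path names are placeholders only.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=recon
    #SBATCH --partition=daily
    #SBATCH --cpus-per-task=16
    #SBATCH --mem=64G
    module load python/3.6
    python reconstruct.py /das/work/p12345/scan_0001
""")

result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())   # e.g. "Submitted batch job 123456"
```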
10
Data Catalog and Container Orchestration
- The data catalog is an important component of the overall data management life-cycle:
  - gateway to the archive system for long-term storage
  - necessary component to implement the data policy
- Challenge: integration into existing and historically grown environments demands a flexible framework.
- We use the SciCat data catalog. Its architecture is based on microservices, which are very well suited to run in containers.
- This needs a container orchestration platform; we chose Kubernetes. Our experience with Kubernetes is very good, both in terms of functionality and operational stability. It was initially built for long-running web-service-type applications.
- The persistency layer is implemented via MongoDB (an illustrative query sketch follows below).
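To illustrate the MongoDB persistency layer, a direct query might look like the following; the host, collection and field names are assumptions for this sketch, not the verified SciCat schema:

```python
# Illustrative only: querying the MongoDB persistence layer behind the
# catalog microservices. Host, collection and field names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://scicat-db.example.ch:27017/")
db = client["scicat"]

# Find a few raw datasets belonging to one user group (assumed fields)
for ds in db["Dataset"].find({"ownerGroup": "p12345", "type": "raw"}).limit(5):
    print(ds.get("pid"), ds.get("sourceFolder"))
```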
11
Overall Data Catalog Architecture
12
Kubernetes Dashboard: overview of all test and production environments
13
Single pod details
14
Beamline ingestors based on Node-RED
15
Data catalog GUI, user view
16
Scientific Metadata View
17
Access to the data catalog via an OpenAPI-described REST API
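A minimal sketch of what a query against such a REST endpoint could look like; the base URL, token handling and filter syntax are assumptions modeled on a generic LoopBack-style API, not verified against the PSI deployment:

```python
# Sketch: fetching dataset entries through the catalog's REST API.
# Base URL, token handling and filter syntax are assumptions.
import requests

BASE = "https://scicat.example.psi.ch/api/v3"   # hypothetical base URL
TOKEN = "..."                                   # obtained via a login endpoint

resp = requests.get(f"{BASE}/Datasets",
                    params={"access_token": TOKEN, "filter": '{"limit": 5}'},
                    timeout=30)
resp.raise_for_status()
for ds in resp.json():
    print(ds["pid"], ds.get("datasetName"))
```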
18
Containers for Data Analysis
Disclaimer: only minimal own experience so far.
Potential advantage: adaptability to existing environments at different sites:
- containers allow providing OS environments tailored to the needs of the different scientist groups
- containers make it easier to share full work environments
New container implementations for better HPC support:
- Shifter-NG: Linux containers for HPC (NERSC, CSCS; tested within an HEP application together with Science IT). Allows an HPC system to let end-users run a Docker image efficiently and safely. Integrates with batch scheduler systems; security oriented towards HPC systems; native performance of custom HPC hardware; compatible with Docker.
- Singularity: "Mobility of Compute". Leverages resources like HPC interconnects, resource managers, file systems, GPUs and/or accelerators, etc. (a wrapper sketch follows below).
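A hedged sketch of how a containerized analysis step could be wrapped with Singularity inside a batch job; the image path and bind mounts are illustrative only:

```python
# Sketch: running an analysis step inside a Singularity container,
# binding the parallel filesystem into the container. Paths are made up.
import subprocess

image = "/das/containers/analysis.simg"     # hypothetical image location
subprocess.run(["singularity", "exec",
                "--bind", "/das/work:/data",   # expose the parallel FS inside
                image,
                "python", "/opt/analysis/reconstruct.py", "/data/scan_0001"],
               check=True)
```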
19
Kubernetes for Data Analysis ?
- Originally, the main application area was (long-running) web services.
- Can be exploited for Jupyter notebooks (ready-to-use Helm charts exist).
- Meanwhile the Kubernetes concepts have been extended: Jobs/Batch resources (a sketch follows below).
- Ideas exist for integration with Shifter/Singularity-type containers in Kubernetes (OCI-compliant runtimes).
- Remark: Kubernetes is also planned to be used by the Controls colleagues for machine and beamline control system infrastructure.
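A sketch of a Kubernetes batch Job created through the official Python client, as one way the Jobs/Batch resources above could be used for analysis tasks; the image, namespace and command are placeholders:

```python
# Sketch: creating a Kubernetes batch Job with the official Python client.
# Image, namespace and command are placeholders for this illustration.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() in-cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="analysis-job"),
    spec=client.V1JobSpec(
        backoff_limit=2,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="analysis",
                    image="registry.example.ch/analysis:latest",
                    command=["python", "reconstruct.py"])]))))

client.BatchV1Api().create_namespaced_job(namespace="daas-test", body=job)
```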
20
Some Open points and Questions
If we make use of container technology in an HPC/HTC environment:
- Which container image type(s) should we use? Should it be Docker-compatible in any case?
- How do we overcome Docker's limitations? Docker's main design goal is to provide completely independent container images, while an HPC cluster is always built on the sharing of some especially efficient hardware components. Is there inefficiency on parallel filesystems due to its stacked container format?
- How do we handle storage resources efficiently for HTC applications? This implies integration of parallel filesystems, network performance, and security aspects.
- How do we manage resources (batch systems vs. container orchestration, HPC cluster vs. "cloud")? Do we choose one or the other, or both merged in some way?
- Do containers make the virtualization layer unnecessary? Or do we still need it, e.g. for optimal reproducibility?
21
Tools (in use) at other sites
CERN (for HEP use cases):
- Reusable Analysis platform REANA/RECAST: a workflow engine where each step is a Kubernetes Job
- HTCondor with Docker/Kubernetes
SDSC/EPFL: Renga: securely manage, share and process large-scale data across untrusted parties operating in a federated environment. Automatically captures complete lineage up to the original raw data for detailed traceability, auditability and reproducibility.
AiiDA: Automated Interactive Infrastructure and Database for Computational Science
Materials Cloud: a platform for open science
22
Summary
- This is just a sketch of the situation as far as I am aware of it.
- There are a lot of interesting developments currently ongoing.
- The whole topic is work in progress, constantly moving and adapting.
- The future path(s) still need to be explored by all of us, and it will help to share our experiences.
- Finding good solutions while minimizing the risks favors an iterative approach, and requires the resources and willingness to test, implement (or abandon) solutions.
23
Acknowledgements
Thanks go to all colleagues in the IT department involved, in particular Science IT, and to the colleagues from the ESS project.