
Dr Andrew Peter Hammersley, ESRF: ESRF MX COMPUTATIONAL AND NETWORK CAPABILITIES (Note: This is not my field, so detailed questions will have to be relayed to the relevant people. Alexander Popov of the ESRF MX Group is a remote participant.)

CURRENT ESRF MX PROCESSING HARDWARE
- ESRF has 6 (or 7) MX end-stations (all can be used simultaneously)
- Eiger detectors (1 for MX and 1 for Soft Matter) connected by 10 Gbit Ethernet to "Local Buffer Storage" and on to storage disks and a dedicated MX processing cluster
- General Parallel File System (GPFS) is used for high throughput to / from disk
- MX Processing Cluster (Linux 64, Debian 7):
  - mxproc: BULLX, 9 blades, 108 cores, X GHz 6-core, 147 Gflops, 2 GB RAM per core
  - mxnice: Dell C8220, 9 blades, 180 cores, E5-2680v2 2.8 GHz 10-core, 227 Gflops, 3.2 GB 1600 MHz RAM per core
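The core counts above are consistent with dual-socket blades; the socket count is an assumption, since the slide does not state it:

```python
# Consistency check of the cluster core counts quoted above,
# assuming dual-socket blades (not stated explicitly on the slide).
blades = 9
mxproc_cores = blades * 2 * 6    # dual 6-core CPUs per blade
mxnice_cores = blades * 2 * 10   # dual E5-2680v2 10-core CPUs per blade
print(mxproc_cores, mxnice_cores)  # 108 180
```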

ESRF PROCESSING CLUSTER
[Photo of the cluster; courtesy of E. Eyer, ESRF]

ESRF NETWORK SCHEMATIC (Systems and Communications)
[Network diagram: an Infiniband backbone and public-network router linking old detector PCs (with and without 1 GbE / 10 GbE), a Dectris Eiger-4M PC, fast and slow detector PCs with 10 GbE, detector PCs with GPFS clients, the Local Buffer Store (LBSv1), GPFS NSD and admin nodes, and the compute clusters; file access via NFS, CIFS and GPFS.]

CURRENT ESRF MX PROCESSING SOFTWARE
- User interface / control software: MXCuBE
- Viewing and quick-look quality control: ADXVIEW and ALBULA for viewing; DOZOR (Alexander Popov) for B-factor, also running on the general MX processing cluster
- Integration software: mainly XDS (MOSFLM is used for screening crystals); ~80-90% of users use XDS at present; typically 3 processing pipelines are used
- Eiger file conversion: the Eiger HDF5 files must be converted for input into XDS or MOSFLM; the conversion programs are called as sub-processes to avoid creating extra files. NOTE: the two converter programs are NOT interchangeable!
- Diagnostics: a test program is run which should take 1 minute at low load
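The sub-process pattern described above (converting frames without leaving intermediate files on disk) can be sketched as follows; the converter command line in the comment is a hypothetical placeholder, since the slides do not name the ESRF converter programs:

```python
import subprocess

def run_converter(cmd):
    """Run a converter as a sub-process and keep its output in memory,
    so no extra intermediate file is created on disk."""
    result = subprocess.run(cmd, capture_output=True, check=True)
    return result.stdout

# Hypothetical usage (converter name and arguments are assumptions):
#   cbf_bytes = run_converter(["hdf5_to_cbf", "master.h5", "--frame", "1"])
```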

CURRENT ESRF MX PROCESSING STATUS
- ~80% of the time, the one-minute diagnostic program runs in roughly one minute
- In the remaining ~20% of cases it can take up to ~10 times longer
- This is unacceptable, and can lead users to think that the programs have crashed
- CPU overload is thought to be the main problem
- Solution: a hardware upgrade is in progress
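A diagnostic like the one described (a reference job with a known nominal runtime, flagged when it runs an order of magnitude slower) can be sketched as a simple wall-clock wrapper; the thresholds mirror the figures above, and the wrapper itself is an illustrative assumption, not the actual ESRF tool:

```python
import subprocess
import time

EXPECTED_S = 60        # nominal runtime of the one-minute reference job
SLOWDOWN_LIMIT = 10    # flag runs ~10x slower, matching the figure above

def timed_run(cmd):
    """Run the diagnostic job, returning (elapsed seconds, overloaded?)."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    return elapsed, elapsed > EXPECTED_S * SLOWDOWN_LIMIT
```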

ESRF MX DATA PROCESSING UPGRADE
- Add 8 new blades: 224 cores, E5-2680v4 2.4 GHz 14-core, 2.3 GB 2400 MHz RAM per core
- Processing capacity should be doubled (DIALS should be added as another processing pipeline)
- The upgraded cluster should be available for MX processing in September 2016
- The "Local Buffer Store" is no longer necessary, as GPFS is now performant enough writing directly to disk; it is due to be phased out in ~1 year
- A "Local Preview Processor" may be added instead, to provide dedicated processing power for preview / data quality analysis
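A back-of-envelope check of the core counts before and after the upgrade; the stated "doubling" of capacity presumably also reflects the faster E5-2680v4 cores, not the core count alone:

```python
# Core counts taken from the slides above.
before = 108 + 180      # mxproc + mxnice
after = before + 224    # plus the 8 new blades
print(before, after, round(after / before, 2))  # 288 512 1.78
```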

ESRF: PRESENT HIGH RATE DATA MX EXPERIMENT
- Up to 750 images per second; bursts of 1000-2000 images
- Typical data collection: 180 degrees in 0.1-degree slices; 100 images per second, giving an 18-second collection
- ~3 Mbytes per image, so ~300 Mbytes per second, which exceeds 1 Gbit Ethernet
- Time between experiments: minimum ~10 seconds, typically 5 minutes
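The figures above are self-consistent, as a quick calculation shows:

```python
# Back-of-envelope check of the data rates quoted above.
images_per_s = 100
mb_per_image = 3
rate_mb_s = images_per_s * mb_per_image   # 300 Mbytes/s
rate_gbit_s = rate_mb_s * 8 / 1000        # 2.4 Gbit/s, > 1 Gbit Ethernet

frames = 180 * 10                         # 0.1-degree slices over 180 degrees
duration_s = frames / images_per_s        # 18 s, matching the slide
print(rate_mb_s, rate_gbit_s, duration_s)  # 300 2.4 18.0
```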

ESRF CURRENT DATA RATE CHALLENGE
Current MX Eiger set-up / data reduction problem:
- 100 Tbyte experiment; reduce (by azimuthal integration) by a factor of 10
- Investigating Remote Direct Memory Access (RDMA) for fast transparent data access: the data reduction processor will access the detector data automatically
- 35 Gbits per second measured; proof of method expected by end of June
- The concept could be used for MX for previewing and quality control

[Diagrams: current set-up - Detector PC -> 10 Gbit dedicated link -> Local Buffer Store -> GPFS over Infiniband -> MX Processing Cluster; proposed set-up - Detector PC -> 40 Gbit dedicated link -> Reduction Processor (SSDs, GPUs, RDMA) -> Infiniband -> Processing / Storage]
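The factor-10 reduction by azimuthal integration mentioned above amounts to collapsing each 2D detector image into a 1D radial profile. A minimal pure-Python sketch of the idea follows; a production implementation would use optimized libraries and the GPUs shown in the diagram:

```python
import math

def azimuthal_integrate(image, cx, cy, nbins):
    """Collapse a 2D image into a 1D radial profile by averaging all
    pixels whose distance from the beam centre (cx, cy) falls in the
    same radius bin."""
    h, w = len(image), len(image[0])
    rmax = math.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    sums, counts = [0.0] * nbins, [0] * nbins
    for y in range(h):
        for x in range(w):
            b = min(int(math.hypot(x - cx, y - cy) / rmax * nbins), nbins - 1)
            sums[b] += image[y][x]
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```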

ESRF: FUTURE CHALLENGES
Project for XFEL-type data collection:
- Embed crystals in gel with "0.5 crystals per shot on average"; squeeze the gel continuously through the beam
- Collect data at 500 Hz (750 Hz works, but not continuously for long periods of time)
- 1.8 million images per hour; roughly 6 Tbytes per hour
- Each crystal is randomly oriented
- Data set: 20,000 to 30,000 diffraction images, selected from ~100,000 images
- Integration with CrystFEL and/or cctbx
- Where will the processing power come from? (Initially one-off experiments)
- (A big ESRF-wide problem: too many files for the file systems!)
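The rates quoted above check out, as shown below; the per-image size is derived here, not stated on the slide:

```python
# Quick consistency check of the XFEL-mode figures quoted above.
rate_hz = 500
images_per_hour = rate_hz * 3600         # 1.8 million images per hour
mb_per_image = 6e6 / images_per_hour     # ~3.3 MB, from 6 Tbytes/hour

# At "0.5 crystals per shot" roughly half the frames contain a crystal,
# so keeping 20,000-30,000 images out of ~100,000 collected is plausible
# once weak or unindexable hits are discarded.
print(images_per_hour, round(mb_per_image, 1))  # 1800000 3.3
```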

ACKNOWLEDGEMENTS
Thanks for providing the information:
- Olof Svensson: Data Analysis Unit
- David Von Stetten: MX Group
- Jens Meyer: Beam-line Control Unit
- Bruno Lebayle: Systems and Communications
- Andy Götz: Software Group