One independent ‘policy-bridge’ PKI

Presentation transcript:

One independent ‘policy-bridge’ PKI
Image source: joint CERN (wLCG) and EGI.
Organisations participating in the global collaboration of e-Infrastructures, even just for wLCG, supporting the CERN LHC programme:
- More than 200 independent institutes with end-users
- More than 50 countries & regions
- More than 300 service centres
- A handful of regional ‘service coordination organisations’
- 300 000 CPU cores, 200+ PByte storage
- One independent ‘policy-bridge’ PKI

Big ‘as in Large’ Data – Atlas only. Source: http://dashb-atlas-ddm.cern.ch/ddm2

Distributed analysis – ‘Atlas neighbours’. Source: http://dashb-atlas-ddm.cern.ch/ddm2

Interconnecting the Grid – the LHCOPN/LHCOne network
- LHC Optical Private Network
- 10 – 100 Gbps dedicated global networks
- Scaled to T0–T1 data transfers

LHC OPN and LHCOne

The Flow of Data at Nikhef
- 10 – 40 Gbps per server
- 240 Gbps interconnect
- 44 disk servers, ~3 PiB (~3000 TByte)
- 2 control & DB nodes
- Peerings: SURFnet, SURFsara, Kurchatov, AMOLF, CWI, LHCOPN, LHCOne via SARA
- >200 Gbps uplinks
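A quick back-of-envelope check of these figures (a sketch: the per-server capacity and the oversubscription ratio below are derived, not quoted on the slide):

```python
# Back-of-envelope check of the Nikhef storage figures quoted above.
# Assumption: capacity is spread roughly evenly over the 44 disk servers.

TIB = 1024**4                       # bytes in a tebibyte

total_capacity = 3 * 1024 * TIB     # ~3 PiB of disk storage
disk_servers = 44

per_server = total_capacity / disk_servers / TIB
print(f"~{per_server:.0f} TiB per disk server")     # roughly 70 TiB each

# Aggregate server bandwidth vs. the interconnect: 44 servers at 10-40 Gbps
# each could in principle source far more than the 240 Gbps fabric,
# so the interconnect is oversubscribed by design.
print(f"peak server capacity: {disk_servers*10}-{disk_servers*40} Gbps vs 240 Gbps fabric")
```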

Nikhef Data Processing network

Data Flows in the LHC Computing Grid

But that’s only a small subset
The LCG “FTS” visualisation sees only part of the data:
- it only shows the centrally managed data transfers
- it sees only traffic from Atlas, CMS, and LHCb
- it cannot show the transfer quality, nor the bandwidth used
But each of our nodes sees all of its transfers: server logging is in itself data, and we collect it all.

One day’s worth of logs … ~12 GB/day
tbn18.nikhef.nl:/var/log/
631M dpm/log.1
1.1G dpns/log.1
639M srmv2.2/log.1
plus 44 disk server nodes at ~250 MByte/day each.
And then our storage manager is still ‘decent’ …
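The daily total follows from the numbers above; a one-line sanity check (values taken from the slide):

```python
# Sanity check of the daily log volume quoted above.
head_node_logs_mb = 631 + 1100 + 639          # dpm + dpns + srmv2.2 on tbn18
disk_server_logs_mb = 44 * 250                # 44 disk servers at ~250 MB/day each

total_gb = (head_node_logs_mb + disk_server_logs_mb) / 1000
print(f"~{total_gb:.0f} GB of logs per day")  # ~13 GB/day, the same order as the ~12 GB quoted
```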

Big Data Analytics for log analysis
Analysis of log data is a typical ‘big data’ problem: CERN tried Hadoop (‘map-reduce’), RAL went with … ELK*. For logs specifically, it’s mostly about efficient search:
- ElasticSearch (www.elastic.co) – the search and storage engine
- LogStash – collects and parses logs, imports them into ElasticSearch
- Kibana – analysis and graphing front end, with queries in Apache Lucene syntax
Integrated into a single ‘stack’: ELK.
*Appleyard et al., http://pos.sissa.it/archive/conferences/239/027/ISGC2015_027.pdf
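In this setup LogStash does the collection and parsing; purely as an illustration, here is a minimal Python sketch of the same idea using the official Elasticsearch client. The index name (dpm-logs), the log line format, and the field names are assumptions made for the sketch, not the actual DPM log format or the Nikhef configuration.

```python
# Minimal sketch of what LogStash does for us: parse a storage-server log line
# and index it into Elasticsearch. Index name, field names and the regex are
# illustrative assumptions, not the real DPM log format.
import re
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed ES endpoint

# Hypothetical log format: "<timestamp> <client_host> <operation> <path> <bytes>"
LINE_RE = re.compile(
    r"(?P<ts>\S+ \S+) (?P<client>\S+) (?P<op>GET|PUT) (?P<path>\S+) (?P<bytes>\d+)"
)

def index_line(line: str) -> None:
    m = LINE_RE.match(line)
    if not m:
        return                                 # skip lines we cannot parse
    doc = {
        "@timestamp": m.group("ts"),
        "client": m.group("client"),
        "operation": m.group("op"),
        "path": m.group("path"),
        "bytes": int(m.group("bytes")),
        "ingested": datetime.now(timezone.utc).isoformat(),
    }
    es.index(index="dpm-logs", document=doc)

with open("/var/log/dpm/log") as fh:           # one of the log files listed above
    for line in fh:
        index_line(line)
```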

Analyse ‘by hand’: Kibana & Lucene Graphic from Rob Appleyard, RTFC RAL – at ISGC 2015

ELK Cluster and Interface

Filled with data

Filling it up: from 0 to ~500 million records, at ~16 M records/day. Graphics courtesy of Jouke Roorda, Olivier Verbeek, Rens Visser.
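At ~16 M records/day, ~500 million documents corresponds to roughly a month of logs. A sketch of how the fill rate can be read back out of the cluster with a date-histogram aggregation, using the same assumed index and timestamp field as above:

```python
# Count indexed documents per day with a date-histogram aggregation
# (assumed index "dpm-logs" and field "@timestamp").
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="dpm-logs",
    size=0,                                   # we only want the aggregation
    aggs={
        "per_day": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"}
        }
    },
)
for bucket in resp["aggregations"]["per_day"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```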

Data flows: a user at UC Irvine reading a data set from Nikhef, with some background traffic from CERN and KIT Karlsruhe. Graphics courtesy of Jouke Roorda, Olivier Verbeek, Rens Visser.
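A sketch of the kind of aggregation behind such a picture: total bytes served per client over the last day. The field names ("client.keyword", "bytes") and the index are the same illustrative assumptions as in the earlier sketches, not the actual mapping:

```python
# Sum the transferred bytes per client host over the last 24 hours
# (assumed index "dpm-logs", keyword subfield "client.keyword").
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="dpm-logs",
    size=0,
    query={"range": {"@timestamp": {"gte": "now-1d"}}},
    aggs={
        "per_client": {
            "terms": {"field": "client.keyword", "size": 10},
            "aggs": {"volume": {"sum": {"field": "bytes"}}},
        }
    },
)
for bucket in resp["aggregations"]["per_client"]["buckets"]:
    print(bucket["key"], bucket["volume"]["value"], "bytes")
```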

Queries and responses. Graphics courtesy of Jouke Roorda, Olivier Verbeek, Rens Visser.

Code sample: Jouke Roorda, PDP-sample-app