Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 1
e-Infrastructures & Big Data Handling as a cross-topic challenge within CREMLIN
Volker Gülzow, DESY
Moscow, Oct 7th, 2015

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 2
We all know: no computing means
> no experiments
> no science
> Therefore: "computing" has to be included from the beginning
> Computing is a cross-work-package topic in CREMLIN

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 3
No "one size fits all" solution … but many common issues

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 4
Higgs Discovery

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 5
Higgs Discovery Data Volume

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 6
PETRA III: Limnephilus flavicornis, head & thorax. Courtesy: Dr. F. Beckmann

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 7

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 8
New Challenges

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 9

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 10
Important elements of ICT for megascience infrastructures
> Computing is fairly easy; the software is the key issue
> Develop highly sophisticated "Big Data" management solutions
 … fitting both on-site and off-site experiments
 Provide sufficient hardware resources (capacity and speed)
 Sufficient network bandwidth
 Think about data formats (e.g. NeXus in photon science; see the sketch below)
 Set up an Authentication and Authorization Infrastructure (AAI)
 Easy access (like eduroam, …)
 Portals to access the data remotely
 Data policies (ownership, how long to store, who has access, …)
 Long-term data preservation
 Develop solutions in cooperation with other labs
> New "kids on the block" like clouds …
> Don't forget about energy consumption
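To make the data-format point concrete: NeXus files are ordinary HDF5 containers with an agreed group/attribute layout, so they can be written with standard tools. The following is a minimal sketch in Python using h5py; the file name, group layout and detector array are illustrative assumptions, not values taken from the slides.

# Minimal sketch of writing a NeXus-style HDF5 file with h5py.
# File name, group names and the detector frame are hypothetical placeholders.
import h5py
import numpy as np

with h5py.File("scan_0001.nxs", "w") as f:            # hypothetical file name
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"

    instrument = entry.create_group("instrument")
    instrument.attrs["NX_class"] = "NXinstrument"

    detector = instrument.create_group("detector")
    detector.attrs["NX_class"] = "NXdetector"
    # Example detector frame: a 512 x 512 image of 16-bit counts, gzip-compressed
    detector.create_dataset(
        "data",
        data=np.zeros((512, 512), dtype="uint16"),
        compression="gzip",
    )

A real beamline writer would add further metadata (sample, beam energy, timestamps) following the relevant NeXus application definition.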

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 11
dCache Big Data Cloud: LOFAR antennas (huge amounts of data), XFEL free-electron lasers (fast ingest), supercomputing for the Human Brain, intensity frontier

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 12
dCache – ownCloud data management: unlimited hierarchical storage space across SSDs, spinning disks, tape, Blu-ray, …; accessed via NFS 4.1, CDMI and Web 2.0 interfaces
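Because dCache speaks standard protocols, data can be retrieved with ordinary HTTP/WebDAV clients in addition to NFS 4.1 mounts. The following is a minimal, hypothetical sketch in Python using the requests library; the endpoint URL, file path and token are placeholders, not real DESY values.

# Minimal sketch: reading a file from a dCache WebDAV/HTTP door with requests.
# Endpoint, path and bearer token are hypothetical placeholders.
import requests

ENDPOINT = "https://dcache.example.org:2880"    # hypothetical WebDAV door
PATH = "/pnfs/example.org/data/scan_0001.nxs"   # hypothetical file path
TOKEN = "..."                                   # e.g. a macaroon or OIDC token

resp = requests.get(
    ENDPOINT + PATH,
    headers={"Authorization": f"Bearer {TOKEN}"},
    stream=True,
    timeout=60,
)
resp.raise_for_status()

with open("scan_0001.nxs", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1 << 20):  # stream in 1 MiB chunks
        out.write(chunk)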

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 13

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 14

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 15
Connectivity
> Existing links for LHC:
> JINR has a 10 G link to Amsterdam (and then via 100 G to CERN)
> Kurchatov has 10 G to the Wigner Centre
> Tier 2 connectivity goes via NORDUnet or ESnet, but not GÉANT
> Connectivity is essential for megascience facilities (… Gb/s); see the estimate below
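A rough, illustrative estimate of why such bandwidths matter; the data volume and link efficiency below are assumptions, not figures from the slides.

# Back-of-the-envelope transfer time: how long does 1 PB take over a given link?
volume_bytes = 1e15          # 1 PB of experiment data (assumption)
link_gbps = 10               # nominal link speed in Gb/s
efficiency = 0.8             # assumed usable fraction of the nominal bandwidth

effective_bps = link_gbps * 1e9 * efficiency   # bits per second actually achieved
seconds = volume_bytes * 8 / effective_bps     # bytes -> bits, then divide by rate
print(f"{seconds / 86400:.1f} days")           # roughly 11.6 days at 10 Gb/s

At 100 Gb/s the same transfer would take about a day, which is why multi-10 G to 100 G links are a prerequisite for megascience data distribution.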

Cremlin Kick Off | DESY-IT | V. Gülzow | Slide 16
Conclusion
> There is no "one size fits all" solution
> If possible, use standard data formats
> All participating centres have outstanding knowledge in ICT; let's put things together
> Common data management solutions can form a basis for big data management
> A proper user interface is mandatory
> Connectivity to the world is a key factor
> Well supported by the EC