Presentation transcript:

The Network & ATLAS
Workshop on transatlantic networking, panel discussion
CERN, June 11, 2010
Kors Bos, CERN, Geneva & NIKHEF, Amsterdam (ATLAS Computing Coordinator)

Data Placement Hierarchy

Data placement & usage
We have disk space, and it is almost full.
We have ~800 active users, ~1M jobs/week, ~1B events/week.
But …

Alternative: more caching and less pre-placement
Pull needed data to where there is free CPU power
Get the data from the "best" site(s) – the network should know what is "best"
Retire the least frequently used data
Dynamically make the most popular data available, across cloud boundaries
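A minimal sketch of the pull-based model described on this slide, assuming invented site names, dataset sizes and a placeholder source-ranking function; the real ATLAS data-brokering machinery is far more elaborate than this.

```python
# Illustrative sketch only: a pull-based cache with least-frequently-used
# retirement. Site names, sizes, and the ranking logic are invented for
# the example and do not reflect the real ATLAS brokering layer.

class SiteCache:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.used_tb = 0.0
        self.datasets = {}      # dataset name -> size in TB
        self.access_count = {}  # dataset name -> number of local accesses

    def touch(self, name):
        self.access_count[name] = self.access_count.get(name, 0) + 1

    def pull(self, name, size_tb, source_ranking):
        """Fetch a dataset from the 'best' source, retiring the least
        frequently used datasets when the cache is full."""
        while self.used_tb + size_tb > self.capacity_tb and self.datasets:
            victim = min(self.datasets, key=lambda d: self.access_count.get(d, 0))
            self.used_tb -= self.datasets.pop(victim)
            self.access_count.pop(victim, None)
        source = source_ranking(name)  # the network layer decides what "best" means
        print(f"pulling {name} ({size_tb} TB) from {source}")
        self.datasets[name] = size_tb
        self.used_tb += size_tb
        self.touch(name)


def cheapest_source(name):
    # Placeholder ranking: in reality this would combine network topology,
    # current load, and replica availability.
    replicas = {"data10_7TeV.AOD": ["BNL", "CERN"], "mc10.ESD": ["TRIUMF"]}
    return sorted(replicas.get(name, ["CERN"]))[0]


cache = SiteCache(capacity_tb=100)
cache.pull("data10_7TeV.AOD", 30, cheapest_source)
```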

Network of sites with a big cache
Few sites equipped to archive data
Latency may favor a per-continent view
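One way to read "latency may favor a per-continent view": when several sites hold a copy, prefer a source on the requesting site's continent. A toy illustration, with a made-up site-to-continent mapping and round-trip times:

```python
# Toy example: rank replica sources by latency, which in this model is
# determined purely by continent membership. Sites and RTTs are invented.

CONTINENT = {"CERN": "EU", "NIKHEF": "EU", "BNL": "NA", "TRIUMF": "NA"}
RTT_MS = {("EU", "EU"): 20, ("NA", "NA"): 30, ("EU", "NA"): 90, ("NA", "EU"): 90}

def rank_sources(requesting_site, replica_sites):
    """Order candidate sources by estimated round-trip time."""
    home = CONTINENT[requesting_site]
    return sorted(replica_sites, key=lambda s: RTT_MS[(home, CONTINENT[s])])

print(rank_sources("NIKHEF", ["BNL", "CERN", "TRIUMF"]))
# -> ['CERN', 'BNL', 'TRIUMF']: the intra-continent copy wins on latency
```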

Evolution of requirements
No longer a Tier hierarchy: T1s and T2s will become equivalent in the network (OPNng)
Traffic between countries as much as within them
No longer disk space but network bandwidth will scale with the number of users and the amount of data
A traffic pattern driven more by analysis-user demand than by the DC of pre-placement
We must have an intelligent layer for data brokering
Bandwidth between each pair of sites must be at least what we now have between a T1 and a T2 site within a cloud
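A back-of-the-envelope way to see why bandwidth, rather than disk, becomes the scaling parameter in a demand-driven model. All numbers here are illustrative, not ATLAS planning figures.

```python
# Back-of-the-envelope estimate: sustained bandwidth needed for
# demand-driven transfers between two sites. All inputs are illustrative.

def required_gbps(dataset_tb, transfers_per_day, usable_fraction=0.7):
    """Sustained bandwidth (Gb/s) to move `dataset_tb` terabytes
    `transfers_per_day` times per day, assuming only `usable_fraction`
    of the link is realistically available for this traffic."""
    bits_per_day = dataset_tb * 8e12 * transfers_per_day
    seconds_per_day = 24 * 3600
    return bits_per_day / seconds_per_day / 1e9 / usable_fraction

# e.g. pulling a 25 TB popular dataset to a site twice a day
print(f"{required_gbps(25, 2):.1f} Gb/s")  # ~6.6 Gb/s sustained
```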