An attempt to summarize… or… some highly subjective observations. Matthias Kasemann, CERN & DESY.


Topics I want to present here…
This is NOT a summary of the summaries. These are my observations (including my misunderstandings and errors).
CHEP07: the Conference. Topics presented in plenary sessions here:
- Computing issues at our Laboratories: 3x
- Computing Technology & Evolution: 4x
- Computing for HEP: 5x
- Grid Projects: Status and Developments: 5x
- Networking: 2x
Some Conference impressions…

CHEP07: The Conference
Expected audience:
- attract 500 people
- 90% from outside of Canada
- 50% from US
- many from T1 and T2 centers

CHEP07: Some Statistics
429 abstracts submitted by 1208 authors.

Computing at Laboratories
Infrastructure provisioning:


Computing technology evolution
We heard an inspiring presentation by Eng Lim Goh from SGI: … one fully digitized film is 4 PB and needs 1.25 GB/s to play…

Computing technology evolution
James Sexton / IBM: "The next few years of computing are all about fundamental physical limits"

Computing technology evolution
"HEP applications are embarrassingly parallel, as long as there is enough memory for the application"
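To make the quoted point concrete, here is a minimal sketch, entirely my own and not from the talk, of what "embarrassingly parallel" means for HEP workloads. The Event type, the reconstruct function and all numbers are hypothetical placeholders; the point is that events are independent, so worker threads need no communication, while each concurrent worker still needs memory for its own event (and, in practice, for geometry and calibration data).

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Event { std::vector<double> hits; };          // stand-in for real event data

double reconstruct(const Event& e) {                  // hypothetical per-event work
    double sum = 0.0;
    for (double h : e.hits) sum += h * h;
    return sum;
}

int main() {
    std::vector<Event> events(10000, Event{std::vector<double>(100, 1.0)});
    std::vector<double> results(events.size());

    const std::size_t nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            // Static partition: each thread owns a disjoint subset of events,
            // so no locking is needed: events never talk to each other.
            for (std::size_t i = t; i < events.size(); i += nThreads)
                results[i] = reconstruct(events[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```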

S.Jarp: LHC Software and CPU architectures
"We have floating point work wrapped in 'if/else' logic"; overall estimate: 50% is floating point.
"Our LHC programs typically utilize only 1 instruction per CPU clock cycle (= 1/8 of maximum)"
"We are not getting out of first gear"
Recommendations:
1. Get ready for LHC
2. Increase Instruction Level Parallelism; assist the compiler to make it more effective
3. Improve the multithreading possibilities of applications
4. Simplify / restructure code (a toy illustration of points 2 and 4 follows below)
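A toy sketch of the kind of restructuring meant by recommendations 2 and 4. This is my own illustration, not code from the talk; the calorimeter-style loop, the threshold and the 0.01 noise weight are invented. The first version hides the floating point work behind a data-dependent if/else, which defeats vectorization; computing both branches and selecting with arithmetic gives the compiler a straight-line loop it can keep filled with more than one instruction per cycle.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Branchy original: the if/else depends on the data, so the compiler
// typically emits a scalar loop with hard-to-predict branches.
double energy_branchy(const std::vector<double>& hits, double threshold) {
    double e = 0.0;
    for (double h : hits) {
        if (h > threshold)
            e += std::sqrt(h);   // "interesting" hit
        else
            e += 0.01 * h;       // noise treatment
    }
    return e;
}

// Restructured: both results are computed and the branch becomes a select,
// so the loop body is straight-line code the compiler can vectorize.
// Assumes hit amplitudes are >= 0, so the sqrt is always well defined.
double energy_branchless(const std::vector<double>& hits, double threshold) {
    double e = 0.0;
    for (double h : hits) {
        const double isSignal = (h > threshold) ? 1.0 : 0.0;
        e += isSignal * std::sqrt(h) + (1.0 - isSignal) * (0.01 * h);
    }
    return e;
}

int main() {
    std::vector<double> hits(1000, 2.0);
    std::printf("%f %f\n", energy_branchy(hits, 1.0), energy_branchless(hits, 1.0));
}
```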

S.Jarp: LHC Software and CPU architectures (2)

The LHC & The Experiments (T.Virdee)
Many of us work on the software and computing aspects needed to do a physics experiment. T.Virdee started with the physics…

The LHC & The Experiments (T.Virdee)
… reminded us of the schedule… Some scary pictures:

The LHC & The Experiments (T.Virdee)
… and concluded. I would like to add: … if we manage to analyze the data (which we will… one way or the other).

LHC Computing (I.Fisk)
"In the beginning computing was centralized" - no longer.

LHC Computing: work ahead of us

Analysis Tools for LHC (D.Liko)

DAQ for LHC Experiments (S.Chapeland)

Running Experiments using the Grid (F.Wuerthwein)

Advanced Computation for the ILC (P.Tennenbaum)

WLCG: Status & Challenges (Les Robertson)

WLCG: work ahead of us, concerns

WLCG Workshop Summary (J.Shiers)

Grid Interoperability (L.Field)

Grid Interoperability (L.Field)
I might add: this work is key to collaborating and succeeding in distributed computing.

The Future of Grid Computing (M.Livny)
A great talk; here are my notes:
Grid computing is no longer…
- an easy source of money
- a tool to get the troops mobilized
- an easy sell of software tools
- an easy way to get papers published or press releases posted
Distributed computing is here to stay… and is doing well.
LHC came up with the Tier architecture well before Grid computing was founded.
Claims of benefits provided by distributed computing systems:
- high availability + reliability
- high system performance
- ease of modular and incremental growth
- automatic load + resource sharing
- good response to temporary overload
- easy expansion in capacity and/or function

The Future of Grid Computing (M.Livny)
… and some more notes:
All the components of the system should be unified in their desire to achieve a common goal. This goal will determine the rules according to which each of these elements will be controlled.
- in contradiction to a top-down organization
When we focus on fundamentals we can deliver stable distributed computing capabilities.
Our software stack is a mish-mash, not for technical reasons: software sources come with different quality assurance procedures.
Where are we heading?
- we have to improve our software quality
- we have to restructure our organization
- we must add capabilities:
  -- storage management
  -- heterogeneous security model
  -- intra-VO scheduling via "just in time" overlay frameworks (see the sketch below)
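The last bullet, intra-VO scheduling via "just in time" overlay frameworks, is essentially the pilot-job, late-binding idea: generic pilots are submitted through the Grid, and real work is bound to a resource only once a pilot starts running and pulls a task from the VO's own queue. Below is a minimal sketch of that idea, my own illustration rather than anything shown in the talk; VoTaskQueue and pilot are invented names, and plain threads stand in for Grid worker nodes.

```cpp
#include <cstdio>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>
#include <vector>

class VoTaskQueue {                       // the VO-side "overlay" scheduler
public:
    void submit(std::string task) {
        std::lock_guard<std::mutex> lock(m_);
        tasks_.push(std::move(task));
    }
    std::optional<std::string> next() {   // called by pilots, not by the Grid
        std::lock_guard<std::mutex> lock(m_);
        if (tasks_.empty()) return std::nullopt;
        std::string t = std::move(tasks_.front());
        tasks_.pop();
        return t;
    }
private:
    std::mutex m_;
    std::queue<std::string> tasks_;
};

// A pilot: once it lands on a worker node it keeps pulling and running
// VO tasks until the queue is drained, then exits and frees the slot.
void pilot(int id, VoTaskQueue& queue) {
    while (auto task = queue.next())
        std::printf("pilot %d runs %s\n", id, task->c_str());
}

int main() {
    VoTaskQueue queue;
    for (int i = 0; i < 12; ++i) queue.submit("analysis job " + std::to_string(i));

    std::vector<std::thread> pilots;      // threads stand in for Grid worker nodes
    for (int i = 0; i < 3; ++i) pilots.emplace_back(pilot, i, std::ref(queue));
    for (auto& p : pilots) p.join();
}
```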

Networks for HEP and Data Intensive Science and the Digital Divide (H.Newman)

Networks for HEP and Data Intensive Science and the Digital Divide (H.Newman)

CHEP Impressions

Thank you…
… for a very well organized conference
… for interesting presentations and discussions by all of you
… not to forget the progress made through many discussions and working sessions in the hallways and corridors