Summary: GRID and Computing
Takashi Sasaki, KEK Computing Research Center

Covered talks
Extension:
– LHC_2 ATLAS Computing
– Comp_3 GRID Interoperability
– SDA_1 Event generators and Higgs Physics at LHC
New proposal:
– Bio_1 Geant4 new developments

LHC_2 ATLAS Computing

Activities in 2006
Mainly tests for data transfer:
– In the overall ATLAS framework: "SC4" (Service Challenge 4)
– Also special tests
Communications mainly by e-mail (visits in February and March 2007)

SC4 (Lyon → Tokyo)
– RTT (Round Trip Time) ~ 280 msec
– The available bandwidth limited to 1 Gbps
– Linux kernel 2.4, no tuning, standard LCG middleware (GridFTP): ~ 20 MB/s (15 files in parallel, each with 10 streams)
– Not satisfactory (packet loss)
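The transfer setup above can be driven by a short script. Below is a minimal sketch, assuming the standard GridFTP client globus-url-copy is installed and a valid grid proxy exists; the host names, paths, buffer size and counts are illustrative placeholders rather than the actual SC4 configuration.

```python
# Minimal sketch of a multi-file, multi-stream GridFTP transfer test.
# Host names, paths, and counts are placeholders, not the real SC4 setup.
import subprocess

SRC = "gsiftp://se.example-lyon.fr/atlas/sc4/file{:03d}.dat"    # hypothetical source
DST = "gsiftp://se.example-tokyo.jp/atlas/sc4/file{:03d}.dat"   # hypothetical destination

procs = []
for i in range(15):                        # 15 files transferred in parallel
    cmd = [
        "globus-url-copy",
        "-p", "10",                        # 10 parallel TCP streams per file
        "-tcp-bs", "8388608",              # 8 MB TCP buffer for the ~280 ms RTT path
        SRC.format(i),
        DST.format(i),
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:                            # wait for all transfers to finish
    p.wait()
```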

Test with iperf (memory to memory)
Linux kernel congestion control:
– TCP Reno vs. BIC TCP
– Also tried PSPacer (from AIST, Tsukuba)
Best result: BIC TCP + PSPacer
Tokyo → Lyon: > 800 Mbps (with 2 streams)
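How such a memory-to-memory test is run can be illustrated with a minimal sketch, assuming an iperf server ("iperf -s") is already running at the far end and that the sending host has a 2.6 kernel with BIC TCP available; the host name, window size and durations below are placeholders.

```python
# Minimal sketch of the iperf memory-to-memory test with BIC TCP selected.
# Assumes "iperf -s" is running on the remote host; all values are placeholders.
import subprocess

REMOTE = "iperf.example-lyon.fr"    # hypothetical far-end host

# Select BIC TCP as the congestion control algorithm (kernel 2.6, requires root).
with open("/proc/sys/net/ipv4/tcp_congestion_control", "w") as f:
    f.write("bic\n")

subprocess.run([
    "iperf",
    "-c", REMOTE,       # client mode, connect to the remote iperf server
    "-P", "2",          # 2 parallel streams
    "-w", "8M",         # large TCP window for the long round trip time
    "-t", "60",         # run for 60 seconds
    "-i", "10",         # report every 10 seconds
], check=True)
```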

Summary of iperf results
– SL(C)4 (kernel 2.6 with BIC TCP): much better congestion control than SL3 (kernel 2.4)
– The software pacer (PSPacer by AIST) in addition gives stable and good performance
                 1 stream      10 streams
Lyon → Tokyo     0-5 MB/s      2-20 MB/s
Tokyo → Lyon     10-15 MB/s    44-60 MB/s
                 1 stream      2 to 8 streams
Lyon → Tokyo     45 MB/s
Tokyo → Lyon     70 MB/s       100 MB/s

Year 2007
– Not only purely technical R&D, but also studies of data movement from the physicists' point of view
– The available network bandwidth will increase (probably this year)
– A new computer system has been installed at ICEPP, and more manpower will soon be available for technical studies
– More intensive R&D this year toward the LHC start-up

LHC_2: ATLAS Computing
The Tier-1 center at CC-IN2P3 hosts the Tier-2 center at ICEPP
– Toward LHC commissioning, both of them are working hard to prepare
Monitoring and improvements are necessary for long-distance communication
– Packet loss happens easily if one of the routers between the two points becomes busy; the "traceroute" command can show this
– Standard TCP recovers its window size very slowly after a loss, and this slows down the transfer rate
– PSPacer helped to obtain higher performance
Support for physicists will be considered

Comp_3 GRID interoperability

Project presentation
The LIA "Comp_3" project proposes to work toward Grid interoperability
– It was an accepted project for the 2006 call
– We propose to extend the project for 2007
Work will concentrate mainly on:
– EGEE / NAREGI interoperability
  – Crucial for ILC
  – Will become important for LHC
  – NAREGI is a huge effort in Japan, and will certainly become a piece of the W-LCG organization
– SRB / iRODS data grid (see later)
  – Implementation
  – Development
  – Interoperability with EGEE / NAREGI
NAREGI: … billion Yen, extended up to 2009

Interoperability between EGEE and NAREGI
Two possible approaches:
– Implement the GIN (Grid Interoperability Now) layer in NAREGI
  – Defined by the GIN group of the OGF
  – Short-term solution in order to get the interoperability now
  – Pragmatic approach
– Work with the longer-term standards defined within the OGF
  – Develop a meta-scheduler compatible with many Grid implementations
  – Based on SAGA (Simple API for Grid Applications): "Instead of interfacing directly to Grid services, the applications can access basic Grid capabilities with a simple, consistent and stable API"
  – and on JSDL (Job Submission Description Language)
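As an illustration of the JSDL half of this approach, the sketch below builds a minimal JSDL 1.0 job description using only the Python standard library; the executable and argument are placeholder values, and a SAGA-based meta-scheduler would generate documents of this kind for whichever middleware runs the job.

```python
# Minimal sketch: generate a JSDL 1.0 job description.
# The executable and argument are placeholders for a real job.
import xml.etree.ElementTree as ET

JSDL  = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
ET.register_namespace("jsdl", JSDL)
ET.register_namespace("jsdl-posix", POSIX)

job_def  = ET.Element("{%s}JobDefinition" % JSDL)
job_desc = ET.SubElement(job_def, "{%s}JobDescription" % JSDL)
app      = ET.SubElement(job_desc, "{%s}Application" % JSDL)
posix    = ET.SubElement(app, "{%s}POSIXApplication" % POSIX)
ET.SubElement(posix, "{%s}Executable" % POSIX).text = "/opt/atlas/bin/simulate"  # placeholder
ET.SubElement(posix, "{%s}Argument" % POSIX).text   = "--events=1000"            # placeholder

print(ET.tostring(job_def, encoding="unicode"))
```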

Next steps on NAREGI / EGEE interoperability
– Continue work in both directions: GIN and SAGA / JSDL
– Try cross job submission with both Grid middleware stacks
– Explore data exchange between NAREGI and EGEE
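On the EGEE side, such a cross-submission test would go through the gLite workload management system. Below is a minimal sketch, assuming the gLite UI tools are installed and a valid VOMS proxy exists; the JDL contents and file names are placeholders.

```python
# Minimal sketch of a gLite-side job submission for a cross-submission test.
# Assumes the gLite UI tools and a valid VOMS proxy; JDL values are placeholders.
import subprocess

JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with open("hello.jdl", "w") as f:
    f.write(JDL)

# Submit through the gLite WMS with automatic proxy delegation.
subprocess.run(["glite-wms-job-submit", "-a", "hello.jdl"], check=True)
```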

The Storage Resource Broker (SRB)
– SRB is a relatively light data grid system developed at SDSC
– Considerable experience has been gained at KEK, SLAC, RAL and CC-IN2P3
– Heavily used for BaBar data transfer for years (up to 5 TB/day)
– A very interesting solution to store and share biomedical data (images)
Advantages:
– Easy and fast development of applications
– Extensibility
– Reliability
– Ease of administration
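To give a feel for why application development on top of SRB is easy and fast, here is a minimal sketch wrapping the standard Scommands; it assumes an SRB session has already been opened with Sinit, and the collection path and file name are placeholders.

```python
# Minimal sketch around the SRB Scommands (Sput / Sls).
# Assumes "Sinit" has been run; collection and file names are placeholders.
import subprocess

COLLECTION = "/home/grid.kek/transfers"     # hypothetical SRB collection

def srb_put(local_path):
    """Store a local file into the SRB collection."""
    subprocess.run(["Sput", local_path, COLLECTION + "/"], check=True)

def srb_list():
    """Return a listing of the SRB collection."""
    result = subprocess.run(["Sls", COLLECTION],
                            check=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    srb_put("run12345.dat")
    print(srb_list())
```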

From SRB to iRODS
iRODS (iRule Oriented Data Systems) is the SRB successor
– KEK and CC-IN2P3 are both involved in iRODS development and tests
– It should bring many new functionalities
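Among the new functionalities, iRODS adds server-side rules and richer metadata handling. The sketch below shows an upload with the iRODS icommands plus a metadata tag, assuming iinit has been run and the target collection exists; the paths and attribute names are placeholders.

```python
# Minimal sketch with the iRODS icommands: upload, tag with metadata, verify.
# Assumes "iinit" has been run; paths and attribute names are placeholders.
import subprocess

COLLECTION = "/kekZone/home/rods/transfers"     # hypothetical iRODS collection
LOCAL_FILE = "run12345.dat"

subprocess.run(["iput", LOCAL_FILE, COLLECTION + "/"], check=True)
subprocess.run(["imeta", "add", "-d", COLLECTION + "/" + LOCAL_FILE,
                "experiment", "testExperiment"], check=True)  # attach an attribute/value pair
subprocess.run(["ils", "-l", COLLECTION], check=True)         # list to verify the upload
```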

Next step for SRB / LCG interoperability
– In order to have LCG and SRB fully interoperable, we need to develop an SRB / SRM interface
– This will be a common area of work for KEK and CC-IN2P3 in the near future
  – Iida-san's 8-month stay at CC-IN2P3
– Then we will explore the possibility of making SRB an alternative for LCG storage
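Purely as an illustration of the kind of translation such an SRB / SRM interface would have to perform, the sketch below answers an SRM-style "prepare to get" request by staging the object out of SRB and handing back a GridFTP transfer URL; every function name, host and path mapping in it is hypothetical.

```python
# Purely illustrative: the kind of translation an SRB/SRM interface performs.
# All function names, hosts, and path mappings are hypothetical.
import subprocess

def srm_prepare_to_get(surl):
    """Map an SRM SURL to an SRB object, stage it locally, return a transfer URL."""
    srb_path = surl.replace("srm://srm.example-kek.jp", "")    # hypothetical SURL -> SRB mapping
    local_copy = "/cache" + srb_path                           # hypothetical staging area
    subprocess.run(["Sget", srb_path, local_copy], check=True) # stage the object out of SRB
    return "gsiftp://se.example-kek.jp" + local_copy           # URL handed back to the client

print(srm_prepare_to_get("srm://srm.example-kek.jp/home/grid.kek/run12345.dat"))
```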

Comp_3: GRID interoperability
– GRID is the fundamental infrastructure for distributed data analysis
– GRID interoperability is the key issue
  – Interoperability among different GRID middleware (Globus-based), e.g. gLite, NAREGI, etc.
  – Interoperability with storage resource management systems, e.g. SRB
– iRODS, the successor of SRB, is under development and is a common interest of both sides

SDA_1 Event generators and Higgs Physics at LHC

Activity of SDA_1 in 2006
Subject: developing an NLO event generator for LHC physics
Exchange in 2006:
– J→F: 8 people, 45 days in total
– F→J: 3 people, 25 days in total
Related publications:
1. "NLO-QCD calculation in GRACE", Y. Kurihara et al., Nucl. Phys. Proc. Suppl. 157 (2006).
2. "… event generator for pp / p anti-p collisions", S. Tsuno et al., Comput. Phys. Commun. 175 (2006).
3. "Algebraic evaluation of rational polynomials in one-loop amplitudes", T. Binoth et al., JHEP 0702 (2007) 013.
4. "New one-loop techniques and first applications to LHC phenomenology", T. Binoth et al., Nucl. Phys. Proc. Suppl. 160 (2006) 61-65.

[Figure: NLO inclusive generator vs. fully exclusive generator]

H→γγ & background studies
Topics: high-pT photon detector performance, jet rejection, photon conversion, primary vertex reconstruction, SM background studies (reducible background; irreducible background, via the event generator)
– Japanese experiment group: inner detector (SCT), e/γ identification, event generator
– French experiment group: liquid argon calorimeter, e/γ identification, H→γγ analysis tools
– Japanese theory group: NLO event generator, resummation, parton shower
– French theory group: NLO event generator, resummation, subtraction method

SDA_1: Event generators and Higgs Physics at LHC
– Collaboration between experimentalists and theorists on both sides
– The background to H→γγ will be estimated precisely using DIPHOX
– This will help to discover the Higgs boson, or to reduce the systematic error on the estimate of its mass

BIO_1 Geant4 new development

Courseware in a book style

Example of a virtual lab: a lead/aerogel sandwich calorimeter with all EM interactions activated

Geant4 at the cellular scale
– 3D phantoms, PIXE analysis, microbeam
– Geant4-DNA: new physics processes (proton ionisation, p and H charge change)

Visualization samples (p, 200 MeV)

Monte Carlo simulations & jobs
– Workstation with an Internet connection, using the GENIUS portal
– Metadata management, results visualization, image anonymization, image visualization
– Installation starting at Centre Jean Perrin

Bio_1: Geant4 new developments
Geant4 is developed by an international collaboration
– Significant contributions from France and Japan
Both sides will work on improvements to the Geant4 kernel and also on the development of applications in the following areas:
– Educational applications
– Medicine, biology and space extensions
– Computing grids

Summary of summary
GRID is a lifeline of research
– Further research and investment in resources are still necessary
LHC will probably be commissioned within one year
– Infrastructure and good tools for analysis are necessary
– Collaboration between experimentalists and theorists is mandatory for data analysis
Geant4 is a toolkit developed in HEP and transferred to other fields, e.g. space, medicine, biology and so on
– Many issues are still left for improvement in the Geant4 kernel
– Application development in the education and bio-medical fields is a common interest
We saw strong collaboration between France and Japan even before the beginning of the AIL
The AIL will boost the research in these collaborations