Data Handling Scheme for the Italian Ground Segment (IGS), as part of the AMS-02 Ground Segment
P.G. Rancoita, CERN, 14/01/2002

CERN 14/01/2002, slide 1: Data Handling Scheme for the Italian Ground Segment (IGS), as part of the AMS-02 Ground Segment (P.G. Rancoita)
Functions of a “Regional Center” can be carried out by facilities located at different physical sites.
General scheme for the data handling of the IGS:
- for data transfer from and to the CERN facility
- the transfer can involve both scientific and MC data

CERN 14/01/2002, slide 2: AMS-02 Italian Ground Segment Scheme

CERN 14/01/2002, slide 3: AMS-02 Italian Ground Segment
The IGS provides data transfer for a second copy of the AMS-02 data, plus the MC production (and subsequent transfer and archiving) of about 20% of the MC data. It includes:
- the Data Transfer Facility (DTF) at CERN
- the Data Archiving (IGSDS) and MC Production (IGSMCPF) facilities at the ASDC
- the Data Transfer Management and Survey Facility (DTMS) in Milano

CERN 14/01/2002, slide 4: Infrastructure in Milano
- Dedicated computing room
- Additional, independent air conditioning
- Additional UPS system (> 20 kW)
- GARR connectivity at 16 Mb/s (can be increased)
We are thus able to provide uptime of ~24 hours/day, 7 days/week...
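For scale, a back-of-envelope estimate of what the 16 Mb/s link can move per day, assuming full, loss-free utilization (real throughput will be lower):

    # Daily capacity of a 16 Mb/s link under ideal, sustained use.
    link_mbps = 16                                  # megabits per second
    gb_per_day = link_mbps / 8 * 1e6 * 86400 / 1e9  # -> gigabytes per day
    print(f"~{gb_per_day:.0f} GB/day")              # prints "~173 GB/day"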

CERN 14/01/2002, slide 5: Milano Data Center...

CERN 14/01/2002, slide 6: Data Handling R&D
Since Milano is responsible for developing and testing, in collaboration with the AMS SW group, the data transfer system linking the IGS and the DTF, in Milano we have (or are about to have) prototypes of:
- the Italian Data Transfer Facility, which will be located at CERN
- the IGS receiving facility
- the Monitor & Survey Facility, which will be located in Milano

CERN 14/01/2002, slide 7: Data Handling R&D
All the machines for the three facilities will be similar, and equivalent (or better) to:
- AMD Athlon 1.4 GHz
- 128 MB RAM
- IDE/ATA-100 disks
- 100 Mb/s dedicated LAN

CERN 14/01/2002, slide 8: Data Handling R&D
SW layout on them:
- RedHat 7.1 kernel
- MySQL 3.23
- bbftp
- OpenSSH 2.9p2
A sketch of how these pieces could fit together follows below.
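As an illustration only (not the actual AMS transfer software), a minimal Python sketch of a transfer-plus-bookkeeping step on this stack; the host name, credentials, the `transfers` table, and the exact bbftp options used are assumptions:

    # Hypothetical sketch: ship one file with bbftp and log the attempt in
    # MySQL so a survey facility (e.g. the DTMS) can monitor transfers.
    import subprocess
    import time

    import MySQLdb  # classic MySQL driver; assumed available here

    def transfer(local_path, remote_path, host="dtf.example.cern.ch"):
        # -u: remote user, -p: parallel TCP streams, -e: embedded command
        # (typical bbftp client options; check `man bbftp` for your version).
        cmd = ["bbftp", "-u", "amsdata", "-p", "4",
               "-e", "put %s %s" % (local_path, remote_path), host]
        start = time.time()
        rc = subprocess.call(cmd)
        # Bookkeeping: one row per attempt in a hypothetical `transfers` table.
        db = MySQLdb.connect(host="localhost", user="dtms", db="dtms")
        cur = db.cursor()
        cur.execute("INSERT INTO transfers (file, host, status, seconds)"
                    " VALUES (%s, %s, %s, %s)",
                    (local_path, host, "OK" if rc == 0 else "FAILED",
                     time.time() - start))
        db.commit()
        db.close()
        return rc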

CERN 14/01/2002, slide 9: Analysis Data Handling
For scientific data analyses we are also setting up a facility in which particle tracing in the magnetosphere is the most CPU-intensive activity. It consists of a 20-host Linux farm, with 1 master server and 19 diskless clients. The clients boot by means of Etherboot; the necessary boot code is stored on an EPROM mounted on the network card (a configuration sketch follows below).
- AMD Athlon 1.2 GHz
- 256 MB RAM
- 100 Mb/s dedicated private LAN
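To illustrate the boot chain (Etherboot in the NIC EPROM broadcasts DHCP/BOOTP, fetches a tagged kernel image over TFTP, then mounts its root filesystem over NFS), a minimal server-side configuration on the master could look like this; the subnet, addresses, and paths are hypothetical:

    # /etc/dhcpd.conf (ISC DHCP on the master) -- hypothetical values
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.101 192.168.1.119;      # the 19 diskless clients
        next-server 192.168.1.1;                # TFTP server = master node
        filename "/tftpboot/vmlinuz.nbi";       # tagged image Etherboot loads
    }

    # /etc/exports (NFS on the master) -- shared read-only root filesystem
    /tftpboot/root   192.168.1.0/255.255.255.0(ro,no_root_squash)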

CERN 14/01/2002, slide 10: Analysis Facility R&D

CERN 14/01/2002, slide 11: Linux farm

CERN 14/01/2002, slide 12: Linux farm (cont.)

CERN 14/01/2002, slide 13: Linux farm (cont.)

CERN 14/01/2002, slide 14: AMS-02 Italian Ground Segment
Next presentation (Matteo): Data Transfer (and Handling) tests performed and ongoing.