AMS TIM, CERN, Jul 23, 2004. AMS Computing and Ground Centers. Alexei Klimentov

2 Alexei Klimentov. AMS CERN. July 2004. AMS Computing and Ground Data Centers
AMS-02 Ground Centers
– AMS centers at JSC
– Ground data transfer
– Science Operation Center prototype
» Hardware and software evaluation
» Implementation plan
AMS/CERN computing and manpower issues
MC Production Status
– AMS-02 MC (2004A)
– Open questions:
» Plans for Y2005
» AMS-01 MC

3 AMS-02 Ground Support Systems
Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston)
– CERN Bldg. 892, wing A: the "control room", usual source of commands
– receives Health & Status (H&S), monitoring and science data in real time
– receives NASA video
– voice communication with NASA flight operations
– Backup Control Station at JSC (TBD)
Monitor Station at MIT
– "backup" of the "control room"
– receives Health & Status (H&S) and monitoring data in real time
– voice communication with NASA flight operations
Science Operations Center (SOC) at CERN (first 2-3 months in Houston)
– CERN Bldg. 892, wing A
– receives a complete copy of ALL data
– data processing and science analysis
– data archiving and distribution to Universities and Laboratories
Ground Support Computers (GSC) at Marshall Space Flight Center
– receive data from NASA -> buffer -> retransmit to the Science Center
Regional Centers
– Madrid, MIT, Yale, Bologna, Milan, Aachen, Karlsruhe, Lyon, Taipei, Nanjing, Shanghai, …
– analysis facilities to support geographically close Universities

4 AMS facilities and NASA facilities (figure)

5 (figure only)

6 AMS Ground Centers at JSC
Requirements for AMS ground systems at JSC
Define AMS GS HW and SW components
Computing facilities
– "ACOP" flight
– AMS pre-flight
– AMS flight
– "after 3 months"
Data storage
Data transmission
Discussed with NASA in Feb

7 AMS-02 Computing facilities at JSC

Center | Location | Function(s) | Computers
POCC | Bldg. 30, Rm 212 | Commanding, telemetry monitoring, on-line processing | Pentium/MS Windows, Pentium/Linux, 19" monitors, networking switches, terminal console, MCC WS
SOC | Bldg. 30, Rm 3301 | Data processing, data analysis; data, Web and news servers; data archiving | Pentium/Linux, IBM LTO tape drives, networking switches, 17" color monitors, terminal console
"Terminal room" | tbd | | Notebooks and desktops (100)
AMS CSR | Bldg. 30M, Rm 236 | Monitoring | Pentium/Linux, 19" color monitor, MCC WS

8 AMS Computing at JSC (TBD)

Date | Responsible | Actions
LR-8 months | N. Bornas, P. Dennett, A. Klimentov, A. Lebedev, B. Robichaux, G. Carosi | Set up at JSC the "basic version" of the POCC; conduct tests with ACOP for commanding and data transmission
LR-6 months | P. Dennett, A. Eline, P. Fisher, A. Klimentov, A. Lebedev, "Finns" (?) | Set up the POCC "basic version" at CERN; set up the "AMS monitoring station" at MIT; conduct tests with ACOP/MSFC/JSC commanding and data transmission
LR | A. Klimentov, B. Robichaux | Set up the POCC "flight configuration" at JSC
LR to L-2 weeks | V. Choutko, A. Eline, A. Klimentov, B. Robichaux, A. Lebedev, P. Dennett | Set up the SOC "flight configuration" at JSC; set up the "terminal room and AMS CSR"; commanding and data transmission verification
L+2 months (tbd) | A. Klimentov | Set up the POCC "flight configuration" at CERN; move part of the SOC computers from JSC to CERN; set up the SOC "flight configuration" at CERN
L+3 months (tbd) | A. Klimentov, A. Lebedev, A. Eline, V. Choutko | Activate the AMS POCC at CERN; move all SOC equipment to CERN; set up the AMS POCC "basic version" at JSC

LR = launch-ready date: Sep 2007; L = AMS-02 launch date

9 Data Transmission
Will AMS need a dedicated line to send data from MSFC to the ground centers, or can the public Internet be used?
What software must be used for bulk data transfer, and how reliable is it?
What data transfer performance can be achieved?
G. Carosi, A. Eline, P. Fisher, A. Klimentov
High-rate data transfer between MSFC (AL) and POCC/SOC, between POCC and SOC, and between the SOC and the Regional Centers will become of paramount importance.
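One way to start answering the performance question is to measure what a single TCP stream actually delivers end to end. Below is a minimal, self-contained Python sketch that times a loopback transfer; all names here are illustrative, this is not the AMS transfer software.

```python
# Minimal throughput probe: stream a fixed volume through a TCP socket
# and report the achieved rate in Mbit/s. Loopback-only sketch.
import socket
import threading
import time

def send_bytes(host, port, total_bytes, chunk=64 * 1024):
    """Connect and push exactly `total_bytes` of zeros through the socket."""
    with socket.create_connection((host, port)) as s:
        buf = b"\0" * chunk
        sent = 0
        while sent < total_bytes:
            s.sendall(buf[: min(chunk, total_bytes - sent)])
            sent += chunk

def measure_throughput_mbit(total_bytes=8 * 1024 * 1024):
    """Return the measured throughput in Mbit/s over a loopback transfer."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    sender = threading.Thread(target=send_bytes,
                              args=("127.0.0.1", port, total_bytes))
    sender.start()
    conn, _ = srv.accept()
    received = 0
    start = time.monotonic()
    while received < total_bytes:
        data = conn.recv(64 * 1024)
        if not data:
            break
        received += len(data)
    elapsed = max(time.monotonic() - start, 1e-9)
    conn.close()
    srv.close()
    sender.join()
    return received * 8 / elapsed / 1e6

if __name__ == "__main__":
    print(f"loopback throughput: {measure_throughput_mbit():.1f} Mbit/s")
```

Run against a remote host instead of loopback, the same timing loop gives the kind of sustained-rate figure quoted on the next slides.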

10 Global Network Topology (figure)

11 (figure only)

12 (figure only)

13 'amsbbftp' tests, CERN/MIT & CERN/SEU, Jan/Feb 2003. A. Elin, A. Klimentov, K. Scholberg and J. Gong
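bbftp-style tools gain speed on long fat links by moving several byte ranges of a file over parallel streams. The toy sketch below shows that idea with a threaded local file copy; it is not amsbbftp itself, and all names are ours.

```python
# Toy multi-stream copy: split a file into byte ranges and copy the
# ranges concurrently, as bbftp-like tools do over parallel TCP streams.
import os
import threading

def copy_range(src_path, dst_path, offset, length):
    """Copy `length` bytes starting at `offset` from src into dst in place."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        src.seek(offset)
        dst.seek(offset)
        dst.write(src.read(length))

def parallel_copy(src_path, dst_path, streams=4):
    """Copy a file using `streams` concurrent byte-range workers."""
    size = os.path.getsize(src_path)
    # Pre-allocate the destination so every worker can seek independently.
    with open(dst_path, "wb") as dst:
        dst.truncate(size)
    step = (size + streams - 1) // streams
    workers = [
        threading.Thread(target=copy_range,
                         args=(src_path, dst_path, i * step,
                               min(step, size - i * step)))
        for i in range(streams) if i * step < size
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Over a real WAN each worker would hold its own TCP connection, which is where the aggregate-rate gain of the 2003 tests comes from.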

14 Data Transmission Tests (conclusions)
In its current configuration the Internet provides sufficient bandwidth to transmit AMS data from MSFC (AL) to the AMS ground centers at rates approaching 9.5 Mbit/sec
We are able to transfer and store data on a high-end PC reliably, with no data loss
Data transmission performance is comparable to that achieved with network monitoring tools
We can transmit data simultaneously to multiple sites
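As a back-of-the-envelope check, a sustained 9.5 Mbit/sec corresponds to roughly 100 GB per day (decimal units; the helper name is ours):

```python
# Convert a sustained link rate (Mbit/s) into a daily volume in GB
# (1 GB = 1e9 bytes, 86400 s per day).
def daily_volume_gb(rate_mbit_s):
    return rate_mbit_s * 1e6 * 86400 / 8 / 1e9

print(f"{daily_volume_gb(9.5):.1f} GB/day")  # -> 102.6 GB/day
```

That comfortably covers the AMS-02 raw-data need quoted later in this talk (~24 GB/day).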

15 Data and Computation for Physics Analysis (data-flow diagram)
raw data -> event reconstruction -> processed data (event summary data, ESD/DST)
event simulation feeds the same chain
detector event tags index the data
processed data -> event filter (selection & reconstruction) -> analysis objects (extracted by physics topic)
analysis objects -> batch physics analysis and interactive physics analysis

16 Symmetric Multi-Processor (SMP) Model (diagram: experiment -> SMP computer -> tape storage, with terabytes of disks)

17 AMS SOC (Data Production requirements)
Requirements:
Reliability: high (24 h/day, 7 days/week)
Performance goal: process data "quasi-online" (with a typical delay < 1 day)
Disk space: 12 months of data "online"
Minimal human intervention (automatic data handling, job control and book-keeping)
System stability: months
Scalability
Price/performance
A complex system consisting of computing components including I/O nodes, worker nodes, data storage and networking switches; it should perform as a single system.
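The "minimal human intervention" requirement implies automatic job control and book-keeping: every production job is tracked through a small state machine so that failures are resubmitted without an operator. A toy sketch of such a book-keeper (class, state and method names are ours, not from the AMS software):

```python
# Toy production book-keeper: jobs move submitted -> running -> done/failed,
# and failed jobs are automatically resubmitted up to a retry limit.
from dataclasses import dataclass

SUBMITTED, RUNNING, DONE, FAILED = "submitted", "running", "done", "failed"

@dataclass
class Job:
    run_id: int
    state: str = SUBMITTED
    attempts: int = 0

class BookKeeper:
    def __init__(self, max_attempts=3):
        self.jobs = {}
        self.max_attempts = max_attempts

    def submit(self, run_id):
        self.jobs[run_id] = Job(run_id)

    def record(self, run_id, state):
        """Record a state report from a worker node."""
        job = self.jobs[run_id]
        job.state = state
        if state == FAILED and job.attempts < self.max_attempts:
            job.attempts += 1
            job.state = SUBMITTED   # automatic resubmission, no operator
    def pending(self):
        return [j.run_id for j in self.jobs.values() if j.state == SUBMITTED]
```

In a real farm the pending list would drive the dispatcher, and the job table would live in the central database rather than in memory.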

18 Production Farm Hardware Evaluation

"Processing node":
Processor: Intel P4 3.4+ GHz, Hyper-Threading
Memory: 1 GB
System disk and transient data storage: 400 GB IDE disk
Ethernet cards: 2 x 1 Gbit
Estimated cost: 2500 CHF

Disk server:
Processor: Intel Pentium dual-CPU Xeon 3.2+ GHz
Memory: 2 GB
System disk: SCSI 18 GB, doubly redundant
Disk storage: 3x10x400 GB RAID 5 array or 4x8x400 GB RAID 5 array
Effective disk volume: 11.6 TB
Ethernet cards: 3 x 1 Gbit
Estimated cost: 33000 CHF (or 2.85 CHF/GB)
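The quoted price/performance figure can be checked directly from the cost and the effective RAID volume (helper name is ours):

```python
# Price/performance check for the disk server: 33000 CHF for 11.6 TB
# of effective RAID 5 volume (decimal units, 1 TB = 1000 GB).
def chf_per_gb(cost_chf, effective_tb):
    return cost_chf / (effective_tb * 1000)

print(f"{chf_per_gb(33000, 11.6):.2f} CHF/GB")  # -> 2.84 CHF/GB (slide rounds to 2.85)
```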

19 AMS-02 Ground Centers. Science Operations Center. Computing Facilities. (diagram)
CERN/AMS network; AMS Physics Services
Central Data Services: shared disk servers (25 TB of disk, 6 PC-based servers); shared tape servers (tape robots, LTO and DLT tape drives)
Home directories & registry; consoles & monitors
Production Facilities: Linux dual-CPU computers (Linux, Intel and AMD)
Engineering Cluster: 5 dual-processor PCs
Data Servers, Analysis Facilities (Linux cluster): dual-processor PCs, 5 PC servers
AMS Regional Centers; batch data processing; interactive and batch physics analysis

20 AMS Science Operation Center Computing Facilities (diagram)
Archiving and staging (CERN CASTOR); AFS server; Web, news, production and DB servers
Production Farm: PC Linux 3.4+ GHz nodes and a PC Linux server 2x3.4+ GHz behind a Gigabit switch (1 Gbit/sec); disk servers for AMS data, NASA data, metadata and simulated (MC) data
Analysis Facilities: data server cells #1 to #7, each with PC Linux 3.4+ GHz nodes and a PC Linux server 2x3.4+ GHz, RAID 5, 10 TB
Legend: tested, prototype in production / not tested and no prototype yet

21 AMS-02 Science Operations Center. Year 2004
MC Production (18 AMS Universities and Labs)
– SW: data processing, central DB, data mining, servers
– AMS-02 ESD format
Networking (A. Eline, Wu Hua, A. Klimentov)
– Gbit private segment and monitoring SW in production since April
Disk servers and data processing (V. Choutko, A. Eline, A. Klimentov)
– dual-CPU Xeon 3.06 GHz, 4.5 TB of disk space, in production since Jan
– 2nd server: dual-CPU Xeon 3.2 GHz, 9.5 TB, to be installed in Aug (3 CHF/GB)
– data processing node: P4 single-CPU 3.4 GHz in Hyper-Threading mode, in production since Jan
Data transfer station (Milano group: M. Boschini, D. Grandi, E. Micelotta and A. Eline)
– data transfer to/from CERN (used for MC production)
– station prototype installed in May
– SW in production since January
Status report at the next AMS TIM

22 AMS-02 Science Operations Center. Year 2005
Q1: SOC infrastructure setup
– Bldg. 892, wing A: false floor, cooling, electricity
Mar 2005: set up the production cell prototype
– 6 processing nodes + 1 disk server with private Gbit Ethernet
LR-24 months (LR = "launch-ready date"), Sep 2005
– 40% production farm prototype (1st bulk computer purchase)
– database servers
– data transmission tests between MSFC AL and CERN

23 AMS-02 Computing Facilities

Function | Computer | Qty | Disks (TBytes) and tapes | Ready(*)
GSC | (AMD) dual-CPU, 2.5+ GHz | 3 | 3x0.5 TB RAID array | LR-2
POCC | Intel and AMD, dual-CPU, 2.8+ GHz | 45 | 6 TB RAID array | LR
Monitor Station at MIT | Intel and AMD, dual-CPU, 2.8+ GHz | 5 | 1 TB RAID array | LR-6
Science Operations Center: Production Farm | Intel and AMD, dual-CPU, 2.8+ GHz | 50 | 10 TB RAID array | LR-2
Database servers | dual-CPU 2.8+ GHz Intel or Sun SMP | 2 | 0.5 TB | LR-3
Event storage and archiving disk servers | dual-CPU Intel 2.8+ GHz | 6 | 50 TB RAID array, tape library (250 TB) | LR
Interactive and batch analysis | SMP computer, 4 GB RAM, 300 SpecInt95, or Linux farm | 10 | 1 TB RAID array | LR-1

(*) "Ready" = operational; bulk of the CPU and disk purchasing at LR-9 months

24 People and Tasks ("my" incomplete list) 1/4. AMS-02 GSC
Tasks: architecture; POIC/GSC SW and HW; GSC/SOC data transmission SW; GSC installation; GSC maintenance
Responsible: A. Mujunen, J. Ritakari; A. Mujunen, J. Ritakari, P. Fisher, A. Klimentov; A. Mujunen, J. Ritakari; A. Klimentov, A. Elin; MIT, HUT; MIT
Status: the concept was discussed with MSFC reps; MSFC/CERN and MSFC/MIT data transmission tests done; HUT has no funding for Y

25 People and Tasks ("my" incomplete list) 2/4. AMS-02 POCC
Tasks: architecture; TReKGate, AMS Cmd Station; commanding SW and concept; voice and video; monitoring; data validation and online processing; HW and SW maintenance
Responsible: P. Fisher, A. Klimentov, M. Pohl; P. Dennett, A. Lebedev; G. Carosi, A. Klimentov, A. Lebedev; G. Carosi; V. Choutko, A. Lebedev; V. Choutko, A. Klimentov
More manpower will be needed starting at LR-4 months

26 People and Tasks ("my" incomplete list) 3/4. AMS-02 SOC
Tasks: architecture; data processing and analysis; system SW and HEP applications; book-keeping and database; HW and SW maintenance
Responsible: V. Choutko, A. Klimentov, M. Pohl; V. Choutko, A. Klimentov; A. Elin, V. Choutko, A. Klimentov; M. Boschini et al., A. Klimentov
More manpower will be needed starting from LR-4 months
Status: SOC prototyping is in progress; SW debugging during MC production; the implementation plan and milestones are being fulfilled

27 People and Tasks ("my" incomplete list) 4/4. AMS-02 Regional Centers
INFN Italy: P.G. Rancoita et al.
IN2P3 France: G. Coignet and C. Goy
SEU China: J. Gong
Academia Sinica: Z. Ren
RWTH Aachen: T. Siedenburg
…: M. Pohl, A. Klimentov
Status: the proposal prepared by the INFN groups for the IGS, and by J. Gong/A. Klimentov for the CGS, can be used by other Universities. Successful tests of distributed MC production and data transmission (18 Universities and Labs). Data transmission, book-keeping and process communication SW (M. Boschini, V. Choutko, A. Elin and A. Klimentov) released.

28 AMS/CERN computing and manpower issues
AMS computing and networking requirements are summarized in a Memo
– Nov 2005: AMS will provide a detailed SOC and POCC implementation plan
– AMS will continue to use its own computing facilities for data processing and analysis, and for Web and News services
– There is no request to IT for support of AMS POCC HW or SW
– SW/HW 'first line' expertise will be provided by AMS personnel
– Y2005-2010: AMS will have guaranteed bandwidth on the USA/Europe line
– CERN IT-CS support in case of USA/Europe line problems
– Data storage: AMS-specific requirements will be defined on an annual basis
– CERN support of mail, printing, and CERN AFS as for the LHC experiments; any license fees will be paid by the AMS collaboration according to IT specs
– IT-DB and IT-CS may be called on for consultancy within the limits of available manpower
Starting from LR-12 months the Collaboration will need more people to run the computing facilities

29 Year 2004 MC Production
Started Jan 15, 2004
Central MC database
Distributed MC production
Central MC storage and archiving
Distributed access (under test)
SEU Nanjing, IAC Tenerife and CNAF Italy have joined production since Apr 2004

30 Y2004 MC production centers

MC Center | Responsible
CIEMAT | J. Casaus
CERN | V. Choutko, A. Eline, A. Klimentov
Yale | E. Finch
Academia Sinica | Z. Ren, Y. Lei
LAPP/Lyon | C. Goy, J. Jacquemier
INFN Milano | M. Boschini, D. Grandi
CNAF & INFN Bologna | D. Casadei
UMD | A. Malinine
EKP, Karlsruhe | V. Zhukov
GAM, Montpellier | J. Bolmont, M. Sapinski
INFN Siena & Perugia, ITEP, LIP, IAC, SEU, KNU | P. Zuccon, P. Maestro, Y. Lyublev, F. Barao, C. Delgado, Ye Wei, J. Shin

31 MC Production Statistics
Particles simulated (million events and % of total per species): protons, helium, electrons, positrons, deuterons, anti-protons, carbon, photons, nuclei (Z = 3…28)
…% of the MC production done; will finish by the end of July
URL: pcamss0.cern.ch/mm.html
185 days, 1196 computers, 8.4 TB, 250 PIII 1 GHz/day

32 Y2004 MC Production Highlights
Data are generated at remote sites, transmitted to CERN, and available for analysis (only 20% of the data was generated at CERN)
Transmission, process communication and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling
185 days of running (~97% stability)
18 Universities & Labs
8.4 TB of data produced, stored and archived
Peak rate 130 GB/day (12 Mbit/sec), average 55 GB/day (AMS-02 raw data transfer ~24 GB/day)
1196 computers
Daily CPU equivalent: ~250 1 GHz (PIII) CPUs running 184 days/24h
Good simulation of AMS-02 data processing and analysis
Not tested yet:
– remote access to CASTOR
– access to ESD from personal desktops
TBD: AMS-01 MC production, MC production in Y2005
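The rate figures quoted above follow directly from the daily volumes (decimal units; the helper name is ours):

```python
# Convert a daily data volume (GB/day, 1 GB = 1e9 bytes) into the
# sustained link rate in Mbit/s needed to carry it.
def gb_per_day_to_mbit_s(gb):
    return gb * 1e9 * 8 / 86400 / 1e6

print(f"peak      : {gb_per_day_to_mbit_s(130):.1f} Mbit/s")  # -> 12.0
print(f"average   : {gb_per_day_to_mbit_s(55):.1f} Mbit/s")   # -> 5.1
print(f"AMS-02 raw: {gb_per_day_to_mbit_s(24):.1f} Mbit/s")   # -> 2.2
```

All three rates sit well below the ~9.5 Mbit/sec sustained throughput demonstrated in the transmission tests, except for short peaks.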

33 AMS-01 MC Production
Send requests to …
Dedicated meeting in Sep; the target date to start AMS-01 MC production is October 1st