Computing Research Center, High Energy Accelerator Research Organization (KEK)
Grid Deployment at KEK
Go Iwai, Yoshimi Iida, Setsuya Kawabata, Takashi Sasaki and Yoshiyuki Watase
Joint Meeting of Pacific Region Particle Physics Communities, DPF2006 and JPS2006
Oct. 29 - Nov. 3, 2006, Sheraton Waikiki Hotel, Honolulu, Hawaii
2006/11/1 "Grid Deployment at KEK", DPF2006 & JPS2006

Outline
Introduction
–KEK
–Network in Japan
Grid Deployment
–Current Status
–Strategy
–Hosted VOs: Belle, ILC, APDG and so on
Grid Inter-operability
–NAREGI: NAtional REsearch Grid Initiative
–Relationship among LCG, NAREGI and KEK
Summary
Next topic: Introduction (KEK; network in Japan)
KEK Computing Research Center
Introduction
Our missions related to computing and networking, as an Inter-University Research Institute Corporation:
–Providing computing facilities for
 KEK-B/Belle
 ILC
 J-PARC (proton synchrotron)
 K2K, T2K (long-baseline neutrino experiments)
 Accelerator design
 Applications at the Synchrotron Radiation Facility (material science, life science, etc.)
–Networking
–Security
–Support for university groups in the field
(Figures: overview of CRC; the Super Computer System)
HEPnet-J
Introduction
Originally, KEK organized the HEP institutes in Japan to provide networking among them.
–We started from 9600 bps DECnet in the early 1980s.
–KEK was one of the first Internet sites and hosted the first web site in Japan (1983? and 1992).
The current network infrastructure is SuperSINET, operated by NII (the National Institute of Informatics).
It will be upgraded to SINET3 in April 2007; SINET3 will provide a multi-layered network service with a 10-40 Gbps backbone.
(Figure: the first homepage in Japan)
HEPnet-J (cont.)
Introduction
Line speeds:
–SINET (44 nodes): 100 Mbps - 1 Gbps
–SuperSINET (32 nodes): 10 Gbps
–International lines: Japan-U.S.A. 10 Gbps (to N.Y.) and 2.4 Gbps (to L.A.); Japan-Singapore 622 Mbps; Japan-Hong Kong 622 Mbps
Number of SINET participating organizations (Feb. 2006):
–National 81, Public 51, Private 273, Junior Colleges 68, Specialized Training Colleges 41, Inter-Univ. Res. Inst. Corp. 14, Others 182; Total 710
(Figure: network topology map of SINET/SuperSINET, Feb. 2006)
Next topic: Grid Deployment (current status; strategy; hosted VOs: Belle, ILC, APDG and so on)
Strategy on Grid
Grid Deployment
Deployment at KEK for the major groups:
–Belle: ongoing experiment
–ILC: near-future target
University support:
–education and training
–deployment at smaller centers
–HEPnet-J VO
(Figures: overview of the KEK-B accelerator; design of the ILC accelerator/detector)
Recent Events
Grid Deployment
Nov. 2005: HEP Data Grid Workshop
–training and cooperation in the Asia-Pacific region
–at KEK
Mar. 2006: first meeting on NAREGI/EGEE interoperability
–launched the interoperability project between NAREGI and EGEE (more about NAREGI later)
–at CERN
Aug. 2006: Belle workshop on Grid
–to share information within the Belle collaboration
–at Nagoya Univ.
Sep. 2006: Japan-France Workshop on Grid Computing
–at IN2P3/Lyon Univ.
Summary of LCG Deployment
Grid Deployment
JP-KEK-CRC-01 (Pre-Production System)
–since Nov. 2005
–registered to GOC; ready for WLCG (Worldwide LCG)
–operated by KEK staff
–Site role: practice for the production system JP-KEK-CRC-02; test use among university groups in Japan
–Resources and components: SL-3.0.5 with LCG-2.7 (upgrade to gLite-3.0 is done); CPU: 14, storage: 1 TB
–Supported VOs: belle, apdg, dteam and ops
JP-KEK-CRC-02 (Production System)
–since early 2006
–registered to GOC; ready for WLCG
–operation outsourced to IBM
–Resources and components: SL or SLC with LCG-2.7 (upgrade to gLite-3.0 is done); CPU: 48, storage: 6 TB (not including HPSS)
–Supported VOs: belle, apdg, atlasj, ilc, dteam and ops
JP-KEK-CRC-00 (Testbed System)
–since Jun. 2005
–a closed environment compared with the other sites: easy to access and configure
–Resources and components: SL-3.0.5 with gLite-3.0 (100% pure)
–Supported VOs: belle, apdg, atlasj and g4med
SRB: Storage Resource Broker
Grid Deployment
SRB is client-server middleware that provides a uniform interface for connecting to heterogeneous data resources over a network and accessing replicated data sets.
We started to work on SRB earlier than on LCG.
The zone federation among the Belle member institutes has been established.
SRB-DSI works as a GridFTP server and is easily integrated into LCG services.
–SRB exports existing data without physically copying it, and so is useful for existing projects.
(Source: Bonny Strong, RAL; http://www.globus.org/)
Other Grid-related Services
Grid Deployment
We have our own Grid CA:
–started in Feb. 2006 and recognized by LCG
–accredited by the APGrid PMA
–issued user certificates: 25; issued host certificates: 74
–http://gridca.kek.jp/
VO membership service; supported VOs:
–apdg: the VO for the Asia-Pacific Data Grid
–belle: the VO for the Belle experiment
–atlasj: the VO for the ATLAS experiment in Japan
–g4med: the VO for Geant4 medical applications
Local mirror service:
–SL, SLC, LCG, gLite
–Updating with apt-get takes ~30 minutes against the CERN or FNAL repositories, but only ~3 minutes against the KEK repository.
–http://hepdg.cc.kek.jp/mirror/
Semi-automatic installation service:
–WNs can be installed semi-automatically by PXE (Preboot eXecution Environment) and a kickstart configuration file.
–http://hepdg.cc.kek.jp/install/
Site portal:
–http://grid.kek.jp/
(Figure: KEK Grid CA web repository)
Belle VO
Grid Deployment
Started using SRB and LCG
–LCG sites: JP-KEK-CRC-01/02
Data distribution service using SRB-DSI
–Belle already has a few PB of data in total, including hundreds of TB of DST and MC.
–Bulk file registration (Sregister) helps us: we do not move any of the data.
–This benefits both native SRB users and LCG users.
The VO is supported by KEK
–Nagoya (JP), Melbourne (AU), Academia Sinica (TW), Krakow (PL), etc.
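The point of Sregister above, publishing data in place rather than copying it, can be sketched with a toy catalogue. This is an illustration only; the class and method names are hypothetical, not the real SRB client API:

```python
# Toy illustration of register-vs-copy semantics (hypothetical API,
# not the real SRB Scommand implementation).

class Catalog:
    """Maps logical names to physical locations."""
    def __init__(self):
        self.entries = {}
        self.bytes_moved = 0

    def copy_in(self, logical, size):
        # A conventional upload moves the bytes into managed storage.
        self.bytes_moved += size
        self.entries[logical] = ("managed://" + logical, size)

    def register(self, logical, physical, size):
        # Sregister-style: record the existing location, move nothing.
        self.entries[logical] = (physical, size)

cat = Catalog()
# Registering a multi-TB DST file costs no data movement at all:
cat.register("/belle/dst/run001", "hpss://kek/dst/run001", 2 * 10**12)
assert cat.bytes_moved == 0
assert "/belle/dst/run001" in cat.entries
```

With PB-scale existing datasets, this is why registration rather than copying is the practical way to expose Belle data to the grid.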
New B Factory Computer System
Grid Deployment
History of the B Factory Computer System (the new system has been in service since March 23, 2006):
–Period: 1997~ (4 years) | 2001~ (5 years) | 2006~ (6 years)
–Computing server (SPECint2000 rate): ~100 (WS) | ~1,250 (WS+PC) | ~42,500 (PC)
–Disk capacity (TB): ~4 | ~9 | 1,000 (1 PB)
–Tape library capacity (TB): 160 | 620 | 3,500 (3.5 PB)
–Work group server (# of hosts): 3+(9) | 11 | 80+16FS
–User workstation (# of hosts): 25 WS + 68 X | 23 WS + 100 PC | 128 PC
Moore's law (doubling every 1.5 years): 4 years = x~6.3, 5 years = x~10
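The Moore's-law footnote on the table can be checked directly: with a 1.5-year doubling time, capacity grows by a factor of 2^(t/1.5) after t years.

```python
# Check the slide's Moore's-law footnote: with a 1.5-year doubling
# time, capacity grows by 2**(t / 1.5) after t years.
def growth(years, doubling_time=1.5):
    return 2 ** (years / doubling_time)

assert round(growth(4), 1) == 6.3    # slide quotes "4y = x~6.3"
assert round(growth(5), 1) == 10.1   # slide quotes "5y = x~10"
```

The measured jumps in the table (e.g. ~1,250 to ~42,500 SPECint2000 over five years, a factor of ~34) outpace this baseline, reflecting budget growth as well as per-CPU improvement.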
Photo of the New B System
(Photos: computing server, ~42,500 SPECint2000; storage system (disk), 1 PB; storage system (HSM), 3.5 PB)
Belle Grid Deployment Plan
Grid Deployment
We are planning a two-phase deployment for the Belle experiment.
–Phase 1: Belle users use a VO in JP-KEK-CRC-02, shared with other VOs.
 JP-KEK-CRC-02 consists of the "Central Computing System" maintained by IBM.
 Available resources: CPU: 72 processors (Opteron), SE: 200 TB (with HPSS)
–Phase 2: deployment of JP-KEK-CRC-03 as the Belle production system.
 JP-KEK-CRC-03 uses part of the "B Factory Computer System" resources.
 Available resources (maximum estimate): CPU: 2,200 CPUs, SE: 1 PB (disk) plus 3.5 PB (HSM)
 This system will be maintained by CRC and the NetOne corporation.
Belle Grid Deployment Plan (cont.)
Grid Deployment
We are planning to federate with Japanese universities (preliminary design):
–KEK hosts the Belle experiment and acts as Tier-0.
–Universities with reasonable resources: full LCG sites (Tier-1).
–Universities without resources: UI only.
–The central services, such as VOMS, LFC and FTS, are provided by KEK.
–KEK also covers the web information and support services.
–Grid operation is shared with 1-2 staff members at each full LCG site.
(Diagram: Tier-0 at KEK (JP-KEK-CRC-02, plus JP-KEK-CRC-03 deployed at Phase 2) serving Tier-1 sites and many university UIs)
ILC VO
Grid Deployment
Grid for ILC is sponsored by:
–the GAKUJYUTSU SOUSEI budget (a grant from MEXT)
–the French-Japan Joint Laboratory Program
Initial goal:
–a tool for sharing data of 1-10 TB total size among institutes in Japan, Asia, and worldwide.
APDG VO
Grid Deployment
Asia-Pacific Data Grid
–a collaboration among Academia Sinica (TW), the Center for HEP (KR), the University of Melbourne and KEK
–regular meetings, workshops and conferences
We are seeking tighter collaboration with ASGC:
–the GOC in Asia
ATLAS Japan VO
Grid Deployment
A federation among ICEPP, Kobe Univ., Nagoya Univ. and KEK.
–Okayama Univ. and Hiroshima-IT are also potential sites.
VO usage:
–testing inter-connectivity among ICEPP, Kobe Univ. and KEK
–testing middleware functions
–measuring data-sharing performance
The ATLAS Regional Center will be hosted by ICEPP, not by us.
Next topic: Grid Inter-operability (NAREGI; the relationship among LCG, NAREGI and KEK)
NAREGI: NAtional REsearch Grid Initiative
Grid Inter-operability
–Apr. 2003: MEXT funded NAREGI as a 5-year project.
–Led by Prof. Ken Miura (NII).
–Development of grid infrastructure and applications for the promotion of the national economy.
–The target application is nano-science and technology for new material design.
–Players:
 computing and networking: NII, AIST, TITECH
 material scientists: IMS, U. Tokyo, Tohoku U., Kyushu U., KEK, ...
 companies: Fujitsu, Hitachi, NEC
–Distributed facility: a computing grid of up to 100 TFLOPS in total.
–Extended to 2010 as part of the National Peta-scale Computing Project.
(Diagram, as of 2004: SuperSINET connecting the 10 TFLOPS (1,618 CPU) application testbed and the 5 TFLOPS (896 CPU) software testbed at NII, Tokyo, Nagoya and IMS)
Collaboration with NAREGI
Grid Inter-operability
What we expect from NAREGI:
–better quality
–easier deployment
–better support in the native language
What we need but which does not yet appear to be in NAREGI:
–file/replica catalogue and other data-grid functionality (needs more assessment)
NAREGI comes a little late:
–earlier is better for us; we need something working today!
–β1 requires the commercial version of PBS.
LCG (LHC Computing Grid) is now based on gLite 3:
–the only middleware available today that satisfies HEP requirements (US groups are also developing their own).
Difficulties:
–support (language gaps)
–quality assurance
–it assumes rich manpower
LCG and NAREGI Inter-operability
Grid Inter-operability
NAREGI is strongly interested in interoperability, because they came late and decided to establish it on their side.
First meeting at CERN:
–March 2006
–NAREGI, LCG and people from KEK
Second meeting at GGF Tokyo.
KEK Plan on Grid Inter-operability
Grid Inter-operability
NAREGI will implement LFC on their middleware.
–We assume job submission between the two will be realized soon.
–The same file/replica catalogue space will be shared between LCG and NAREGI, with data moved between them using GridFTP.
NAREGI - SRB - LCG will also be tried, using SRB-DSI.
Assessments will be done for:
–command-level compatibility (syntax) between NAREGI and LCG
–job description languages
–software in the experiments
The ILC (International Linear Collider) will be a target:
–interoperability among LCG, OSG and NAREGI will be required.
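Sharing one catalogue space, as planned above, means that both middlewares resolve the same logical file name to the same set of physical replicas. A minimal sketch of that idea (illustrative names only, not the real LFC API):

```python
# Toy sketch of a shared file/replica catalogue (LFC-like): two grid
# middlewares resolve the same logical file name (LFN) to physical
# replica URLs. Host names below are purely illustrative.

catalogue = {}  # LFN -> list of replica URLs

def add_replica(lfn, url):
    catalogue.setdefault(lfn, []).append(url)

def resolve(lfn):
    return catalogue.get(lfn, [])

# A file produced on the LCG side...
add_replica("/grid/ilc/sim/event_001",
            "gsiftp://se01.keklcg.jp/data/event_001")
# ...and a second replica created on the NAREGI side after a
# GridFTP transfer, registered under the same logical name:
add_replica("/grid/ilc/sim/event_001",
            "gsiftp://naregi-se.example.jp/data/event_001")

# Either middleware now sees both replicas through one lookup.
assert len(resolve("/grid/ilc/sim/event_001")) == 2
```

The design choice is that interoperability happens at the catalogue and transfer layer, so neither middleware needs to understand the other's job system to share data.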
Next topic: Summary
Summary
We, the KEK Computing Research Center, are working on the Grid mainly for Belle and ILC.
–Belle has started to use LCG.
–ILC will follow soon.
–University support as well.
Three LCG sites are in operation at KEK.
Other Grid-related services are also in operation:
–CA, VOMS, mirror, installation and documentation.
Grid interoperability between NAREGI and LCG will be established.
Thank You
K. Amako 1,2, J. Ebihara 3, Y. Iida 1, K. Inami 4, K. Ishikawa 5, M. Kaga 4, S. Kameoka 1,2, S. Kawabata 1, K. Kawagoe 6, A. Kimura 7, Y. Kiyamura 6, M. Matsui 5, K. Murakami 1,2, H. Sakamoto 8, T. Sasaki 1,2, S. Suzuki 1, Y. Watase 1, S. Yashiro 1 and H. Yoshida 9
1 High Energy Accelerator Research Organization (KEK); 2 Japan Science and Technology Agency (JST); 3 SOUM Co., Ltd.; 4 Nagoya University; 5 IBM Japan Systems Engineering Co., Ltd.; 6 Kobe University; 7 Ashikaga Institute of Technology; 8 ICEPP, University of Tokyo; 9 Naruto University of Education
Backup
Introduction to SRB (Bonny Strong, RAL)
The SDSC Storage Resource Broker (SRB) is client-server middleware that provides a uniform interface for connecting to heterogeneous data resources over a network and accessing replicated data sets.
What is SRB
A software product developed by the San Diego Supercomputer Center (SDSC) as part of the U.S. National Partnership for Advanced Computational Infrastructure.
It has been operational at San Diego since 1997, where currently 200 TB of data are shared between 30 participating universities.
(Bonny Strong, RAL)
SRB History at KEK
2003 Jun. 06: start of operation of test system 1, with PostgreSQL
–HPSS interface
2003 Oct. 22: test system 2, with DB2
–PostgreSQL looks better than DB2
2004 Apr. 02: SRB in the Belle computer system
–interface to the Sony PetaSite via NFS
–tests with Melbourne
2004 Dec. 06: previous workshop
–together with Michael Wan (SDSC), an SRB federation was established among AU, KR, TW, CN, PL and JP
2005 Jul. 28: Nagoya joined the federation
2006 Apr. 18: replacement with the new system
–interface to HPSS via NFS/VFS
2006 Jun. 01: federated with IN2P3
SRB Main Features
Allows users to access files and database objects across a distributed environment.
The actual physical location and the way the data are stored are abstracted from the user.
Can manage the replication and movement of data.
Allows the user to add user-defined metadata describing the scientific content of the information, which can be searched for data discovery.
The metadata held include the physical and logical details of the data and its replicas, user information, and security rights and access control.
(Bonny Strong, RAL)
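The metadata-driven discovery described above can be sketched as a query over user-defined attributes. The attribute names and file names below are invented for illustration; in real SRB the metadata live in the MCAT database:

```python
# Sketch of metadata-driven data discovery, SRB-style: user-defined
# metadata attached to each object can be searched to find data.
# (Illustrative only; real SRB keeps this in its MCAT catalogue.)

objects = [
    {"name": "run1001.dst", "meta": {"beam": "on",  "year": 2005}},
    {"name": "run1002.dst", "meta": {"beam": "off", "year": 2006}},
    {"name": "mc_b0.mdst",  "meta": {"beam": "mc",  "year": 2006}},
]

def discover(**criteria):
    """Return names of objects whose metadata match all criteria."""
    return [o["name"] for o in objects
            if all(o["meta"].get(k) == v for k, v in criteria.items())]

assert discover(year=2006) == ["run1002.dst", "mc_b0.mdst"]
assert discover(beam="mc") == ["mc_b0.mdst"]
```

The key point is that discovery queries the descriptive metadata, never the physical storage locations, which stay abstracted away.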
SRB-DSI Architecture
The Storage Resource Broker Data Storage Interface (SRB-DSI) is an extension to the GridFTP server that allows it to interact with SRB. Plugging this extension into a GridFTP server allows the server to access an SRB resource and serve it to any GridFTP client as though it were a filesystem.
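Conceptually, SRB-DSI is an adapter: the server code talks to one filesystem-like interface, and the backend can be a local disk or SRB. A minimal sketch of that pattern (hypothetical class names; the real DSI is a C plugin to the Globus GridFTP server):

```python
# Adapter-pattern sketch of the SRB-DSI idea: the server sees one
# read(path) interface regardless of backend. Class names are
# hypothetical, not Globus or SRB API.

class LocalBackend:
    """Serves bytes from an ordinary filesystem-like store."""
    def __init__(self, files):
        self.files = files
    def read(self, path):
        return self.files[path]

class SRBBackend:
    """Adapts SRB objects to the same read(path) interface."""
    def __init__(self, srb_objects):
        self.srb_objects = srb_objects
    def read(self, path):
        # In the real DSI this would be an SRB client call.
        return self.srb_objects[path]

def serve(backend, path):
    # Server code is identical for both backends: that is the point
    # of plugging a DSI into the GridFTP server.
    return backend.read(path)

local = LocalBackend({"/data/a": b"local bytes"})
srb = SRBBackend({"/data/a": b"srb bytes"})
assert serve(local, "/data/a") == b"local bytes"
assert serve(srb, "/data/a") == b"srb bytes"
```

This is why any standard GridFTP client can reach SRB data without knowing SRB exists.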
Performance of SRB-DSI
(Figure: available bandwidth 117 MB/s measured with iperf; observed transfer rates of ~30, ~40 and ~60 MB/s)
LCG History at KEK
2005
–Jun: testbed project started with LCG-2.6; this site, named JP-KEK-CRC-00, is used for tests.
–Nov: workshop held at KEK; JP-KEK-CRC-01 started with LCG-2.7 (pre-production site; APDG).
2006
–Feb: CA service started.
–Mar: JP-KEK-CRC-01 registered to GOC.
–May: JP-KEK-CRC-02 started with LCG-2.7.
–Jun: JP-KEK-CRC-02 registered to GOC; CRC-00 being rebuilt with gLite-3.0 from scratch (not yet ready to start).
–Aug: Belle Grid workshop; CRC-01/02 updated to gLite-3.0.
JP-KEK-CRC-00
We have positioned JP-KEK-CRC-00 as the testbed site. We use it for:
–practicing LCG/gLite installation
–functional tests
This site is a closed environment compared with the other sites (easy to access and configure).
We install gLite-3.0 (100% pure), but it does not work yet.
Configured VOs:
–belle (Belle experiment)
–apdg (Asia-Pacific Data Grid; a test VO for people in the AP region)
–g4med (Geant4 medical applications)
(Diagram: 7 nodes, dg01-dg07.cc.kek.jp on 130.87.208.0/22, plus 2 WNs, wn001/wn002.kekgrid.jp, on 192.168.162.0/24 via NAT)
JP-KEK-CRC-01
We have positioned JP-KEK-CRC-01 as the pre-production site. We use it for:
–practice for the production site JP-KEK-CRC-02
–test use among university groups in Japan
SL305/LCG-2.7 (upgrade to gLite-3.0 is done); CPU: 14; SE: 1 TB; under GOC monitoring.
Supported VOs:
–belle (Belle experiment)
–apdg (Asia-Pacific Data Grid)
–dteam, ops (for WLCG operation)
Number of users: ~20
(Diagram, last modified 2006-08-03: hosts dg09-dg18 on 130.87.208.xxx providing RB, CE, SE, MON, PX/BDII, VOMS, UI, LFC, VOBOX and a 1 TB RAID SE; a WN farm wn001-wn015 on 192.168.1.0/24 (xxx.keklcg.jp) behind DNS/NAT with NFS)
JP-KEK-CRC-02
We use it as the production site.
SL305/LCG-2.7 (upgrade to gLite-3.0 is done); CPU: 48; SE: 6 TB (without HPSS); under GOC monitoring.
–HPSS is now connected to part of the DPM pool (Kohki will talk about using HPSS later).
–CE: we plan to use LSF as the LRMS.
Supported VOs:
–belle (Belle experiment)
–ilc (International Linear Collider)
–apdg (Asia-Pacific Data Grid)
–atlas_j (ATLAS Japan community)
–dteam, ops (for WLCG operation)
Number of users: ~20
(Diagram: LCG computing servers rlc01-rlc36 (CERN SL, 48 CPUs under PBS/LSF); LCG file servers plus disk (AIX5L 5.3) and the HPSS system (200 TB); service nodes rls01-rls09 hosting CE, WN, SE, RB, PX/BDII, VOMS, LFC, UI and MON)