LIGO Data Grid and KISTI
Gungwon Kang & Jiwoong Kim (KISTI)
June 27, 2015, at the 8th J-K Joint Workshop on KAGRA, Gwangju, Korea
OUTLINE
I. KISTI GSDC Overview
II. KISTI LDG (LIGO Data Grid)
III. Conclusion
I. KISTI GSDC Overview

KISTI: Korea Institute of Science and Technology Information
- National research institute for information technology since 1962
- About 600 people working on supercomputing & networking and the national information service (development & analysis)
- Runs a high-performance computing facility: 3,398 nodes in total (30,592 CPUs, 360 TFlops at peak) and 1,667 TB of storage (introduced since 2008)
- Supercomputer: Intel Xeon X5570 2.93 GHz (Nehalem), Rpeak 300 TF, 25,408 CPUs, 76.8 TB memory, 1,061 TB storage
GSDC: Global Science experiment Data hub Center
- National project to promote data-based science research by providing computing and storage resources to HEP and other fields
- Runs a data-intensive computing facility
- ~20 staff: system administration, experiment support, external relations, administration, and students
- CPU: ~5,900 cores; storage: ~6.8 PB; budget: ~6 M$/year
- Supported experiments: ALICE, CMS, Belle, LIGO, RENO, Genomic Medicine, etc.
- GSDC facility: HP servers, Hitachi VSP storage
Resource allocations: chart of CPU (cores) and storage (TB) by field/experiment, as of December 2014 (National Institute of Supercomputing and Networking).
GSDC CPU/Storage (as of 2014.12.30)

Storage systems (physical size / usable size):
- NetApp FAS2050 (SAN only, RAID6): 104 TB / 50 TB
- NetApp FAS6080 (SAN & NAS, RAID6): 334 TB / 200 TB
- Hitachi USP-V (SAN & NAS, RAID6): 960 TB / 600 TB
- EMC CX4-960C (SAN, RAID6): 1,920 TB / 1,250 TB
- EMC Isilon 108NL: 1,620 TB / 1,400 TB
- Hitachi VSP 1 (SAN, NAS, RAID6): 758 TB / 500 TB (2013) plus 320 TB / 214 TB (2014)
- Hitachi VSP 2 (SAN, NAS, RAID6): 857 TB / 570 TB
- Total: 6,873 TB / 4,784 TB

Latest storage: Hitachi VSP with HNAS 4080 (4 nodes), usable 700 TB, RAID6 (6D+2P), 512 GB cache memory, 8 Gbps FC front-end interface with 32 ports.

Computing servers (purchased in 2014): HP DL360 G8 (1U); two E5-2680v2 2.8 GHz CPUs (20 cores per node), 128 GB DDR3 1600 MHz SDRAM, four 600 GB 10K SAS disks, 2 x 1 GbE and 2 x 10 GbE ports, 8G HBA, redundant power supply; 1,100 cores in total (20 cores x 55 nodes).
II. LIGO Data Grid (LDG)
KISTI LDG T3
- User authentication based on GSI
- Storage: 155 TB of Tier-1/2/3 LIGO/Virgo data
- Worker nodes: wn3076~3110.sdfarm.kr (Condor), 576 physical cores (780 with hyperthreading)
- Cluster login node: ui04.sdfarm.kr (Condor)
- ce04.sdfarm.kr (Condor)
- ldr.sdfarm.kr (GridFTP server)
- Web server for result analysis and discussion: ldas.ligo.kisti.re.kr
- lgm.sdfarm.kr (Intel compiler license)
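As an illustration of the workflow this setup supports, here is a minimal sketch of obtaining a GSI proxy and submitting a test job from the login node. The username albert.einstein and the contents of the submit file are illustrative assumptions, not taken from the slide.

    # On ui04.sdfarm.kr: obtain a GSI proxy with LIGO.ORG credentials
    ligo-proxy-init albert.einstein
    grid-proxy-info

    # Write a minimal Condor submit description that runs /bin/hostname
    # on one of the worker nodes (vanilla universe)
    printf '%s\n' \
        'universe   = vanilla' \
        'executable = /bin/hostname' \
        'output     = test.out' \
        'error      = test.err' \
        'log        = test.log' \
        'queue' > test.sub

    # Submit to the Condor pool and check the queue
    condor_submit test.sub
    condor_q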
System configuration in more detail:
- ui04.sdfarm.kr (Condor): connection and job submission
- wn3076~3110.sdfarm.kr (Condor): worker nodes
- Central storage: 150 TB
- ldr.sdfarm.kr (GridFTP server): data replication from ldr.ligo.caltech.edu
- ldas.ligo.kisti.re.kr: web publication
- lgm.sdfarm.kr: Intel compiler license server
- ui test (Condor): test user interface
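LDR automates the replication between the Caltech server and ldr.sdfarm.kr shown above; the underlying GridFTP transfer can be sketched roughly as below. The frame path and file name are hypothetical placeholders, and a valid GSI proxy is assumed.

    # Pull one frame file from the Caltech LDR GridFTP server into the local
    # archive; -vb reports transfer performance, -p 4 uses 4 parallel streams
    globus-url-copy -vb -p 4 \
        gsiftp://ldr.ligo.caltech.edu/path/to/H-H1_EXAMPLE-966383956-4096.gwf \
        file:///data/ligo/archive/H-H1_EXAMPLE-966383956-4096.gwf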
KISTI LDG Resources (2015)
- Computation resources (worker nodes): 48 nodes, 576 physical cores (780 with hyperthreading, enabled on 17 nodes)
- Storage resources (data only, /data/ligo/archive): 155 TB, expandable to 200 TB

Filesystem usage (size / used / available / use%):
- /data/ligo/home: 786 GB / 471 GB / 315 GB / 60%
- /data/ligo/lib: 100 GB / 71 GB / 20 GB / 71%
- /data/ligo/scratch: 4.73 TB / 3.96 TB / 784 GB / 84%
- /data/ligo/archive: 150 TB / 123 TB / 28 TB / 82%
- Total: 155 TB / 127 TB / 29 TB / 82%

Cores and RAM:
- Worker nodes: 780 / 576 cores, 72 GB RAM (with hyperthreading) or 48 GB (without)
- UI, CE, LGM, LDAS, LDR: 60 cores (12 per server), 24 GB RAM
- Total: 840 / 636 cores
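The filesystem figures above are what a simple disk-usage check on the cluster would report; a sketch using the mount points from the table:

    # Report size, usage and availability of the LIGO data areas
    df -h /data/ligo/home /data/ligo/lib /data/ligo/scratch /data/ligo/archive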
Stored Data
- h(t) ("hoft") frames: LIGO S5~6, Virgo VSR1~4
- RDS L1 frames: LIGO S6
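To locate these frames on the cluster, the LDG data-discovery client can be queried; a sketch, in which the frame type H1_LDAS_C02_L2 and the GPS interval are illustrative assumptions rather than values from the slide:

    # Requires a valid GSI proxy; asks the local data-find/LDR service where
    # S6 h(t) frames from the LIGO Hanford (H1) detector are stored
    ligo_data_find --observatory H --type H1_LDAS_C02_L2 \
        --gps-start-time 931035615 --gps-end-time 931039215 \
        --url-type file --lal-cache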
KISTI LDG Usage
Number of jobs per year (computing time in days in parentheses, where recorded):
- 2011: 1,025 jobs (2,143 days)
- 2012: 229 jobs
- 2013: 11,028 jobs
- 2014: 6,124 jobs
- 2015 (year to date): 35,400 jobs

Worker node usage (CPU, 384 cores), monitored by KISTI:
- 2014.11~2015.02: max 78.9%, average 13%
- 2015.01~2015.02: max 60%, average 35%

* A new monitoring measure beyond simple job counts is needed (e.g., compare June and August 2011).
User groups (1): KGWG
Korean Gravitational Wave Group (한국중력파연구협력단, since 2008): ~30 people working in 8 universities and 3 government-funded institutes, participating in LIGO-Virgo and KAGRA.

Members by affiliation:
- Seoul National Univ.: 이형목 (PI)
- Yonsei Univ.: 김정리
- Hanyang Univ.: 이현규, 김경민, 이철훈
- Sogang Univ.: 조규만
- Pusan National Univ.: 이창환, 김영민, 김명국
- Inje Univ.: 이형원, 김정초
- Korea Univ.: 윤태현, 조동현
- Kyungpook National Univ.: 박명구
- Myongji Univ.: 김재완
- Kunsan National Univ.: 김상표
- NIMS: 오정근, 오상훈, 손재주, 김환선, 추형석
- KISTI: 강궁원, 장행진, 김지웅, 윤희준, 조희석
- KAERI: 차용호
- GIST: 강훈수
User groups (2)

CBC signal and noise identification: 오정근, 오상훈, 손재주, 김환선, 추형석 (NIMS); 이창환, 김영민 (Pusan National Univ.)
(※ KISTI researchers: 강궁원, 장행진, 김지웅, 윤희준, 조희석 (KISTI-GSDC))
- Improving the iDQ pipeline with deep learning and porting an artificial neural network module
- Study on improving the detection statistic using the bank chi-squared
- Correlation analysis between the gravitational-wave channel and auxiliary channels (CAGMon)
- Study of trigger generation using the Hilbert-Huang transform (HHT)

Parameter estimation: Chunglee Kim (Yonsei Univ./KISTI GSDC), Hyungwon Lee, Chungcho Kim (Inje Univ.), plus LSC collaborators (Caltech, NU, UWM, Montclair State Univ.) and KAGRA collaborators (Osaka University)
- Parameter estimation from gravitational-wave analysis
- Development of gravitational waveforms that are more astrophysically realistic and computationally efficient

- Mostly used by domestic researchers so far!
User groups (3)
Name / Affiliation / E-mail / Account:
- Kazuhiro Hayama, Osaka University, kazuhiro.hayama@gmail.com, …
- Tatsuya Narikawa, Osaka University, narikawat@vega.ess.sci.osaka-u.ac.jp, …
- Hideyuki Tagoshi, Osaka University, tagoshi@vega.ess.sci.osaka-u.ac.jp, …
- Koh Ueno, Osaka University, ueno@vega.ess.sci.osaka-u.ac.jp, …
- Hirotaka Yuzurihara, Osaka University, yuzurihara@yukimura.hep.osaka-cu.ac.jp, …
Name / login records:
Kazuhiro Hayama: -
Tatsuya Narikawa (588 min):
narikawa pts/7 :pts/8:S.0 Wed Jun 3 16:49 - 16:51 (00:01)
narikawa pts/8 pascal.hep.osaka Wed Jun 3 16:49 - 16:51 (00:01)
narikawa pts/7 :pts/6:S.0 Thu May 21 12:12 - 12:43 (00:30)
narikawa pts/7 :pts/6:S.0 Thu May 21 11:20 - 11:51 (00:30)
narikawa pts/6 pascal.hep.osaka Thu May 21 11:20 - 12:45 (01:24)
narikawa pts/7 :pts/6:S.0 Thu May 21 10:52 - 10:52 (00:00)
Tue May 12 18:07 - 18:11 (00:04)
narikawa pts/18 :pts/14:S.0 Tue May 12 17:48 - 17:57 (00:09)
narikawa pts/14 pascal.hep.osaka Tue May 12 17:48 - 17:57 (00:09)
narikawa pts/19 :pts/16:S.0 Mon Apr 27 17:29 - 18:02 (00:32)
narikawa pts/19 :pts/16:S.0 Mon Apr 27 15:50 - 16:54 (01:04)
narikawa pts/16 pascal.hep.osaka Mon Apr 27 15:50 - 18:02 (02:11)
narikawa pts/19 :pts/16:S.0 Mon Apr 27 14:59 - 14:59 (00:00)
narikawa pts/19 :pts/16:S.0 Mon Apr 27 12:38 - 13:10 (00:31)
narikawa pts/16 pascal.hep.osaka Mon Apr 27 12:38 - 14:59 (02:20)
narikawa pts/5 :pts/1:S.0 Thu Apr 23 17:59 - 18:04 (00:05)
narikawa pts/1 pascal.hep.osaka Thu Apr 23 17:59 - 18:04 (00:05)
narikawa pts/5 :pts/1:S.0 Thu Apr 23 17:56 - 17:56 (00:00)
narikawa pts/1 pascal.hep.osaka Thu Apr 23 17:55 - 17:56 (00:00)
narikawa pts/1 pascal.hep.osaka Thu Apr 23 17:54 - 17:55 (00:00)
Name / login records (continued):
Kazuhiro Hayama: -
Tatsuya Narikawa (588 min):
narikawa pts/7 :pts/8:S.0 Wed Jun 3 16:49 - 16:51 (00:01)
narikawa pts/8 pascal.hep.osaka Wed Jun 3 16:49 - 16:51 (00:01)
…
narikawa pts/1 pascal.hep.osaka Thu Apr 23 17:55 - 17:56 (00:00)
narikawa pts/1 pascal.hep.osaka Thu Apr 23 17:54 - 17:55 (00:00)
Hideyuki Tagoshi: -
Koh Ueno (250 min):
ueno pts/19 :pts/18:S.0 Sat May 2 19:37 - 20:21 (00:43)
ueno pts/18 pascal.hep.osaka Sat May 2 19:37 - 20:21 (00:43)
…
ueno pts/8 pascal.hep.osaka Tue Apr 21 21:00 - 21:07 (00:06)
Hirotaka Yuzurihara (28 min):
yuzu pts/19 :pts/3:S.0 Tue Jun 9 06:09 - 06:09 (00:00)
yuzu pts/3 pascal.hep.osaka Tue Jun 9 06:09 - 06:09 (00:00)
…
yuzu pts/24 pascal.hep.osaka Mon Apr 27 17:39 - 17:48 (00:09)
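These entries resemble standard login-accounting output from the login node; something like the following would reproduce them (the exact invocation is a guess, not shown on the slide):

    # Login history for the KAGRA accounts on ui04.sdfarm.kr
    last narikawa ueno yuzu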
LDG central monitoring system (WatchTower)
- Ganglia is installed on all KISTI LDG resources.
- ce04.sdfarm.kr, a Condor server, gathers the Ganglia information from the LDG worker nodes.
- Port 8649 is open to the watchtower host, 129.89.57.50 (watchtower.phys.uwm.edu).
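A minimal sketch of the gmond access control that would expose port 8649 only to the UWM watchtower; the cluster's actual configuration files are not shown on the slide, so details such as the file path and netmask are assumptions.

    # /etc/ganglia/gmond.conf on ce04.sdfarm.kr: answer TCP queries on 8649,
    # but only from watchtower.phys.uwm.edu
    tcp_accept_channel {
      port = 8649
      acl {
        default = "deny"
        access {
          ip     = 129.89.57.50
          mask   = 32
          action = "allow"
        }
      }
    }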
Software deployed
- OS: Scientific Linux 6.1 (?)
- Batch system for managing compute jobs: Condor 7.8.7 (?)
- LIGO Data Grid software: 5.2.2, more than 200 packages (https://www.lsc-group.phys.uwm.edu/daswg/download/repositories.html)
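For reference, a quick way to confirm the deployed versions on a node; the yum package name ldg-client for the LDG tools from the repositories above is an assumption, not confirmed by the slide.

    # Confirm OS release and Condor version (expect SL 6.1 and Condor 7.8.7)
    cat /etc/redhat-release
    condor_version

    # Installing the LDG client tools from the lscsoft repositories
    # (package name "ldg-client" is an assumption)
    yum install ldg-client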
III. Conclusion
We have briefly introduced the computing resources, environment, and operational status of the KISTI GSDC LDG Tier-3 center.
We hope to develop a good collaboration on KAGRA computing and data management in the future.
THANK YOU
감사(感謝)합니다