TeraScale Supernova Initiative
Supernovae driven by the gravitational collapse of the core of a massive star
March 16, 2004, DMW2004
3D runs could be routine if…
…we could handle the data:
Linear evolution: 400³ grid, 500 GB of data, 8-hour run
Non-linear evolution: up to a … grid, terabytes of data, 30-50 hour run
A back-of-envelope check of these volumes follows.
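A rough estimate of where the 500 GB comes from. The variable count, precision, and snapshot count below are assumptions chosen to be consistent with the figure quoted on the slide, not values given in the talk:

```python
# Back-of-envelope data volume for a "linear evolution" run on a 400^3 grid.
cells = 400 ** 3                 # grid size from the slide
variables = 5                    # e.g. density, pressure, 3 velocity components (assumed)
bytes_per_value = 8              # double precision (assumed)
snapshots = 200                  # time slices written over the 8-hour run (assumed)

snapshot_bytes = cells * variables * bytes_per_value
run_bytes = snapshot_bytes * snapshots

print(f"per snapshot: {snapshot_bytes / 1e9:.1f} GB")   # ~2.6 GB
print(f"per run:      {run_bytes / 1e9:.0f} GB")        # ~512 GB, matching the slide
```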
Science begins with Data
Scientific discovery is done through interactive access to data:
Must have interactive access on a large-memory computer for analysis and visualization
Must have high bandwidth when accessing the data
Must have sufficient storage (many TB) to hold data for weeks
Current solution: move the data to a dedicated computer, a 22-node Linux cluster running the EnSight parallel visualization software (a rough per-node sizing is sketched below).
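To see why a large-memory, multi-node cluster is needed, divide one run across the 22 nodes. The 4 TB run size is from the slides; even distribution across nodes is an assumption:

```python
# How much of a 4 TB run each of the 22 analysis nodes must hold.
run_tb = 4.0
nodes = 22

per_node_gb = run_tb * 1000 / nodes
print(f"{per_node_gb:.0f} GB per node")   # ~182 GB per node

# ~182 GB/node far exceeds 2004-era RAM, so interactive work relies on
# local disk plus loading subsets of the data into memory on demand.
```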
To move, or not to move…
Working with data at the remote site is not currently possible due to the lack of:
A shared file system
A large number of interactive processors
On-line storage for many TBs
A low-latency, high-bandwidth WAN
[Diagram: Cray X1 (billion-cell simulation generates 4 terabytes in 30 hours) connected over the WAN to the visualization platform]
The sustained WAN rate this implies is estimated below.
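The slide's own numbers set the bar for the WAN. A quick calculation, using only the 4 TB / 30-hour figures from the slides:

```python
# Sustained WAN bandwidth needed just to keep pace with one run.
run_bytes = 4e12          # 4 TB generated per run (from the slide)
run_hours = 30            # simulation wall-clock time (from the slide)

required_mb_s = run_bytes / (run_hours * 3600) / 1e6
print(f"{required_mb_s:.0f} MB/s sustained")   # ~37 MB/s

# The "~few MB/s" observed in the current mode of operation (next slide)
# is an order of magnitude short of this, so moving data is the bottleneck.
```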
Current mode of operation
[Diagram: a billion-cell simulation on the Cray X1 at ORNL generates 4 terabytes in 30 hours; HDF5 data moves through HPSS and a logistical network to a Linux cluster at NCSU with 4 Tbytes of local disks, over links running at 20 MB/s and ~few MB/s. Visualization at time = 2.2: density, pressure, velocity, B field.]
A minimal sketch of the HDF5 output step follows.
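The slides say only that the data format is HDF5; the field names, layout, and use of the h5py library below are illustrative assumptions, shown on a tiny grid rather than the real billion-cell (1000³) domain:

```python
# Minimal sketch: writing one simulation snapshot to HDF5 with h5py.
import numpy as np
import h5py

n = 64  # tiny demo grid; the actual runs were billion-cell (1000^3)

with h5py.File("snapshot_t2.2.h5", "w") as f:
    f.attrs["time"] = 2.2                       # snapshot time, as in the slide
    for name in ("density", "pressure", "b_field"):
        f.create_dataset(name, data=np.random.rand(n, n, n),
                         chunks=True, compression="gzip")
    # Velocity stored as a vector field: three components per cell.
    f.create_dataset("velocity", data=np.random.rand(3, n, n, n),
                     chunks=True, compression="gzip")
```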
Data Management Demands
NOW!
Volume of data: 100 TB (<5 TB/run)
Bandwidth to storage: 100 MB/s
WAN bandwidth: 100 MB/s
LoCI for replication and collaboration
Data integrity?? (one way to verify integrity during replication is sketched below)
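Data integrity is left as an open question on the slide. A common approach (an assumption here, not something the slides prescribe) is to checksum each file before and after replication and compare digests:

```python
# Streaming checksum for multi-GB files, read in 64 MB chunks.
import hashlib

def sha256_of(path: str, chunk_bytes: int = 64 * 1024 * 1024) -> str:
    """Stream a large file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            h.update(chunk)
    return h.hexdigest()

# A replica is accepted only if the digests match, e.g.:
# assert sha256_of("local/snapshot.h5") == sha256_of("/mnt/replica/snapshot.h5")
```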
Data Management Demands
Future?
Volume of data: petabytes are not too far away
Bandwidth to storage: must match the flops
WAN bandwidth: do we move a PB? (quantified below)
Storage-efficient access - easy to use!
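"Do we move a PB?" can be answered with arithmetic. The ~few MB/s and 100 MB/s rates are from the slides; the 1000 MB/s entry is a hypothetical faster link added for comparison:

```python
# Time to transfer one petabyte at various sustained WAN rates.
pb = 1e15  # bytes

for mb_s in (5, 100, 1000):  # ~few MB/s today, 100 MB/s demanded, 1 GB/s hypothetical
    days = pb / (mb_s * 1e6) / 86400
    print(f"{mb_s:5d} MB/s -> {days:7.1f} days")

# Even at the demanded 100 MB/s, a petabyte takes ~116 days to move,
# which motivates storage-efficient remote access over bulk transfer.
```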