1
MGHPCC as a Platform for MOC
Jim Culbert, Director of IT Services, MGHPCC, 12/06/2016
2
The MGHPCC Data Center and Consortium
A partnership between five universities, the Commonwealth, and industrial sponsors
3
Platform for Collaborative Research Computing
RC Requirements
- High density
- Experimental by definition: flexibility
- Compute flexibility: commodity, GPU, FPGA, the next big thing…
- Networking flexibility: topology, technology
- Reconfigurable
- Regulatory and grant constraints
4
Collaborative RC (Cont.)
Collaboration Requirements
- Low-friction interactions: 3 rows is easier than 3 counties.
- Low-friction infrastructure
  - Pre-install as much as possible
  - Pre-installed cabling; support for ad hoc physical interconnect
  - Pre-configured MeetMe networking services
- Low-friction process
  - Physical flexibility / swaps
  - Collaborative design, construction, operation, and governance: if it's not working, we can change it
5
RC Implementation Options
Roll your own
- Pro: Flexible! Master of your own destiny!
- Con: Poor physical environment/support, gets old quick, hard to collaborate. Expensive for the university (sometimes the PI).

Central IT DC
- Pro: More resource efficient.
- Con: Not usually designed to support research computing. Hard to collaborate outside of the university.

Commercial DC
- Pro: Flexible! Good support. Can be coerced to support RC.
- Con: Expensive. Can collaborate, but everyone needs to do business with the same DC. Not agile/flexible. Paying for enterprise features you don't need.

Commercial Cloud
- Pro: Flexible! Can do RC these days. Great for small scale.
- Con: Unexpectedly expensive. Collaborators must agree on a cloud.

MGHPCC
- Pro: Flexible. Built for collaborative RC. Can add other collaborators easily. Cheap: power, efficient DC, utilization (people and equipment).
- Con: Must work and play well with others. Commute to Western Mass to touch your equipment.
6
MGHPCC By the Numbers
- 100k square feet total: 30k square feet of computer room space (~1 acre), 15k admin, 55k everything else (entry rooms, loading docks, staging, recycling, MEP)
- 10 MW* compute power (14 kW compute, 6 kW networking); standard cabinet densities up to 25 kW, custom designs for 100 kW (see the sketch below)
- 718 (31k U) total pre-installed cabinets = 580 (25k U) for computing and storage + 138 (6k U) for networking and critical storage
- 1000 strands (2-3 mi) of pre-installed OM2/OM3 structured cabling for tenant use
- 1000 linear feet of tenant-accessible, inter-cabinet cable tray
- High-capacity networking: 150 strands in the MGHPCC manhole, ~30% of that landed in ERs, more at Appleton and Cabot Street. Providers: MIT, UMass, VZ, Comcast, Fibertech. Multiple 10G WAN paths to members' home locations, 100G to I2, commodity 200M internet.
- Room to expand: site power and space to build another MGHPCC

* 20% UPS/Critical Power
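A rough back-of-the-envelope check of the figures above; the per-cabinet average at the end is a derived illustration, not a number from the slide.

```python
# Sanity-check the "By the Numbers" arithmetic (illustrative only).

# Floor space (square feet)
computer_room = 30_000
admin = 15_000
everything_else = 55_000   # entry rooms, loading docks, staging, recycling, MEP
assert computer_room + admin + everything_else == 100_000

# Pre-installed cabinets and rack units
compute_storage_cabinets = 580   # ~25k U
network_cabinets = 138           # ~6k U
assert compute_storage_cabinets + network_cabinets == 718

# Compute power budget: 10 MW, of which ~20% is UPS/critical power
compute_power_w = 10_000_000

# Hypothetical even spread of the 10 MW over compute/storage cabinets;
# actual cabinets run up to 25 kW standard and 100 kW custom.
avg_kw_per_cabinet = compute_power_w / compute_storage_cabinets / 1_000
print(f"Average available power per compute/storage cabinet: {avg_kw_per_cabinet:.1f} kW")
```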
7
Collaborative RC and Security
Designed to support this:
- Security working group during the design phase
- Matrix analysis of typical grant regulatory constraints: HIPAA, FISMA, FERPA, NIST SP-800 recommendations
- Auditable, computer-managed physical key control
- Pervasive HD video
- Physical isolation all the way to the carrier network demarc
- MGHPCC resources "look like" campus buildings/rooms
8
How we're Green
- Power portfolio is greater than 90% carbon free
- 1.2 PUE, comparable to best in class (see the sketch below)
  - Free cooling (a.k.a. water-side economizer)
  - In-row cooling
  - Hot aisle containment
  - 400V power distribution
  - Continuous improvement: monitor, modify, measure, repeat
  - Plus a million little things (no conditioning in electrical rooms, VFDs, trim chiller, ECO mode UPS, motion sensors, etc.)
- US Green Building Council, LEED Platinum Certified
- In the U.S., buildings account for 38% of CO2 emissions, 13.6% of potable water use (15 trillion gal./year), and 73% of U.S. electricity consumption
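PUE (power usage effectiveness) is total facility power divided by IT equipment power, so 1.2 means only 20% overhead on top of the IT load. A minimal sketch of what that implies, assuming a hypothetical 10 MW IT load (illustrative numbers, not measured MGHPCC figures):

```python
# PUE = total facility power / IT equipment power (illustrative calculation).

it_load_mw = 10.0   # hypothetical IT (compute/network/storage) load
pue = 1.2           # MGHPCC's stated PUE

total_facility_mw = it_load_mw * pue
overhead_mw = total_facility_mw - it_load_mw   # cooling, distribution losses, lighting, ...

print(f"Total facility power: {total_facility_mw:.1f} MW")   # 12.0 MW
print(f"Overhead beyond the IT load: {overhead_mw:.1f} MW")  # 2.0 MW
# A typical enterprise data center at PUE ~1.8-2.0 would spend roughly
# 8-10 MW on overhead for the same IT load.
```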