SURA Birds of a Feather – I2 Spring Members Meeting, April 23, 2007 – Gary Crane, SURA Director IT Initiatives

SURA I2 BoF Agenda
–Introductions
–SURA Program Update
–SERON Committee Formation
–SURAgrid Update
–New SURAgrid Corporate Partnerships
  New IBM Linux Cluster offering
  Dell Linux Cluster offering
–Open Discussion

SURA IT Program Update
See the current on-line IT Program Update for details:
–IT Steering Group activities & summary of interactions with I2 & NLR; SURA letters to I2 leadership – Attachments 1 & 2
–SURAgrid status & activities
–SURAgrid corporate partnership program
–AtlanticWave update
–AT&T GridFiber update
–SERON background and committee structure – Attachment 3
–SURAgrid Governance Proposal – Attachment 4
–SURAgrid application summaries – Attachments 5-9
  Multiple Genome Alignment – GSU
  Urban Water System Threat Management Simulation – NC State
  Bio-electric Simulator for Whole Body Tissues – ODU
  Storm Surge Modeling with ADCIRC – UNC-CH/RENCI
  Searching Protein Databases – UAB

SERON Committee Activities: Background
SURA SE footprint:
–63 research institutions
–35% of dues-paying I2 members
–7 of the 16 NLR memberships
–All but 3 states have R&E network initiatives
Strong history of connectivity leadership:
–SURANet
–Regional Infrastructure Initiative & AT&T Partnership
–Operational RONs: FLR, LEARN, LONI, MATP, MAX, NCREN, OneNet, SLR

Held Two Community-Organized and -Led Information & Planning Meetings
July 27, 2006 Atlanta meeting:
–Resulted in creation of 2 working groups
  Explore ways to leverage SURA region RONs to reduce costs / improve services – working group led by Larry Conrad
  Leverage SURA region to influence the I2 / NLR relationship – working group led by John Mullin
February 20, 2007 Atlanta meeting:
–SERON Committee formed
–SURA RON Traffic Analysis group formed

First Meeting of the SERON Committee – April 17, 2007
–Agreed to continue to pursue cooperative efforts by SURA region RONs, leveraging SURA
–Nominated Charlie McMahon (LSU) as SERON Chair and Phil Halstead (FLR) as Co-Chair
–Will work with SURA to continue to organize the SERON community

SERON Products
–Several letters from the SURA President and IT Committee Chair stating SURA regional views on I2/NLR developments
–An initial effort to organize a SURA regional traffic analysis
–Improved communications between I2 & NLR leaders and SURA region network leaders
–Active effort to nominate SURA regional networking leaders to I2 Council positions

SURAgrid Update – Mary Fran Yafchak, SURA IT Program Coordinator

SURAgrid Corporate Partnerships
–Existing IBM p5 575 partnership
–New IBM e1350 Linux partnership
–New Dell PowerEdge 1950 partnership
Significant product discounts
Owned and operated by SURAgrid participants
Integrated into SURAgrid, with 20% of capacity available to SURAgrid

Existing p5 575 System Solution
–Robust Hardware with High Reliability Components
–16 CPU scalability within a node
–Low Latency High Performance Switch Technology
–AIX OS & Software Subsystems
–High Compute Density Packaging
–Ability to scale to very large configurations

Two Options for SURA
0.97 TFlop Solution:
–8 16-way (16W) Nodes at 1.9 GHz, 128 Processors
–Federation Switch
–128 GB or 256 GB System Memory
–Storage Capacity: 2.35 TBytes
1.7 TFlop Solution:
–14 16-way (16W) Nodes at 1.9 GHz, 224 Processors
–Federation Switch
–224 GB or 448 GB System Memory
–Storage Capacity: 4.11 TBytes
(Rack diagram labels from the slide: 16W nodes, Federation switch (Fed SW), BPA, 2U/4U drawers)
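The quoted peak and memory figures follow directly from the node counts above. A minimal sanity-check sketch in Python, assuming 4 floating-point operations per processor per cycle for the POWER5+ parts (that FLOPs-per-cycle value is an assumption, not stated on the slide):

    # Rough check of the quoted peak-TFlop and total-memory figures.
    # ASSUMPTION: 4 FLOPs per processor per cycle (not stated on the slide).
    FLOPS_PER_CYCLE = 4
    CLOCK_GHZ = 1.9          # 1.9 GHz processors
    PROCS_PER_NODE = 16      # 16-way (16W) nodes

    def peak_tflops(nodes):
        """nodes x processors/node x GHz x FLOPs/cycle, expressed in TFlops."""
        return nodes * PROCS_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

    for nodes in (8, 14):
        for gb_per_node in (16, 32):
            print(f"{nodes} nodes, {gb_per_node} GB/node: "
                  f"~{peak_tflops(nodes):.2f} TFlop, {nodes * gb_per_node} GB memory")
    # 8 nodes  -> ~0.97 TFlop, 128 GB (16 GB/node) or 256 GB (32 GB/node)
    # 14 nodes -> ~1.70 TFlop, 224 GB (16 GB/node) or 448 GB (32 GB/node)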

p5 575 Software
–AIX 5.3
–General Parallel File System (GPFS) with WAN Support
–LoadLeveler
–Cluster Systems Management (CSM)
–Compilers (XL/FORTRAN, XLC)
–Engineering and Scientific Subroutine Library (ESSL)
–IBM's Parallel Environment (PE)
–Simultaneous Multi-Threading (SMT) Support
–Virtualization, Micro-Partitioning, DLPAR

SURA Pricing for p5 575 Solutions
0.97 TFlop Solution:
–8 Nodes, $380, to SURA (16GB/Node)*
–8 Nodes, $410, to SURA (32GB/Node)*
1.70 TFlop Solution:
–14 Nodes, $610, to SURA (16GB/Node)*
–14 Nodes, $660, to SURA (32GB/Node)*
*Price Includes 3 Year Warranty – Hardware M-F, 8-5, Next Day Service
Pricing Available Through the End of Calendar Year 2007
Net Price to Add a Node with 32 GB Memory: $56,752
Net Price to Add a Node with 16 GB Memory: $53,000

New SURA e1350 Linux Cluster
New IBM BladeCenter-H, new HS21XM Blades and Intel Quad-Core Processors
3 TFLOP Configuration:
–One-rack solution with GigE interconnect
–1 GB/core
–Combination Management/User node with storage
6 TFLOP – Performance-Focused Solution for HPC:
–Two-rack solution utilizing DDR InfiniBand
–2 GB/core
–Combination Management/User node with storage
–Optional SAN supporting 4 Gbps storage at 4.6 TBytes
Announced at last week's SURA BoT meeting

3 TFLOP e1350 Cluster – $217,285
–34 HS21XM Blade Servers in 3 BladeCenter H Chassis
  Dual Quad-Core 2.67 GHz Clovertown Processors
  1 GB Memory per core
  73 GB SAS Disk per blade
  GigE Ethernet to blade with 10Gbit Uplink
  Serial Terminal Server connection to every blade
  Redundant power/fans
–x3650 2U Management/User Node
  Dual Quad-Core 2.67 GHz Clovertown Processors
  1 GB Memory per core
  Myricom 10Gb NIC Card
  RAID Controller with (6) 300GB 10K Hot-swap SAS Drives
  Redundant power/fans
–Force10 48-port GigE Switch with 2 10Gb Uplinks
–SMC 8-port 10Gb Ethernet Switch
–(2) 32-port Cyclades Terminal Servers
–RedHat ES 4 License and Media Kit (3 years update support)
–Console Manager, pull-out console, keyboard, mouse
–One 42U Enterprise Rack, all cables, PDUs
–Shipping and Installation
–5 Days onsite consulting for configuration, skills transfer
–3 Year Onsite Warranty

6 TFLOP e1350 Cluster – $694,309
–70 HS21XM Blade Servers in 5 BladeCenter H Chassis
  Dual Quad-Core 2.67 GHz Clovertown Processors
  2 GB Memory per core
  73 GB SAS Disk per blade
  GigE Ethernet to blade
  DDR Non-Blocking Voltaire InfiniBand Low Latency Network
  Serial Terminal Server connection to every blade
  Redundant power/fans
–x3650 2U Management/User Node
  Dual Quad-Core 2.67 GHz Clovertown Processors
  1 GB Memory per core
  Myricom 10Gb NIC Card
  RAID Controller with (6) 300GB 10K Hot-swap SAS Drives
  Redundant power/fans
–DDR Non-Blocking InfiniBand Network
–Force10 48-port GigE Switch
–(3) 32-port Cyclades Terminal Servers
–RedHat ES 4 License and Media Kit (3 years update support)
–Console Manager, pull-out console, keyboard, mouse
–One 42U Enterprise Rack, all cables, PDUs
–Shipping and Installation
–10 Days onsite consulting for configuration, skills transfer
–3 Year Onsite Warranty
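As a rough check of the 3 TFLOP and 6 TFLOP labels, the peak arithmetic can be reproduced from the blade counts and clock speed above; a short sketch assuming 4 FLOPs per core per cycle on the Clovertown Xeons (an assumption, not given in the configurations):

    # Rough peak estimate for the two e1350 blade configurations.
    # ASSUMPTION: 4 FLOPs per core per cycle on the 2.67 GHz Clovertown Xeons.
    FLOPS_PER_CYCLE = 4
    CLOCK_GHZ = 2.67
    CORES_PER_BLADE = 2 * 4   # dual quad-core sockets per HS21XM blade

    def cluster_peak_tflops(blades):
        return blades * CORES_PER_BLADE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

    print(f"34 blades: ~{cluster_peak_tflops(34):.1f} TFlop peak")   # ~2.9, marketed as 3 TFLOP
    print(f"70 blades: ~{cluster_peak_tflops(70):.1f} TFlop peak")   # ~6.0 TFLOP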

6 TFLOP e1350 Cluster Storage Option – $41,037
–x3650 Storage Node
  Dual Quad-Core 2.67 GHz Clovertown Processors
  1 GB Memory per core
  Myricom 10Gb NIC Card
  (2) 3.5" 73GB 10K Hot-Swap SAS Drives
  (2) IBM 4-Gbps FC Dual-Port PCI-E HBAs
  Redundant power/fans
  3 Year 24x7x4-Hour On-site Warranty
–DS4700 Storage Subsystem
  4 Gbps Performance (Fiber Channel)
  EXP810 Expansion System
  (32) 4 Gbps FC, GB/15K Enhanced Disk Drive Modules (E-DDM)
  Total 4.6 TB Storage Capacity
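For budgeting purposes, the optional storage works out to roughly $9K per TB at the quoted price; a quick sketch using only the figures on this slide (raw capacity, ignoring RAID and formatting overhead):

    # Cost per TB of the optional DS4700 storage subsystem, from the quoted figures.
    price_usd = 41_037
    capacity_tb = 4.6        # raw capacity; RAID/formatting overhead ignored
    print(f"~${price_usd / capacity_tb:,.0f} per TB")   # ~$8,921 per TB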

SURA – Dell Partnership
Complete Dell PowerEdge TFlop High Performance Computing Cluster
SURAgrid Special Offer – $112,500
–Master Node – Dell PowerEdge 1950 (Qty = 1)
–Compute Nodes – Dell PowerEdge 1950 (Qty = 27)
–Gigabit Ethernet Interconnect – Dell PowerConnect 6248 (Qty = 2)
–PowerEdge 4210, 42U Frame
–Platform Rocks – Cluster Management Software with 1-year support agreement
–Complete Rack & Stack, including cabling, prior to delivery
–Complete Software Installation – Operating system, Cluster Management Software
–2-day on-site systems engineer

Compute Nodes – Dell PowerEdge 1950 (Qty = 27)
–Dual 2.33GHz/2x4MB Cache, Quad Core Intel® Xeon E5345, 1333MHz FSB Processors
–12GB FBD 667MHz Memory
–80GB 7.2K RPM SATA Hard Drive
–Red Hat Enterprise Linux WS v4, 1Yr RHN Subscription, EM64T
–24X CD-ROM
–3 Years HPCC Next Business Day Parts and Labor On-Site Service

Master Node – Dell PowerEdge 1950 (Qty = 1)
–Dual 2.33GHz/2x4MB Cache, Quad Core Intel® Xeon E5345, 1333MHz FSB Processors
–12GB FBD 667MHz Memory
–Embedded RAID Controller – PERC5
–(2) 146GB SAS Hard Drives (RAID1)
–Dual On-Board 10/100/1000 NICs
–24X CDRW/DVD-ROM
–Dell Remote Assistance Card
–Redundant Power Supply
–Red Hat Enterprise Linux AS v4, 1Yr Red Hat Network Subscription, EM64T
–3 Years Premier HPCC Support with Same Day 4 Hour Parts and Labor On-Site Service
Gigabit Ethernet Interconnect – Dell PowerConnect 6248 (Qty = 2)
–PowerConnect 6248 Managed Switch, 48 Port 10/100/1000 Mbps
–Four 10 Gigabit Ethernet uplinks
–3 Years Support with Next Business Day Parts Service
Other Components
–PowerEdge 4210 Frame, Doors, Side Panel, Ground, 42U
–(3) 24 Amp Hi-Density PDUs, 208V, with IEC-to-IEC Cords
–1U Rack Console with 15" LCD Display, Mini-Keyboard/Mouse Combo
–Platform Rocks – Cluster Management Software with 1-year support agreement
–Complete Rack & Stack, including cabling, prior to delivery
–Complete Software Installation – Operating system, Cluster Management Software, etc.
–2-day on-site systems engineer
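The slides do not quote a peak figure for the Dell bundle, but one can be estimated from the compute-node specs above; a hedged sketch assuming 4 FLOPs per core per cycle on the Xeon E5345 and counting only the 27 compute nodes (the master node is excluded), so the result is an estimate rather than a vendor-stated rating:

    # Hypothetical peak estimate and price-per-TFlop for the Dell PowerEdge 1950 offer.
    # ASSUMPTIONS: 4 FLOPs per core per cycle; only the 27 compute nodes counted.
    FLOPS_PER_CYCLE = 4
    CLOCK_GHZ = 2.33
    CORES_PER_NODE = 2 * 4    # dual quad-core Xeon E5345
    COMPUTE_NODES = 27
    OFFER_PRICE_USD = 112_500

    peak_tflops = COMPUTE_NODES * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0
    print(f"~{peak_tflops:.1f} TFlop estimated peak")                     # ~2.0 TFlop
    print(f"~${OFFER_PRICE_USD / peak_tflops:,.0f} per estimated TFlop")  # offer price / peak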

For more information regarding the IBM and Dell discount packages, contact Gary Crane.

SURAgrid Governance and Decision-Making Structure Overview
See Tab 17 (page 28) of the Board materials book for a copy of the SURAgrid Governance and Decision-Making Structure Proposal
SURAgrid Project Planning Working Group established at the Sep 2006 in-person meeting to develop governance options for SURAgrid. Participants included:
–Linda Akli, SURA
–Gary Crane, SURA
–Steve Johnson, Texas A&M University
–Sandi Redman, University of Alabama in Huntsville
–Don Riley, University of Maryland & SURA IT Fellow
–Mike Sachon, Old Dominion University
–Srikanth Sastry, Texas A&M University
–Mary Fran Yafchak, SURA
–Art Vandenberg, Georgia State University

SURAgrid Governance Overview
To date SURAgrid has used consensus-based decision-making, with SURA facilitating the process
The current state of maturity & investment means formal governance is now needed
Expected purposes of formal governance:
–Ensure those investing have an appropriate role in governance
–Support sustainable growth of active participation to enhance the SURAgrid infrastructure
Three initial classes of membership defined:
–1. Contributing Member: higher education or related org contributing significant resources to advance the SURAgrid regional infrastructure; SURA is a Contributing Member by definition
–2. Participating Member: higher education or related org participating in SURAgrid activities other than as a Contributing Member
–3. Partnership Member: entity (org, commercial, non-HE…) with a strategic relationship with SURAgrid

SURAgrid Governance Overview
SURAgrid Contributing Members form the primary governing body
Each SURAgrid Contributing Member will designate one SURAgrid voting member
SURAgrid Governance Committee elected by the SURAgrid Contributing Members
–The SURAgrid Governance Committee acts on behalf of the Contributing Members, providing guidance, facilitation, and reporting
Initial SURAgrid Governance Committee will have 9 members:
–8 elected by Contributing Members
–1 appointed by SURA

Transitioning to the New SURAgrid Governance and Decision-Making Structure
–SURAgrid participating organizations designate a SURAgrid Lead – Done
–New governance structure approved by the SURA IT Steering Group – Done
–New governance structure approved by vote of the SURAgrid participating Leads – Done
–Call for nominations for SURAgrid Governance Committee candidates – Done; nominations will be accepted through midnight April 28
–Election of SURAgrid Governance Committee members expected to be completed by May 12

New IBM e1350 Linux Cluster
BladeCenter-H Based Chassis:
–Redundant power supplies and fan units
–Advanced Management Module
–Dual 10 Gbps backplanes
Fully integrated, tested and installed e1350 Cluster
Onsite configuration, setup and skills transfer
Quad-Core Intel Processors (8 cores/node)
Single point of support for the cluster
Terminal Server connection to every node
IBM 42U Enterprise Racks
Pull-out console monitor, keyboard, mouse
Redundant power and fans on all nodes
3 Year Onsite Warranty:
–9x5 Next-Day on-site on Compute Nodes
–24x7x4-Hour on-site on Management Node, switches, racks (optional Storage)