Ilya Baldin ibaldin@renci.org
14 GPO-funded racks built by IBM
◦ Partnership between RENCI, Duke and IBM
◦ IBM x3650 M4 servers (X-series 2U)
  1 x 146 GB 10K SAS hard drive + 1 x 500 GB secondary drive
  48 GB RAM, 1333 MHz
  Dual-socket 8-core CPU
  Dual 1 Gbps adapter (management network)
  10G dual-port Chelsio adapter (dataplane)
◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6 TB sliverable storage
  iSCSI interface for head-node image storage as well as experimenter slivering
◦ Cisco (UCS-B) and Dell configurations also exist
Each rack is a small networked cloud
◦ OpenStack-based with NEuca extensions
◦ xCAT for bare-metal node provisioning
http://wiki.exogeni.net
ExoGENI is a collection of off-the-shelf institutional clouds
◦ With a GENI federation on top
◦ xCAT – IBM product
◦ OpenStack – Red Hat product
Operators decide how much capacity to delegate to GENI and how much to retain for themselves
Familiar industry-standard interfaces (EC2) – see the sketch below
GENI interface
◦ Mostly does what GENI experimenters expect
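To make the EC2 point concrete, here is a minimal sketch of launching a VM against an EC2-compatible endpoint with boto3. The endpoint URL, credentials, image ID and instance type are hypothetical placeholders, not actual ExoGENI values:

    import boto3

    # Point the standard EC2 client at the cloud's EC2-compatible endpoint.
    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://rack.example.edu:8773/services/Cloud",  # hypothetical
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        region_name="RegionOne",
    )

    # Launch one VM from a user-supplied image.
    resp = ec2.run_instances(
        ImageId="ami-00000001",   # hypothetical image ID
        InstanceType="m1.small",  # hypothetical instance type
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])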
CentOS 6.X base install
Resource provisioning
◦ xCAT for bare-metal provisioning
◦ OpenStack + NEuca Quantum for VMs
◦ FlowVisor; Floodlight used internally by ORCA
GENI software
◦ ORCA for VM, bare-metal and OpenFlow
◦ FOAM for OpenFlow experiments
Worker and head nodes can be reinstalled remotely via IPMI + Kickstart (see the sketch below)
Monitoring via Nagios (Check_MK)
◦ ExoGENI ops staff can monitor all racks
◦ Site owners can monitor their own rack
Syslogs collected centrally
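The remote-reinstall step can be pictured roughly as follows, assuming ipmitool is available and each node's BMC address and credentials are known; the host name and credentials here are hypothetical, and the actual ExoGENI tooling may differ:

    import subprocess

    def reinstall_node(bmc_host, user, password):
        """Force a node to PXE-boot into its Kickstart installer, then power-cycle it."""
        base = ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", user, "-P", password]
        # One-shot PXE boot so the node picks up the network installer.
        subprocess.run(base + ["chassis", "bootdev", "pxe"], check=True)
        # Power-cycle to kick off the reinstall.
        subprocess.run(base + ["chassis", "power", "cycle"], check=True)

    reinstall_node("bmc-worker1.example.edu", "admin", "secret")  # hypothetical values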
IBM
◦ All GPO-funded racks, NICTA
Cisco
◦ NCSU, WVnet
Dell
◦ University of Amsterdam, several others in planning
ExoGENI racks are separate aggregates but also act as a single aggregate
◦ Transparent stitching of resources from multiple racks
ExoGENI is designed to bridge distributed experimentation, computational sciences and Big Data
◦ Already running HPC workflows linked to OSG and national supercomputers
◦ Newly introduced support for storage slivering
◦ Strong performance isolation is one of the key goals
GENI tools: Flack, GENI Portal, omni (see the omni sketch below)
◦ Give access to common GENI capabilities
◦ Also mostly compatible with ExoGENI native stitching and ExoGENI automated resource binding
ExoGENI-specific tools: Flukes
◦ Accepts GENI credentials
◦ Access to ExoGENI-specific features
  Elastic cluster slices
  Storage provisioning
  Stitching to campus infrastructure
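As a rough illustration of driving an aggregate with omni, a sketch assuming omni.py is on the PATH, credentials are configured, and the slice already exists; the aggregate URL, slice name and RSpec file name are hypothetical placeholders:

    import subprocess

    AM_URL = "https://rack.example.edu:11443/orca/xmlrpc"  # hypothetical aggregate URL
    SLICE = "myslice"                                      # hypothetical slice name

    # Ask the aggregate what resources it advertises.
    subprocess.run(["omni.py", "-a", AM_URL, "listresources"], check=True)

    # Create a sliver in the slice from a request RSpec.
    subprocess.run(["omni.py", "-a", AM_URL, "createsliver", SLICE, "request.rspec"],
                   check=True)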
Compute nodes
◦ Up to 100 VMs in each full rack
◦ A few (2) bare-metal nodes
◦ BYOI (Bring Your Own Image)
True Layer 2 slice topologies can be created (see the request sketch below)
◦ Within individual racks
◦ Between racks
◦ With automatic and user-specified resource binding and slice topology embedding
◦ Stitching across I2, ESnet, NLR, regional providers; dynamic wherever possible
OpenFlow experimentation
◦ Within racks
◦ Between racks
◦ Including OpenFlow overlays in NLR (and I2)
◦ On-ramp to campus OpenFlow network (if available)
Experimenters are allowed and encouraged to use their own virtual appliance images
Since Dec 2012
◦ 2500+ slices
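A minimal sketch of what a two-node Layer 2 request looks like in GENI RSpec v3, built with the Python standard library; the sliver_type name is a hypothetical placeholder, and real requests usually carry schemaLocation and aggregate-specific extensions as well:

    import xml.etree.ElementTree as ET

    NS = "http://www.geni.net/resources/rspec/3"
    ET.register_namespace("", NS)

    rspec = ET.Element(f"{{{NS}}}rspec", {"type": "request"})
    ifaces = []
    for i in range(2):
        node = ET.SubElement(rspec, f"{{{NS}}}node",
                             {"client_id": f"node{i}", "exclusive": "false"})
        ET.SubElement(node, f"{{{NS}}}sliver_type", {"name": "m1.small"})  # hypothetical
        ET.SubElement(node, f"{{{NS}}}interface", {"client_id": f"node{i}:if0"})
        ifaces.append(f"node{i}:if0")

    # A single point-to-point Layer 2 link between the two interfaces.
    link = ET.SubElement(rspec, f"{{{NS}}}link", {"client_id": "link0"})
    for if_id in ifaces:
        ET.SubElement(link, f"{{{NS}}}interface_ref", {"client_id": if_id})

    ET.ElementTree(rspec).write("request.rspec", xml_declaration=True, encoding="utf-8")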
Virtual network exchange
Virtual colo: campus net to circuit fabric
Multi-homed cloud hosts with network control
Computed embedding
Workflows, services, etc.
ExoGENI rack in Sydney, Australia
Multiple VLAN tags on a pinned path from Sydney to LA
Internet2/OSCARS ORCA-provisioned dynamic circuit
◦ LA, Chicago
NSI statically pinned segment with multiple VLAN tags
◦ Chicago, NY, Amsterdam
◦ Planning to add dynamic NSI interface
ExoGENI rack in Amsterdam
◦ ~14,000 miles
◦ 120 ms delay
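The quoted delay is roughly what fiber propagation alone predicts. A back-of-the-envelope check, assuming light travels at about two-thirds of c in fiber and the fiber path roughly matches the quoted mileage:

    # ~14,000 miles at ~2/3 the speed of light in glass.
    miles = 14_000
    km = miles * 1.609            # ~22,500 km
    fiber_speed_km_s = 200_000    # roughly 2/3 of c
    one_way_ms = km / fiber_speed_km_s * 1000
    print(f"{one_way_ms:.0f} ms one-way")  # ~113 ms, consistent with the quoted 120 ms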
Strong isolation is the goal
Compute instances are KVM-based and get a dedicated number of cores
VLANs are the basis of connectivity
◦ VLANs can be best-effort or bandwidth-provisioned (within and between racks); a rough sketch follows
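As an illustration of what bandwidth-provisioning a VLAN can look like at the Linux level (ExoGENI's actual mechanism lives in ORCA and the switches, so this is an analogy, not the real tooling), assuming root privileges and the iproute2 utilities; device name, VLAN ID and rate are hypothetical:

    import subprocess

    def provision_vlan(parent="eth1", vlan_id=1001, rate="500mbit"):
        """Create a tagged VLAN interface and cap its egress rate with a token bucket."""
        dev = f"{parent}.{vlan_id}"
        subprocess.run(["ip", "link", "add", "link", parent, "name", dev,
                        "type", "vlan", "id", str(vlan_id)], check=True)
        subprocess.run(["ip", "link", "set", dev, "up"], check=True)
        subprocess.run(["tc", "qdisc", "add", "dev", dev, "root", "tbf",
                        "rate", rate, "burst", "32kbit", "latency", "50ms"], check=True)

    provision_vlan()  # hypothetical device and rate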
Workflow management systems
◦ Pegasus, custom scripts, etc.
Lack of tools to integrate with dynamic infrastructures
◦ Orchestrate the infrastructure in response to the application
◦ Integrate data movement with workflows for optimized performance
◦ Manage the application in response to the infrastructure
Scenarios
◦ Computational with varying demands
◦ Data-driven with large static data set(s)
◦ Data-driven with large amounts of input/output data
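For a flavor of the workflow side, a minimal Pegasus DAX sketch using its DAX3 Python API; the workflow name, transformation name and file names are hypothetical, and this assumes the Pegasus Python libraries are installed:

    from Pegasus.DAX3 import ADAG, File, Job, Link

    dax = ADAG("exogeni-example")          # hypothetical workflow name

    data_in = File("input.dat")            # hypothetical input file
    job = Job(name="process")              # hypothetical transformation
    job.addArguments("-i", data_in)
    job.uses(data_in, link=Link.INPUT)
    dax.addJob(job)

    with open("workflow.dax", "w") as f:
        dax.writeXML(f)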
462,969 Condor jobs to date using the on-ramp to engage-submit3 (OSG)
Hardware-in-the-loop slices
◦ Hardware-in-the-Loop facility using RTDS & PMUs (FREEDM Center, NCSU)
New GENI-WAMS testbed (impairment-emulation sketch below)
◦ Latency & processing delays
◦ Packet loss
◦ Network jitter
◦ Cyber-security: man-in-the-middle attacks
Aranya Chakrabortty, Aaron Hurz (NCSU); Yufeng Xin (RENCI/UNC)
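Impairments like these can be emulated on a Linux interface with tc netem; a minimal sketch, assuming root and the iproute2 tools, with hypothetical device and values (an illustration, not the testbed's actual tooling):

    import subprocess

    # Emulate 100 ms delay with 10 ms jitter and 1% packet loss on eth0 (hypothetical).
    subprocess.run(["tc", "qdisc", "add", "dev", "eth0", "root", "netem",
                    "delay", "100ms", "10ms", "loss", "1%"], check=True)

    # Remove the emulation when finished:
    # subprocess.run(["tc", "qdisc", "del", "dev", "eth0", "root", "netem"], check=True)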
http://www.exogeni.net
◦ ExoBlog: http://www.exogeni.net/category/exoblog/
http://wiki.exogeni.net