
1 New Virtual Application Networks Innovations Enable Cloud Data Center Interconnectivity in Minutes
We are building on the success of Virtual Application Networks with new innovations for speeding the deployment of data center interconnectivity. Prior to VMworld 2012, we announced two new software innovations for our data center interconnect (DCI) solution to enable virtual machine mobility, data mobility, disaster recovery, business continuity, and cloud bursting. As with our Virtual Application Networks announcement in May, we are focusing on the need for agility. In a recent piece of research by Tom Bittman at Gartner, titled "Five Things That a Private Cloud Is Not," one of the points he makes is that private clouds are not about finding a lower cost of operations for a data center; they are about having the speed and agility to make businesses more responsive to change, with self-service interfaces that take the manual work out of that change. That is a big piece of what we are focusing on, and I want to leave everyone here with the comfort that this is exactly what our client base wants to hear. Virtual Application Networks has been well received by partners and customers, including the capabilities of the Virtual Application Network Manager module released for IMC in June.

2 Legacy Networks Can’t Meet SDN & Cloud Expectations
Application Indifferent: impossible to identify applications and user behaviors and meet diverse service levels. Rigid Physical Networks: architected for one tenant, user type and location, lacking programmability. Manual Management: slow to respond to new app requirements and hampered by manual errors.

Legacy networks fail to meet the demands of software-defined networks and cloud. We walk clients through the three limitations we talked about during the introduction of Virtual Application Networks in May.

Application indifferent: if a thousand packets come into a switch or a router, they are simply sent on to their destination MAC address or IP address without any regard for the payload, whether the traffic being forwarded is video streaming, instant messaging, or an online transaction processing app, and that means we cannot deliver an expected service level.

Rigid, physical networks: they are built for one type of tenant, one type of user, one type of location, with very device-dependent provisioning. It's like the roads built between the city and the suburbs: as people moved further away and businesses moved from the city to the suburbs, traffic congestion occurred, but the roads didn't change; they weren't able to adapt to the different needs and traffic patterns. The same is true with cloud. Legacy networks were "set and forget." The reaction you get from a lot of network operators when you want to make changes to the infrastructure is, "I can't change the core switch; it's fragile, it works, we don't want to touch it." When I spoke at the F5 Agility event recently, I shared that reality with them; because they work with network operations teams and oftentimes ask for network changes as they implement their F5 technology, they get that exact same reaction. Everyone in the room chuckled because they understood it and they know it's true.

Manual management: all the configuration and setup is done through the command line interface (CLI). As Cisco and other companies have acquired different start-ups to expand their portfolios, they have not integrated the software or the management, so there are many different management platforms, which creates complexity. As a result of no management integration, customers are forced to use the command line interface, which simply doesn't scale, and we've seen from analysts' data that misconfigurations in the command line are responsible for 70% of network outages.

3 Virtual Application Networks Deliver SDN, Cloud Agility
Application Characterization: create consistency, reliability and repeatability across the entire network infrastructure. Network Virtualization: create multitenant, on-demand, topology- and device-independent provisioning. Automated Orchestration: orchestrate using templates, user service levels and policy for dynamic app delivery.

Virtual Application Networks deliver on the expectations for software-defined networking and for cloud agility. The first part is Application Characterization, which is something we can all demonstrate with the Virtual Application Network Manager module. The template in the VAN Manager module allows IMC to characterize the delivery needs of the app and to consistently, reliably and repeatably automate network configuration for delivering specific applications. We virtualize the network to make it multi-tenant, and also to have a topology configuration that is on demand and independent of devices. That way, as we need to support a new tenant or stand up a new application, we can do that across multiple devices at one time (that's what we mean by device independent), and it's on demand: it uses the server profile we create with the template in the Virtual Application Network Manager module. That allows it to be on demand, and allows us to automate the orchestration. We can automate the way the network is configured in response to the policy-driven decisions built into the server profile in the Virtual Application Network Manager module. This allows us to get out of the boiler room: we can get away from using the command line interface to configure and administer the network, and automate those processes, taking human time out of the equation, which takes deployment from months to minutes and makes it more consistent and more reliable. We avoid the command line, so we avoid the source of 70% of network outages.

4 Today’s News: Virtual Application Networks Innovations
FlexFabric Data Center Solution: Multitenant Device Context (MDC) and Ethernet Virtual Interconnect (EVI). The new capabilities we announced on August 14 are two Comware software features: Multi-tenant Device Context and Ethernet Virtual Interconnect. Initially, the 12500 switch will be the first platform to run these new Comware features, and the roadmap you can get from Global Product Line Management will give you greater insight into the other platforms that will support these features when running Comware 7.

5 Virtual Application Networks Deliver SDN, Cloud Agility
Application Characterization. Network Virtualization: Multitenant Device Context (MDC) delivers a single enterprise private cloud network to support multiple tenants. Automated Orchestration: Ethernet Virtual Interconnect (EVI) reduces enterprise data center interconnectivity setup time from months to minutes.

These new capabilities deliver Virtual Application Networks and the client expectations for a software-defined network and cloud agility. Multi-tenant Device Context allows us to build a single cloud network to serve multiple tenants. For the Enterprise client, those multiple tenants are typically departments, and they need secure isolation and separation of traffic between Legal and other departments like Finance, Marketing and R&D. Ethernet Virtual Interconnect is all about expediting the connectivity between data centers, and having that happen easily and reliably in minutes rather than months. This is all about automating the orchestration. It is very clear how these two new functions of Comware 7 deliver on the promise of Virtual Application Networks.

6 Cloud Workload Mobility Expectations
"x86 architecture server virtualization is expected to be used by 75% of all workloads by 2014." 1 "...expect more than 80% of traffic in the data center's LAN to be between servers." 2 "Current network design and network technologies will need to flatten within the next four to five years to accommodate VM mobility, including between data centers." 3 Private, public and independent clouds; federated apps and virtualization; virtual machine mobility.

I mentioned some Gartner research at the beginning of my conversation that came from Tom Bittman. Here are excerpts from three pieces of Gartner research that support the client's expectation for agility, and what we are delivering here with Multi-tenant Device Context and Ethernet Virtual Interconnect. The first strategic planning assumption from Gartner is that x86 architecture server virtualization is expected to be used by 75% of all Enterprise workloads by 2014. That has implications for private and public clouds, and for hybrid cloud deployment. The deployment of virtualization is having an impact on traffic patterns in the data centers, because it allows a greater amount of federation of applications and it brings a new killer app to the data center: virtual machine mobility. The second strategic planning assumption is that "by 2014, more than 80% of the traffic in the data center LAN will be between servers." We have seen that ourselves, and we have been delivering that message and the importance of technologies like IRF in order to reduce latency hops and improve performance. Most recently, in the Gartner Hype Cycle published at the end of July 2012, they talk about current network design and technologies needing to flatten within the next four to five years to accommodate virtual machine mobility, including between data centers. This is exactly what we are talking about with Ethernet Virtual Interconnect.

1 Gartner, Inc., "Hype Cycle for Virtualization, 2012," Published: 24 July 2012. 2 Gartner, Inc., "Plan Now for the Hyperconverged Enterprise Network," Published: 2 May 2012. 3 Gartner, Inc., "Hype Cycle for Networking and Communications, 2012," Published: 27 July 2012. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

7 Single-tenant Legacy Enterprise Networks
Static, Dedicated Physical Devices for Segmentation: Finance, Legal, Marketing, R&D. Clients who need true isolation and separation of traffic for different departments in their enterprise, like Finance, Legal, Marketing and R&D, have built separate dedicated network infrastructures, or possibly used complex technologies like VPLS. VPLS expertise is very uncommon within a data center; they just don't have the staff that understands it, so they take the path of building separate dedicated networks. That means up to four times the amount of equipment they would need with MDC, and it takes up more space, more power, and more cooling. IEEE 802.1Q VLANs can't meet the need for secure isolation because VLANs use a shared forwarding database (FDB) and tag packets for separation. With a shared FDB, VLANs can't guarantee full isolation and separation of traffic.

8 Introducing HP Multitenant Device Context
Creating a Multitenant Enterprise Data Center while Reducing Physical Devices: complete secure isolation of tenants; increased resiliency; simplified management; reduced configuration errors; reduced power, cooling and space. Finance, Marketing, R&D, Legal. Up to 75% reduction of devices and cost.

With MDC, we can build a single network to serve multiple tenants with secure isolation and separation of traffic. The memory of a single device can be separated into protected partitions, each running a different instance of the switch with its own forwarding database, so you get true isolation and full separation of traffic for those different tenants. In the Enterprise context, those tenants are the departments: Finance, Marketing, R&D and Legal. You get complete separation and increased resiliency, especially when you use IRF across a group of switches in the core of the network: where you have two or four switches in an IRF group together and you instantiate a Multi-tenant Device Context in that group, it exists in all the switches in that group. That way you have transparent failover in the event of a single device failing, or when a single device needs to be taken offline for service. We deliver simplified management through our single pane of glass, IMC, which helps reduce configuration errors, remembering that 70% of network outages are caused by misconfigurations in the command line. Serving four tenants in a single device, with full separation, instead of building four separate networks, means you can use up to 75% less equipment, spend up to 75% less money, and consume up to 75% less power, cooling and space. And you spend a lot less time administering the equipment you no longer have to buy, thanks to the consolidation into one network.
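To make the mechanics a bit more concrete, the following is a rough, illustrative sketch of what carving out a department context on a Comware 7 chassis looks like. It is paraphrased from memory of the Comware MDC workflow, so treat the exact command names, interface ranges, and any platform-specific steps (such as authorizing line cards or allocating CPU and memory resources) as assumptions to verify against the MDC configuration guide for the specific platform and release.

    system-view
    # Create a device context for the Finance department
    mdc Finance
    # Assign a block of physical interfaces to the context
    # (interface names and numbers here are placeholders)
    allocate interface Ten-GigabitEthernet 1/0/1 to Ten-GigabitEthernet 1/0/8
    # Start the context; it then boots as an independent switch instance
    mdc start
    quit
    # Log in to the new context and configure it like a standalone switch
    switchto mdc Finance

Repeating the same pattern for Legal, Marketing and R&D gives four logically separate switches, each with its own forwarding database, inside one physical chassis.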

9 Legacy Data Center Interconnect – Slow & Complex
MPLS/VPLS-based data center interconnect: identify the right SP for MPLS; MPLS service provisioning; end-to-end VPLS service planning and design; Customer Edge/Provider Edge deployment; Provider Edge configuration; Customer Edge configuration. Time in months. Re-design. Complex configuration: 100s of CLI entries. Expensive: additional equipment, services, cost.

When clients want to create a layer 2 interconnect between data centers, they have used legacy technologies like Multiprotocol Label Switching running Virtual Private LAN Services (MPLS/VPLS). From engaging with a lot of solution architects, clients and our product management team, it has become very clear that the first problem everyone has to deal with is whether they have the right service in place: whether their service provider can deliver the services needed to support MPLS and VPLS to these multiple data centers. Once they get that in place and complete the process of planning, design and implementation, which alone can take months, they have to ensure the proper configuration of the provider edge router and the customer edge router, and then they would be ready to go. Months have passed, and what we hear from a lot of clients is: "We don't want to admire the complexity; we want to eliminate the complexity. We want to make this quick and easy to do, so that anyone on my staff can do it. I don't need people who are the virtual equivalent of a PhD in MPLS and VPLS to perform these functions, because they are hard to find, they are expensive, and if they walk out the door tomorrow, I lose that intellectual property. I need something easier."

10 Data Center Interconnectivity in minutes, not months
Introducing HP Ethernet Virtual Interconnect: Layer 2 routing extensions. Single-touch: five configuration steps per site. Simplified configuration and operation. Multi-datacenter scalability. Overlay: no network re-design. High resiliency. DC 1 through DC 8 joined by a Layer 2 routing extension into one logical data center, up to 8 physical sites. Up to 56% lower total cost of ownership.

Ethernet Virtual Interconnect (EVI) eliminates the complexity and the long setup time for clients. EVI gives clients a simple set of Layer 2 routing extensions that can provide data center interconnectivity in minutes rather than the months of legacy approaches like VPLS. For each site, there are five simple steps to configure it. Once they configure the first two data centers and create that layer 2 interconnectivity, adding a third only requires completing those five simple steps at the additional site; no additional work needs to be done at the first two. This is a much more simplified configuration, they get scalability across up to 8 data centers, and they don't have to change the underlying network. Just think about a global business: they may have dark fiber between data centers that are close by, but they may also have globally separated data centers connected by leased lines or MPLS. Across any one of those transports they can use EVI, independent of what the underlying transport is, and it requires no re-configuration. EVI takes tremendous headaches out of the process for the client, and it maintains a high level of resiliency. One of the things we have measured is the savings in implementing our approach vis-a-vis a competitive approach. Cisco also has a layer two routing approach, and they charge a lot of licensing premiums to implement it. When we factor in that Comware on the 12500 doesn't require any additional licensing, what we see is a 56% lower cost to deploy this with HP Networking than with Cisco. I will show you that with more granularity later.

11 HP Ethernet Virtual Interconnect Deploys in Minutes
Layer 2 routing extensions simplify data center interconnect: identify the existing IP network, provision EVI. Single-touch deployment, five simple steps per site, minutes.

When we go back to the original problem setup, it's really easy: you simply identify the existing IP network that is your transport between the data centers, answer a simple set of questions while performing the configuration steps, and boom, you are up and running. We will demonstrate this at the VMworld event, and after the event we will have an on-demand recording that you can play back to educate your partners and show to your customers, so they can get a first-hand sense of what this is like without having to bring any hardware into their environment, and to get them engaged enough to want a demonstration and move into proposal mode.
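For flavor, here is an illustrative sketch of the kind of per-site configuration involved. The commands are paraphrased from memory of the Comware 7 EVI workflow, so the exact command names, the tunnel source, and the addresses shown are assumptions to be checked against the EVI configuration guide rather than copy-paste syntax.

    system-view
    # Create an EVI tunnel interface for this site
    interface Tunnel 1 mode evi
    # Use an interface reachable across the existing IP transport as the tunnel source
    source Vlan-interface 11
    # Choose which VLANs are extended between the data centers
    evi extend-vlan 100 to 200
    # Point this site at the EVI neighbor-discovery server
    # (192.0.2.1 is a placeholder; one site runs the server role, the others register as clients)
    evi neighbor-discovery client enable 192.0.2.1

Repeating the same handful of steps at each additional site joins it to the same logical layer 2 domain; nothing has to be reconfigured at the sites that are already connected.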

12 EVI Use Cases for Virtualized Data Center
Long-distance Workload Mobility; Long-distance Data Mobility; Disaster Recovery; Business Continuity. Hypervisor-to-hypervisor Ethernet extension between Virtualized Data Center 1 and Virtualized Data Center 2 over any transport. Deploy over existing network, no redesign required, five configuration steps per site, automatic fault isolation.

Here are four use cases for virtualized data centers that can help increase the value of the dialogue you have with clients. We are going to take it one step at a time and talk about virtualized data centers here and cloud data centers second. The first is long-distance workload mobility, which you saw mentioned in the earlier quote from the Gartner Hype Cycle for 2012. Along with long-distance workload mobility is long-distance data mobility. Think about it this way: when you move the workload, that's like moving your Outlook from one PC to another, but there is also the data associated with that application, your mailbox, your Outlook .pst files, which is still on the original hard drive. If you move the workload, depending on how long you want to use it in that other location, you may also want to move the data to ensure the highest performance. Also on August 14, HP Storage announced StoreVirtual VSA, a brand new appliance-based solution from HP that is going to allow much greater flexibility in how storage is deployed in virtual environments, increased levels of mobility, and easier management. In that environment, the data for virtual workloads can be easily moved, and the port it goes out on is the Ethernet IP port, so the data mobility becomes another payload across our layer 2 EVI link between the two data centers. Clients get the benefit of moving the workload and moving the data along with it. A common use case for workload mobility is disaster recovery planning, which is oftentimes thought of as a reactive initiative. If there is an unplanned outage, clients will have a disaster recovery plan to move workloads, based on priority, to a prioritized list of locations in a specific order. They can easily address that with long-distance virtual machine mobility and data mobility across the layer 2 EVI data center connections. Business continuity is a proactive plan for continuing operations during a planned outage, or a proactive plan for expanding capacity. Here again they can use that layer two path with EVI. Along the bottom are the benefits of EVI for addressing all four of these use cases: you can deploy over any existing network transport without having to redesign or reconfigure it, it's five simple steps per site, and you get automatic fault isolation in the process.

13 EVI & MDC Use Case for Private Cloud Data Center
Cloud Bursting; Long-distance Workload & Data Mobility; Disaster Recovery & Business Continuity. Hypervisor-to-hypervisor Ethernet extension between Private Cloud 1 and Private Cloud 2 over any transport. 75% reduction of devices lowers cost; DCI in minutes rather than months; 80% faster vMotion with IRF*; 500X faster vMotion failover with IRF*.

One of the things that Tom Bittman said in his research note on private clouds is that "a private cloud is not a virtualized data center." A private cloud requires virtualization, but it also has self-service interfaces for easy provisioning, similar to the self-service interfaces in the Matrix Operating Environment in HP CloudSystem Matrix. The private cloud has the same four use cases as the virtualized data center (long-distance workload mobility and data mobility, disaster recovery, and business continuity), but the new use case is cloud bursting. This is where clients are building a private cloud with HP Converged Infrastructure, they want elasticity in capacity during peak periods, and they want to tap an HP CloudAgile partner, who has also built their infrastructure on HP Converged Infrastructure. Therein lies the opportunity to run EVI on a core switch in their network, support that layer two path across any transport from the enterprise private data center to the CloudAgile partner's data center, and use MDC to get that private separation, that isolation of traffic from the other users in that environment. Where clients have data centers interconnected over dark fiber, one of the things we want to advocate is that they can use IRF to virtualize the switches at either end of the dark fiber, so they are part of a single logical switch. That reduces latency, and it is the reference at the bottom of the page to the test report published by David Newman at Network Test, which showed in a very proven way that we have reduced the time for virtual machine mobility; that can give you up to an 80% performance improvement for virtual machine mobility. And because IRF reconfigures around a fault in about two and a half milliseconds, as opposed to the 31 seconds with Rapid Spanning Tree, it gives you 500 times faster failover recovery compared to Rapid Spanning Tree.

* "Higher Speed, Lower Downtime, With HP IRF Technology," Network Test, August 2011

14 HP EVI – Unmatched in the market
DCI feature comparison, Brocade vs. Juniper vs. Cisco vs. HP. Layer 2 routing extension: Brocade No; Juniper No; Cisco Yes (OTV); HP Yes (EVI). Layer 2 routing extension scaling: Brocade N/A; Juniper N/A; Cisco 512 VLANs across 6 data centers; HP 4K VLANs across 8 data centers. Layer 2 routing extension with multitenancy: HP Yes (EVI & MDC). Multiple MPLS/VPLS-based and Layer 2 DCI on one switch: HP Yes (12500).

We talked earlier about the fact that there are alternatives in the market to how we are solving this problem with EVI, and they are available from other vendors. At a macro level, one of the tools we have put together is this table comparing HP's approach with the three other vendors we get compared to most often in data center opportunities: Brocade, Juniper, and Cisco. Neither Brocade nor Juniper offers layer 2 routing extensions; Juniper is very much focused on VPLS. They actually like the complexity of it, they even kind of admire the fact that you need JUNOS experts to implement VPLS, and they are not focusing on making that simpler. Cisco has layer 2 routing extensions called OTV, but there are some clear differences. For data center clients who have multiple data centers, there are really only two vendors to look at: Cisco and HP.

15 HP EVI Advantages vs. Cisco OTV
DCI feature comparison, VPLS vs. Cisco OTV vs. HP EVI. Independent of infrastructure: VPLS requires MPLS; Cisco OTV requires multicast by default; HP EVI runs over any IP transport. Multi-DC interconnectivity: Cisco OTV up to 6 data centers; HP EVI up to 8. Large number of VLANs: VPLS 256; Cisco OTV 512; HP EVI 4K. Number of network instances: Cisco OTV 10 per Nexus 7000; HP EVI 32 per MDC. Other compared features: end-to-end loop-free operation without STP, failure domain isolation, multi-pathing and load-balancing, and active/active physical redundancy.

The second table provides greater magnification on the differences between HP EVI and both Cisco OTV and legacy VPLS. EVI offers several advantages. We have looked at the problems caused by VPLS and by Cisco OTV, like OTV requiring by default that multicast be enabled, which adds to the complexity. Cisco OTV is not able to scale beyond 6 data centers, whereas HP can scale to 8. Cisco OTV limits the number of IEEE 802.1Q VLANs to 512, whereas HP can support the full 4K. Cisco OTV can support 10 instances per Nexus 7000 chassis, but it cannot be combined with Virtual Device Context to support multiple tenants; HP can support 32 instances of EVI per MDC tenant. In briefing the analysts, this was an important learning for them: most were unaware that Cisco, while they have a technique for creating multi-tenancy called Virtual Device Context, cannot combine it with OTV. We have intentionally made it possible to combine Multi-tenant Device Context and Ethernet Virtual Interconnect. You can have 32 instances of EVI per Multi-tenant Device Context, so with 4 tenants at 32 EVI instances each, that is 128 EVI instances in a single chassis. Clients get a greater level of scaling, a greater level of flexibility, and it supports the multi-tenancy needed for the private cloud so that you can get that separation between Enterprise departments.

16 EVI Enhances Existing Joint HP & F5 Solution
Tested and validated solutions for the FlexFabric: workload mobility, business continuity, disaster recovery. HP and F5 solution benefits: optimized vMotion over agile EVI across up to 8 data centers; multitenant (MDC) flexibility and scaling (32 EVI per MDC); intelligent routing for faster data placement; DCI provisioning in minutes vs. months; 10X faster virtual machine mobility.*

I want to highlight part of our announcement at Interop, which included integration of F5 iApps with IMC to deliver the industry's first single pane-of-glass tool for policy-based orchestration of the network from layers 2-7. We also announced three tested and validated solutions, including a data-center-to-data-center solution. What we announced on August 14 with MDC and EVI only improves that tested, validated solution. It allows us to take the optimized vMotion performance from that solution and make it more agile with the addition of EVI, with setup of the wide-area transport taking minutes, and scaling to 8 data centers. Multi-tenant Device Context increases the scale even further, because remember you can have 32 instances of EVI per instance of MDC. We can leverage F5's intelligent routing for faster data placement in the movement of a virtual machine or in the movement of data, and of course provisioning in minutes rather than months addresses the need for business agility that people expect from software-defined networking and from cloud.

From Interop: business continuity/DR and workload mobility, long-distance live migration with HP and F5, linking data centers across the city or thousands of miles apart. The challenge: getting reliable and rapid performance of vMotion and Storage vMotion events typically requires restricting their movement to a single local vCenter Server cluster and a single layer 2 broadcast domain. HP EVI provides the reliable, easy-to-manage layer 2 DCI across up to 8 locations. HP, F5 and VMware have developed a complete solution for running vMotion and Storage vMotion events together, between vSphere environments and over long distances. The solution components enable vMotion migration between data centers without downtime or user disruption. One example is a Windows Server guest vMotion event across a 622 Mbps link with 40 milliseconds of round-trip time and zero packet loss, which would normally take more than five minutes to complete; with HP FlexFabric and BIG-IP WAN Optimization Manager it takes less than 30 seconds (source: F5 testing). Five minutes = 300 seconds vs. 30 seconds = 10X faster. Highlight the IMC/VAN integration and HP TS F5 virtualization/cloud service offerings (Quick Start, assessment, workshops, etc.).

* F5 internal testing

17 HPN Data Center Interconnectivity Solution Roadmap
DCI Phase 1: dark fibre/DWDM-based; MPLS VPLS/VLL-based; IP-based (MPLS over GRE/IP). DCI Phase 2: IP EVI-based; simplified management; scalable to 8 DCs; no multicast requirement; HA architecture via IRF. DCI Phase 3: enhanced EVI; flooding control; IMC EVI module; DCI encryption. Timeline: 1QCY2012, 3QCY2012, 2013.

We have had a data center interconnect solution in the market; this roadmap helps us understand where we have been, where we are, and where we are going. Data Center Interconnect Phase 1 came out earlier in the year, and training was offered by Product Management and Product Marketing. It focused on using dark fiber and DWDM as a transport, as well as MPLS and VPLS, and MPLS over GRE/IP. With Phase 2, we are adding Ethernet Virtual Interconnect as a new technique, which allows us to run over any transport, gives us simplified management compared to VPLS (which was much more complex), increases the scale to 8 data centers, removes the multicast requirement customers were familiar with on Cisco OTV, and ensures high availability with IRF. As we move forward, in Phase 3 next calendar year we are going to enhance EVI, adding support for flooding control, adding a module in IMC with a wizard that takes these five configuration steps and makes them a simple one-touch deployment, and adding DCI encryption. What I encourage everyone in our field to do, if you want to know more about these items and they are important for your clients, is to engage the regional Product Management team you would normally work with to communicate the roadmap around DCI Phase 3.

18 Interconnect Cloud Data Centers in Minutes, not Months
Virtual Application Networks deliver software-defined networking and cloud agility. Ethernet Virtual Interconnect: reduces data center interconnectivity setup time from months to minutes. Multitenant Device Context: delivers a single cloud network to support multiple tenants. Intelligent Resilient Framework: increased resiliency and vMotion performance. IMC Virtual Application Networks Manager module: single pane-of-glass management automation speeds application deployment from months to minutes. HP-F5 tested and validated DC-to-DC solution: improves virtual machine mobility performance by 200%.

Ethernet Virtual Interconnect is going to reduce the setup time for data center interconnectivity from months to minutes; it is going to give people the agility and the speed they want, and take out the work associated with command line interfaces, making it more reliable and repeatable. Multi-tenant Device Context allows us to build a single cloud network for our Enterprise clients that supports multiple departmental tenants, such as Finance, Legal, Marketing, and R&D. The Intelligent Resilient Framework, based on past testing, has been proven to increase not just the resiliency but also the performance of vMotion. IMC, our single pane of glass, speeds application deployment with Virtual Application Networks from months to minutes, and makes it easier to configure functions like Multi-tenant Device Context. Our tested and validated solution with F5 is only improved by these announcements.

19 Thank you

20 HP EVI TCO vs. Cisco OTV: HP Lowers CAPEX by 56%
Put another way, Cisco's CAPEX is 127% higher than HP's. Cisco base system: $68,000 (Nexus 7010 chassis, 2x Fabric 1, 1x Sup, 1x PSU, 1x GbE module) plus $60,000 in licenses (Enterprise, VDC, OTV). HP base system: $56,510 (12508 chassis, 4x fabric, 1x MPU, 2x PSU, 1x GbE module), no additional licenses.

I want to give you some insight into what the savings look like. Cisco has a set of layer two routing extensions which they have marketed as Overlay Transport Virtualization (OTV) for the Nexus 7000. Here we have two comparable chassis, the 12508 and the Nexus 7010, both with an adequate number of fabric modules to support line cards, management processing cards or supervisors, power supplies, and line cards: base systems, very clear. There is no added licensing cost for Comware. There are, however, three licenses that have to be purchased on top of the base Nexus software license: an Enterprise LAN license for $15K, the VDC license (required for OTV) for $20K, and the OTV license for $25K. This makes our value proposition better just on paper, with 56% savings. Or you can position it as: Cisco can do something almost as good as ours, but it costs 127% more.

Nexus 7010 with 2x Fab1, 1x Sup1, 1x chassis, 1x power, 1x 1GbE line card = $68,000. HP 12508 with 4x G1 fabric, 1x MPU, 1x chassis, 2x power, 1x 1GbE line card = $56,510. Nexus 7010 licenses: LAN Enterprise ($15,000) and VDC ($20,000), both required for OTV, plus the OTV license ($25,000) = $60,000. Total Nexus 7010 cost: $128,000 vs. $56,510.
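As a quick sanity check, the two headline percentages follow directly from the list prices on the slide:

    Cisco Nexus 7010 total: $68,000 base + $60,000 licenses = $128,000
    HP 12508 total:         $56,510 (no added licenses)
    HP savings:    ($128,000 - $56,510) / $128,000 = 0.559  ->  roughly 56% lower CAPEX
    Cisco premium: ($128,000 - $56,510) / $56,510  = 1.265  ->  roughly 127% higher CAPEX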

