HP ProCurve 6120 Blade Switch Series
NPI Technical Training, Version 1.1, 26 August 2009
Disclaimer HEWLETT-PACKARD COMPANY MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. The only warranties for ProCurve Networking products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. ProCurve Networking shall not be liable for technical or editorial errors or omissions contained herein. Hewlett-Packard assumes no responsibility for the use or reliability of its software on equipment that is not furnished by Hewlett-Packard. © Copyright 2009, Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Audience

This technical New Product Introduction (NPI) training is designed for networking professionals who are familiar with:
- Capabilities and installation of HP BladeSystem c-Class enclosures (c3000 or c7000)
- Capabilities and installation of HP BladeSystem components such as server, storage, and interconnect devices
- Basic capabilities and features of the ProCurve family of edge and datacenter switches

Capabilities and installation of HP BladeSystem c-Class enclosures (c3000 or c7000). You should be familiar with the basic capabilities and features of the HP c3000 and c7000 enclosures, for example, that the architecture was designed as a general-purpose, flexible infrastructure. The HP c-Class enclosures consolidate power, cooling, connectivity, redundancy, and security into a modular, self-tuning system with intelligence built in. You should also be aware that the enclosures support a variety of blade types, such as server, storage, and interconnect blades, and that HP ProCurve and some other vendors' interconnect (networking) blades can be used. This presentation focuses on a new component that can be used in the c-Class enclosures: the HP ProCurve 6120 Blade Switch Series. There are two models of these new blade switches, the 6120G/XG and the 6120XG. Unless otherwise specified, the term 6120 Blade Switch Series refers to both models. These blade switches are the equivalent of taking an external, stackable-sized ProCurve switch and fitting it into a blade footprint.

Capabilities and installation of HP BladeSystem components. It will be helpful if you are familiar with the fundamental concepts of the c-Class enclosure server, storage, and interconnect blades, for instance, how such blades are physically installed and the basic initial setup typically done using the HP Onboard Administrator web interface. In the case of the server blades, you should be aware that they can run a variety of operating systems, such as Windows and Linux, and can also run server virtualization software such as VMware. You do not need to understand specifics such as installing a server operating system. You should also be familiar with the embedded Ethernet connections of the server blades, the capability of using optional mezzanine cards, and the mapping of the NICs to the internal ports of interconnect modules.

Basic capabilities and features of the ProCurve family of edge and datacenter switches. The more you know about ProCurve switches, the easier it will be to follow the discussion of the positioning, capabilities, and configuration of the 6120 Blade Switch Series. For example, you may be generally familiar with fixed-port switches like the ProCurve 2900, or with larger modular switches like the 5400zl or 8200zl series. What will be most helpful is a basic understanding of how to use the Command Line Interface (CLI) or the web browser-based UI to configure basic networking functions. You should also be familiar with the basic purpose of Layer 2 Virtual LANs (VLANs) and how they are configured.
Introduction to the 6120 Blade Switch Series
Benefits, capabilities, and features
Hardware overview
Software features
Deployment scenarios
Warranty and support
Blade Switch Installation
Blade Switch Configuration
Blade Switch Management and Troubleshooting

The first section of this presentation provides an introduction to the HP ProCurve 6120 Blade Switch Series, new components supported by the HP c-Class enclosures, the c3000 and c7000. This section begins with an overview of the features and benefits of these new blade switches and their positioning in an IT data center. This is followed by a look at the key hardware aspects and a summary of the key specifications. Lastly, the licensing, warranty, and support aspects are briefly described.
HP ProCurve 6120 Blade Switch Series
Ideal for data centers:
- Eases deployment and provisioning of network connectivity
- Supports high-bandwidth applications such as video streaming
- Provides Layer 2 switching support
- Provides 1G and 10G infrastructure support

The HP ProCurve 6120 Blade Switch Series provides customers using the c3000 and c7000 enclosures with new options for connecting server and storage blades to the enterprise network. There are two models:
- HP ProCurve 6120G/XG Blade Switch
- HP ProCurve 6120XG Blade Switch

These blade switches are managed switches that are configured much like the many other ProCurve fixed-port and modular (chassis-based) switches. In general, a managed switch is a network device that can be configured and monitored through a number of interfaces. For instance, the 6120 Blade Switch Series can be managed using the Command Line Interface (CLI) through a console port, telnet, or SSH session. These switches can also be managed using the web browser UI, SNMP applications, and ProCurve Manager (PCM), as sketched below.

The 6120 Blade Switch Series allows the network administrator to easily deploy and provision network access to server and storage blades that are coresident with the blade switch. Despite the small footprint, the 6120 Blade Switch Series can support network access for a variety of applications, even those requiring high bandwidth such as video streaming.

The 6120 Blade Switch Series provides Layer 2 switching support. The 6120G/XG Blade Switch has nine external 1GbE and 10GbE ports, whereas the 6120XG Blade Switch has nine 10GbE ports. Either switch can be used for connectivity to upstream switches and routers that may be organized in a top-of-rack or end-of-row datacenter arrangement along with a c-Class enclosure containing these blade switches. In older, more traditional datacenter environments, these blade switches can instead be connected to upstream switches and routers through cable runs to wiring closets.

About the BladeSystem c-Class Enclosures
The c-Class enclosures consolidate power, cooling, connectivity, redundancy, and security into a modular and flexible infrastructure, along with embedded management capabilities that provide simple control interfaces. Management software monitors the infrastructure to streamline operations and increase productivity. The complete solution manages all components of the infrastructure as one system, saving administrators' time and ensuring high-quality service levels.

[Slide graphic: HP ProCurve 6120G/XG and 6120XG Blade Switches; c7000 enclosure (16 server and storage bays, 8 interconnect bays, redundant iLO); c3000 enclosures, two models (8 server and storage bays, 4 interconnect bays)]
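As an illustration, the management interfaces listed above are typically enabled with a handful of CLI commands. The following is a hedged sketch using standard ProCurve CLI conventions; the community name is hypothetical, the lines beginning with ";" are annotations rather than commands, and exact syntax should be verified against the 6120 management guide.

    ; create the host key, then enable the SSH server for secure CLI access
    ProCurve(config)# crypto key generate ssh
    ProCurve(config)# ip ssh
    ; serve the web browser UI over HTTPS
    ProCurve(config)# web-management ssl
    ; read-only SNMP access for management tools such as PCM or a MIB browser
    ProCurve(config)# snmp-server community "monitor" operator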
HP ProCurve 6120 Blade Switch Series (cont.)
There are two primary c-Class enclosure models:
- HP BladeSystem c7000: provides sixteen device (server and storage) bays and eight interconnect bays in a 10U rack-mount configuration.
- HP BladeSystem c3000: provides eight device bays and four interconnect bays in a 6U rack-mount configuration. The c3000 is also available in a tower configuration.

Both enclosure models also include the HP BladeSystem Onboard Administrator (OA), a web browser-based management interface, and the Insight Display diagnostic "hide-away" LCD panel.

The flexible and adaptive design of the c-Class enclosures includes common form factor components so that modules such as server blades, storage blades, interconnect modules (the 6120G/XG and 6120XG are two examples), and fans can be used in any c-Class enclosure. The architecture uses scalable device bays and interconnect bays so that administrators can scale up or scale out their BladeSystem c-Class infrastructure. The overall architecture provides high bandwidth and compute performance through the use of new serial I/O technologies as well as full-featured server and storage blades. Independent signal and power backplanes enable scalability, reliability, and flexibility. The signal midplane supports multiple high-speed fabrics in a protocol-agnostic manner, so administrators can populate the enclosure with server blades and interconnect modules in many ways to solve a multitude of application needs. The midplane is how server and storage blades communicate with the 6120G/XG and 6120XG interconnect modules using "internal ports"; the various external ports of the 6120G/XG and 6120XG are used to communicate with upstream switches and routers.

The efficient BladeSystem c-Class architecture addresses the concern of balancing performance density with the power and cooling capacity of the data center. Thermal Logic technologies, the mechanical features and control capabilities throughout the BladeSystem c-Class, enable IT administrators to optimize their power and thermal environment. Embedded management capabilities in the BladeSystem c-Class platform and integrated management software streamline operations and increase administrator productivity. The complete solution manages all components of the BladeSystem infrastructure as one system. Embedded capabilities and software provide active monitoring, simplify operations, save time, and ensure high service quality. If you want additional background information, both high-level and very technical, you can start with the HP BladeSystem website.
Benefits and Features
- Integrated networking solutions: integrated blade switch with robust Layer 2 switching capabilities and 1GbE and 10GbE uplinks; simplifies cable deployments and reduces cost
- Investment protection: flexible 1GbE/10GbE connectivity over copper and fiber; lifetime warranty lowers operational costs
- Flexible network architecture: reduces complexity and provides choice and flexibility, along with top-of-rack and end-of-row server connectivity
- Enhanced security: user security using 802.1X, Web, and MAC authentication through RADIUS; port security; management access security including SSL, SSH, and Authorized IP Managers
- Converged application support: data-driven IGMP and LLDP-MED support data, voice, and video application deployments
- Automated server provisioning: integrates seamlessly with ProCurve Datacenter Connection Manager (DCM) to simplify server and network provisioning
- Comprehensive network management and diagnostics: integrated interconnect device management through the HP Onboard Administrator; other diagnostic and management tools include PCM+, SNMP, port mirroring, RMON, and UDLD

This slide summarizes several of the major benefits and corresponding features of the 6120 Blade Switch Series.

Integrated networking solutions: The 6120 Blade Switch Series are integrated solutions with robust Layer 2 switching capabilities. The blade switches offer 1GbE and 10GbE external port connectivity for uplinks. Along with the c3000 or c7000 enclosure, blade systems allow the datacenter manager to simplify cable deployments and reduce the overall cost of ownership.

Investment protection: The external ports of the 6120G/XG Blade Switch provide flexible 1GbE and 10GbE connectivity options over copper and fiber links. Similarly, the 6120XG Blade Switch provides all-10GbE connectivity options over copper and fiber links. The lifetime warranty of these blade switches, just as with many ProCurve switches and routers, lowers the datacenter's long-term operational costs.

Flexible network architecture: The 6120 Blade Switch Series reduces datacenter setup complexity and provides choice and flexibility when deploying either blade switch and its accompanying enclosure in a top-of-rack or end-of-row server connectivity solution.

Enhanced security: The 6120 Blade Switch Series supports end-user security through any of three popular authentication methods: 802.1X, Web, and MAC authentication. These authentication methods are deployed using a centralized user directory accessed through the RADIUS protocol. Another popular security feature supported by the 6120 Blade Switch Series is port security. On a per-port basis, you can configure security measures to block unauthorized devices and to send notice of security violations; once port security is configured, you can then monitor the network for violations. For instance, you can configure the specific MAC addresses of upstream switches that are allowed to connect to the external ports of the blade switches. Management access security, which limits access by switch administrators, can be implemented using a variety of methods, including SSL for secure web browser access to the web UI, SSH for secure access to the CLI, and the Authorized Managers list to control which source computers can manage a given blade switch.
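To make those security features concrete, here is a hedged ProCurve-style CLI sketch. The RADIUS server address, shared secret, port numbers, and MAC address are all hypothetical; ";" lines are annotations, and syntax should be checked against the 6120 access security guide.

    ; point the switch at a RADIUS server for user authentication
    ProCurve(config)# radius-server host 10.10.10.5 key SharedSecret
    ProCurve(config)# aaa authentication port-access eap-radius
    ; enable 802.1X authentication on the internal server-facing ports
    ProCurve(config)# aaa port-access authenticator 1-16
    ProCurve(config)# aaa port-access authenticator active
    ; port security: allow only a known upstream switch MAC on uplink port 17
    ProCurve(config)# port-security 17 learn-mode static address-limit 1 mac-address 001b3f-0a2b3c action send-alarm
    ; restrict management access to a single administrative subnet
    ProCurve(config)# ip authorized-managers 10.10.20.0 255.255.255.0 access manager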
Benefits and Features (cont.)
Converged application support: The 6120 Blade Switch Series supports the data-driven Internet Group Management Protocol (IGMP) used for managing multicast video applications, and the Link Layer Discovery Protocol-Media Endpoint Discovery (LLDP-MED) protocol commonly used in voice deployments.

Automated server provisioning: The 6120 Blade Switch Series integrates with the ProCurve Datacenter Connection Manager (DCM) to simplify server and network provisioning. The DCM application runs on the ProCurve DCM appliance or a specialized 5400zl/8200zl module and assists the datacenter manager by allowing physical and virtualized servers to be provisioned more easily as they come online in the network.

Comprehensive network management and diagnostics: The 6120 Blade Switch Series are interconnect modules supported by the c-Class BladeSystem enclosures, and they can be managed and monitored using the HP BladeSystem Onboard Administrator application just like any other server, storage, or interconnect module supported by the enclosures. The typical method for performing the initial and ongoing network configuration is through the switch web browser or CLI user interfaces. The web browser interface can be launched from the HP BladeSystem Onboard Administrator. The switch CLI can be accessed in a number of ways, including a console port connection to the blade switch, a console port connection to the HP BladeSystem Onboard Administrator's CLI, and a telnet or SSH session. These blade switches can also be monitored and managed by ProCurve Manager (PCM), a network device management application, and through SNMP-based applications like a MIB browser. Like many ProCurve switches, the blade switches support traffic monitoring using the port mirroring feature. Other diagnostic support capabilities include Remote Network Monitoring (RMON) and UniDirectional Link Detection (UDLD), to name a few.
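For example, port mirroring on ProCurve switches is conventionally a two-step configuration: designate the mirror destination port, then mark the ports whose traffic should be copied to it. A minimal sketch (port numbers hypothetical; ";" lines are annotations):

    ; send mirrored traffic out external port 19 to an attached analyzer
    ProCurve(config)# mirror-port 19
    ; copy traffic from internal server ports 1-4 to the mirror port
    ProCurve(config)# interface 1-4 monitor
    ; verify the mirroring session
    ProCurve(config)# show monitor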
c-Class BladeSystem Interconnect Types
Pass-thru module:
- For scenarios where one-to-one server-to-network connections are required
- Equivalent to a patch panel

Virtual Connect module:
- Simplest, most flexible connectivity to a network
- Appears as a Layer 2 bridge to the network

Ethernet switch:
- Interconnect aggregation and cable reduction using a managed switch
- Provides a typical Layer 2 switching feature set and may offer Layer 3 routing capabilities

A major benefit of HP BladeSystem c-Class enclosures is the ability to integrate network connections directly into the server and storage system. There are three types of interconnect solutions that you can consider for connecting servers and storage systems to the network:

Pass-thru module: This type of solution is in effect equivalent to using a patch panel. A pass-thru module allows you to connect server network interfaces (NICs) or storage host bus adapters (HBAs) directly to outside switches in those special situations where a one-to-one connection is required between a server or storage device and the network. This solution tends to be the most expensive and cumbersome method of connection, so it is not recommended for common usage. The HP 1Gb Ethernet Pass-Thru Module is one example of this type of interconnect module. It is designed for customers desiring an unmanaged direct connection between each server blade within the enclosure and an external network device such as a switch, router, or hub. Ideal for datacenters with existing network infrastructure, the HP 1Gb Ethernet Pass-Thru Module delivers sixteen internal 1Gb downlinks and sixteen external 1Gb RJ-45 copper uplinks, providing a 1:1 non-switched, non-blocking path between the server and the network. This module fits into a single half-height interconnect bay of a c-Class enclosure and should be installed in pairs of like models to provide redundant uplink paths to the network. There are also Fibre Channel pass-thru products for situations where a direct connection between each storage blade and an external Fibre Channel SAN switch, director, or SAN controller is required.

Virtual Connect module: A Virtual Connect module is a type of interconnect solution that simplifies server connections by cleanly separating the server enclosure from the LAN. This type of interconnect solution simplifies networking tasks by reducing cabling, without introducing switches that would need to be managed. The HP 1/10Gb-F Virtual Connect Ethernet Module is one example of this interconnect type. This standards-based Virtual Connect Ethernet Module looks like a pass-thru device to the network, yet provides all the key benefits of integrated switching, including high-performance 1GbE and 10GbE fiber optic uplinks to the data center switch, port aggregation, failover, VLAN tagging, and stacking. This module manages and administers server profiles that can be pre-defined and deployed anywhere within a Virtual Connect domain. This allows servers to come online very quickly and easily without having to coordinate with network administrators.
c-Class BladeSystem Interconnect Types (cont.)
Blade Switch modules: The HP c-Class enclosures also support Ethernet- and Fibre Channel-based network switches. Blade switches are a good approach for reducing cabling and extending a network right to the server blades. For environments that want to use a managed switch approach for the server and storage devices installed in enclosures, blade switch modules are the preferred approach. The c-Class enclosures support Brocade and Cisco Fibre Channel switches, Cisco and Blade Network Technologies Ethernet switches, HP InfiniBand-based switches, and of course the most recent introductions, the HP ProCurve 6120 Blade Switch Series.
Switch and Virtual Connect Interconnects: Key Differences
Switch Interconnect:
- Part of the LAN network
- Enterprise-class network management and features
- Consistent network architecture from server edge through TOR/EOR
- Server is directly connected, so any server change affects the network

Virtual Connect Interconnect:
- Part of the server system
- Managed with the servers
- A layer between servers and network, so the network does not see server changes (provides abstraction)
- Allows flexibility and control of system resources

A common question from customers is: "What are the differences between the two interconnect solutions referred to as the Blade Switch module and the Virtual Connect module?" The key differences are related to where the interconnect device is positioned within the data center architecture and who manages the interconnect module.

Switch Interconnect
First of all, a switch is part of the Ethernet or storage network, depending on the specific I/O connectivity it supports. The switch is directly connected to either a server NIC or a storage HBA. The switch interconnect device communicates with the other switches that make up the overall datacenter communications environment, and it is managed as part of that environment. For example, if network loops are implemented for redundancy, then spanning tree must be used to manage the redundant paths on the switch interconnect device. In most enterprises, by definition, a switch is owned and managed by the network operations group or the storage operations group. However the device works, if it is a switch, then it must be managed by the LAN or SAN administrator, because that administrator must have total control over the network to make sure it operates properly, securely, and efficiently.

Virtual Connect Interconnect
On the other hand, a Virtual Connect interconnect module is part of the server or storage system. It forms a layer between the server/storage devices and the Ethernet and storage networks so that the networks cannot see any changes made to the devices. The Virtual Connect module is managed by the server administrator as part of the overall server or storage system. There is less effort involved in managing a Virtual Connect module because it is not as complicated as a switch; the server or storage administrator can easily handle the configuration tasks without detailed networking knowledge. The Virtual Connect module is also appealing because it pools and shares the network connections for the servers so that server changes are transparent to the LAN and SAN networks.

[Slide graphic: servers attaching to the LAN and SAN either directly through switch interconnects or through an intermediate Virtual Connect layer]
6120G/XG Hardware Overview: Front Panel
16x 1GbE internal ports for server/storage blade access
1x 10GbE internal port for switch-to-switch access (via the midplane)

This graphic shows the front panel of the 6120G/XG Blade Switch and highlights the major connectors, LEDs, and control buttons. The next slide shows the equivalent information for the 6120XG Blade Switch.

External Network Ports
The 6120G/XG Blade Switch has nine external Ethernet ports whose connectors or transceiver slots are accessible from the front panel. The external ports function as uplinks that can be used to connect to one or more upstream switches. For example, an upstream switch could be a ProCurve Switch 6600 Series installed in the same rack with a c3000/c7000 enclosure containing the 6120G/XG Blade Switch. The upstream switch could just as easily be located in another rack, or require a cable run to a wiring closet where the upstream switch is located.

The external Ethernet ports consist of the following:
- 1x 10GbE CX4: The CX4 interface requires the use of copper cabling, similar to the variety used in InfiniBand technology, and is designed to work up to a distance of 15 m. CX4 technology is an attractive solution because it has the lowest cost per port of all 10Gb interconnects, at the expense of range. CX4 has a bigger form factor than either SFP or SFP+.
- 2x 10GbE XFP ports: These 10GbE ports support SR (multi-mode fiber, 300 m) and LR (single-mode fiber, 10 km) optic transceivers based on the XFP version of the small form factor pluggable specification. XFP transceivers have a smaller form factor than several standards that preceded them (XENPAK, X2, XPAK). The XFP ports support Industry Standard Servers (ISS) optics only.
- 2x 1GbE SFP ports: These 1GbE ports support SX (multi-mode fiber, 550 m) and LX (single-mode fiber, 10 km) optic transceivers based on the SFP version of the small form factor pluggable specification using an LC physical connector. These ports also support 1000BASE-T copper connections using an RJ-45 connector. The SFP ports support ISS and ProCurve optics.
- 4x 10/100/1000 RJ-45 ports: These ports support standard RJ-45 Ethernet cables, and each port can operate at one of three autosensing speeds.

Internal Network Ports
The 6120G/XG also has "internal" Ethernet ports that function as downlinks to devices in the enclosure. The internal ports are essentially communication interfaces used to transfer data to and from server blades, storage blades, and even another 6120G/XG (if it is installed in an adjacent bay of the enclosure). The internal ports use the enclosure's midplane circuitry to communicate with the other c-Class enclosure components.

Front-panel callouts (from the graphic):
- Clear button; recessed Reset button
- Console port (Type A mini-USB)
- 1x 10GbE CX4 port (CX4 cable)
- 2x 10GbE XFP ports (DAC, SR and LR optics)
- 2x 1GbE SFP ports (copper, and SX and LX optics)
- 4x 10/100/1000 RJ-45 ports
- Module status LED (green = normal, amber = fault); module locator LED (blue = selected)
- 10GbE and 1GbE ports: link status LED (green = connected, amber = fault); link activity LED (green flashing = activity)
- RJ-45 ports: link status LED (green = connected, amber = fault); link activity LED (green flashing = 10/100 activity, amber flashing = 1000 activity)
6120G/XG Hardware Overview: Front Panel (cont.)
The internal ports consist of the following:
- 16x 1GbE ports for communications with server and storage blades.
- 1x 10GbE port for communications with another 6120G/XG Blade Switch. Note that the 6120G/XG Blade Switches must be installed in adjacent interconnect bays (for instance, bays 1 and 2, or 3 and 4, and so forth) to be able to communicate using this internal port.

Port Status Indicators
For the three 10GbE and two 1GbE ports located on the left side of the front bezel, the link status and activity LEDs provide the following information:
- Link status LED: green indicates connected; amber indicates a fault condition
- Link activity LED: green flashing indicates link activity
A single LED below the port is used for both link status and link activity indications.

For the four 10/100/1000 RJ-45 ports located on the right side of the front bezel, the link status and activity LEDs provide the following information:
- Link status LEDs: green indicates connected; amber indicates a fault condition
- Link activity LEDs: green flashing indicates 10/100 link activity; amber flashing indicates 1000 link activity
Separate LEDs, built into the RJ-45 connector, are used for link status and link activity indications.

Console Port
The switch console port provides a serial communications interface for accessing the switch CLI. Unlike many ProCurve switches that have a DB-9 or RJ-45 connector, the 6120G/XG has a USB connector. A standard USB type A to mini USB type A cable is used to connect a terminal emulator running on a computer to the switch. This cable is included with the purchase of the 6120G/XG product. You can also use an equivalent cable of the kind that connects a computer to a digital camera.

Note: Some ProCurve switches such as the 3500yl, 5400zl, and 8200zl also have a USB port. Unlike those switches, the USB port of the 6120G/XG can be used only to access the serial console port interface; it cannot be used for transferring software images and configuration files.

Clear and Reset Control Buttons
The 6120G/XG has the clear and reset control buttons that are typically found on most switches. The reset button initiates a reboot of the switch. The clear button clears the manager and operator privilege-level passwords. Using the clear and reset buttons together causes the switch configuration to be reset to the factory defaults. The operation of these control buttons is equivalent to how they operate on most ProCurve switches.

System LEDs
- Module status LED: green indicates normal conditions; amber indicates a fault condition.
- Module locator LED: useful to identify a particular switch among several located in close proximity to each other. Blue indicates "this" switch has been selected. To control the LED, you use the switch CLI chassislocate <off | on | blink> command.
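For instance, blinking the locator LED from the CLI before walking to the rack might look like the following sketch. The chassislocate syntax comes from the note above; show system-information is the conventional ProCurve command for confirming a unit's identity, though its availability on the 6120 should be verified.

    ; blink the blue locator LED on this blade switch
    ProCurve# chassislocate blink
    ; confirm the system name and serial number before touching hardware
    ProCurve# show system-information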
6120XG Hardware Overview: Front Panel
16x 10GbE internal ports for server/storage blade access
1x 10GbE internal port for switch-to-switch access (via the midplane)

This graphic shows the front panel of the 6120XG Blade Switch and highlights its major connectors, LEDs, and control buttons.

External Network Ports
The 6120XG Blade Switch has nine external Ethernet ports whose connectors or transceiver slots are accessible from the front panel. Several of these ports have shared functionality and can be thought of as a form of dual-personality port.

The two left-most connectors, both labeled port 17, operate in a shared manner. That is, port 17 can be used for either type of connection, 10GbE CX4 or 10GbE SFP+, but the two connectors cannot be used concurrently.

The two right-most ports, labeled 23 and 24, also operate in a shared manner, but differently from port 17. Individually, ports 23 and 24 can each be used as either an external SFP+ port or an internal 10GbE switch-to-switch port. The latter mode of operation is also known as a cross-link between blade switches; it allows redundancy to be implemented when two 6120XG Blade Switches are installed in adjacent interconnect bays. Since ports 23 and 24 operate individually, one port could be used as an external SFP+ port while the other operates as an internal switch-to-switch port.

In summary, you can have a maximum of eight 10GbE external ports operating concurrently; one scenario is 8 SFP+ ports, or 7 SFP+ ports and 1 CX4 port. As a second scenario, you could have up to seven 10GbE external ports operating concurrently (7 SFP+, or 6 SFP+ and 1 CX4) and one of the right-most ports (23 or 24) operating as a 10GbE internal switch-to-switch port. Lastly, as a third scenario, you could have up to six 10GbE external ports operating concurrently (6 SFP+, or 5 SFP+ and 1 CX4) and both right-most ports (23 and 24) operating as 10GbE internal switch-to-switch ports.

Note: Small Form-factor Pluggable (SFP) refers to compact, hot-pluggable transceivers used in optical communications. In general, SFP transceivers are designed to support SONET, Gigabit Ethernet, Fibre Channel, and other communications standards. The SFP standard has been enhanced to include Small Form-factor Pluggable Plus (SFP+), which supports data rates up to 10 Gbps. SFP+ transceivers for optics as well as copper have been introduced by various vendors, including HP ISS and HP ProCurve. In comparison to the XENPAK, X2, and XFP types of transceivers, SFP+ transceivers leave some of the circuitry to be implemented on the host board instead of placing all of it inside the transceiver.

Front-panel callouts (from the graphic):
- Clear button; Reset button
- Console port (Type A mini-USB)
- Shared port 17: 1x 10GbE CX4 port (CX4 cable) or 1x 10GbE SFP+ port (DAC, and SR, LR, and LRM optics)
- Dedicated ports 18-22: 5x 10GbE SFP+ ports (DAC, and SR, LR, and LRM optics)
- Individually shared ports 23 and 24: 2x 10GbE SFP+ ports (DAC, and SR, LR, and LRM optics) or 2x 10GbE internal S2S ports
- The 10GbE SFP+ ports also support 1GbE SFP (SX, LX, Gig-T) transceivers
- Module status LED (green = normal, amber = fault); module locator LED (blue = selected)
6120XG Hardware Overview: Front Panel (cont.)
The external Ethernet ports consist of the following:
- 1x 10GbE CX4 or 1x 10GbE SFP+: The port labeled 17 can be used for either 10GbE CX4 or 10GbE SFP+ connectivity. The CX4 interface requires the use of copper cabling, similar to the variety used in InfiniBand technology, and is designed to work up to a distance of 15 m. For SFP+ connectivity, port 17 supports SR (multi-mode fiber, 300 m), LR (single-mode fiber, 10 km), and LRM (multi-mode fiber, 220 m) optic SFP+ transceivers. Note: The SFP+ port always has precedence over the CX4 port. If any module is installed in the SFP+ port (whether or not the module is valid), the SFP+ port will be the active port. If no module is installed in the SFP+ port, the CX4 port will be the active port.
- 5x 10GbE SFP+ ports: These five ports are dedicated 10GbE SFP+ ports that support SR (multi-mode fiber, 300 m), LR (single-mode fiber, 10 km), and LRM (multi-mode fiber, 220 m) optic SFP+ transceivers.
- 2x 10GbE SFP+ ports: Individually, the two ports labeled 23 and 24 can each be used as either an external 10GbE SFP+ port or an internal 10GbE switch-to-switch (cross-link) port. For SFP+ connectivity, these ports support the same fiber optic transceivers as the other SFP+ ports. Note: The external SFP+ ports (23 and 24) always have precedence over the internal switch-to-switch ports. If any module is installed in the SFP+ port (whether or not the module is valid), the SFP+ port will be the active port. If no module is installed in the SFP+ port and the port has not been configured as an uplink, the corresponding switch-to-switch port will be the selected port. If there is no 6120XG Blade Switch in an adjacent bay, the switch-to-switch port will not be enabled and no link can be established.

Note: Each of the eight SFP+ ports supports optics and DACs from ISS and ProCurve. Each can also use SFP optics (SX and LX) and Gig-T transceivers from ISS and ProCurve, but any such port would operate at 1 Gbps.

Internal Network Ports
Like the 6120G/XG Blade Switch, the 6120XG also has "internal" Ethernet ports that function as downlinks. The primary difference is that all internal ports of the 6120XG operate as 10GbE ports. The internal ports consist of the following:
- 16x 10GbE ports for communications with server and storage blades.
- 0, 1, or 2 10GbE ports for communications with another 6120XG Blade Switch. The 6120XG Blade Switches must be installed in adjacent interconnect bays to be able to communicate using these internal ports. Because you have the option of individually using ports 23 and 24 as SFP+ ports or internal switch-to-switch ports, you can have no switch-to-switch ports active, or only one, or both.

Port Status Indicators
The link status and link activity LEDs on the 6120XG Blade Switch are equivalent to those described for 10GbE operation on the 6120G/XG Blade Switch. The primary difference is that the single LED for each SFP+ port is located above the port; the one exception is the CX4 port, whose LED is located below the port. For the shared-operation ports (17, 23, and 24), there is a triangle LED below the port; when illuminated green, it indicates that the corresponding port is active.

Common Features Between the 6120G/XG and 6120XG
The following features operate the same as described previously for the 6120G/XG Blade Switch:
- Console port
- Clear and Reset control buttons
- System LEDs
6120G/XG Hardware Diagram: Logical View
[Slide diagram: logical view of the 6120G/XG Blade Switch]
- Main Processor: Freescale MPC-series CPU (400 MHz per the specifications), 512 MB DDR SDRAM (266 MHz), MAC EEPROM, 512 KB boot flash, 256 MB system flash, USB console port, reset and clear buttons, I/O panel indicators and LEDs
- Network Chip: Broadcom BCM56504, attached over a PCI bus; line-rate switching capacity with all ports active
- Midplane connection to server/storage blades: 16x 1GbE server links and 1x 10GbE S2S link
- External ports: 1x 10GbE CX4, 2x 10GbE XFP, 2x 1GbE SFP, 4x 10/100/1000 RJ-45

This slide provides a simplified, logical view of the 6120G/XG Blade Switch's hardware architecture. Many of the lower-level components such as interrupt, DMA, and bus controllers are not shown. The primary components are the main processor and the network chip. The main processor is where the switch's software image runs. The network chip manages the 9 external and 17 internal ports; the memory resident on the network chip is used for packet buffering and for various Layer 2 tables. The network chip is capable of sustaining line-rate switching performance while all ports are concurrently active. The hardware's rated switching performance exceeds the potentially utilized complement of internal and external ports, which translates into approximately 124 Gbps of throughput operating full-duplex.

As previously described, the 6120 Blade Switch Series is a Layer 2 switch. As such, the primary workload is processed by the network chip that provides the Layer 2 switching service, so the primary requirement for packet buffer memory is on the network chip itself. The main processor includes a 2 MB packet buffer that can be used when switching congestion occurs. This packet buffer memory is used when the packet queue depth of one or more ports increases. If the external and internal ports never reach the point of oversubscription on a link, the usage of this 2 MB packet buffer will be shallow.
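As a check on the 124 Gbps figure, the port complement listed above accounts for it exactly (this derivation is ours, not from the slide):

    \[ \underbrace{16(1) + 1(10)}_{\text{internal}} + \underbrace{1(10) + 2(10) + 2(1) + 4(1)}_{\text{external}} = 62~\mathrm{Gbps\ (one\ direction)} \]
    \[ 2 \times 62 = 124~\mathrm{Gbps\ full\ duplex}, \qquad \frac{62\times 10^{9}}{(64+20)\times 8} \approx 92.2~\mathrm{Mpps} \]

A 64-byte frame occupies 84 bytes on the wire once the preamble and inter-frame gap are counted, which is why 62 Gbps of line rate corresponds to the "up to 92.2 mpps" figure in the specifications.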
6120XG Hardware Diagram: Logical View
[Slide diagram: logical view of the 6120XG Blade Switch]
- Main Processor: Freescale MPC-series CPU (400 MHz per the specifications), 512 MB DDR SDRAM (266 MHz), MAC EEPROM, 512 KB boot flash, 640 MB system flash, USB console port, reset and clear buttons, I/O panel indicators and LEDs
- Network Chip: Broadcom BCM56820, attached over a PCI bus; line-rate switching capacity with all ports active
- Midplane connection to server/storage blades: 16x 10GbE server links and 2x 10GbE S2S links (shared)
- External ports: 1x 10GbE CX4 or 1x 10GbE SFP+ (shared), 5x dedicated 10GbE SFP+, 2x 10GbE SFP+ or 2x 10GbE S2S (shared)

This slide provides a simplified, logical view of the 6120XG Blade Switch's hardware architecture. As on the preceding slide, many of the lower-level components such as interrupt, DMA, and bus controllers are not shown. Like the 6120G/XG, the primary components are the main processor and the network chip. The main processor is where the switch's software image runs. The network chip manages the 9 external and 17 internal ports; the memory resident on the network chip is used for packet buffering and for various switching tables. To support the greater throughput capacity demands, a different network chip is used. This network chip is capable of sustaining line-rate switching performance while all ports are concurrently active. The hardware's rated switching performance for the internal and external ports translates into approximately 240 Gbps of aggregate port bandwidth, or 480 Gbps of switching capacity counting full-duplex operation.

In comparison to the 6120G/XG, the 6120XG has the following hardware architecture differences:
- The system flash is 640 MB instead of 256 MB.
- The 16 internal ports operate at 10Gb instead of 1Gb.
- The internal switch-to-switch (cross-link) ports are shared with SFP+ ports 23 and 24. You have the option of using the ports as two SFP+ ports, two switch-to-switch ports, or one SFP+ and one switch-to-switch port.
- The external ports all operate at 10Gb instead of some being 1Gb.
- The external ports support SFP+ (optionally SFP) connections instead of XFP or RJ-45 connections.

As noted on the previous slide, the main processor includes a 2 MB packet buffer that can be used when switching congestion occurs. This packet buffer memory is used when the packet queue depth of one or more ports increases. If the external and internal ports never reach the point of oversubscription on a link, the usage of this 2 MB packet buffer will be shallow.
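The same style of check works here. With the 16 internal server links plus the 8 front-panel/S2S ports concurrently active, all at 10GbE (again, our derivation rather than the slide's):

    \[ (16 + 8)\times 10 = 240~\mathrm{Gbps\ (one\ direction)}, \qquad 2 \times 240 = 480~\mathrm{Gbps\ full\ duplex} \]
    \[ \frac{240\times 10^{9}}{(64+20)\times 8} \approx 357~\mathrm{Mpps} \]

These match the "up to 357 mpps" and 480 Gbps entries in the 6120XG specification table later in this deck.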
Blade Switch Comparisons
The table compares, left to right: ProCurve 6120G/XG | ProCurve 6120XG | HP 1:10Gb Ethernet BL-c | Cisco 3020 | Cisco 3120G | Cisco 3120X. Cell values are listed in that order (some cells were merged or blank in the source):

- External 1GbE ports: 2 SFP, 4 RJ-45 | None¹ | 4 RJ-45 | 4 SFP/RJ-45
- External 10GbE ports: 1 CX4, 2 XFP | 1 SFP+/CX4, 5 SFP+, 2 SFP+/S2S | None | 4 SFP or 2 X2
- Memory: 512 MB RAM, 256 MB flash | 512 MB RAM, 640 MB flash | 256 MB RAM, 64 MB flash | 128 MB RAM, 32 MB flash
- Management: LLDP-MED, SNMPv3, HTTPS | SNMPv3 | SNMPv3
- Access security: 802.1X, Web, MAC auth | ACLs, SSH, RADIUS & TACACS+ auth | ACLs, 802.1X, Web, MAC auth
- IGMP multicast: 256 groups | 1K groups
- Forwarding/routing: L2, IPv6 host, 16K MAC, 256 VLANs | L2, IPv6 host, 32K MAC, 256 VLANs | L2, L3, VRRP, 16K MAC, 1K VLANs | L2+, 8K MAC, 1K VLANs | L2 (upgradeable to L3 & IPv6), 8K MAC, 1K VLANs
- Rate limiting/QoS: Ingress, L3/L4 prioritization | QoS and 802.1p | Extensive, highly granular with rate limiting & traffic shaping
- Stacking: No | StackWise
- Warranty: Lifetime | 1 year

As you may be aware, the HP BladeSystem c3000 and c7000 enclosures support a number of interconnect or networking modules; the 6120G/XG and 6120XG are the newest members introduced for these enclosures. Prior to the release of the 6120G/XG and 6120XG Blade Switches, you could choose to implement any of the other HP and Cisco blade switches listed above. This slide compares some of the major features and capabilities among the blade switches.

Compared to the competitive offerings, some of the improvements that the 6120 Blade Switch Series provides are:
- More flexible use of, and additional, 10GbE and 1GbE ports
- LLDP-MED support
- Lifetime warranty

Some aspects customers may object to are that ACLs cannot be configured and that stacking is not supported. One response to this objection is that the 6120 Blade Switch Series can be deployed with advanced TOR and EOR switches, such as the ProCurve Switch 6600 series, to provide enhanced scalability, resiliency, and security to meet overall data center network requirements.

1 1GbE SFP optics (SX and LX) and Gig-T transceivers can be installed in any of the external 10GbE ports.
Software Features
General Networking Features:
- IEEE 802.1D MAC Bridges
- IEEE 802.1p Priority
- IEEE 802.1Q VLANs
- IEEE 802.1v VLAN classification by protocol and port
- QoS (CoS, ToS, DSCP)
- IEEE 802.1D RSTP (formerly 802.1w)
- IEEE 802.1Q MSTP (formerly 802.1s)
- BPDU protection and STP root guard
- IEEE 802.3ad LACP
- IEEE 802.3x Flow Control
- RFC 792 ICMP
- Broadcast throttling
- RFC 951 BOOTP and RFC 1542 extensions
- RFC 2030 SNTP
- RFC 2131 DHCP Information Option, with DHCP protection
- TFTP, SFTP, FTP
- UniDirectional Link Detection (UDLD)
- IPv6 host
- ICMP rate-limiting

IP Multicast:
- IGMPv1, v2, and v3 (data driven)

Device Management:
- CLI access using console, telnet, or SSH
- HTTP and HTTPS web management access
- SSHv1/SSHv2 management access
- HP Onboard Administrator integration
- OOBM (with DHCP client default)
- Authorized Managers list

Security:
- Concurrent port-based 802.1X, Web, and MAC authentication
- RADIUS and TACACS+
- Port security
- MAC address lockout

Monitor and Diagnostics:
- Port mirroring
- RMON v1/v2

Network Management:
- LLDP-MED
- Syslog protocol
- SNMPv1/v2c/v3

This slide summarizes the many software features supported by the 6120 Blade Switch Series version Z software. These blade switches provide extensive Layer 2 switching support with robust network management and security support. In addition, multicast and LLDP-MED support enable convergence deployments. The device management, monitoring, and diagnostics features provide strong operational support.

Note: The IEEE 802.1Q specification was updated in 2003 to include MSTP (formerly IEEE 802.1s); before that, the specification was known for its VLAN tagging concept. The IEEE 802.1D specification was updated in 2004 to include RSTP (formerly IEEE 802.1w); before that, the specification covered the original STP.

Note: A RADIUS server is supported for authentication of end users, for instance, along with the implementation of 802.1X or MAC authentication. A RADIUS or TACACS+ server is supported for management user authentication, that is, when accessing the switch management interface as the manager or operator through, for instance, telnet or SSH.
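To ground a few of these features, here is a short, hedged configuration sketch in ProCurve CLI style. The VLAN ID, port numbers, and server addresses are hypothetical, ";" lines are annotations, and the 6120's exact syntax should be confirmed against its management guide.

    ; IEEE 802.1Q VLANs: untag server downlinks, tag the uplink
    ProCurve(config)# vlan 10 name "Servers"
    ProCurve(config)# vlan 10 untagged 1-8
    ProCurve(config)# vlan 10 tagged 18
    ; RFC 2030 SNTP time synchronization
    ProCurve(config)# timesync sntp
    ProCurve(config)# sntp unicast
    ProCurve(config)# sntp server 10.10.10.10
    ; send switch events to a central syslog collector
    ProCurve(config)# logging 10.10.10.20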
Usage Model: Basic Network Connectivity
- Each c-Class enclosure has a single 6120 Blade Switch Series switch
- Basic network connectivity redundancy is achieved using link aggregation
- Uplinks from the blade switch can be 1G or 10G trunks, depending on throughput requirements
- For two 1G trunks, the 6120G/XG can be used; for two 10G trunks, the 6120XG is required

[Slide diagram: c-Class enclosures, each with a single 6120G/XG or 6120XG, uplinked over 10G/1G MSTP-managed trunks to XG top-of-rack switches, which connect over 10G trunks to 5412zl/8212zl switches running VRRP and OSPF toward the WAN and campus core]

This graphic illustrates a data center environment where a c-Class enclosure is installed in a rack. Each c-Class enclosure contains multiple server blades and a single 6120 Blade Switch Series switch. The various uplinks are aggregated as industry-standard LACP trunks; aggregated links are one step in providing network redundancy at the link level. Depending on the throughput requirements of the servers, the uplinks from each blade switch may be 1GbE or 10GbE connections. The 6120G/XG Blade Switch can be used if the uplink trunks only need to be 1GbE. If the two LACP trunks must be 10GbE, then the 6120XG Blade Switch is required.

The 6120 Blade Switch Series can be connected to various ProCurve switch models. For instance, a 1U ProCurve 6600 series switch (such as the 6600-24G-4XG, the "G-4XG" in the diagram) could be used for either trunk scenario, 1GbE or 10GbE. The 1U ProCurve Switch 3500yl could also be used, since it can support up to four 10GbE links. The chassis-based ProCurve Switch 5400zl could also be considered if additional port density is needed for other external device connectivity.

Notice that Multiple Spanning Tree Protocol (MSTP) is enabled, since redundant paths exist in the network. Depending on the VLAN design, the data center manager could load-balance two sets of distinct VLANs by configuring MST instances. That is, if the various servers used, for instance, VLANs 10, 20, 30, and 40, it may be feasible to group VLANs 10 and 20 in one MST instance and VLANs 30 and 40 in a second MST instance. Doing so may allow the data center manager to configure MSTP so that one group of VLANs flows over particular uplinks while the second group flows over another set of uplinks; a configuration sketch follows.
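A hedged sketch of that trunking and MSTP grouping in ProCurve CLI style (trunk ports, VLAN IDs, and the region name are hypothetical; ";" lines are annotations):

    ; aggregate two uplinks into an industry-standard LACP trunk
    ProCurve(config)# trunk 17-18 trk1 lacp
    ProCurve(config)# vlan 10 tagged trk1
    ProCurve(config)# vlan 20 tagged trk1
    ; MSTP region identity and VLAN-to-instance mapping for load sharing
    ProCurve(config)# spanning-tree config-name "DC-MST"
    ProCurve(config)# spanning-tree config-revision 1
    ProCurve(config)# spanning-tree instance 1 vlan 10 20
    ProCurve(config)# spanning-tree instance 2 vlan 30 40
    ; enable spanning tree (MSTP)
    ProCurve(config)# spanning-tree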
Usage Model: Increased Redundancy
- Each c-Class enclosure has two 6120 Blade Switch Series switches
- Increased redundancy for servers by using two blade switches in the enclosure
- Two same-model blade switches are installed in adjacent interconnect bays
- Each blade switch supports two uplink trunks; with the 6120XG this yields four 10Gb trunks per enclosure, each with two links

[Slide diagram: c-Class enclosures, each with redundant 6120XGs, uplinked over 10G MSTP-managed trunks to XG top-of-rack switches, which connect over 10G trunks to 5412zl/8212zl switches running VRRP and OSPF toward the WAN and campus core]

This usage model improves network redundancy by adding a second 6120 Blade Switch Series switch to each c-Class enclosure. The second blade switch must be the same model as the first and must be installed in an adjacent interconnect bay. With a second blade switch installed, the data center manager may choose to replicate the trunk connections for the second blade switch (shown above), or alternatively implement single-link connections (not shown). In the latter scenario, there would still be network redundancy, since each enclosure is afforded two paths to each upstream (external) switch; the alternative path would involve the use of the internal switch-to-switch ports.

In the illustrated scenario using 2x 10GbE trunks from each blade switch, the 6120XG is required, since 8x 10GbE ports are used. If the data center manager was willing to implement single-port uplinks and 1GbE was sufficient for throughput, then the 6120G/XG Blade Switch could be used.

Another configuration feature that can be taken advantage of here is the internal switch-to-switch (cross-link) port, which can be enabled for two 6120 Blade Switch Series switches of the same model installed in adjacent interconnect bays. The benefit is another redundant path. With MSTP already implemented, this redundant path will be managed appropriately.
Warranty, Service and Support
- Warranty: lifetime hardware replacement warranty
- Service: optional hardware installation available
- Support: HP ISS provides Level 1, 2, and 3 support; HP ProCurve support is involved as needed; enhanced Care Pack services are available for 24/7 on-site support

One of the important features that any customer should consider when evaluating blade switches for an HP c3000/c7000 enclosure is that the 6120 Blade Switch Series comes with a lifetime warranty, just like most ProCurve external switches. A lifetime warranty can significantly lower operational costs over the course of many years. For servicing needs, HP offers optional hardware installation; in general, the installation and initial configuration of the blade switches is considered relatively straightforward. Level 1, 2, and 3 customer support is provided by the HP Industry Standard Servers (ISS) organization, with HP ProCurve customer support involved as necessary to assist with level 3 problems.
Parts Information
HP ISS parts (Description: Part No.):
- HP ProCurve 6120G/XG Blade Switch (B21)
- HP ProCurve 6120XG Blade Switch (B21)
- HP SFP+ SR Transceiver (B21)
- HP SFP+ LR Transceiver (B21)
- HP SFP+ LRM Transceiver (B21)
- HP 10GbE SFP+ 0.5m Direct Attach Cable (B21)
- HP 10GbE SFP+ 1m Direct Attach Cable (B21)
- HP 10GbE SFP+ 3m Direct Attach Cable (B21)
- HP 10GbE SFP+ 5m Direct Attach Cable (B21)
- HP 10GbE SFP+ 7m Direct Attach Cable (B21)
- HP 1Gb SX SFP Option Kit (B21)
- HP 1Gb RJ-45 SFP Option Kit (B21)
- HP XFP 850nm SR Module (B21)
- HP XFP 1310nm LR Module (B21)

This table summarizes the SKU (stock keeping unit) part numbers applicable to the 6120 Blade Switch Series. The two 6120 Blade Switches and the parts specific to HP ISS are listed here; the next slide lists the parts specific to HP ProCurve.
Parts Information (cont.)
HP ProCurve parts (Description: Part No.):
- HP ProCurve SFP+ SR Transceiver: J9150A
- HP ProCurve SFP+ LR Transceiver: J9151A
- HP ProCurve SFP+ LRM Transceiver: J9152A
- HP ProCurve 10-GbE SFP+ 1m Direct Attach Cable: J9281A/B
- HP ProCurve 10-GbE SFP+ 3m Direct Attach Cable: J9283A/B
- HP ProCurve 10-GbE SFP+ 7m Direct Attach Cable: J9285A/B
- HP ProCurve 10-GbE XFP-SFP+ 1m Direct Attach Cable: J9300A
- HP ProCurve 10-GbE XFP-SFP+ 3m Direct Attach Cable: J9301A
- HP ProCurve 10-GbE XFP-SFP+ 5m Direct Attach Cable: J9302A

Only version "B" DACs can be purchased going forward.

This table lists the part numbers from HP ProCurve that are applicable to the 6120 Blade Switch Series. Notice that three of the DAC cables listed have an SFP+ connector on both ends; these can be used to connect an external switch (e.g., a ProCurve 6600 or 2910al functioning as a top-of-rack/end-of-row switch) to a 6120XG Blade Switch. The other three DAC cables have an XFP connector on one end and an SFP+ connector on the other; these can be used to connect an external switch to a 6120G/XG Blade Switch, which provides the XFP ports. For all six cables, the SFP+ connectors are compliant with MSA SFF-8472 Rev 10.4, as explained below.

Note: HP ProCurve has produced "A" (no longer available) and "B" (currently available) versions of the 10GbE SFP+ Direct Attach Cables (DACs). The version "B" DACs are compliant with the most recent specification, the MultiSource Agreement (MSA) for Small Form Factor (SFF) pluggable optic transceivers (MSA SFF-8472 Rev 10.4). The version "B" DACs also interoperate with the Intel 10 Gigabit AF DA Dual Port Server Adapter. Additional information about version "B" DACs is provided on the next slide.
Support for Version “B” DACs
- Developed to meet the updated specification: MSA SFF-8472 Rev 10.4
- HP ProCurve version "B" DACs are currently available
- HP ISS version "B" DACs will be released at a later time
- New switch software is required to support version "B" DACs

The MSA SFF-8472 document describes a MultiSource Agreement (MSA) for Small Form Factor (SFF) pluggable optic transceivers that involves several vendors, including HP. This specification has been updated with modifications for the use of DACs with 10GbE SFP+ transceivers. In addition to the release of new 10GbE SFP+ DACs, referred to as version "B" DACs by HP, the specification update also results in changes to the software that runs on the devices supporting the version "B" DACs. For instance, the ProCurve fixed-port 6600 and 2910al switches, which support the version "B" DACs, require a minimum software version: version K software for the fixed-port 6600 switch, and version W software for the fixed-port 2910al switch. For the 6120 Blade Switches, the initial software release (Z.14.00) supports the version "B" DACs.

Note: The 6120XG Blade Switch supports 10GbE DACs on its SFP+ ports, and the 6120G/XG Blade Switch supports 10GbE DACs on its XFP ports.

HP ProCurve has released the version "B" DACs listed on the prior slide. HP ISS is expected to release version "B" DACs at a later time.

Example minimum software versions (ProCurve Switch: Software Version):
- 6120: Z.14.04
- 6600: K.14.32
- 2910al: W.14.28

Version "A" DACs must be used when connecting devices running older software versions or lacking support for version "B" DACs.
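Before installing a version "B" DAC, it is worth confirming the running software against the minimums above. On ProCurve switches the running software revision is reported by show version (a sketch; output format varies by platform):

    ; display the running software revision (for a 6120, expect a Z.14.xx build)
    ProCurve# show version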
6120G/XG Specifications Summary
Feature: Details
- Dimensions: 10.5 (d) x 7.5 (w) x 1.1 (h) in. (26.67 x 19.05 x 2.79 cm)
- Weight: 3.7 lb (1.68 kg)
- Processor: Freescale, 400 MHz
- Main memory: 512 MB DDR2 SDRAM, 266 MHz
- System flash: 256 MB
- Packet buffer memory: 2 MB [1]
- Throughput: up to 92.2 mpps (64-byte packets)
- Switching capacity: 124 Gbps
- Ports: 1x 10GbE CX4; 2x 10GbE XFP; 2x 1GbE SFP; 4x RJ-45 autosensing 10/100/1000 Mbps
- Management: CLI through the HP OA or switch console ports; CLI through a telnet or SSH session; web browser interface through the HP OA, direct, or PCM
- Operating temperature: 50ºF to 95ºF (10ºC to 35ºC)
- Operating relative humidity: 5% to 95%, non-condensing
- Maximum power consumption: 3.7 A (44.4 W)

This table summarizes a variety of hardware characteristics of the 6120G/XG Blade Switch, including physical, processor, memory, port connectivity, electrical, and operating environment aspects. Refer to the HP ProCurve 6120G/XG Blade Switch datasheet for additional details.

1 Used when the switch experiences queue congestion. Usage will be very shallow if links are not oversubscribed.
27
6120XG Specifications Summary
Feature | Details
Dimensions / Weight | 10.5(d) x 7.5(w) x 1.1(h) in. (26.67 x 19.05 x 2.79 cm) / 2.8 lb. (1.27 kg)
Processor | Freescale 400 MHz
Main Memory | 512 MB DDR2 SDRAM 266 MHz
System Flash | 512 MB
Packet Buffer Memory | 2 MB (1)
Throughput / Switching Capacity | Up to 357 mpps (64-byte packets) / 480 Gbps
Ports (4) | 1x 10GbE CX4 (2), 8x 10GbE SFP+ (3)
Management | CLI through HP OA or switch console ports; CLI through telnet or SSH session; Web browser interface through HP OA, direct, or PCM
Operating temperature | 50ºF to 95ºF (10ºC to 35ºC)
Operating relative humidity | 10% to 90%, non-condensing
Maximum Power Consumption | 8.75A (105W)
This table summarizes a variety of hardware characteristics of the 6120XG Blade Switch. The table includes physical, processor, memory, port connectivity, electrical, and operating environment aspects. Refer to the HP ProCurve 6120XG Blade Switch datasheet for additional details. (1) Used when the switch experiences queue congestion. Usage will be very shallow if links are not oversubscribed. (2) Can be active only if left-most SFP+ port (17) is empty. (3) Two right-most SFP+ ports (23 and 24) can operate individually as internal S2S ports instead of SFP+ ports. (4) 1GbE SFP optics (SX and LX) and Gig-T transceivers can be installed in any of the external 10GbE ports. 27
28
Blade Switch Installation
Introduction to the 6120 Blade Switch Series Blade Switch Installation Physical installation Assigning the OOBM IP address using the HP OA Views of the external and internal ports Default switch configuration Blade Switch Configuration Blade Switch Management and Troubleshooting This section describes how you perform the physical installation of a 6120 Blade Switch Series followed by the initial network configuration. The initial network configuration tasks include how you use the HP BladeSystem Onboard Administrator (OA) management application to configure an IP address for a blade switch. This section then provides an overview of the port identifier mappings for both the external and internal Ethernet ports. Lastly, this section shows the default blade switch configuration you can expect to see following its installation.
29
Physical Installation of the Blade Switch
Interconnect modules, like the 6120G/XG and 6120XG, are installed in the rear of the c3000 and c7000 enclosures Enclosure rear views c7000 example Remove interconnect bay blank c3000 The physical installation of a 6120G/XG or 6120XG Blade Switch in a c3000 or c7000 enclosure is very straightforward. To install either blade switch, you do the following: Select an unused half-height interconnect bay located at the rear of the enclosure. Remove the interconnect blank, if present. Insert the blade switch into the interconnect bay. Fasten the device bay shelf locking tab. c7000 Insert blade switch and fasten the device bay shelf locking tab
30
Installation Guidelines
6120G/XG or 6120XG failover configuration Must install two of the same model in adjacent bays Example: two 6120XGs in bays 1 and 2 Server blades with optional mezzanine cards Do not mix a 6120 with other interconnects connected to same server blade mezzanine card Applies to both Ethernet and non-Ethernet switches HP OA management IP address for blade switch Must be in same subnet as OA management module Can be assigned by OA management module, external DHCP server, or statically Here are some installation guidelines to keep in mind: To support a failover configuration for the 6120G/XG Blade Switch, you must install a second 6120G/XG Blade Switch in an adjacent bay. Similarly, to support a failover configuration for the 6120XG Blade Switch, you must install a second 6120XG Blade Switch in an adjacent bay. Note: In a c3000 enclosure, the adjacent interconnect bays are numbered 1 and 2, and 3 and 4. That is, bays 1 and 2 are adjacent, and bays 3 and 4 are adjacent. In a c7000 enclosure, the adjacent interconnect bays are numbered 1 and 2, 3 and 4, 5 and 6, and 7 and 8. Therefore, to implement a failover configuration, you could choose to install two 6120G/XG (or two 6120XG) blade switches in interconnect bays 1 and 2, for instance. Similarly, you could choose to use interconnect bays 3 and 4. A server blade may have one or more optional mezzanine cards installed. For such environments, you cannot mix a 6120 Blade Switch with other Ethernet or non-Ethernet interconnect modules if they would otherwise have connectivity to the same server blade mezzanine card: Do not install a 6120G/XG or 6120XG Blade Switch with another type of Ethernet Blade Switch in interconnect bays that provide connectivity to the same server blade mezzanine card. If this is done, loss of connectivity will occur. Do not install a 6120G/XG or 6120XG Blade Switch with any non-Ethernet switches (e.g., a Virtual Connect Fibre Channel switch) in interconnect bays that provide connectivity to the same server blade mezzanine card. Such a configuration generates an enclosure electronic keying error, indicating an incompatible configuration. Each 6120G/XG and 6120XG Blade Switch installed in an enclosure requires a unique HP OA management IP address. This IP address must be in the same subnet as the IP address used for the HP OA management module. It is recommended that you use the HP OA EBIPA (Enclosure Bay IP Addressing) feature to assign an IP address to the blade switch. Alternatively, you can use an external DHCP server reachable through the iLO connection of the HP OA management module, or you can statically assign an IP address to the blade switch by specifying it in the configuration file of the blade switch. Note: The HP OA subnet concept and how you assign an IP address to the blade switch are explained later in this section.
31
Installation Guidelines
About Server Blade Mezzanine Cards Each HP server blade comes with some number of built-in Ethernet ports or NICs, typically two or four 1GbE ports. Optional mezzanine cards can be installed on a server blade to provide additional connectivity options and throughput capacity for access to the external network. There are various mezzanine cards including those that support 1GbE, 10GbE, and Fibre Channel connectivity. For instance, a given mezzanine card may provide four additional 1GbE ports, or two 10GbE ports, and so forth. Note that the external network access is indirect since the server blade must connect through an interconnect module. In other words, a mezzanine card, like the server blade’s built-in connections, does not actually have any external, physical I/O port connectors. A server blade may have up to three mezzanine slots and therefore could support up to three mezzanine cards depending on the particular type of mezzanine cards used. Mezzanine cards are categorized as Type I or Type II. This affects the mezzanine slot on the server blade in which it can be installed and the I/O port mapping that results on a given interconnect module. Mezzanine cards are PCI-Express cards that attach to the inside of the server blade using a designated connector. Relationship to Interconnect Bays The interconnect bays in the back of each c3000 and c7000 enclosure correspond to specific interfaces on the server blades. As a result, all I/O devices that correspond to a specific interconnect bay must be of the same type. Connections between the server blades and the interconnect bays are hard wired. Each of the interconnect bays (4 for the c3000 and 8 for the c7000) in the back of the enclosure has a connection to each of the server bays (8 for the c3000 and 16 for the c7000) in the front of the enclosure. The built-in NIC or mezzanine card into which the interconnect module connects depends on which interconnect bay it is plugged into. Furthermore, because full-height server blades consume two server bays, they have twice as many connections to each of the interconnect bays. For additional information about these concepts, see the HP BladeSystem Onboard Administrator User Guide. The user guide provides illustrations of interconnect bay port mappings for half- and full-height server blades. 6120G/XG or 6120XG failover configuration Must install two of the same model in adjacent bays Example: two 6120XGs in bays 1 and 2 Server blades with optional mezzanine cards Do not mix a 6120 with other interconnects connected to same server blade mezzanine card Applies to both Ethernet and non-Ethernet switches HP OA management IP address for blade switch Must be in same subnet as OA management module Can be assigned by OA management module, external DHCP server, or statically
32
Post Installation: Viewing Blade Switch Status in the HP Onboard Administrator
Logged in using default Administrator account Recommended OA firmware is v2.50 or newer OA management module After the blade switch has been physically installed in the enclosure, you can easily verify its status using the OA management application. The OA is a web browser–based interface for configuring, managing, and monitoring a c3000 or c7000 enclosure along with the server, storage, and interconnect modules. Note: For the purposes of this presentation, it is assumed that the enclosure has been previously operational and that you are familiar with the basic features and functions of the OA application. Accessing the OA To access the OA application, you open a web browser and specify the IP address (or DNS name) that has been assigned to the OA management module. Then, you are prompted for the login credentials of an OA user. To perform configuration changes, you must log in with a user that has the administrative role. In this example, the default user named Administrator has been used to log into the application. Note: The IP address and other settings of the OA management module are configured on the TCP/IP Settings window (not shown). You can access that window by clicking Enclosure Information > Enclosure Settings > TCP/IP Settings in the navigation pane. Note: The recommended firmware version for the OA management module is v2.50 or newer. Among a number of feature benefits is that this version allows ProCurve Manager to recognize the 6120G/XG or 6120XG Blade Switch. 6120G/XG status: Status is OK Locator LED is Off Power is On Management URL is currently blank Two server blades are installed in bays 1 and 2 in front of enclosure iLO port 6120G/XG is installed in interconnect bay 1 in rear of enclosure
33
Post Installation: Viewing Blade Switch Status in the HP Onboard Administrator
Viewing the Status of a Blade Switch To check the status of a blade switch that has been physically installed in the enclosure, you can click Enclosure Information > Interconnect Bays in the navigation pane. In the right pane, you should see a graphical representation of the front and back of the enclosure. In this example, a 6120G/XG Blade Switch has been installed in interconnect bay 1. The Interconnect List section of the window should indicate the following information: Status is OK. Locator LED is Off. Power is On. Management (web browser) URL is currently blank. This is because the blade switch has not yet been assigned an IP address within the OA subnet. After an IP address has been assigned, this field will be updated with a hyperlink that can be used to launch the web browser interface of the blade switch. Note: Except for the textual identifier used to differentiate the 6120G/XG and 6120XG Blade Switches, the procedures described above are the same for each blade switch model. Logged in using default Administrator account Recommended OA firmware is v2.50 or newer OA management module 6120G/XG status: Status is OK Locator LED is Off Power is On Management URL is currently blank Two server blades are installed in bays 1 and 2 in front of enclosure iLO port 6120G/XG is installed in interconnect bay 1 in rear of enclosure
34
Post Installation: Viewing Status and Diagnostic Details
Status and diagnostic information appear OK Management IP address needs to be assigned Must be within HP OA subnet This slide shows several other windows that you can use to view the status of a 6120 Blade Switch Series after it has been physically installed in the enclosure. These windows can be viewed by first clicking the highlighted link in the navigation pane. The windows that you can display by clicking the Status and Information tabs are shown above. On the Status tab, you should expect to see all status and diagnostic information appear as OK. On the Information tab, you can expect to see that the Management IP address is unknown since it has not yet been assigned. The management IP address for the blade switch must be within the OA subnet. Although the blade switch shown in this example has not yet been assigned a management IP address, you would be able to access the blade switch CLI indirectly through the OA module’s DB-9 console port (and then entering the connect command) or directly through the blade switch’s USB console port. The procedures for these activities will be described later in this section. Note: The details about the OA subnet and assigning an IP address to a 6120G/XG or 6120XG Blade Switch are presented later in this section. Ethernet path will be viable after IP address is assigned Switch CLI is accessible from HP OA console port
35
Post Installation: Viewing the Internal Ports
Listing of the 16x 1GbE internal ports of a 6120G/XG NIC & HBA MACs Here is another status window that shows information about the port mappings on a blade switch, specifically the internal ports. Note: In this example, a 6120G/XG Blade Switch is illustrated. A similar port listing would be shown if you were viewing this window with a 6120XG Blade Switch installed. As was mentioned previously, the 6120G/XG has 16 internal 1GbE ports that are accessed through the enclosure’s midplane. These internal ports are used by the server and storage blades to communicate through the 6120G/XG when accessing the enterprise network. The external ports of the 6120G/XG (such as the 10GbE CX4, 10GbE XFP, and 1GbE RJ-45 ports) provide the connectivity to the enterprise network. In this example, there are two server blades that were previously installed in the enclosure. Based on the bay locations used, the server blades are automatically assigned particular internal ports. This assignment process is static and predetermined by the enclosure. In this example, the two server blades are installed in bay 1 and bay 2 on the front of the enclosure. The server blade in bay 1 is assigned internal ports 1 and 9, whereas the server blade in bay 2 is assigned internal ports 2 and 10. Each server blade is automatically provided with network access for both the server blade’s Ethernet NIC and iSCSI storage HBA. Note: The 10GbE internal switch-to-switch (cross-link) port that can be used to communicate with an adjacent blade switch of the same model, if installed, is not included in this port listing. Two server blades installed, one in bay 1 and one in bay 2 Similar listing appears for the 16x 10GbE ports of 6120XG NIC & HBA MACs First server uses internal ports 1 and 9, second server uses ports 2 and 10
36
Next Step: Determining an IP Address in the OA Subnet
A blade switch requires an IP address in the OA subnet for management access from enclosure’s OA interface E.g., to launch switch’s web interface IP address is separate from IT group’s management VLAN Blade switch’s OOBM component corresponds to a host in the OA subnet Operates in DHCP client mode IP address to be assigned is configured on the OA’s Enclosure Bay IP Addressing window Enterprise network TOR switch T: VID 5 VLAN 5 IP: /24 Each 6120G/XG or 6120XG Blade Switch requires an IP address within the OA subnet to support management access from the OA management module. For instance, the OA includes a hyperlink so that you can launch the web browser interface to the blade switch. This interface is referred to as the Management Console on various OA windows. The OA subnet is essentially a VLAN containing various “hosts” or “nodes” within a c3000 or c7000 enclosure. The OA management module is itself one of the hosts. Each blade switch that is installed in the enclosure is also a host. The IP address within the OA subnet that you assign to the blade switch is used by a switch component known as Out-Of-Band Management (OOBM). That is, access to the blade switch through this IP address is considered out-of-band compared to any other VLAN IP addresses that may eventually be configured on the blade switch. These latter VLAN IP addresses can be thought of as being used for in-band purposes. For example, the IT group that manages all switches and routers in the enterprise network will typically assign each network device an IP address in a management VLAN (e.g., VLAN 1) that is used for accessing the device for various purposes (e.g., configuring the switch, backing up a configuration file, and upgrading switch software). Note: The IP address within the OA subnet that you assign to the blade switch is different from any management VLAN IP address. The management VLAN IP address is one that is typically reachable from any point in the enterprise network. The OA subnet IP address may or may not be reachable from any point in the enterprise network, depending on whether a VLAN is configured to provide access from an external switch. To assign an IP address to the blade switch’s OOBM component, you specify the IP address using the OA web interface. This task is illustrated on the next slide. Once this IP address is defined, the OOBM component can acquire the IP address through DHCP. Essentially, the OA management module operates as a DHCP server and the blade switch OOBM component operates as a DHCP client. Communication between the OA management module and the blade switch occurs over the enclosure’s midplane. Note: You can assign an IP address to the blade switch using any one of the following methods: Configure the OA to set enclosure bay IP addresses for its blades. This method is illustrated on the next slide. Implement an external DHCP server on the management network connected to the OA. Manually assign a static IP address using the blade switch CLI oobm command. UT: VID 5 OA1 iLO port OA IP: /24 OA Gtwy: Blade switch Interconnect bay 1 IP: x/24 C3000 rear view
37
Assigning an IP Address: Using the HP OA EBIPA Feature
Click Interconnect Bays tab To assign an IP address to the blade switch, you first need to configure the IP address using the OA Enclosure Bay IP Addressing window. To do this, in the navigation pane, click Enclosure Information > Enclosure Settings > Enclosure Bay IP Addressing. Then click the Interconnect Bays tab. You then specify the following information: The IP address to be assigned to the blade switch. In this example, the IP address has been specified. Note: If you do not specify an IP address in the EBIPA Address column, then the OOBM component of the 6120 Blade Switch Series must either be assigned an IP address by an external DHCP server or you must configure a static IP address using the blade switch CLI. Notice that the fields in the Shared Interconnect Settings section of the window are system-wide values. That is, they apply to all devices installed in the enclosure. This section includes the following information: The subnet mask that will apply to all IP addresses. In this example, a 24-bit subnet mask is being used. The IP address of the default gateway. The default gateway should be an upstream switch/router that may be directly connected to the enclosure’s iLO port or located further upstream beyond intermediate switches. In this example, is the default gateway’s IP address. Note: The OA subnet described here is typically meaningful only from within the OA. That is, you will not see this IP address listed if you use the switch CLI command show ip or access the equivalent window in the blade switch’s web browser interface. However, there is a new CLI command, applicable only to the 6120G/XG and 6120XG Blade Switches, that allows you to see the IP address assigned to the OOBM component. This CLI command is show oobm ip and will be explained later in this section. Default gateway is upstream switch IP address to assign to blade switch
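Note: The EBIPA settings can also be managed from the OA CLI, which is convenient when scripting enclosure setup. The following sketch is based on the EBIPA commands documented in the HP BladeSystem Onboard Administrator Command Line Interface User Guide; verify the exact syntax against your OA firmware version, and note that the addresses and bay number used here are hypothetical:

OA-001E0B6C4089> set ebipa interconnect 10.10.10.30 255.255.255.0 1
OA-001E0B6C4089> set ebipa interconnect gateway 10.10.10.1 1
OA-001E0B6C4089> enable ebipa interconnect 1
OA-001E0B6C4089> save ebipa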
38
Verifying the IP Address is Assigned
Reset to trigger DHCP process immediately After specifying the IP address you want assigned to the blade switch using the OA Enclosure Bay IP Addressing window, you can trigger the DHCP process. This can be done by using the virtual Reset button (or the Power Off button followed by the Power On button). Otherwise, you will need to wait for the process to occur automatically, which may take several minutes. This slide also shows some of the updates that occur within the OA windows after the blade switch has successfully acquired a management IP address in the OA subnet. At the bottom of the slide, an OA console port session is shown where a ping of the blade switch is indicated as being successful. Although it cannot be seen from this simple ping activity, the OA management module and the blade switch are both hosts within the OA subnet. In this example, the OA management module is assigned the IP address /24 and the blade switch is assigned the IP address /24. Some of the basic OA updates that then occur

OA-001E0B6C4089> ping
PING ( ): 56 data bytes
64 bytes from : icmp_seq=0 ttl=255 time=2.8 ms
64 bytes from : icmp_seq=1 ttl=255 time=0.9 ms
64 bytes from : icmp_seq=2 ttl=255 time=0.9 ms
64 bytes from : icmp_seq=3 ttl=255 time=0.9 ms
--- ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.9/1.3/2.8 ms

Using the OA console interface, the blade switch is now reachable from the OA management module
39
Accessing the Switch CLI: Using the OA Console Port
Telnet and SSH are other access methods that may be used Use connect command from HP OA CLI prompt

OA-001E0B6C4089> connect interconnect 1

NOTICE: This pass-thru connection to the integrated I/O console is provided for convenience and does not supply additional access control. For security reasons, use the password features of the integrated switch.
Connecting to integrated switch 1 at ,N81...
Escape character is '<Ctrl>_' (Control + Shift + Underscore)
Press [Enter] to display the switch console:
Connected at baud
ProCurve 6120G/XG Blade Switch
Software revision Z.14.XX
Copyright (C) Hewlett-Packard Co. All Rights Reserved.
RESTRICTED RIGHTS LEGEND

Console port DB9 Next, let us take a look at how you actually access the switch CLI through the OA management module (shown above) and directly from the blade switch (shown on the next slide). The OA management module includes a DB-9 console port to which you can attach a computer (PC, Linux, Unix, etc.) running a terminal emulator. For example, you could use the Windows HyperTerminal program or any other program such as Tera Term Pro. Similar to how you connect to the console port of any ProCurve switch, you configure the terminal emulator program to function as a VT-100 with a baud rate of 9600, 8 data bits, no parity, 1 stop bit, and no flow control. Note: The graphic above shows an example of the OA management module supported by the c3000 enclosure. The c7000 enclosure supports two OA management modules—one is the primary and the other provides redundancy. The CLI interface is the same in each case. When you establish a console session with the OA, you are prompted for login credentials. The same username and password specified when accessing the OA web-browser interface can be used here. After you log in successfully, you can then proceed to access the blade switch CLI. To do this you use the connect interconnect <bay-number> command. In this example, the interconnect bay number used is 1. Notice the following information in the text displayed after entering this command: You press Enter to display the blade switch CLI prompt. You type <Control> + <Shift> + <Underscore> to exit the switch CLI and return to the OA CLI. The default CLI prompt then appears as ProCurve 6120G/XG Blade Switch# for the 6120G/XG Blade Switch or ProCurve 6120XG Blade Switch# for the 6120XG Blade Switch. At this point, just like with any other ProCurve external switch, you can use the “?” or <Tab> character to display a list of commands, type configure to enter the global configuration mode, and so forth. Note: You can also access the OA CLI using a telnet or SSH session from any computer that has network access to the OA subnet. By default, the telnet and SSH access methods are enabled, but either may be disabled individually. To exit switch CLI and return to OA CLI

... (remainder of banner page)
Press any key to continue
ProCurve 6120G/XG Blade Switch#

Default switch CLI prompt
40
Accessing the Switch CLI: Installing the ProCurve USB Console Software
ProCurve USB Console software must be installed on your PC before you can connect Download the USB console software Attach the USB cable to the PC and switch to trigger the installation You can also access the blade switch CLI directly by connecting a computer to the console port of the blade switch. To be able to do that, the ProCurve USB Console software must be installed on your PC before you actually attempt to connect through your terminal emulator program. Here are the basic steps you will need to complete: First, download the USB console software from the ProCurve website and store the software on the computer from which you will be running a terminal emulator program. Note: At the time this training was prepared, the specific ProCurve URL where the software can be found was not available. After the product is officially released, you should be able to locate the USB console software by navigating to the ProCurve web pages that provide access to manuals and switch software for the 6120G/XG and 6120XG Blade Switches. Second, attach the supplied USB cable to the computer and switch to trigger the installation. This assumes both devices are powered on. Each blade switch comes with the necessary USB cable. The USB cable has a standard Type A connector at one end and a mini Type A connector at the other end. This USB cable can also be found at electronics stores that sell digital cameras. The same type of USB cable used to attach a digital camera to a computer is used for connectivity to the console port of the blade switch. Third, use the Found New Hardware Wizard to install the software. After you proceed through several screens of the Windows hardware wizard, the HP ProCurve USB Console Setup wizard will then be automatically launched for you. Simply follow the prompts to complete this second wizard.
41
Accessing the Switch CLI: Using the Switch Console Port
6120G/XG Console port USB mini Type A Telnet and SSH are other access methods that may be used Type A Mini Type A With the ProCurve USB console software installed on your computer, you can then start your terminal emulator program. In most situations, you will need to set the serial port to COM4 in the terminal emulator to connect successfully. Similar to how you connect to the console port of any ProCurve switch, you configure the terminal emulator program to function as a VT-100 with a baud rate of 9600, 8 data bits, no parity, 1 stop bit, and no flow control. These are also the same settings you use to access the console port of the OA management module. The default CLI prompt appears as ProCurve 6120G/XG Blade Switch# for the 6120G/XG Blade Switch or ProCurve 6120XG Blade Switch# for the 6120XG Blade Switch. Note: You can also access the switch CLI using a telnet or SSH session from any computer that has network access to the OA subnet. By default, the telnet access method is enabled, but may be disabled. The SSH access method is disabled by default, but can be enabled and configured independently of telnet.

Connected
ProCurve 6120G/XG Blade Switch
Software revision Z.14.XX
Copyright (C) Hewlett-Packard Co. All Rights Reserved.
RESTRICTED RIGHTS LEGEND
... (remainder of banner page)
Press any key to continue
ProCurve 6120G/XG Blade Switch#

Default CLI prompt
42
Viewing the Switch Configuration: Default Settings
ProCurve 6120G/XG Blade Switch# show running-config

hostname "ProCurve 6120G/XG Blade Switch"
interface I1
   disable
   exit
vlan 1
   name "DEFAULT_VLAN"
   untagged D1-D16,1-4,S1-S2,X1-X2,C1,I1
   ip address dhcp-bootp
   exit
snmp-server community "public" Unrestricted
oobm

Used as default CLI prompt Internal switch-to-switch port disabled by default All ports are untagged in VLAN 1 IP address of VLAN 1, and optionally any user VLANs, are typically statically assigned Now that you know how to access the blade switch CLI, we can take a closer look at the default configuration and some of the status information using several CLI commands. The first CLI command lists the default configuration that you can expect to see on the 6120G/XG Blade Switch. In this example, the listing for a 6120G/XG Blade Switch includes the following port information: The internal ports are labeled D1 through D16, and I1. D1 through D16 correspond to the 16 1GbE ports that provide connectivity to the server and storage blades. Port I1 corresponds to the 10GbE port that provides switch-to-switch connectivity. For this port to be used, two blade switches of the same model must be installed in adjacent interconnect bays such as bays 1 and 2, or 3 and 4, and so forth. The external ports are labeled 1 through 4, S1, S2, X1, X2, and C1. Port IDs 1 through 4 are the RJ-45 10/100/1000 ports, port IDs S1 and S2 are the 1GbE SFP ports, port IDs X1 and X2 are the 10GbE XFP ports, and port ID C1 is the CX4 port. Note: For a 6120XG Blade Switch, the default configuration will be similar. The primary difference is the port identifiers. At this point, no additional configuration has been performed such as assigning an IP address to the default VLAN, VLAN 1, configuring manager and operator login credentials, defining any user VLANs, and so forth. It is recommended that you leave the OOBM configuration at the default settings. This will allow the blade switch to acquire an OA management IP address based on the EBIPA settings for the interconnect bays configured in the OA. An example of how that is done was illustrated several slides previous to this one. Notice that the show ip CLI command does not list any information related to the OOBM IP address that the switch has acquired in the OA subnet or the OA default gateway IP address. On the next slide, you will see how the OOBM status and IP addressing information can be viewed. Leave at default to get IP address from OA

ProCurve 6120G/XG Blade Switch# show ip

 Internet (IP) Service
  Default Gateway :
  Default TTL     : 64
  Arp Age         : 20
  Domain Suffix   :
  DNS server      :
  VLAN         | IP Config  IP Address  Subnet Mask  Proxy ARP
  DEFAULT_VLAN | Disabled

The OOBM IP address in the OA subnet and the OA default gateway IP address are not listed here
43
Viewing the Switch OOBM Settings
ProCurve 6120G/XG Blade Switch# show oobm

 Global Configuration
  OOBM Enabled          : Yes
  OOBM Port Type        : 10/100TX
  OOBM Interface Status : Up
  OOBM Port             : Enabled
  OOBM Port Speed       : Auto

Current status of OOBM

ProCurve 6120G/XG Blade Switch# show oobm ip

 Internet (IP) Service for OOBM Interface
  IPv4 Status        : Enabled
  IPv6 Status        : Disabled
  IP Default Gateway :
  Address Origin | IP Address/Prefix Length | Status
  dhcp           | /                        | preferred

Switch has acquired IP address in OA subnet This slide lists the output of several show oobm CLI commands. The show oobm command allows you to view the overall status of the blade switch’s OOBM component. The show oobm ip command allows you to view the IP address assigned to the blade switch’s OOBM component. In this example, the IP address has been assigned using DHCP. Specifically, the IP address was acquired from the DHCP server that runs on the OA management module in the enclosure. The show oobm arp command allows you to view the ARP table of the switch’s OOBM component. In this example, the ARP entry corresponds to a port of an upstream switch that has actually been configured as a member of the OA subnet. That upstream switch has a VLAN configured with the IP address of and has a port connected to the iLO Ethernet port of the enclosure.

ProCurve 6120G/XG Blade Switch# show oobm arp

 OOBM IP ARP table
  IP Address | MAC Address | Type    | Port
             | a4-ae       | Dynamic | oobm

ARP entry for port of an upstream switch that has VLAN defined for OA subnet You can use show tech oobm to see all information with one command
44
Viewing the Switch Ports: 6120G/XG CLI Example
ProCurve 6120G/XG Blade Switch# show interfaces brief

 Status and Counters - Port Status
                      | Intrusion
 Port   Type          | Alert      Enabled  Status  Mode
 D1     X             | No         Yes      Up      FDx
 D2     X             | No         Yes      Up      FDx
 D3     X             | No         No       Down    FDx
 D4     X             | No         No       Down    FDx
 D5     X             | No         No       Down    FDx
 D6     X             | No         No       Down    FDx
 D7     X             | No         No       Down    FDx
 D8     X             | No         No       Down    FDx
 D9     X             | No         Yes      Up      FDx
 D10    X             | No         Yes      Up      FDx
 D11    X             | No         No       Down    FDx
 D12    X             | No         No       Down    FDx
 D13    X             | No         No       Down    FDx
 D14    X             | No         No       Down    FDx
 D15    X             | No         No       Down    FDx
 D16    X             | No         No       Down    FDx
 1      /1000T        | No         Yes      Up      FDx
 2      /1000T        | No         Yes      Down    FDx
 3      /1000T        | No         Yes      Down    FDx
 4      /1000T        | No         Yes      Down    FDx
 S1     SX            | No         Yes      Down    FDx
 S2     SX            | No         Yes      Down    FDx
 X1                   | No         Yes      Down
 X2                   | No         Yes      Down
 C1     10GbE-CX4     | No         Yes      Down    10GigFD
 I1     10GbE-CX4     | No         No       Down    10GigFD

Internal ports: 16x 1GbE ports through enclosure mid-plane The show interfaces brief CLI command allows you to view an abbreviated listing of all ports of the switch. In the case of a 6120G/XG or 6120XG Blade Switch, the listing includes the external and internal ports. In this example, the listing is for a 6120G/XG Blade Switch. For a 6120XG Blade Switch, the listing is similar except that the port identifiers are different. For this example, the 6120G/XG Blade Switch was installed in interconnect bay 1 of a c3000 enclosure with two server blades installed in device bays 1 and 2. Each server blade had a Quad-port 1GbE mezzanine card installed. As a result, ports D1 and D9 are allocated to the server in bay 1, and ports D2 and D10 are allocated to the server in bay 2. As a reminder, the port identifiers correspond to the following: The internal ports are labeled D1 through D16, and I1. D1 through D16 correspond to the 16 1GbE ports that provide connectivity to the server and storage blades. Port I1 corresponds to the 10GbE port that provides switch-to-switch (cross-link) connectivity. The external ports are labeled 1 through 4, S1, S2, X1, X2, and C1. Port IDs 1 through 4 are the RJ-45 10/100/1000 ports, port IDs S1 and S2 are the 1GbE SFP ports, port IDs X1 and X2 are the 10GbE XFP ports, and port ID C1 is the CX4 port. Note: Not all columns of the output from this CLI command are actually shown due to space considerations. External ports: 4x 10/100/1000 2x 1GbE SFP 2x 10GbE XFP 1x 10GbE CX4 Internal port: 1x 10Gb switch-to-switch (cross-link)
45
Viewing the Switch Ports: 6120XG CLI Example
ProCurve 6120XG Blade Switch# show interfaces brief

 Status and Counters - Port Status
                      | Intrusion
 Port   Type          | Alert      Enabled  Status  Mode
 1      10GbE-K       | No         Yes      Up      FDx
 2      10GbE-K       | No         Yes      Up      FDx
 3      10GbE-K       | No         No       Down    10GigFD
 4      10GbE-K       | No         No       Down    10GigFD
 5      10GbE-K       | No         No       Down    10GigFD
 6      10GbE-K       | No         No       Down    10GigFD
 7      10GbE-K       | No         No       Down    10GigFD
 8      10GbE-K       | No         No       Down    10GigFD
 9      10GbE-K       | No         Yes      Up      FDx
 10     10GbE-K       | No         Yes      Up      FDx
 11     10GbE-K       | No         No       Down    10GigFD
 12     10GbE-K       | No         No       Down    10GigFD
 13     10GbE-K       | No         No       Down    10GigFD
 14     10GbE-K       | No         No       Down    10GigFD
 15     10GbE-K       | No         No       Down    10GigFD
 16     10GbE-K       | No         No       Down    10GigFD
 17     10GbE-CX4     | No         Yes      Down    10GigFD
 18                   | No         Yes      Down
 19                   | No         Yes      Down
 20                   | No         Yes      Down
 21                   | No         Yes      Down
 22                   | No         Yes      Down
 23     10GbE-K       | No         No       Down    10GigFD
 24     10GbE-K       | No         No       Down    10GigFD

Internal ports: 16x 10GbE ports through enclosure mid-plane Here is the output of the show interfaces brief CLI command on a 6120XG Blade Switch. For this example, the 6120XG Blade Switch was installed in interconnect bay 3 of a c3000 enclosure with two server blades installed in device bays 1 and 2. Each server blade had a Dual-port 10GbE mezzanine card installed. As a result, ports 1 and 9 are allocated to the server in bay 1, and ports 2 and 10 are allocated to the server in bay 2. As a reminder, the port identifiers correspond to the following: The internal ports are labeled 1 through 16. Ports 1 through 16 correspond to the 16 10GbE ports that provide connectivity to the server and storage blades. The external ports are labeled 17 through 24. Port ID 17 can be used as a 10GbE CX4, 10GbE SFP+, or a 1GbE SFP port. Port IDs 18 through 22 can be used as 10GbE SFP+ ports or 1GbE SFP ports. Port ID 23 can be used as a 10GbE SFP+, 1GbE SFP, or a 10GbE switch-to-switch (cross-link) port. Port ID 24 can be used as a 10GbE SFP+, 1GbE SFP, or a 10GbE switch-to-switch (cross-link) port. Note: 10GbE-K refers to an Ethernet specification (802.3ap) used for backplane applications such as blade servers, routers, and switches with upgradable line cards. 802.3ap implementations are applicable to operating environments involving a distance up to 1 meter using copper on circuit boards with two connectors. Note: Not all columns of the output from this CLI command are actually shown due to space considerations. External ports: 1x 10GbE CX4, GbE SFP+, or 1GbE SFP 5x 10GbE SFP+ or 1GbE SFP 2x 10GbE SFP+, 1GbE SFP, or 10GbE switch-to-switch (cross-link)
46
Viewing the Switch Ports: 6120G/XG Web Browser Interface Example
Web browser interface can be accessed: From the OA Directly using a web browser HTTP is enabled by default, SSL can be enabled This slide shows the web browser interface, specifically the Device View tab, for a 6120G/XG Blade Switch. In this example, two external ports (port IDs 1 and 2) are active and function as uplinks to an upstream switch. In addition, four internal 1GbE ports providing connectivity to server blades are active. The web browser interface displays a logical port connector for the OOBM interface, which is active in this case, and one logical port connector for the internal switch-to-switch (cross-link) connection, which is inactive in this case. The switch-to-switch port is labeled as the ISL (inter-switch link) port. The web browser interface of the 6120 Blade Switch Series provides an easy-to-use interface for monitoring and configuring the devices. You can access the web browser interface in several ways: From within the OA by clicking the Management Console link in the navigation pane. The Management Console link corresponds to the OA management IP address configured for the blade switch. In this example, it is /24. By clicking on this link, another web browser tab is opened, which displays the home page for the blade switch. There are also several other locations in the OA where you can click on the OA management IP address for the blade switch. Directly from a web browser by specifying the IP address or DNS name of the blade switch as the URL. For the URL, you simply specify http://<ip-address>, where <ip-address> is the OA management IP address (OOBM) assigned to the blade switch. If a DNS address record exists for this IP address, then of course you can specify http://<dns-name>, where <dns-name> is the FQDN of the blade switch. If you use ProCurve Manager for management of your network, then you can also launch a web browser from within ProCurve Manager. HTTP is enabled by default on the blade switch as well as most ProCurve switches. You can optionally choose to enable SSL access. If you enable SSL access, then typically you would disable HTTP. Note: At this point, it is assumed that only the OA management IP address has been configured. That is, no other VLANs with associated IP addresses have been configured on the blade switch. Therefore, the only IP address that is available for accessing the blade switch is the OA management IP address. After an IP address is configured for VLAN 1, or any other VLAN, you can alternatively specify one of those IP addresses. External ports: 4x 10/100/1000 2x 1GbE SFP 2x 10GbE XFP 1x 10GbE CX4 Internal ports: 16x 1GbE ports through enclosure mid-plane OOBM port Switch-to-switch port (I1)
47
Viewing the Switch Ports: 6120XG Web Browser Interface Example
External ports: 1x 10GbE CX4, 10GbE SFP+, or 1GbE SFP 5x 10GbE SFP+ or 1GbE SFP 2x 10GbE SFP+, 1GbE SFP, or 10GbE switch-to-switch This slide shows the web browser interface, specifically the Device View tab, for a 6120XG Blade Switch. In addition to the obvious differences in external port connectors between the two blade switches, the 6120XG shows two internal switch-to-switch ports that can be used. Individually, ports 23 and 24 can be used as external SFP+ ports or internal switch-to-switch ports. Internal ports: 16x 10GbE ports through enclosure mid-plane OOBM port Switch-to-switch ports
48
Blade Switch Configuration
Introduction to the 6120 Blade Switch Series Blade Switch Installation Blade Switch Configuration Examples of common switch setup tasks Defining VLANs Assigning uplinks to VLANs Blade Switch Management and Troubleshooting This section describes how you can use the switch CLI or web interface to perform the typical follow-on network configuration that includes configuring operator and manager privilege levels, defining VLANs, and assigning uplinks to VLANs. This section also examines several typical management tasks performed for ProCurve switches and highlights any differences for the 6120 Blade Switch Series.
49
Configuring the Blade Switch: Preparing for Network Connectivity
Use the web browser interface: Specify OOBM IP address for URL Or, the blade switch CLI OA console port of c3000 (front) or c7000 (rear) enclosure Console port of blade switch Telnet to switch To get started, you typically: Change privilege level passwords Assign an IP address to a management VLAN Define the IP address of the default gateway After a 6120 Blade Switch Series has been successfully installed, the configuration tasks you would typically proceed with are those that would be commonly performed on any ProCurve switch. For example, the first step should be to define the user credentials for the manager and operator privilege levels. Next, you would likely assign an IP address to the VLAN that is used as the management VLAN by the IT group. For example, this might be VLAN 1. Another configuration task would be to define the default gateway the blade switch will use. To perform the configuration tasks, you have a number of choices available at this point. You can access the web browser interface from the HP OA or by directly starting a web browser. The IP address you would use is the OA management (OOBM) IP address assigned to the blade switch. Keep in mind that, at this point, no other IP address has been assigned to the blade switch. The configuration tasks can also be performed from the switch CLI. As mentioned previously, you can access the switch CLI from the OA CLI (using the connect command), by connecting to the blade switch console port, or using telnet. DB9 USB 6120G/XG HP OA (c3000)
50
Getting Started Enterprise network Some of the current configuration settings of the upstream switch VLAN 1 IP: /24 UT: VID 1 upstream switch T: VID 5 VLAN 5 IP: /24 UT: VID 5

ProCurve 6120G/XG Blade Switch# configure
ProCurve 6120G/XG Blade Switch# hostname PCU_
PCU_ (config)# password manager user-name manager plaintext ProCurve%1
PCU_ (config)# password operator user-name operator plaintext ProCurve$2
PCU_ (config)# vlan 1 ip address /24
PCU_ (config)# ip default-gateway
PCU_ (config)# snmp-server community public manager unrestricted

OA1 iLO This graphic shows a network layout where a c-Class enclosure (c3000 in this example) has a 6120 Blade Switch Series installed (6120G/XG in this example) and is connected to an external upstream switch (ProCurve G in this example) over the iLO port. The existing configuration settings of the OA, blade switch, and the upstream switch are also listed. The next follow-on configuration steps would be to configure several initial switch settings. In this example, the following settings have been configured: A hostname, which is also used as the CLI prompt. A username and password have been assigned to each of the manager and operator privilege levels. An IP address has been assigned to the default VLAN, VLAN 1. This VLAN represents the IT group’s management VLAN as opposed to the OA subnet. The IP address of the default gateway. In this example, the default gateway is the upstream switch, a ProCurve G switch. Access to the SNMP community named public is unrestricted for the manager privilege level (but restricted for the operator privilege level). There are many other initial configuration settings that an IT group would likely apply, but are not shown. For example, the network security policy may include doing the following: Enabling SSL management access and disabling HTTP access. Enabling SSH management access and disabling telnet access. This would also require configuring a public/private key pair that the switch would use to authenticate itself to SSH clients. As an optional step, the IT group may in addition install SSH client public keys on the switch. OA IP: /24 OA Gtwy: Blade switch OA IP: /24 VLAN 1 IP: /24 Gtwy IP: Default VLAN, VLAN 1, is assigned an IP address for management access Upstream switch is the default gateway Current OA settings
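As a sketch of the SSH-related hardening steps mentioned above, the following uses the standard ProCurve CLI commands for generating the switch's host key pair, enabling the SSH server, and disabling telnet; confirm against the 6120 management and configuration guide before applying them in production:

PCU_ (config)# crypto key generate ssh
PCU_ (config)# ip ssh
PCU_ (config)# no telnet-server

The host key must exist before ip ssh will take effect, which is why the crypto key generate ssh command comes first.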
51
Configuring the Uplinks
Enterprise network

PCU_ (config)# trunk 1-2 trk1 lacp

upstream switch Uplink will consist of an LACP trunk with two ports Internal switch & server links The next major configuration task is to implement one or more uplinks between the blade switch and the external upstream switch. For this example, the uplink will consist of two ports aggregated as a trunk using the industry standard Link Aggregation Control Protocol (LACP). Implementing a trunk with two or more links provides basic redundancy so that in the event one port or cable should fail, the other will continue to provide connectivity to the network. Other user VLANs will also typically be defined. For example, the various server blades in the enclosure may all belong to one common user VLAN or there may be the need to define several user VLANs based on how access to these servers is implemented in the enterprise network. In this example, note the following aspects of this configuration: A new user VLAN, VLAN 2, has been added and a descriptive name assigned. Assuming the blade switch will be providing strictly Layer 2 switching service, an IP address need not be assigned to the user VLAN. Two external ports, in this example, ports 1 and 2 corresponding to the 10/100/1000 Base-T RJ-45 ports of a 6120G/XG Blade switch, have been configured as an aggregated link group (trunk) that uses the industry standard LACP protocol. Note: Depending on the throughput requirements, the trunk may use 1GbE ports or 10GbE ports. Therefore, either the 6120G/XG or 6120XG Blade Switch may be used. Depending on the level of link redundancy required, a trunk could be implemented that consists of two, three, or more ports. The LACP trunk has been assigned as a tagged member of VLAN 2. This allows the LACP trunk to carry untagged traffic for VLAN 1, and at the same time, tagged traffic for VLAN 2. As other VLANs are added to the switch, to allow the corresponding traffic of the new VLANs to be transported over this trunk, the trunk would be added as a tagged member of each of those additional VLANs. Note: The LACP trunk is configured in an equivalent manner on the upstream switch.

PCU_ (config)# vlan 2
PCU_ (vlan-2)# name web-servers
PCU_ (vlan-2)# untagged d1-d2,d9-d10
PCU_ (vlan-2)# tagged trk1
PCU_ (config)# exit

S1 S2 D1 D9 D2 D10 Switch VLAN 2 is defined and will be common to all servers Internal ports are untagged in VLAN 2, but the trunk is tagged in VLAN 2 UT: VID 2 LACP trunk Trk1 UT: VID 1 T: VID 2
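Once the trunk commands have been entered on both switches, the aggregation can be verified from the blade switch CLI. A minimal sketch using standard ProCurve show commands (output omitted here):

PCU_ (config)# show trunks
PCU_ (config)# show lacp

show trunks lists the ports grouped into Trk1 along with the trunk type (LACP), while show lacp indicates whether LACP negotiation with the upstream switch partner has succeeded.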
52
Configuring the Uplinks
The internal blade switch ports that provide connectivity across the enclosure midplane to the servers are defined as untagged members of VLAN 2. This assumes that the servers do not need to send frames with a VLAN tag field. For example, if a server only services VLAN 2 traffic, then there is no need to differentiate that traffic for transmission purposes. On the other hand, if a given server must send and receive traffic in multiple VLANs, then that server will need to be configured with VLAN tagging support. One scenario where this arises is when a physical server runs VMware with multiple virtual server instances and a common VLAN is not used. Enterprise network

PCU_ (config)# trunk 1-2 trk1 lacp

upstream switch Uplink will consist of an LACP trunk with two ports Internal switch & server links

PCU_ (config)# vlan 2
PCU_ (vlan-2)# name web-servers
PCU_ (vlan-2)# untagged d1-d2,d9-d10
PCU_ (vlan-2)# tagged trk1
PCU_ (config)# exit

S1 S2 D1 D9 D2 D10 Switch VLAN 2 is defined and will be common to all servers Internal ports are untagged in VLAN 2, but the trunk is tagged in VLAN 2 UT: VID 2 LACP trunk Trk1 UT: VID 1 T: VID 2
53
Switch Configuration After Changes
PCU_ (config)# show running-config

hostname "PCU_ "
interface I1
   disable
   exit
trunk 1-2 Trk1 LACP
ip default-gateway
vlan 1
   name "DEFAULT_VLAN"
   untagged D3-D8,D11-D16,3-4,S1-S2,X1-X2,C1,I1,Trk1
   ip address
   no untagged D1-D2,D9-D10
   exit
vlan 2
   name "web-servers"
   untagged D1-D2,D9-D10
   tagged Trk1
   no ip address
   exit
snmp-server community "public" Unrestricted
spanning-tree Trk1 priority 4
oobm
   ip address dhcp-bootp

Uplink is LACP trunk with two ports IP address of default gateway and IP address of VLAN 1 defined This configuration listing is from a 6120G/XG Blade Switch and corresponds to the network scenario described on the previous slide. The configuration listing includes the initial settings that involved assigning an IP address to VLAN 1 and specifying the default gateway. Notice that since the internal ports (D1, D2, D9, and D10) of the blade switch were assigned as untagged members of VLAN 2, they are no longer untagged members of VLAN 1. This is because a given port can be untagged in one and only one VLAN. This rule is necessary since switched packets sent over a link that are destined for two different VLANs would otherwise not be differentiated. On the other hand, a given port can be a tagged member of zero, one, or more VLANs, since by definition each switched packet would carry the differentiating VLAN tag field. Internal ports are no longer untagged members of VLAN 1 Internal ports to server are untagged members of VLAN 2 LACP trunk is tagged member of VLAN 2
54
Modification: Transporting Multiple VLANs Over a Link
Enterprise network Server S1 is a virtualized server that now needs to transport traffic for VLANs 2, 10, and 11 upstream switch

PCU_ (config)# vlan 10
PCU_ (vlan-10)# name db-servers
PCU_ (vlan-10)# tagged d1,d9,trk1
PCU_ (vlan-10)# vlan 11
PCU_ (vlan-11)# name file-servers
PCU_ (vlan-11)# tagged d1,d9,trk1
PCU_ (config)# exit

This scenario illustrates a more complex VLAN environment where server S1 must now transport traffic for additional VLANs. Previously, server S1, like server S2, only needed to transport traffic for VLAN 2. But now, server S1 has been implemented as a virtualized server running VMware and actually consists of multiple server instances. For example, each virtual server could be running Windows or Linux, or a mix of operating systems. As a result of this introduction of virtual servers, there are several different applications running that have historically been installed on servers located in different VLANs. Since a given port can be untagged in only one VLAN, any other VLAN traffic that must be transported over that port will require that the port be a tagged member of each other VLAN. In this example, internal ports D1 and D9 can remain untagged in VLAN 2, but at the same time must be tagged members of VLANs 10 and 11. In addition, since this VLAN traffic must traverse the trunk, Trk1 must also be defined as a tagged member of VLANs 10 and 11. Internal switch & server links S1 S2 D1 D9 D2 D10 Switch In addition to internal ports D1 and D9, Trk1 also needs to be a tagged member of the two new VLANs UT: VID 2 LACP trunk Trk1 UT: VID 2 T: VID 10, 11 UT: VID 1 T: VID 2, 10, 11
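To confirm the resulting membership, the VLAN assignments can be checked per VLAN or per port. A brief sketch using standard ProCurve show commands (output omitted):

PCU_ # show vlans
PCU_ # show vlans ports d1 detail

The second command should report port D1 as untagged in VLAN 2 and tagged in VLANs 10 and 11.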
55
Implementing Redundancy: Using the Switch-to-Switch Ports
Must use two same-model 6120 Blade Switches in adjacent bays Enable the switch-to-switch ports For 6120G/XG, use port I1 For 6120XG, use port 23, or 24, or both Another alternate path that must be managed by spanning tree Enterprise network Internal switch & server links upstream switch S1 S2 D1 D9 D2 D10 When two 6120 Blade Switches are implemented in the same enclosure, you can provide network redundancy for the server/storage devices. With the internal ports enabled between the two blade switches, an alternative path is introduced. This alternative path must be managed by spanning tree just like any physical links that may introduce network loops. The primary requirements are: The two 6120 Blade Switches must be the same model. Both blade switches must be 6120G/XGs or 6120XGs. On each switch, you need to enable the internal port (referred to as interfaces in the CLI). On the 6120G/XG Blade Switch, the internal port is labeled I1. On the 6120XG Blade Switch, you have a choice of using port 23, port 24, or even both. Whichever port (or ports) you decide to use, you must ensure no transceiver is installed in that port. On each switch, you will typically leave the internal port untagged in the management VLAN (e.g., VLAN 1), but make it a tagged member of each user VLAN. Switch

PCU_ (config)# vlan 2 tagged i1
PCU_ (config)# vlan 10 tagged i1
PCU_ (config)# vlan 11 tagged i1
PCU_ (config)# interface i1 enable
PCU_ (vlan-10)# show interfaces brief i1

 Status and Counters - Port Status
                      | Intrusion
 Port   Type          | Alert      Enabled  Status ...
 I1     10GbE-CX4     | No         Yes      Up

UT: VID 1 T: VID 2,10,11 I1 Switch D1 D9 D2 D10 S1 S2 LACP trunk Trk1 UT: VID 1 T: VID 2, 10, 11 Configure same settings on second blade switch
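For completeness, here is a sketch of checking that spanning tree is accounting for the new cross-link. spanning-tree and its show counterpart are standard ProCurve commands; whether spanning tree is enabled by default on the 6120 should be confirmed in its management guide:

PCU_ (config)# spanning-tree
PCU_ (config)# show spanning-tree

With both the LACP trunk and the I1 cross-link active, one of the redundant paths should appear as Blocking rather than Forwarding in the show spanning-tree output.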
56
Blade Switch Management and Troubleshooting
Introduction to the 6120 Blade Switch Series Blade Switch Installation Blade Switch Configuration Blade Switch Management and Troubleshooting Diagnosing with the LEDs Viewing system information Managing the OOBM interface Updating the switch software Backing up or restoring a configuration file Restoring the factory default software or configuration Useful CLI show commands This section describes the use of the blade switch LEDs for diagnostic purposes, and the typical management tasks performed for ProCurve switches including backing up and restoring configuration files, upgrading the switch software, and resetting the switch to the factory default software and configuration settings.
57
LED Diagnostics Examples
Module status Each of the 6120 Blade Switches has a module status LED and various port LEDs that differ based on the port type. The table above lists a few basic scenarios. In some cases there are numerous possibilities for the cause of the problem, in particular for those scenarios where a port is not functioning properly. Refer to the 6120 Blade Switch Series Installation and Getting Started Guide for explanations of many of the possible problems and troubleshooting approaches that may help resolve the problem.
Module Status LED amber on: switch hardware failure.
Module Status LED amber flashing: software failure during self-test, or a port self-test or initialization failure (check the port C1 and port 1-4 LEDs).
Module Status LED green on, with a port LED off: possible problems include incorrect cable, port disabled, software config mismatch, port blocked by STP.
58
Viewing Rack and Enclosure Information
PCU_ # show system enclosure

 Rack and Enclosure Information
  Rack Name               : PCU_01_A
  Rack Unique ID          : Default RUID
  Enclosure Name          : PCU01_C3000
  Enclosure Serial Number : 2UX80203HD

Information is also displayed in HP OA In addition to the typical show system CLI command, which will display switch information, the 6120 Blade Switch Series supports the show system enclosure CLI command. This latter command displays information about the HP BladeSystem enclosure in which the blade switch is installed. This information can be useful for auditing purposes as well as remotely directing support staff to the enclosure when troubleshooting. Some of this information can be configured in the HP Onboard Administrator by clicking Enclosure Information in the navigation pane and then clicking the Information tab.
59
Viewing External and Internal Port Statistics
Various status and statistics information of the internal and external ports can be viewed using the CLI or web browser interfaces

PCU_ # show interfaces

 Status and Counters - Port Counters
                                                         Flow   Bcast
 Port     Total Bytes   Total Frames  Errors Rx  Drops Rx  Ctrl   Limit
 D1       ,977,         ,053,                              on     0
 D2       ,237,         ,                                  on     0
 ...
 D16                                                       off    0
 1-Trk1   41,118,       ,                                  off    0
 2-Trk1   64,694,       ,                                  off    0
 3                                                         off    0
 4                                                         off    0
 S1                                                        off    0
 S2                                                        off    0
 X1                                                        off    0
 X2                                                        off    0
 C1                                                        off    0
 I1                                                        off    0

You can view various status information and statistics of both the internal and external ports using the CLI or web browser interfaces. From the CLI, you can use the show interfaces command, which supports several options: brief—Show the ports' operational parameters. config—Show the configuration information. custom—Show the ports' parameters in customized order. display—Show a summary of network traffic handled by the ports. port-list—Show a summary of network traffic handled by the ports. port-utilization—Show the bandwidth-utilization of the ports. From the web browser interface, you can click the Status tab, and then click the Overview, Port Counters, or Port Status tabs.
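If you are watching for oversubscription on the uplinks, the port-utilization option named above gives a quick bandwidth view. A minimal sketch (output omitted):

PCU_ # show interfaces port-utilization

This reports receive and transmit rates per port, which is useful when deciding whether a trunk needs additional member ports.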
60
Managing the OOBM Interface
The recommended method for the blade switch to acquire an OA management IP address is through DHCP (default). The OOBM component is configurable, if necessary.

PCU_ # oobm ?
 disable     Disable OOBM.
 enable      Enable OOBM.
 interface   Configure various interface parameters for OOBM.
 ip          Configure various IP parameters for the OOBM.

PCU_ # oobm ip ?
 address           Set IP parameters for communication within an IP network.
 default-gateway   Configure the IPv4 default gateway address, which will be
                   used when routing is not enabled on the switch.

PCU_ # oobm interface ?
 disable   Disable OOBM port.
 enable    Enable OOBM port.

The recommended method for a 6120 blade switch to acquire its OA management IP address is through DHCP (the default). That is, you should allow the OOBM interface to acquire an IP address from either the OA management module, based on the EBIPA settings, or an external DHCP server reachable on the OA subnet. If necessary, you can configure the OOBM component using the oobm CLI command, for example, to assign a static IP address to the OOBM interface. This slide shows the help text displayed for various forms of the command. For instance, you can: disable (or re-enable) the OOBM component altogether; disable (or re-enable) the OOBM interface; define a static IP address and mask for the OOBM interface; or define the IP address of the default gateway that the OOBM component will use.
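As a minimal sketch of a static assignment, assuming a hypothetical address, mask length, and gateway (the exact address/mask syntax may vary by software release):

PCU_ # oobm ip address 10.10.10.50/24       (hypothetical address and mask)
PCU_ # oobm ip default-gateway 10.10.10.1   (hypothetical gateway)
PCU_ # show oobm ip                         (verify the resulting OOBM IP settings)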
61
Viewing the Software Files on the Flash
Example of the typical flash and software image information you can expect to see:

PCU_ # show flash
 Image            Size(Bytes)   Date        Version
 ---------------------------------------------------
 Primary Image    : ...         .../27/09   Z.14.04
 Secondary Image  : ...         .../27/09   Z.14.04
 Boot Rom Version : Z.14.03
 Default Boot     : Primary     (flash location that will be used by default)

PCU_ # show version
 Image stamp:  /sw/code/build/vern(t4br)
               Jul ...  ...:42:40
               Z.14.04
               1037
 Boot Image:   Primary

 (image sizes and full dates were lost in transcription and are shown as "...")

You can store two different software images on the flash. Like many ProCurve switches, the 6120 Blade Switch Series lets you store up to two software image files in flash memory. These two locations are referred to as "primary" and "secondary". The show flash and show version CLI commands operate the same as they do on other ProCurve switches; this slide simply shows the typical software image naming scheme and size information you can expect to see on the 6120 Blade Switch Series. From a CLI session, you can reboot the switch and specify which flash location, and therefore which software image, to use. You can also set the default flash location to use when the switch is powered on or reset; the CLI command to do this, startup-default, is illustrated later in this section. Choose the flash location to boot from using: boot system flash <primary | secondary>. The 6120G/XG and 6120XG Blade Switches run the same software image.
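For example, to reboot once using the image in the secondary flash location, without changing the default (the switch will typically ask you to confirm before rebooting):

PCU_ # boot system flash secondary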
62
Updating the Switch Software
Typical methods for copying software files to and from the flash include: TFTP using the CLI copy command; HTTP/SSL through the web browser interface; SSH using a SecureCoPy client program; and ProCurve Manager. As with any ProCurve switch, a software image file can be transferred to and from the switch using several methods. These include: the CLI copy command and a TFTP server; the HTTP or SSL web browser interface; SSH along with a client SecureCoPy (SCP) program (for example, the Windows WinSCP program is easy to use and provides a Windows Explorer-like interface); and ProCurve Manager or ProCurve Manager Plus. Typically, when transferring files to and from a switch, the management VLAN's (e.g., VLAN 1) IP address functions as the source IP address when copying from the switch, or the destination IP address when copying to the switch. Note: When you use the CLI copy command or the web browser interface, the switch's IP address is implied. That is, you do not specify the IP address of the switch, but instead the IP address of the TFTP server. When you use a tool like WinSCP, you must first establish an SSH session with the switch and specify the switch's IP address at that time. Similarly, when using the web browser interface you specify a switch IP address (or DNS name) as the URL to establish a web browser session with the switch. Unlike other ProCurve switches, the 6120 Blade Switch Series allows you to reference the OOBM IP address when transferring files to and from the switch. This is useful when the management VLAN is inaccessible or does not have an IP address assigned. If you are using the switch CLI, you can specify the new oobm keyword with the copy command. If you are using an SSH tool like WinSCP, you would simply connect to the OOBM IP address to use the OOBM interface. For the web browser interface, you simply connect to the IP address of the OOBM interface using HTTP or SSL (whichever protocol is enabled). In the CLI example below, a software image for the 6120 Blade Switch is being copied from a TFTP server to flash memory, specifically the secondary location. As mentioned previously, the oobm keyword is useful for those environments where no IP address has been assigned to any VLAN defined on the switch. The original slide graphic also shows the Upload/Download tab from the web browser interface, which allows you to upload a software image file to the primary or secondary flash location on the switch, and to upload a configuration file to the switch or download one from it.

Software image upload example using the standard CLI copy command (the TFTP server IP address appeared only as a slide callout and is shown here as the placeholder <tftp-server-ip>):

PCU # copy tftp flash <tftp-server-ip> Z_14_04.swi secondary oobm
The Secondary OS Image will be deleted, continue [y/n]?  y

Here, copy tftp flash indicates a transfer from a TFTP server to flash, Z_14_04.swi is the filename, secondary selects the flash location (primary or secondary), and the optional oobm keyword directs the transfer through the OOBM interface.
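After the transfer completes, a reasonable sanity check, using commands shown on the surrounding slides, is to verify the image before booting from it:

PCU_ # show flash                     (confirm the new version in the secondary location)
PCU_ # boot system flash secondary    (reboot using the newly copied image)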
63
Resetting to the Factory Default Software
Start a console port session directly with the blade switch, or through the HP OA console. Then power on or reboot the blade switch.

ROM information:
 Build directory: /sw/rom/build/vernrom(titan4_v14_b_release)
 Build date:      Jul ...
 Build time:      10:54:18
 Build version:   Z.14.03
 Build number:    25751

Boot Profiles:
 0. Monitor ROM Console
 1. Primary Software Image
 2. Secondary Software Image

Select profile (primary): 0          (select the Monitor ROM Console)

B21: HP ProCurve 6120G/XG Blade Switch
ROM Build Directory: /sw/rom/build/vernrom(titan4_v14_b_release)
ROM Version:         Z.14.03
ROM Build Date:      10:54:18 Jul ...
ROM Build Number:    25751
Copyright (c) Hewlett-Packard Company. All rights reserved.
... (remainder of banner page)
Enter h or ? for help.
=> recover                           (issue the recover command)

 (the full build dates were lost in transcription and are shown as "...")

In addition to resetting the switch configuration to the factory default settings, it is possible to reset the software image in the primary flash location to the software image that the switch was originally shipped with. To do this, you need to access the Monitor ROM Console through the CLI. This Monitor ROM Console feature is built into the boot ROM and provides a way to recover from a situation where neither the primary nor the secondary image is bootable. Since the Xmodem download feature will not work on the 6120 Blade Switch Series, this is the mechanism to use in that case. The Monitor ROM Console can be accessed from a direct console port session or from an OA console port session; in the latter case, you use the connect interconnect <bay> command to initiate the CLI session. To invoke the Monitor ROM Console, you must interrupt the normal boot process by entering "0" (the zero character) when the Boot Profiles menu appears. You have only a few seconds to enter a menu choice before the switch proceeds with the default setting for the flash location to be used. To initiate the resetting of the software image to the factory default copy, enter the recover command at the Monitor ROM Console prompt. Note: You can also force the 6120 Blade Switch Series to automatically invoke the Monitor ROM Console by physically manipulating the Module System Maintenance switch located on the blade switch motherboard; to do this you will, of course, need to remove the blade switch cover. Refer to the 6120 Blade Switch Series Installation and Getting Started Guide for more information.
64
Resetting to the Factory Default Software (cont.)
***************************************************************************
*                                                                         *
*  You've invoked the Product Recovery command.                           *
*  Product Recovery is intended for use when neither the Primary nor the  *
*  Secondary product images are bootable.                                 *
*  If you continue, this command will do the following:                   *
*    1. Overwrite Primary with the original factory released image        *
*    2. Clear configuration and boot the new Primary product code         *
*  The recovery process will take about 4 minutes to complete, please do  *
*  not reset or power off the system during recovery. If you do not wish  *
*  to perform Recovery at this time, enter 'C' at the prompt to Cancel.   *
*                                                                         *
***************************************************************************
Please confirm! Enter 'P' to Proceed, or 'C' to Cancel: p
Recovering...
-- Download for this Product, proceeding --
pass CRC check (len=...)
Erasing 52 segments..., Programming starting at 0xf...
programming successful... Ready for code execution.
Decompressing...done.
initializing..initialization done.
Hit Enter to Continue.

After entering the recover command, the informational text above is displayed. Notice that it indicates both the software image and the configuration will be reset to the factory default settings. Next, you are prompted to enter "p" to proceed with the recovery process. After several minutes the process completes (the switch loads the factory image) and you are prompted to press the Enter key to continue; the switch banner page appears next, followed by the default CLI prompt.
65
Viewing the Configuration Files on the Flash
You can store three different configuration files on the flash.

PCU_ # show config files

 Configuration files:

  id | act pri sec | name
  ---+-------------+----------
  1  |  *   *      | Config
  2  |           * | test-cfg
  3  |             |

Choose the configuration file to boot with using: boot system flash <primary | secondary> config <name>. You can also preset the configuration file to use with a software image using: startup-default <primary | secondary> config <name>.

Like many ProCurve switches, the 6120 Blade Switch Series lets you store up to three configuration files in flash memory. These three locations are referred to as "active", "primary", and "secondary". There are several show commands that allow you to list the configuration files stored in flash memory, display the contents of any of those stored files, and display the contents of the configuration file currently loaded in main memory; examples are shown above and below. You can specify the configuration file to use when booting the switch with the boot system flash <primary | secondary> config <name> CLI command, and you can set the default configuration file to use with either software image flash location with the startup-default <primary | secondary> config <name> CLI command.

Use show config <name> to view the contents of a named configuration file:

PCU_ # show config test-cfg
hostname "PCU "
interface I1
   enable
...

Use show running-config to view the contents of the configuration currently in memory:

PCU_ # show running-config
hostname "PCU "
interface I1
   disable
...
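As a sketch using the names from the example above, you could make the secondary image boot with the test-cfg file by default:

PCU_ # startup-default secondary config test-cfg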
66
Managing Switch Configuration Files
Configuration files can be managed using the same methods used for software files: TFTP using the CLI copy command; HTTP/SSL through the web browser interface; SSH using a SecureCoPy client program; and ProCurve Manager. You can manage the configuration files stored in the blade switch's flash memory using the same methods used for software image files. The same considerations described earlier under "Updating the Switch Software" apply here as well: the management VLAN's (e.g., VLAN 1) IP address normally serves as the implied source or destination address, and, unlike other ProCurve switches, the 6120 Blade Switch Series also lets you reference the OOBM IP address (via the oobm keyword in the CLI, or by connecting to the OOBM IP address with an SSH tool like WinSCP or with a web browser) when the management VLAN is inaccessible or has no IP address assigned.

Backup examples using the standard CLI copy syntax. The examples below show how the CLI copy command can be used to back up a configuration file to a TFTP server; <tftp-server-ip> is a placeholder for the server address, which appeared only as a slide callout, and the truncated filename is reproduced as transcribed. In the first example, the oobm keyword is not specified, so it is assumed the switch has an IP address assigned to at least one VLAN. In the second example, the oobm keyword is specified so that the OOBM IP address is used as the implied source, which is useful when no VLAN has an IP address.

PCU_ # copy startup-config tftp <tftp-server-ip> bkup_612001_ cfg
PCU_ # copy startup-config tftp oobm <tftp-server-ip> bkup_612001_ cfg
67
Resetting to the Factory Default Configuration
Using the front panel control buttons: press the Reset and Clear buttons simultaneously; release the Reset button; continue pressing the Clear button for about 10 seconds. The switch then completes its self-test and is operating with the factory default settings. (6120G/XG example: Clear button, Reset button.) If necessary, you can reset the switch configuration to the factory default settings; an example of this default configuration for a 6120G/XG Blade Switch was shown previously in this presentation. There are two methods you can use: Use the front panel control buttons labeled Clear and Reset, as described in the procedure above. This procedure is common to most ProCurve switches; note that front panel security settings can be enabled to prevent use of these buttons. Or use the erase startup-config CLI command, which requires the manager privilege level. As shown below, when you enter this command you are prompted to confirm the action; after doing so, the configuration settings are deleted, the factory defaults are installed, and the switch is rebooted.

From the CLI using the erase command:

PCU_ # erase startup-config
Configuration will be deleted and device rebooted, continue [y/n]?  y
68
Useful CLI show Commands
Switch CLI Command / Description

System Information
 show flash                              Displays the software versions stored in flash
 show version                            Displays the software version currently running
 show logging [-a | -r | severity]       Displays log file entries
 show system [enclosure | information]   Displays general system information, or information about the enclosure
 show startup-config | running-config    Displays the configuration file in flash or in memory

VLAN Information
 show vlans                              Displays a list of the VLANs defined
 show vlans <vid>                        Displays the untagged/tagged status of the ports in a VLAN
 show vlans ports <port-list>            Displays the VLAN(s) to which the port(s) are assigned

Interface / Port Information
 show interfaces [brief | config | display | port-utilization | <port-list>]
                                         Displays configuration and statistics of the external and internal interfaces
 show trunks [<port-list>]               Displays the list of configured trunks and the ports used
 show oobm [ip | arp]                    Displays the configuration and status of the OOBM interface
 show tech transceivers                  Displays the types of installed transceivers

This table provides a starting list of useful CLI display commands for technicians who are not yet familiar with ProCurve blade, fixed-port, and modular switches. Some of these CLI commands were illustrated on previous slides.
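For instance, a quick VLAN membership audit might combine two of the commands above; the port identifiers are illustrative:

PCU_ # show vlans                  (list all defined VLANs)
PCU_ # show vlans ports S1-S2      (see which VLANs two uplink ports are assigned to)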
69
Useful CLI show Commands (cont.)
Switch CLI Command / Description

IP Addressing and Routing Information
 show ip                                      Displays the IP addresses assigned to each VLAN, and the default gateway, DNS server, and domain suffix
 show ip route                                Displays the locally connected routes and the default static route

Spanning Tree Information
 show spanning-tree                           Displays the global MST (and CST and IST) status and the status of each switch port
 show spanning-tree config                    Displays the global MST configuration settings and those of each switch port
 show spanning-tree mst-config                Displays the MST configuration in terms of which VLANs are mapped to each MST instance
 show spanning-tree instance <instance-id>    Displays the status of an MST instance and the state of each switch port

This table provides some additional useful CLI commands.
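For example, to see how VLANs map to MST instances and then inspect one instance (the instance ID here is illustrative):

PCU_ # show spanning-tree mst-config
PCU_ # show spanning-tree instance 1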
70
Technology for better business outcomes