Category Archives: Infrastructure

Configuring a GbE interface on Nexus 50xx/55xx/56xx

Nexus switches are Cisco's datacenter line. They are optimized for 10GbE connections and above, and offer a much better price per 10GbE port than the Catalyst line. It is however sometimes necessary to have a Gigabit Ethernet connection, for instance to connect to the existing infrastructure.

GLC-T modules are required for this.
Note: ProLabs manufactures compatible GBICs at an attractive price.

The typical error encountered is "SFP Validation Failed", even with a compatible SFP. To get a working port, you must configure the port before inserting the SFP.

  1. Set the port speed before inserting the SFP:
conf
int e1/1
speed 1000
  2. Insert the SFP into a 1/10GbE-capable port:
  • On Nexus 5010: ports 1 to 8,
  • On Nexus 5020: ports 1 to 16,
  • On Nexus 5500/5600: ports 1 to 32.

Note: on the 5500/5600, a port configured with speed auto will automatically switch its configuration between 1GbE and 10GbE.
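For reference, here is a minimal sketch of the full sequence on a hypothetical interface Ethernet 1/1: configure the speed first, insert the GLC-T afterwards, then verify that the transceiver is accepted.

conf
int e1/1
speed 1000
show interface ethernet 1/1 transceiver
show interface ethernet 1/1 status

The transceiver output should list the GLC-T, and the status line should no longer report the SFP validation error.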

References

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide/BasicEthernet.html
A Cisco Nexus 5000 Series switch has a number of fixed 10-Gigabit ports, each equipped with SFP+ interface adapters. The Nexus 5010 switch has 20 fixed ports, the first eight of which are switchable 1-Gigabit/10-Gigabit ports. The Nexus 5020 switch has 40 fixed ports, the first 16 of which are switchable 1-Gigabit/10-Gigabit ports.

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5500/sw/interfaces/7x/b_5500_Interfaces_Config_Guide_Release_7x/b_5500_Interfaces_Config_Guide_Release_7x_chapter_01.html
The 5596T switch has 48 base board ports and 3 GEM slots. The first 32 ports are 10GBase-T ports; the last 16 ports are SFP+ ports. The 10GBase-T ports support a speed of 1-Gigabit, 10-Gigabit, or Auto. The Auto setting automatically negotiates with the link partner to select either 1-Gigabit or 10-Gigabit speed.

http://serverfault.com/questions/339957/how-do-i-get-past-sfp-validation-failed-on-a-sfp-ge-t-transceiver-on-a-nexus-5
https://supportforums.cisco.com/discussion/11510966/nexus-5020-and-glc-t

Crash Test – What happens when you lose L1 & L2 on a UCSM cluster?

L1 & L2, what are they?

The L1 & L2 links are dedicated physical ports (GbE) on the UCS Fabric Interconnect platform, responsible for carrying the heartbeat traffic of the UCSM cluster. To be clear, both links are crucial to the survival of the cluster!

For the record, the Fabric Interconnects (or FIs) are based on a Nexus hardware platform, tweaked for the job:

  • FI Gen 1 (6100XP) are based on Nexus 5000 physical platforms,
  • FI Gen 2 (6200UP) are based on Nexus 5500 physical platforms.

If you are wondering "What are these L1 & L2 things for, on my brand new and shiny Nexus?", here is the answer: nothing (on a Nexus). They have been there from day one on the Nexus, because it was planned from the start to derive the Fabric Interconnects from the same hardware base.

It's just a rationalization of the production line: it's cheaper to leave the 4 GbE ports on every unit than to manage several distinct products.

Yes, the company that produced the Nexus line of products (Nuova Systems, acquired by Cisco in 2008) had UCS in mind from the beginning. If you look closer at the résumés of the co-founders of Nuova, all the pieces come together. They are the three musketeers, as I tend to call them (even if the band is mainly from Italy): Mario Mazzola, Prem Jain & Luca Cafiero. It's not their first shot at creating a startup, but I will keep that story for another day – for another blog post, why not.

Coming back to the L1 & L2 ports: if we keep the analogy with the Nexus platform, we can say that L1 & L2 play roughly the same role as the vPC peer link in a vPC setup, minus some details. Explained that way, I am sure it is clearer for everybody 🙂

Note: the L1 & L2 port connections are done this way:

  • FI-A L1 <-> FI-B L1,
  • FI-A L2 <-> FI-B L2.

Key point: no switch between L1 & L2! They are channel-bonded and expect another Fabric Interconnect at the other end. Don't try to work around this by connecting both Fabric Interconnects through a switch: this is explicitly not supported.

The heartbeat traffic

UCSM is the management software of the UCS platform. It is embedded on each Fabric Interconnect and runs in an Active/Standby fashion. The motivation and relevance behind this choice, rather than opting for an installed or VM-hosted application, could feed an entire tribe of trolls.

– We do love Trolls around here, but they will be fed another time on another blog post (again!) –

Like any clustered application, a signaling link is required to maintain an accurate view of each member's health: this is usually the role of a redundant heartbeat link, in other words a messaging bus.

So we are dealing here with a two-member cluster, and that is not a mere detail.

Losing a member and split-brain

The cluster uses the heartbeat link to monitor the health status of each member, and fails over services to the healthy member when disaster occurs. A member is deemed healthy if a quorum can be reached among cluster members. When the cluster consists of only two members, it's the word of one against the other … So we need a referee (called "tie-breaker", "failover monitor" or "witness" depending on the technology) in order to avoid split-brain. On a UCSM cluster, it's the default gateway that plays this referee role when one member can't reach the other.

Note: the UCSM cluster does some other testing between the FIs and the chassis. Each FI checks whether the chassis's SEEPROM is reachable. The following article on UCSGuru dives a bit deeper into these tests and the particular case of C-Series servers, which don't have a SEEPROM: HA with UCSM Integrated Rack Mounts.
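For reference, the cluster health can be checked from either FI's local management CLI; a minimal sketch (the exact output wording varies between UCSM releases):

connect local-mgmt
show cluster state
show cluster extended-state

The extended variant typically adds more detail about the HA state and the devices used for it.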

What happens when the L1 & L2 links are down on a UCSM cluster?

OK, that was a pretty long introduction to finally get to the first question: what is the observed behaviour from an operator/administrator point of view?

When one of the links goes down, a "major" alert is raised on UCSM for each FI: "HA Cluster Interconnect Link Failure".

  • Error “immediately” visible,
  • No impact on data path.

When both links are down (L1 & L2), after a timeout period (a matter of minutes):

  • A new error is raised, this one is “Critical” : “HA Cluster Interconnect Total link failure”,
    • Subordinate FI is deemed unreachable,
    • The B side is shut (assuming that B was the subordinate), DATA PATH included :
      • FEX,
      • LAN & SAN uplinks.

So keep an eye on your L1 & L2 links, and be sure that at least one of them is always up.

Unfortunately I don’t know the exact duration of the timeout. At least, I can say we are talking about minutes.

To keep comparing with Nexus and vPC, we can see a similar behaviour when the vPC peer link is down: the secondary is sacrificed on the split-brain altar.

Back to a nominal situation

As soon as at least one link is back alive :

  • Resync of the subordinate FI with the primary,
  • Data links reactivation,
  • UCSM cluster back alive.

Some best practices for L1 & L2

  • L1 & L2 links should be directly connected, with no switch between them,
  • During a migration or a physical move, you may end up with only L1 active for a period of time; keep that window as short as possible and give it back its L2 buddy as soon as possible,
  • This is plain Ethernet: keep the distance limitations in mind and don't stretch beyond 100 meters!

Updating a UCS system – from FI to OS drivers

Update, October 27th 2015
When I wrote this article, I mentioned my experience upgrading from version 2.2(3a) and the unexpected reboots of “some” blades. It turns out this bug has been identified and fixed since 2.2(3g). I forgot to update this article as promised: thanks to Patrick for asking about it in the comments. 🙂

First, to narrow the scope: the problem affects only the UCS B200 M4. It was not obvious to me, as the deployment was a greenfield with only B200 M4 blades. It's logged under bug ID CSCut61527.

What is it about? The B200 M4's BMC (Baseboard Management Controller, part of the IPMI/CIMC system) sometimes returns an invalid FRU and makes the blade reboot … Yeah, you read that right: a management controller potentially taking down production workloads …

This caveat has been around since 2.2(3a) and 2.2(1b), and was first fixed in release 2.2(3g). Here is the link to the relevant section of the Release Notes for UCSM 2.2. Once there, just look for CSCut61527 (the 10th row in the table).


Lesson learned : always double-check your current UCSM version before adding B200 M4 blades if it’s not a greenfield deployment!

There is plenty of writing about "how to upgrade UCS" (from official Cisco documentation to independent blog posts), but I found none going from UCSM all the way down to ESXi drivers (disclaimer: I searched for less than 5 minutes :-).

So here is my 2 cents on the matter.

What do I need to update my UCS system?

The detailed list of items you need to upgrade is the following, from top to bottom:

  1. UCSM itself, which is cluster management software running in Active/Passive mode on the Fabric Interconnects,
  2. The Fabric Interconnects,
  3. The IO Modules, aka FEXs,
  4. Servers (either blade or rack format), which can be separated into three major sections:
    1. BIOS,
    2. Controllers (SAS, CIMC),
    3. Adapters cards,
  5. Drivers, specific to your Operating System.

This document is old and some information may be outdated, but it still describes the "what" quite well: Cisco UCS Firmware Versioning.

Where should I look for the software pieces?

Rare enough for Cisco products to be worth mentioning: you don't need a system linked to your CCO ID to be able to download UCS-related software 🙂

Fortunately, all the pieces of software listed in the previous section are grouped into bundles, so you no longer have to download each package separately:

  • Infrastructure Bundle: contains the UCSM, FI and FEX software/firmware,
  • B-Series or C-Series Bundle: contains the BIOS, controller and adapter card firmware,
  • An ISO with all the C-Series or B-Series drivers.

Note: instead of downloading 2GB of drivers, if you only need drivers for a particular operating system, it may be better to get the Cisco UCS drivers from the OS vendor's site. For example, if you are looking for the latest Cisco UCS enic and fnic drivers for VMware, you can find them on vmware.com. It's a 2MB download versus 2GB …

Updating the UCS system

In this section, I will not go for a screen-by-screen explanation but will rather explain the key steps and possible warnings you need to be aware of before starting the upgrade.

First, the documentation you should definitely check :

At the time of writing this article, with the current version being 2.2(3e), the recommended upgrade path is top-to-bottom; it's generally the way to go. Yet some earlier versions (1.4 if I remember correctly) required bottom-to-top.

It's really unlikely that this would change back again, but you should definitely check the documentation and the latest release notes to confirm the currently supported method. Here is the Upgrading Cisco UCS from Release 2.1 to Release 2.2 document.

This doodle illustrates the updated parts and the actual order to follow.

Update UCS

Step 0 is about preparation: you need to upload the firmware bundles to the Fabric Interconnect bootflash (the packages are copied to both Fabric Interconnects).
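This staging is usually done from the GUI (Equipment > Firmware Management > Download Firmware), but as a rough sketch it can also be driven from the UCSM CLI; the server, credentials and bundle file name below are purely illustrative:

scope firmware
download image scp://admin@10.0.0.10/bundles/ucs-k9-bundle-infra.2.2.3g.A.bin
show download-task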

  1. Upgrade the UCSM software. It's supposed to be non-disruptive for the data path and you should only have to relaunch the UCSM client. My recent experience when upgrading from 2.2(3a) to 2.2(3d) was catastrophic: some blades rebooted randomly 4-5 times … Not so "non-disruptive". I managed to reproduce the same behaviour on another system and an SR is currently open. I may update this post later depending on the SR's outcome.
  2. Stage the firmware (~10-20min) on all FEXs ("Update Firmware" under Equipment > Firmware Management > Installed Firmware) and set it to be activated on next reboot ("Activate Firmware", without forgetting the related "active on next reboot" checkbox). This will save you a reboot, as the FEX will reboot anyway when its Fabric Interconnect is upgraded,
  3. Upgrade the Fabric Interconnect holding the subordinate role, wait for the reboot (~15min), then change the cluster lead so the newly updated FI becomes primary (see the CLI sketch after this list),
  4. Upgrade the remaining Fabric Interconnect and wait for the reboot (~15min), then move the cluster lead back to its initial owner (there is no automatic fail-back for UCSM),
  5. Update the blades: the best way is through Maintenance and Firmware policies,
    1. Make sure your service profiles use a maintenance policy set to "User Ack",
    2. For ESXi nodes, put the host into maintenance mode first from your vSphere Client,
    3. Acknowledge the reboot request in UCSM once your ESXi nodes are in maintenance mode.
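For steps 3 and 4, the cluster lead change is done from the local management CLI of either FI; a minimal sketch, assuming FI B has just been upgraded and should take the primary role:

connect local-mgmt
show cluster state
cluster lead b

Run show cluster state again afterwards to confirm the primary role has moved before touching the other FI.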

Note: you can edit the default "Host Firmware Package" policy to use the right package version (for blades and racks), even without any service profile created. This way, any UCS server connected to the fabric will be automatically updated to the desired baseline, which effectively prevents running different firmware because of different shipping/buying batches.

Most upgrade guides stop here, right after updating the hardware. Let me say that it's a golden path to #fail 🙂. The next part is about updating your ESXi drivers to the most current version supported by your UCS firmware release.

 Updating VMware ESXi drivers

At the end of the day, what matters is how your operating system handles your hardware. That is the driver's job. If it's obsolete, either it works unoptimized and without the new features/enhancements (that's the best case), or it may lead to some unpredictable behaviour …

Bets are high that you installed ESXi on your UCS server using a Custom ISO, available at vmware.com. Bets are even higher that, since the VIC card's exposed vETHs and vHBAs are recognized, nobody has bothered to update the drivers since. If so, you are running 2-3 year old drivers …

You can check the current enic (vETH) and fnic (vHBA) driver versions on your ESXi host with the following commands:

#vmkload_mod -s enic
#vmkload_mod -s fnic

If you find enic version 2.12.42 and fnic version 1.6.0.5, you are running the ISO's version and I would highly recommend upgrading.
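If you prefer esxcli, the installed driver VIBs can also be listed; a quick sketch (the grep filter is just for convenience):

esxcli software vib list | grep -i -E 'enic|fnic'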

Download your drivers at vmware.com following this navigation path : vmware.com > download > vsphere > Driver & Tools.

  1. Select the relevant vSphere version (5.X; do not pick the "update 01, 02" links),
  2. Download Drivers for Cisco enic and fnic.


It's a 1MB download per driver on vmware.com, compared to the UCS drivers ISO on cisco.com, which contains all drivers for all systems but weighs 2GB …

To apply the update, you can choose between esxcli in the ESXi shell, or VMware Update Manager.

No rocket science here, just follow this standard VMware KB and pick the option you are comfortable with: http://kb.vmware.com/kb/2005205 (or KB1032936 for vSphere 4.x).
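For the esxcli route, a minimal sketch, assuming the offline bundles have already been copied to a datastore (paths and file names are illustrative) and the host is in maintenance mode:

esxcli software vib update -d /vmfs/volumes/datastore1/enic-offline_bundle.zip
esxcli software vib update -d /vmfs/volumes/datastore1/fnic-offline_bundle.zip

A reboot is required before vmkload_mod -s reports the new versions.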

Reminder: you do NOT need a Windows-based vCenter when using Update Manager. You just need to plan for a Windows system to host the VMware Update Manager utility; then you can enjoy using the vCenter Appliance.

In addition, this troubleshooting TechNote goes into all the details of how to check and update UCS drivers for ESXi, Windows Server and Linux (Red Hat & SUSE):

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/116349-technote-product-00.html

 

VMware doesn't play nice with CoD. For now. Me neither …

No, I am not going to talk about Call of Duty. At least not today, not on this blog post, even maybe not during this life … 🙂

So what's CoD, and what the hell does it have to do with VMware? CoD stands for Cluster-on-Die, a new NUMA setting found on some shiny Intel Haswell processors, aka E5 v3.

It’s active by default.

Side note on this feature
With Haswell, Intel has gone wild and each processor can hold up to 18 cores! That's a lot, but nothing new here. The bottom line is that with so many cores on a single socket, some workloads may see higher memory access latency.
So there is a bunch of new snoop settings, to get you confused, er, to give you choice and let you find the option best suited to your context.

Here is a link to a Cisco white paper about the UCS B200 M4 and BIOS tuning for performance. You will find there plenty of details on each new BIOS setting. The following is an excerpt about the "Cluster on Die Snoop" functionality:

Cluster on Die Snoop
Cluster on Die (CoD) snoop is available on Intel Xeon processor E5-2600 v3 CPUs that have 10 or more cores. Note that some software packages do not support this setting: for example, at the time of writing, VMware vSphere does not support this setting. CoD snoop is the best setting to use when NUMA is enabled and the system is running a well-behaved NUMA application. A well-behaved NUMA application is one that generally accesses only memory attached to the local CPU. CoD snoop provides the best overall latency and bandwidth performance for memory access to the local CPU. For access to remote CPUs, however, this setting results in higher latency and lower bandwidth. This snoop mode is not advised when NUMA is disabled.

Problem: VMware doesn't support this new option. It's outlined in KB2087032, for vSphere 5.1 in the first place, but you can read there that the problem persists with vSphere 5.5.
I ran into this problem with a brand new UCS M4; the result (the one I am aware of) is pretty funny: the vSphere host sees 4 sockets instead of 2 … The licensing guys @VMware will be happy 🙂
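For reference, a quick way to see what the host actually reports, besides the vSphere Client summary, is a standard esxcli hardware query (a minimal sketch; what matters here is the number of CPU packages reported):

esxcli hardware cpu global get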

Workaround: This is a well-known issue for vSphere today, and no doubt it will be fixed by VMware soon. In the meantime, the recommended action is to "disable" this setting. Actually, you cannot "disable" it; you just have to choose another CPU snoop mode. The best balanced mode seems to be "Home Snoop".

Home Snoop
Home Snoop (HS) is also available on all Intel Xeon processor E5-2600 v3 CPUs and is excellent for NUMA applications that need to reach a remote CPU on a regular basis. Of the snoop modes, HS provides the best remote CPU bandwidth and latency, but with the penalty of slightly higher local latency, and is the best choice for memory and bandwidth-intensive applications: such as, certain decision support system (DSS) database workloads.

The third available CPU snoop mode defined by Intel on the E5-2600 v3 is "Early Snoop", which is best suited for low-latency environments.

So, to configure that parameter on a UCS system, the best option is to define a custom BIOS policy through UCSM and set the QPI snoop mode to "Home Snoop".

Changing this value in the BIOS policy takes effect immediately, without needing a reboot. vSphere sees the change right away and reports the correct socket count.

Just remember to run at least firmware version 2.2(3c) to have that setting available, but my advice would be to run at least UCSM 2.2(3d), which is the current recommended firmware version for the B200 M4 as of this writing.

UPDATE March 25th: "Home Snoop" is the default setting for the B200 M4 BIOS when your system ships with UCSM > 2.2(3c).