Update October 27th 2015
When I first wrote this article, I mentioned my experience upgrading from version 2.2(3a), and the unexpected reboots of “some” blades. It turns out this bug has been identified and fixed since 2.2(3g). I forgot to update this article as promised: thanks to Patrick for asking about it in the comments. 🙂
First, to narrow the scope: the problem affected only the UCS B200 M4. It was not obvious to me, as the deployment was a greenfield with only B200 M4 blades. It’s logged under bug number CSCut61527.
What is it about? The B200 M4’s BMC (Baseboard Management Controller, part of the IPMI/CIMC system) sometimes returns an invalid FRU and makes the blade reboot… Yes, you read that right: a management controller potentially taking down production workloads…
This caveat has been around since 2.2(1b) and 2.2(3a), and was first fixed in release 2.2(3g). Here is the link to the proper section of the Release Notes for UCSM 2.2. Once there, just look for CSCut61527 in the 10th row of the table.
Lesson learned : always double-check your current UCSM version before adding B200 M4 blades if it’s not a greenfield deployment!
There is plenty of writing about “how to upgrade UCS” (from official Cisco documentation to independent blog posts), but I found none going all the way from UCSM down to the ESXi drivers (disclaimer: I searched for less than 5 minutes :-)).
So here is my 2 cents on the matter.
What do I need to update my UCS system?
The detailed list of components you need to upgrade is the following, from top to bottom:
- UCSM itself, a cluster management software running in Active/Passive mode on the Fabric Interconnects,
- The Fabric Interconnects,
- The IO Modules, aka FEXs,
- Servers (either blade or rack format), which can be separated into three major sections:
- Controllers (SAS, CIMC),
- Adapter cards,
- Drivers, specific to your Operating System.
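Before planning anything, it helps to inventory what you are currently running. A quick sketch from an SSH session to the UCSM cluster IP (command names are from memory of the 2.2-era UCSM CLI; double-check them against the CLI reference for your release):

```
UCS-A# show version              # running UCSM version
UCS-A# scope firmware
UCS-A /firmware # show package   # bundles already present on the FIs
```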
This document is old and some information may be outdated, but it still describes the “What” quite well: Cisco UCS Firmware Versioning.
Where should I look for the software pieces?
It is rare enough for Cisco products to be worth mentioning: you don’t need to have a system linked to your CCO ID to be able to download UCS-related software 🙂
Fortunately, all the pieces of software listed in the previous section are grouped into bundles, and you don’t have to download each package separately (anymore):
- Infrastructure Bundle: contains the UCSM, FI and FEX software/firmware,
- B-Series or C-Series Bundle: contains the BIOS, controller and adapter card firmware,
- An ISO with all C-Series or B-Series drivers.
Note: instead of downloading 2GB of drivers, if you are looking for drivers for a particular Operating System, it may be better to look for Cisco UCS drivers on the OS vendor’s site. For example, if you are looking for the latest Cisco UCS enic and fnic drivers for VMware, you can find them on vmware.com. It’s a 2MB download versus 2GB…
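For reference, the bundle file names make the split easy to spot; they follow a pattern like the one below (the 2.2(3g) version string is just an example):

```
ucs-k9-bundle-infra.2.2.3g.A.bin      # Infrastructure: UCSM, FI, FEX
ucs-k9-bundle-b-series.2.2.3g.B.bin   # B-Series blade firmware
ucs-k9-bundle-c-series.2.2.3g.C.bin   # C-Series rack firmware
```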
Updating the UCS system
In this section, I will not go for a screen-by-screen walkthrough, but will rather explain the key steps and the possible pitfalls you need to be aware of before starting the upgrade.
First, the documentation you should definitely check :
At the time of writing this article, with the current version being 2.2(3e), the recommended upgrade path is top-to-bottom, and it’s generally the way to go. Yet some earlier versions (1.4, if I remember correctly) required bottom-to-top.
It’s really unlikely that this would change back again, but you should definitely check the documentation and the latest release note update to know the currently supported method. Here is the Upgrading Cisco UCS from Release 2.1 to Release 2.2 document.
This doodle illustrates the updated parts and the actual order to follow.
Step 0 is about preparation. You need to upload the firmware packages to the Fabric Interconnect boot flash (the packages are copied to both fabric interconnects).
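The upload can be done from the GUI (under Firmware Management) or from the UCSM CLI. A sketch of the CLI route, with an example SCP source; the server address, user and bundle name are placeholders to adapt to your environment:

```
UCS-A# scope firmware
UCS-A /firmware # download image scp://admin@192.0.2.10/tmp/ucs-k9-bundle-infra.2.2.3g.A.bin
UCS-A /firmware # show download-task    # watch the transfer progress
```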
- Upgrade the UCSM software. It’s supposed to be non-disruptive for the data path, and you should only have to relaunch the UCSM client. My recent experience upgrading from 2.2(3a) to 2.2(3d) was catastrophic: some blades rebooted randomly 4-5 times… Not so “non-disruptive”. I managed to reproduce the same behavior on another system and an SR is currently open. I may update this post later depending on the SR’s outcome.
- Stage the firmware (~10-20min) on all FEXs (“Update Firmware” under Equipment > Firmware Management > Installed Firmware) and set it to be activated on the next reboot (“Activate Firmware”, without forgetting the related “active on next reboot” checkbox). This will save you a reboot, as the FEX will reboot anyway when its Fabric Interconnect is upgraded,
- Upgrade the Fabric Interconnect holding the secondary role, wait for the reboot (~15min), then change the cluster lead so that the newly updated FI becomes primary,
- Upgrade the remaining Fabric Interconnect and wait for the reboot (~15min), then move the cluster lead back to its initial state (there is no automatic fail-back for UCSM),
- Update the blades: the best way is through Maintenance and Firmware policies,
- Make sure your service profile’s Maintenance Policy is set to “User Ack”,
- For ESXi nodes, put them in maintenance mode first from your vSphere Client,
- Ack the reboot request in UCSM once your ESXi nodes are in maintenance mode.
Note: you can edit the default “Host Firmware Package” policy to use the right package version (for blades and racks), even without any service profile created. This way, any UCS server connected to the fabric will automatically be updated to the desired baseline, which effectively prevents running mixed firmware versions due to different shipping/buying batches.
Most upgrade guides stop here, right after updating the hardware. Let me say that this is the golden path to #fail 🙂. The next part is about updating your ESXi drivers to the most current version supported by your UCS firmware release.
Updating VMware ESXi drivers
At the end of the day, what matters is how your Operating System handles your hardware. That is the driver’s job. If it’s obsolete, either it works unoptimized and without the “new features/enhancements” (that’s the best case), or it may lead to some unpredictable behavior…
Bets are high that you installed ESXi on your UCS servers using the Custom ISO available at vmware.com. Bets are even higher that, since the VIC card’s exposed vETHs and vHBAs are recognized out of the box, nobody has bothered to update the drivers since. If so, you are running a 2-3 year old driver…
You can check the current enic (vETH) and fnic (vHBA) driver versions on your ESXi host with the following commands:
# vmkload_mod -s enic
# vmkload_mod -s fnic
If the enic and fnic versions reported match the ones that shipped with your installation ISO, you are still running the ISO’s drivers, and I would highly recommend upgrading.
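If you want to script the check, here is a minimal sketch: feed the version reported by vmkload_mod and the target version from the download page through GNU sort -V, which orders version strings field by field (run it on a management box if your ESXi shell’s busybox lacks -V; the version numbers below are hypothetical placeholders, not actual enic releases):

```shell
#!/bin/sh
# Compare an installed driver version against a target version.
# Both values are hypothetical placeholders.
installed="2.1.2.38"
target="2.1.2.71"

# sort -V orders version strings numerically, field by field
highest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | tail -n 1)

if [ "$installed" = "$target" ]; then
  echo "driver up to date"
elif [ "$highest" = "$target" ]; then
  echo "upgrade needed"
else
  echo "installed driver is newer than target"
fi
```

With the placeholder values above, this prints `upgrade needed`.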
Download your drivers at vmware.com following this navigation path : vmware.com > download > vsphere > Driver & Tools.
- Select the relevant vSphere version (5.X; do not choose the “update 01, 02” links),
- Download Drivers for Cisco enic and fnic.
It’s 1MB per download on vmware.com, compared to the 2GB UCS Drivers ISO on cisco.com, which contains all drivers for all systems…
To apply the update, you have the choice between esxcli in the ESXi shell, or Update Manager.
No rocket science here; just follow the standard VMware KB and pick the option you are comfortable with: http://kb.vmware.com/kb/2005205, or KB1032936 for vSphere 4.x.
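For the esxcli route, the sequence looks roughly like this; the datastore path and bundle file name are examples only, adjust them to the offline bundle you downloaded:

```
# put the host in maintenance mode (or do it from the vSphere Client)
esxcli system maintenanceMode set --enable true

# install the driver offline bundle previously copied to a datastore
esxcli software vib install -d /vmfs/volumes/datastore1/enic_driver_offline-bundle.zip

# reboot the host, then verify the new version
vmkload_mod -s enic
```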
Reminder: you do NOT need a Windows-based vCenter to use Update Manager. You just need a Windows system to install the VMware Update Manager utility; then you can enjoy using the vCenter Appliance.
In addition, this Troubleshooting TechNote goes into all the details on how to check and update UCS drivers for ESXi, Windows Server and Linux (Red Hat & SUSE).