Links collection #2

Here is Links Collection #2.

This one is pretty short and focused on VMFS & VMDK.
The first link's discussion of all the possible alternatives and their trade-offs is really interesting from a design point of view.

Next, some useful links about vCloud Director & vCloud Automation Center.

VMFS & VMDK

vCloud Director

Links collection #1

Here is a collection of some interesting articles & papers I stumbled upon last week.
This list is mainly for my own records, and it lets me avoid keeping a web browser with more than 20 tabs open for over a month. 🙂

Sure, “cloud bookmark” solutions are great. There are plenty of them out there, and I use them.
But they have the following drawbacks:
– it’s quicker for me to write the link down than to struggle with folders and tags,
– I would need my credentials or an agent to access my bookmarks,
– it could take a while to find that link again 2 years later,
– the link may only be needed temporarily or in a particular context, and does not deserve a bookmark.

Finally, I think it’s easier to share with others through a blog (no need to sign up for a profile, etc.).
So, after this long introduction, here is “Links Collection #1”.

EMC VPLEX

  1. RecoverPoint Comes Clean with VPLEX (clearpathsg)
  2. Interesting use cases for VPLEX (vijay swami)
  3. A Deeper Look at VPLEX (Scott Lowe)

vMSC (vSphere Metro Storage Cluster)

  1. vSphere Metro Stretched Cluster with vSphere 5.5 and PDL AutoRemove (longwhiteclouds)

VMFS & VMDK

  1. Support for virtual machine disks larger than 2 TB in vSphere 5.5 (2058287) (vmware)
  2. vSphere 5.5 Jumbo VMDK Deep Dive (longwhiteclouds)

vSphere, vCenter & vCloud Director 5.5

  1. VMware vCenter Server Appliance (VCSA) 5.5 deployment tips and tricks (ivobeerens)
  2. Top 10 things you must read about vSphere 5.5 (vsphere land)
  3. Comparison of vCloud Director Maximums 1.5 / 5.1 / 5.5 (virtualizationexpress)
  4. Comparison of vSphere Maximums 5.1 / 5.5 (vsphere land)
  5. vCenter, vSphere & vCloud Director 5.5 configuration maximums (vmware)
  6. vCloud Director 5.1 Configuration Maximums (2036392) (vmware)
  7. vSphere 5.1 configuration maximums (vmware)
Open Source Clouds

  1. Beyond Chef and Puppet: Ten essential DevOps tools (TechTarget)

vCD, Red Hat, and the network: common pitfalls

This post could have been named “the good, the bad and the ugly”; you are free to choose which label goes with which: vCD, Red Hat, or the network… 😀

The purpose here is to sum up the common pitfalls encountered when setting up a vCloud Director environment. As you will see, most of them are not related to vCloud itself, but rather to Linux.

What do you need for a vCloud cell?

Before starting, let’s do a quick refresher on what you need for a vCloud cell:

  • a VM with the minimum hardware requirements and a supported guest OS (see the installation guides),
  • a minimum of two NICs: one for the web portal, one for VM consoles (VMRC proxy).

According to the installation guides, for all versions prior to 5.5 you will need:

  • 2GB of RAM,
  • ~1GB of disk space to install the vCD binaries,
  • RHEL 5.x or 6.x, depending on the vCD version (no CentOS support).

Starting with vCD 5.5, you will need 4GB of RAM and a little more disk space (1350MB).
Note: version 5.5 officially supports CentOS 6.4.

There is no recommendation or requirement explicitly stated by VMware for the CPU count. However, any decent vCD design should include a dedicated management cluster, and the CPU resource will not be a bottleneck on that cluster, so I tend to set up vCD cells with 2 vCPUs.
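
As a quick illustration, here is a minimal sketch of the checks I would run on a fresh cell VM before installing the binaries (assuming the vCD 5.5 figures above: 4GB of RAM, ~1350MB of free disk space, a supported RHEL/CentOS release, and two NICs; adjust the paths and interface names to your own setup):

# quick pre-install sanity checks on the future vCD cell (sketch)
free -m                   # total memory: expect at least 4096 MB for vCD 5.5
df -h /opt                # free space where the binaries will land (~1350 MB needed)
cat /etc/redhat-release   # supported RHEL/CentOS release?
ip link show              # at least 2 NICs (web portal + VMRC proxy)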

But I am digressing from the original topic of this post. Let’s leave the design advice for another post and focus on configuration errors.

The most common problems are the following:

  1. Hang on reboot of the vCD cell,
  2. Being unable to ping the 2nd NIC,
  3. Being unable to access the vCloud web portal.

1. Hang on reboot of the vCD cell

It’s mainly related to vSphere 5.1 and RHEL/CentOS 6.x. The problem has been widely discussed on the VMware community forums; here are some links:

With this information, you should be able to get rid of this nasty reboot hang…

2. Unable to ping the 2nd NIC

If you chose RHEL 6 as the guest OS for your vCD cell and your NICs are on the same VLAN, you will probably have trouble pinging the second NIC. This is due to RHEL’s default setting for “reverse path filtering”.

Basically, RHEL drops packets when the route for outbound traffic differs from the route of the incoming traffic. This is a new default behavior in RHEL 6.
More details on the subject can be found in the related Red Hat KB article.
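
For illustration, here is a minimal sketch of how that setting can be checked and relaxed with sysctl. Here, eth1 is a hypothetical name for the second NIC, and whether you use loose mode (2) or disable the check entirely (0) should follow the Red Hat KB recommendation for your context:

# check the current reverse path filtering mode (1 = strict, the RHEL 6 default)
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.eth1.rp_filter

# switch to "loose" mode so replies leaving through the other NIC are not dropped
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.eth1.rp_filter=2

# make the change persistent across reboots
echo "net.ipv4.conf.all.rp_filter = 2" >> /etc/sysctl.conf
echo "net.ipv4.conf.eth1.rp_filter = 2" >> /etc/sysctl.conf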

3. Unable to access the vCloud web portal

Even if you have dodged all these traps, you could still be frustrated when trying to reach the vCloud web portal for the first configuration. This is mainly because of RHEL’s default security settings: the iptables firewall is running.

Depending on your security context, you could simply disable the firewall or, as a better approach, configure it…
By default the RHEL firewall blocks inbound connections on port 443, so you have no chance to reach the first setup wizard.

Quick spoiler: insert the following rule (it must land above the default catch-all REJECT rule, hence -I rather than -A), then save and restart your firewall:

iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
service iptables save
service iptables restart
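
For reference, the equivalent persistent entry in /etc/sysconfig/iptables looks roughly like this. This is only a sketch based on the default RHEL 6 rule layout; the key point is that the ACCEPT line for port 443 sits above the final REJECT rule:

# /etc/sysconfig/iptables (excerpt, sketch of the default RHEL 6 layout)
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited

After editing the file, a simple "service iptables restart" reloads the rules in that order.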

Obviously you will need to open other ports to get a fully functional vCD, but here we are focusing on what you need to reach the main portal.

Kendrick Coleman wrote an exhaustive article about how to set up the RHEL firewall for the vCD cell. I strongly encourage you to read it if you decide not to turn off iptables completely.

And finally, the Yoda quote:
“To be an iptables Jedi, only one way there is, Luke: RTFM of netfilter/iptables.”

More links to help you configure your vCD cell right on the first try: