vCSA Root Password expiry

So this has been such a common problem encountered by my clients that I decided to write an article on it for SearchVMware.

It’s pretty simple to fix, and I’ve included screenshots in the article along with step-by-step instructions on how to bypass the root lockout.

http://searchvmware.techtarget.com/tip/Troubleshooting-vCSA-root-password-failure

Enjoy….


vCenter Server Appliance & WinSCP

The other day I had to pull off the SSL certs for the vCSA and I was struggling to connect to the appliance even after enabling SSH and Bash shell access from within the VAMI.

Turns out a bit more configuration is required before you can connect to the vCSA via SCP and this is mainly due to the vCSA having 2 shells – Appliance shell and Bash shell.

What you need to do is change the default shell in the vCSA to Bash… have a look at the following KB for the solution steps: http://kb.vmware.com/kb/2107727
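For reference, the gist of that KB (at the time of writing) is to enable the Bash shell and make it root's default login shell, so that SCP/WinSCP sessions land in Bash rather than the Appliance shell. From the Appliance shell it's roughly the following – but do double-check the KB, as the syntax can change between versions:

    shell.set --enabled true
    shell
    chsh -s /bin/bash root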

BTW, in case you didn’t know where the SSL cert for the vCSA resides, you’ll find it here:
/etc/vmware-vpx/ssl/rui.crt
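Once SCP access is working, pulling the cert off the appliance is then a one-liner from your workstation – the hostname below is just a placeholder:

    scp root@vcsa.example.local:/etc/vmware-vpx/ssl/rui.crt .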

Rumors Are Afoot: Is the Dell/EMC Merger Starting to Unravel?

It has been an interesting time to be living in the IT world. The ripples and ructions caused by the Dell/EMC merger have quashed almost every other conversation. This is the biggest take-private merger transaction for a tech company ever, dwarfed only by the $106 billion Time Warner/AOL deal in 2006. When the merger was originally announced on October …


VMware Advocacy

Deploying VSAN 6.1 ROBO

One of the things I’m fortunate to have access to at MTI Technology is the Solution Centre, which has all sorts of kit that can be used for demos and for consultants to play around with.

After coming back from VMworld, one of the things I really wanted to test out was how easy it would be to deploy VSAN 6.1 in a ROBO solution. Fortunately I had a pair of old Dell R810s lying around and managed to cobble together enough disks and a pair of SSDs in order to create two VSAN nodes!

VSAN ROBO allows you to deploy a 2-node VSAN cluster (rather than the standard 3 nodes) with a Witness Server located on another site – usually this would be your primary data centre (as per the diagram below). It also allows several ROBO deployments to be managed from a single vCenter Server. VSAN ROBO uses the same concepts as a VSAN Stretched Cluster, using Fault Domains to determine how data is distributed across the VSAN nodes. The Witness Server is designed with the sole purpose of providing cluster quorum services during failure events and storing witness objects and cluster metadata; in doing so it eliminates the requirement for a 3rd physical VSAN node.

vsan-robo-wit

Note: Whenever you deploy any VMware product into a production environment, make sure that you check the Hardware Compatibility List!
In my case, neither the server nor the storage controller in the R810 was supported for VSAN – but as it was only a demo environment it wasn’t a top priority.

Before I go through how I configured VSAN ROBO, there are a few things I need to state upfront which I don’t recommend doing in a production environment:

  1. Using the same subnet for the VSAN network – in my demo environment I only have 1 subnet, so I’ve had to stick everything on the same VLAN. Ideally you should separate the VSAN traffic from the Mgmt and VM traffic.
  2. Using an SSD from a desktop PC for the cache drive – ideally this should be an enterprise-grade SSD, as VSAN uses the SSD for caching and you really need one with a higher endurance rating.

Also, there are a few limitations in the ROBO solution compared to standard VSAN:

  • No SMP-FT support
  • Max value for NumberOfFailuresToTolerate is 1
  • Limit of 3 Fault Domains (2 physical nodes and the witness server)
  • No All-Flash VSAN
  • VSAN ROBO licensing is purchased in packs of 25 VMs, with 1 license per site. This means a maximum of 25 VMs can be licensed per site! However, 1 pack can be used across multiple ROBO sites (e.g. 25 VMs spread across 5 sites).

Configuring a VSAN cluster for ROBO is extremely simple, as it is performed through a wizard within the vSphere Web Client. From a network perspective, the two VSAN cluster nodes need to be configured over a single layer 2 network with multicast enabled. There are a few requirements for the Witness Server:

  • 1.5 Mbps connectivity between the nodes and the witness
  • Up to 500 milliseconds latency (RTT)
  • Layer 3 network connectivity to the nodes in the cluster (multicast is not required)


So for my demo environment, I have 2x R810s with 1x Intel Xeon X6550 and 32GB RAM. For my SSD I found an old 240GB Micron M500 SSD (MLC NAND flash) and stuck it into a Dell HD caddy; for my HDDs I have 5x 146GB SAS drives. The Witness server resides within my main VMware environment (which runs on UCS blades and a VNX5200).

I won’t go into how I installed vSphere ESXi 6.0 U1… however, just remember that you’ll need to install ESXi onto an SD card or USB drive, as you want to use all the local drives for VSAN (in my case I installed ESXi onto an 8GB USB drive).

I created a new VMware cluster within my vCenter and added the 2 VSAN nodes. I then deployed the Witness Server, which in my case is a nested ESXi host packaged as a virtual appliance. There are actually 3 sizes for the Witness Appliance – Tiny, Medium and Large. I deployed a Medium appliance. vsan1a vsan2 vsan3

I won’t step through how to deploy the OVA as it’s pretty routine stuff. If you load up the console for the Witness server, you’ll be greeted with the familiar DCUI of vSphere ESXi.
vsan4

Once it’s deployed and configured with the relevant IP address and hostname, you can add the Witness server into your vCenter Server as just another ESXi host.

vsan5 vsan6

One thing that’s slightly different is that the Witness Server comes with its own vSphere license and so doesn’t consume one of your own licenses. Note that the license key is censored so you can’t use it elsewhere!
vsan7

Once the Witness Server has been added to the vCenter Server you may find that there is a warning on the host which says “No datastores have been configured”
vsan8

This occurs because the nested ESXi host does not have any VMFS datastores configured. The warning can be ignored, but if you’re like me and hate exclamation mark warnings in your environment, you can easily get rid of it by adding a small 2GB disk to the witness appliance VM (editing the hardware settings) and then creating a datastore on top of the new disk.
vsan9

You’ll notice that the icon for the witness appliance within the vCenter Server inventory is slightly different from your physical hosts – it’s shaded light blue to differentiate it from standard ESXi hosts.
vsan10

The next step is to configure the VSAN network on the witness server. There is already a pre-defined port group called witnessPg. Do not remove this port group as it has special modifications to make the MAC addresses on the network adapters match the nested ESXi MAC addresses!
There should be a VMkernel port already configured in the port group; edit the port and tag it for VSAN traffic.
vsan11 vsan12

At this point, ensure that your witness server can talk to the VSAN nodes.

Note: Typically an ESXi host has a default TCP/IP stack and as a result only has a single default gateway – more often than not, this default route is associated with the management network. In a normal deployment scenario, the VLAN for the management network would be isolated from the VSAN network, so there is no path between the two networks and no default gateway on the VSAN network. A way around this is to use static routes to define which path should be used for traffic between the witness server and the VSAN nodes. I won’t go into configuring static routes in detail; you can find more information in the VSAN 6.1 Stretched Cluster Guide. A rough sketch is shown below.
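If your witness does sit on a different subnet, adding a static route and then testing connectivity from one of the VSAN nodes looks roughly like this – the subnet, gateway and VMkernel interface below are purely illustrative, so substitute your own:

    esxcli network ip route ipv4 add --network 192.168.150.0/24 --gateway 172.16.10.1
    vmkping -I vmk1 192.168.150.10

Here 192.168.150.0/24 stands in for the witness’s VSAN subnet, 172.16.10.1 for the gateway on the node’s VSAN network, and vmk1 for the VMkernel port tagged for VSAN traffic.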

Once your witness server is talking to the VSAN nodes, it’s time to configure the VSAN ROBO solution. This is as simple as creating fault domains.

I won’t go into how to turn on the VSAN cluster and disk management as this is simple stuff and has been covered in numerous other VSAN blogs/guides. One thing I will mention is that because I have 2 very old servers, I had to configure each individual disk as a RAID-0 set, as the RAID controller in the server did not support pass-through. Once the disks were configured and detected by the ESXi host as storage devices, I then had to manually set the SSD device as a Flash Disk:
vsan13
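If you prefer the command line over the Web Client for this, the usual approach on ESXi 6.0 is to add a SATP claim rule with the enable_ssd option and then reclaim the device – a minimal sketch below, where the naa identifier is a placeholder for your own SSD (you can list device IDs with esxcli storage core device list):

    esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option=enable_ssd
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx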

I also ended up manually claiming the disks for VSAN.

vsan14
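For reference, the disks can also be claimed per host from the shell rather than through the Web Client – something along these lines, with the placeholders swapped for your actual SSD and capacity device IDs:

    esxcli vsan storage add -s naa.<ssd_id> -d naa.<hdd_id>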

Once the 2 nodes have been configured for VSAN, next comes the creation of the Fault Domains. As previously mentioned, VSAN ROBO works by creating 2 Fault Domains and a witness server – just like you would for a VSAN stretched cluster. However, in this case only 1 server is assigned to each fault domain.

vsan15 vsan16 vsan17 vsan18 vsan19

Note: You’ve probably noticed that the wizard still states “VSAN Stretched Cluster” on all the screens; unfortunately VMware didn’t write separate code for VSAN ROBO, so it’s still classed as a stretched cluster.

Once VSAN ROBO has been deployed, you can check the health of VSAN by selecting the cluster and going to Monitor -> Health.
vsan20 vsan20a
The first warning is regarding the VSAN HCL, and shows that my server and its RAID controllers are not listed in VMware’s VSAN HCL. =)
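As a quick sanity check alongside the health screens, you can also confirm cluster membership from any of the hosts with esxcli – it should report the node’s state (master/backup/agent) and list the member UUIDs:

    esxcli vsan cluster get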

Next, license the VSAN ROBO cluster; note which features get switched off when licensing for VSAN ROBO.
vsan21 vsan22

There is already a default VSAN storage policy; creating a VM and assigning this policy gives a Failures To Tolerate of 1. Viewing the Physical Disk Placement, you can see that data is mirrored on the 2 VSAN nodes with metadata stored on the Witness Server.
vsan23
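If you’re curious what that default policy actually contains, each host can print it from the shell – for example:

    esxcli vsan policy getdefault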

Something I found very useful was the “Proactive Tests” option for VSAN, which provides the ability to perform a real-time test of cluster functionality and dependencies – creating a small VM, checking network multicast between hosts, and testing storage I/O.

vsan24


Voila…. a basic VSAN ROBO deployment…..

Don’t forget to download the Storage Management Pack for vROps so you can get an in-depth view of your VSAN deployment from within vROps:
https://solutionexchange.vmware.com/store/products/vrealize-operations-management-pack-for-storage-devices

Taking a Zero Trust Approach to Security

See how IT leaders can leverage micro-segmentation for zero trust security #VMware #NSX

Taking a Zero Trust Approach to Security

Traditional security approaches that follow the “trust but verify” concept are insufficient against today’s persistent and evolving cyberthreats. A new approach, the Zero Trust model of information security, eliminates the assumption that there are “trusted” and “untrusted” networks. With the Zero Trust model, we flip “trust but verify” into “verify and never trust” and take a data-centric approach to security.


VMware Advocacy

VMware Workstation from 1999 to 2015


About 16 years ago, long before ESXi and vSphere, VMware published their first product: VMware 1.0. VMware was the first application that allowed users to run multiple operating systems on a single x86 machine. The VMware Virtual Platform technology adds a thin software layer that allows multiple guest operating systems to run concurrently on a single standard PC.


VMware Advocacy

Unable to connect to VAMI after upgrading the vCSA

One of the plus points of upgrading your vCenter Server Appliance to 6.0 Update 1 is the fact that VMware have re-introduced the Virtual Appliance Management Interface (VAMI). This was one of my bug-bears with 6.0 – any sort of administration/configuration work required you to access the vCSA shell!

Recently, after upgrading a customer’s vCSA from 6.0 to 6.0 Update 1, we couldn’t access the VAMI to change the network and password policy settings. We rebooted the vCSA several times but the VAMI was still inaccessible; within Chrome we were getting the following error:

vami

I couldn’t work out why the VAMI service wasn’t coming online… After several minutes of searching on Google, I came across the following VMware KB:
http://kb.vmware.com/kb/2132965

It turns out that there is a known bug with the VAMI web-service if you disable IPv6 within the vCSA console (which is what I had done as there was no requirement from the customer to use IPv6).

There is currently no permanent fix for this bug; to work around the issue you have to edit the lighttpd configuration file.
(lighttpd is a lightweight, open-source web server)

To work around this issue, set the server.use-ipv6 parameter to disable in /etc/applmgmt/appliance/lighttpd.conf:
  1. Connect to the vCenter Appliance or Platform Service Controller Appliance through SSH or console.
  2. Run this command to enable access to the Bash shell:
    shell.set --enabled true
  3. Type shell and press Enter.
  4. Open the lighttpd.conf file using a text editor:
    vi /etc/applmgmt/appliance/lighttpd.conf
    vami1
  5. Search for the entry server.use-ipv6="enable"
  6. Change enable to disable:
    server.use-ipv6="disable"
    vami2
  7. Start the VAMI service by running this command:
    service vami-lighttp start
  8. You should now be able to access the VAMI from a browser (https://vCSA_IP_address:5480 or https://vCSA_FQDN:5480).
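If you’d rather not edit the file by hand, the same change can be scripted – a minimal sketch assuming the file path from the KB above (take a copy of the file first, just in case):

    cp /etc/applmgmt/appliance/lighttpd.conf /etc/applmgmt/appliance/lighttpd.conf.bak
    sed -i 's/server.use-ipv6="enable"/server.use-ipv6="disable"/' /etc/applmgmt/appliance/lighttpd.conf
    service vami-lighttp start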