vSphere 6.5 Product Interoperability – brain fade moment!

So it’s probably worth reminding everyone that there are still VMware products that are not yet supported on vSphere 6.5!

I unfortunately found out the hard way when I broke my work’s demo environment (or at least half of it).

Now, even though I’ve blogged about compatibility issues previously, eating too many mince pies and drinking too much Buck’s Fizz over the Christmas and New Year festivities has obviously taken its toll on my grey matter, and coming back to work in the new year I decided it would be a nice idea to upgrade part of my work’s demo environment to vSphere 6.5 so that we could use it to demo to customers!

The problem was that I upgraded the part of the lab running NSX, and when I got to the point of pushing the NSX VIBs onto the ESXi hosts (when preparing the hosts to join the NSX cluster), it was having none of it and kept failing! After several unsuccessful attempts, it slowly dawned on me that NSX was one of those ‘unsupported’ products that don’t yet work with vSphere 6.5…

Damn…..

Fortunately, I hadn’t destroyed my old vCenter Server 6.0 U2 appliance, so I was able to roll back by re-installing the ESXi hosts with 6.0.
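As an aside, if host preparation keeps failing and you want to see whether the NSX VIBs actually made it onto a host, a quick check from the ESXi shell is to list the installed VIBs – a rough sketch below (the esx-vsip and esx-vxlan VIB names are what I’d expect from an NSX 6.2-era release, so treat them as an assumption):

 # List installed VIBs and filter for the NSX kernel modules
 esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'

 # The VIB install log on the host is also worth a look when host prep fails
 tail -n 50 /var/log/esxupdate.log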

 

Anyways, the products still not supported are:

  • VMware NSX
  • VMware Integrated OpenStack
  • vCloud Director for Service Providers
  • vRealize Infrastructure Navigator
  • Horizon Air Hybrid-Mode
  • vCloud Networking and Security
  • vRealize Hyperic
  • vRealize Network Insight

 

Definitely worth keeping an eye on this VMware KB: Important information before upgrading to vSphere 6.5 (2147548)

And if you do end up upgrading to vSphere 6.5, then make sure you follow the recommended upgrade sequence in this VMware KB: Update sequence for vSphere 6.5 and its compatible VMware products (2147289)

vCenter Server Appliance – filesystem out of space

So it’s all happening this week with this upgrade/clean up of the MTI solution centre!! =)

Upon finishing all the upgrades and reconfiguring vSphere Replication and Site Recovery Manager, I noticed the DR vCSA was a bit unresponsive… it was taking ages to log into the Web Client (sometimes it didn’t even get that far). Signing into the VAMI, I noticed there was a critical error regarding the log filesystem.

vcsa01

If you weren’t aware, one of the changes to the vCSA with 6.0 was that the appliance is deployed with 11 VMDKs, one for each component service of vCenter. In previous versions there were only two virtual disks, which proved problematic when trying to increase disk capacity for a particular component of vCenter Server (i.e. if you only wanted to increase the log directory).

As the vCSA was running in a demo environment, I had decided to only do a ‘Tiny’ install – and it turns out the vCSA had simply run out of capacity for logging. A quick jump onto the console proved this to be true:

vcsa02
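For reference, the check is nothing more exotic than df from the appliance’s Bash shell – on a vCSA 6.0 appliance, /storage/log is the mount point backed by VMDK5:

 # Show usage for all mounted filesystems
 df -h

 # Or check the log filesystem directly – this was sitting at 100% for me
 df -h /storage/log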

The following VMware KB details the 11 VMDKs and which mount points are attached to each vdisk: https://kb.vmware.com/kb/2126276.

vcsa04

I followed the instructions to increase the capacity of the log vdisk (VMDK5) and then gave the vCSA a reboot…..
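In case that KB ever moves, the gist of the fix (for a vCSA 6.0 appliance – treat the exact command as an assumption and verify it against the KB) is to grow the VMDK first and then let the appliance expand the LVM volume into the new space:

 # 1. In the vSphere Web Client, increase the size of VMDK5 on the vCSA VM
 # 2. From the vCSA Bash shell, expand the logical volumes into the newly added space
 vpxd_servicecfg storage lvm autogrow

 # 3. Confirm the extra capacity is visible
 df -h /storage/log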

vcsa03

The vCSA is now healthy and back to normal. =)

As a footnote, here’s a VMware KB that explains how to increase the maximum backup size and index on the vCSA to try and resolve the issue of the log directory filling up: https://kb.vmware.com/kb/2143565

Cannot connect to vCenter Server via vSphere Client – timeout

I’ve been upgrading my company’s solution centre to vSphere/vCenter 6.0 update 2 the past week and noticed that I was having issues logging into the vCenter Server Appliances I had deployed.

It was a strange issue because I could log into the Windows vCenter Server I had deployed in my primary cluster, but couldn’t log into the vCenter Server Appliance I had deployed in my secondary cluster… hmmm… The Web Client worked fine for both, but the vSphere C# Client kept timing out against the vCSA!

vc01.jpg

After much head scratching and trawling through logs (found at C:\Users\username\AppData\Local\VMware\vpx\viclient-x-0000.log), it turns out the problem was the vSphere Client’s default timeout value for authentication.

The default timeout value is 30 seconds, and my suspicion is that the vCSA was taking slightly longer than that to respond to authentication requests… I changed the value to 60 seconds and it all worked fine!

Fire up the vSphere Client and connect to another vCenter Server or ESXi host, then click Edit > Client Settings. Change the Client-Server Command Timeout setting to ‘Use a custom value’ and set the Timeout in seconds to 60.

vc02

Here’s the VMware KB article about timeout values: https://kb.vmware.com/kb/2072539 – it also includes instructions on how to edit the Windows registry if you can’t bring up the vSphere Client at all.

Just for the sake of it, here’s the error log:

[viclient:Error :P: 3] 2016-09-06 10:12:35.520 RMI Error Vmomi.SessionManager.Login - 4
<Error type="VirtualInfrastructure.Exceptions.RequestTimedOut">
 <Message>The request failed because the remote server 'xxxxx' took too long to respond. (The command has timed out as the remote server is taking too long to respond.)</Message>
 <InnerException type="System.Net.WebException">
 <Message>The command has timed out as the remote server is taking too long to respond.</Message>
 <Status>Timeout</Status>
 </InnerException>
 <Title>Connection Error</Title>
 <InvocationInfo type="VirtualInfrastructure.MethodInvocationInfoImpl">
 <StackTrace type="System.Diagnostics.StackTrace">
 <FrameCount>17</FrameCount>
 </StackTrace>
 <MethodName>Vmomi.SessionManager.Login</MethodName>
 <Target type="ManagedObject">SessionManager:SessionManager [xxxxx]</Target>
 <Args>
 <item></item>
 <item></item>
 <item></item>
 </Args>
 </InvocationInfo>
 <WebExceptionStatus>Timeout</WebExceptionStatus>
 <SocketError>Success</SocketError>
</Error>
[viclient:Critical:M: 6] 2016-09-06 10:12:35.531 Connection State[xxxxx]: Disconnected
[viclient:SoapMsg :M: 6] 2016-09-06 10:12:35.532 Attempting graceful shutdown of service ...
[viclient:SoapMsg :M: 6] 2016-09-06 10:12:35.534 Pending Invocation Count: 0
[viclient:SoapMsg :M: 6] 2016-09-06 10:12:35.535 Graceful shutdown of service: Success
[ :Error :M: 6] 2016-09-06 10:12:35.543 Error occured during login
VirtualInfrastructure.Exceptions.LoginError: The server 'xxxxx' took too long to respond. (The command has timed out as the remote server is taking too long to respond.)
 at VirtualInfrastructure.LoginMain.Process(BackgroundWorker worker, DoWorkEventArgs e)
 at VirtualInfrastructure.LoginWorkerImpl.Worker_DoWork(Object sender, DoWorkEventArgs e)
...
 at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
 VirtualInfrastructure.Exceptions.RequestTimedOut: The request failed because the remote server 'xxxxx' took too long to respond. (The command has timed out as the remote server is taking too long to respond.)
 at VirtualInfrastructure.Soap.SoapServiceWrapper.DoInvokeSync(ManagedObject mo, MethodName methodName, Object[] parameters, Int32 timeoutSecs)
 at VirtualInfrastructure.Soap.SoapTransport.VirtualInfrastructure.Transport.InvokeMethod(ManagedObject mo, MethodName methodName, Object[] pars)
 at VirtualInfrastructure.ManagedObject.InvokeMethod(MethodName methodName, Object[] pars)
 at Vmomi.SessionManager.Login(String userName, String password, String locale)
 at VmomiSupport.VcServiceImpl.LoginNormally(LoginSpec loginSpec)
 at VmomiSupport.VcServiceImpl.Login(LoginSpec loginSpec)
 at VirtualInfrastructure.LoginMain.Process(BackgroundWorker worker, DoWorkEventArgs e)
 System.Net.WebException: The command has timed out as the remote server is taking too long to respond.

 --- End of inner exception stack trace ---

Modifying VMware Site Recovery Manager – Windows 2012 UAC error

I first came across this issue when helping a customer uninstall Site Recovery Manager last year and wanted to blog about it, but because I was pretty busy it totally slipped my mind… until today!! I’ve been cleaning up the Solution Centre at MTI and tried to uninstall SRM for a new build… and came across the same Windows User Account Control error. =)

srm02

Turns out that in Windows Server 2012, even when you go into User Accounts and turn UAC off, it doesn’t actually disable it.

srm01

There’s a Microsoft TechNet article which explains how to edit the Windows Registry in order to deactivate UAC:

  1. Go to Start > Run, type regedit and click OK. The Registry Editor window opens.
  2. Navigate to HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows > CurrentVersion > policies > system.
  3. Right-click EnableLUA and select Modify.
  4. In the Edit DWORD window, change the Value data from 1 to 0.
  5. Restart the Windows machine and re-run the SRM uninstall program.
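If you’d rather not click through regedit, the same change boils down to a one-liner from an elevated command prompt (a sketch of the equivalent reg.exe command – same key and value as step 4 above):

 rem Set EnableLUA to 0 to disable UAC (a reboot is still required afterwards)
 reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f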

srm03

Installing vShield Endpoint (vCNS Mgr 5.5.4-3)

Very quick blog entry as I’m busy tying up loose ends before jetting off on my summer hols….

It’s pretty easy to install vShield Endpoint as it’s a wizard-based OVA deployment. I’m not going to step through the process as it’s very simple (plus the install guide explains it very well). Once that’s done, log into the console and run ‘setup’ to configure the IP address and DNS information.

After that, it’s a case of logging into vShield Manager and connecting to vCenter Server.

Once connected to the vCenter, you should see your datacenter and hosts in a hierarchical tree on the left menu. Select each host and install vShield Endpoint.

vShield Installation guide: http://www.vmware.com/pdf/vshield_55_install.pdf

However, I did encounter a few issues (due to prior deployments which hadn’t been cleaned up properly).

Error 1: VMkernel Portgroup present on incorrect vSwitch
vcns1
This occurred because the hosts had a previous vSwitch labelled vmservice-vswitch, but the VMkernel port vmservice-vmknic-pg resided on a different vSwitch (previous deployment). To correct this I had to delete the old VMkernel port and recreate it on the correct vmservice-vswitch.

Error 2: VirtualMachine Portgroup present on incorrect vSwitch

vcns2
Again, this was due to a misconfiguration from a previous deployment! What should happen is that once you’ve set up the vmservice-vswitch and created the vmservice-vmknic-pg portgroup and VMkernel port, the installer creates a new portgroup on that vSwitch called vmservice-vshield-pg. Like before, this was residing on the wrong vSwitch.

In the end I just deleted the wrong vSwitch and started again by creating the vmservice-vswitch and the vmservice-vmknic-pg. After that the installation of vShield Endpoint went swimmingly!
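If you’d rather fix this from the ESXi command line than the vSphere Client, recreating the service vSwitch and VMkernel port looks roughly like this (a sketch – the 169.254.1.1 address is the default I’d expect vShield Endpoint to use for the host vmknic, so verify it against the install guide, and vmk1 is just an example interface name):

 # Recreate the service vSwitch and the VMkernel portgroup on it
 esxcli network vswitch standard add --vswitch-name=vmservice-vswitch
 esxcli network vswitch standard portgroup add --portgroup-name=vmservice-vmknic-pg --vswitch-name=vmservice-vswitch

 # Recreate the VMkernel port on the correct portgroup and give it the service address
 esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vmservice-vmknic-pg
 esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=169.254.1.1 --netmask=255.255.255.0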

vcns3

Which goes to show that cleaning up an old deployment within your demo environment can sometimes be very handy! =)

 

Known bug with upgrading vCSA via VAMI

So there’s a known bug where upgrading vCSA via the VAMI freezes at 70%…. I was doing a mass upgrade of all my vCSAs in the demo environment at work, and all of them got stuck at 70%.

vcsa

After reading the Release Notes for 6.0U1b, it turns out it’s a known issue: http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-vcenter-server-60u1b-release-notes.html

New In the vCenter Server Appliance Management Interface, the vCenter Server Appliance update status might be stuck at 70%
In the vCenter Server Appliance Management Interface, the vCenter Server Appliance update status might be stuck at 70%, although the update is successful in the back end. You can check the update status in the /var/log/vmware/applmgmt/software-packages.log file. After a successful update, a message similar to the following is seen in the log file:
Packages upgraded successfully, Reboot is required to complete the installation

Workaround: None.

Anyways, after checking software-packages.log I could see the ‘Packages upgraded successfully’ entry, so I just rebooted the vCSA. All up and working again!
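For reference, checking the log and finishing things off manually is just a couple of commands from the appliance’s Bash shell:

 # Confirm the update actually completed in the back end
 grep -i "Packages upgraded successfully" /var/log/vmware/applmgmt/software-packages.log

 # If the success message is there, a reboot completes the installation
 reboot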

vcsa2

If you want steps on how to upgrade your vCSA, then have a look at my previous blog entry: Upgrading vCenter Server Appliance to 6.0 update 1

vCenter Server Appliance & WinSCP

The other day I had to pull the SSL certs off the vCSA, and I was struggling to connect to the appliance even after enabling SSH and Bash shell access from within the VAMI.

Turns out a bit more configuration is required before you can connect to the vCSA via SCP, and this is mainly due to the vCSA having two shells – the Appliance shell and the Bash shell.

What you need to do is change the default shell in the vCSA to Bash… have a look at the following KB for the solution steps: http://kb.vmware.com/kb/2107727

BTW, in case you didn’t know where the SSL cert for the vCSA resides, you’ll find it here:
/etc/vmware-vpx/ssl/rui.crt
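The short version of the KB (for a vCSA 6.0 appliance – a sketch, so check the KB for your exact build) is to make Bash the default login shell for root, after which WinSCP/SCP connections work. For example, pulling the cert off with scp (vcsa.lab.local is a placeholder hostname):

 # From the vCSA appliance shell (after enabling SSH/Bash access in the VAMI)
 shell.set --enabled true
 shell

 # Make Bash the default login shell for root so SCP/WinSCP can connect
 chsh -s /bin/bash root

 # Then, from your workstation, grab the certificate
 scp root@vcsa.lab.local:/etc/vmware-vpx/ssl/rui.crt .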