HCIBench 1.6.2 – Testing vSAN performance

Over the past month or so I’ve been running a number of performance tests on VxRail and vSAN solutions.

HCIBench is a brilliant tool to help end-users understand the type of performance that they can achieve with their vSAN solution.

It’s essentially an automation wrapper around the popular Vdbench tool. Vdbench is a utility specifically created to help engineers and customers generate disk I/O workloads for validating storage performance and storage data integrity. Vdbench is a complex beast to run, with lots of different variables that can be configured via the CLI… so the HCIBench wrapper helps simplify workload profiles and makes it so much easier to run benchmark tests!!

Please note, HCIBench is a VMware Labs Fling and so there’s limited support available and it shouldn’t be used in production environments (although the latter is just to cover themselves). If I’m honest, the creators of HCIBench are pretty good at replying to comments and feedback!

https://labs.vmware.com/flings/hcibench

It’s definitely worth remembering that, as a benchmark tool, it can’t quite simulate real-world workloads! However, if you understand how your workload behaves (ie block size, read/write ratio, etc) then you can get pretty close to creating a workload profile that matches it (albeit running the test at a constant maximum rate rather than the bursty rate we see in real life).
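To give you a flavour of what HCIBench is doing under the covers, below is a rough sketch of a Vdbench-style parameter file for a 4k, 70% read, fully random workload. It’s purely illustrative – HCIBench generates the real parameter files for you from the values you enter in its web UI, and the device name, run time and thread count here are made-up examples:

sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=3600,interval=30,threads=4

The xfersize and rdpct values are where you’d mirror your own application’s block size and read/write ratio.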

 

HCIBench was updated two days ago in response to the recent release of vSphere 6.5 U1, and in my opinion is even cooler now that it can utilise the new Performance Diagnostics feature of vSAN 6.6.1 (API integration with the new Performance Diagnostics part of vSAN Cloud Analytics).

You can now run an HCIBench test and view detailed results of the test in Performance Diagnostics with supporting graphs – you’re able to select a goal for the test based on “Max IOPS”, “Max Throughput” or “Min Latency”, and then get details on potential issues found in the analysed data which you can then use to improve the workload profile you’re using in HCIBench.

Point your browser here for more info:
https://blogs.vmware.com/virtualblocks/2017/07/31/what-to-expect-from-hcibench-1-6-2/

Note: You need to have the Customer Experience Improvement Program (CEIP) and the vSAN Performance Service turned on to get this feature enabled.

More on vSAN Encryption

So not long after my article was published on SearchVMware, the guys at Virtual Blocks (VMware’s own storage blog) released two articles which go into vSAN encryption in a bit more detail.

https://blogs.vmware.com/virtualblocks/2017/06/24/vsan-encryption-1/
https://blogs.vmware.com/virtualblocks/2017/06/24/vsan-encryption-2/

It’s definitely worth noting that hardware encryption does carry an overhead whenever you need to rekey (eg when every drive needs to be rekeyed); because vSAN encryption sits within the hypervisor, this overhead is significantly reduced.

The first article simply goes over what vSAN encryption is all about; the second dives into more detail on how it’s set up, the trust model of the KMS, and how the disk format is changed when vSAN encryption is enabled. I found the second article very informative for understanding how vSAN encryption works.

There’s also a new KB that briefly goes over the difference between vSAN encryption and VM encryption: Understanding vSAN Datastore Encryption vs. VMcrypt Encryption

Enjoy…. =)

VMware vSAN 6.6 launched – so What’s New?

Earlier this year it was announced that vSAN had grown to over 7,000 customers since launch, which is a pretty decent number given the product went GA just over three years ago and we’re on the sixth iteration! What’s even more impressive is how quickly VMware are turning these updates around (almost every six months we get an update of sorts): we only got vSAN 6.5 at VMworld last year, and six months later we now have version 6.6. What’s funny is half my customers haven’t even started implementing their 6.5 upgrade plan yet, and now they will have to re-write that plan…. Lol… =)

In fact I see the number of customers growing quite significantly this year given the huge drive towards HCI – something that I’m seeing within my company’s customer-base (and in the market in general)!

Today sees vSAN 6.6 go GA, and it amazes me how many new features VMware have packed into this release – features that make vSAN faster, more cost-effective and much more secure! And to think that this is just a “minor” patch release! With vSAN 6.6, customers can now evolve their data centre without risk, control IT costs and scale to tomorrow’s business needs (sorry, that was a marketing blurb that I just had to fit in somewhere as it sounded good).

vSAN features

(Note: I know that slide says “Not for distribution”. However, the vSAN vExperts have been given permission to use the material in their blogs)

The biggest features in my opinion are vSAN Data-at-Rest Encryption, Unicast communication and Enhanced Stretched Clustering with Local Protection – these are the three features I’m going to concentrate on within this post; trying to expound on all the new features would involve me writing a lengthy technical whitepaper! =)

That said, other new features are as follows:

  • ESXi Host Client (HTML-5) – management and monitoring functionality available on each host in the case where vCenter server is offline.
  • Simpler installation/configuration – The ability to create a single node vSAN datastore by using the vCSA installer and then allowing you to deploy vCSA/PSC onto that vSAN datastore.
  • Enhanced rebalancing – allowing large components to be split up during redistribution.
  • Site Affinity in Stretched Clusters – a new Affinity policy rule allows users to specify where a VM gets deployed, although this is only applicable when the PFTT is set to 0. It’s worth noting that DRS/HA rules should be aligned to data locality!
  • Always-On Protection – Enhanced repairs with Re-sync traffic throttling – allowing vSAN to respond to failed disks/nodes more quickly, intelligently and more efficiently. New Degraded Device Handling (DDH) intelligently monitors the health of drives and proactively evacuates data before failures can happen.
  • Maintenance Pre-Check – enhanced checks to ensure there are enough resources for vSAN when entering maintenance mode (or decommissioning vSAN nodes).
  • Stretched Cluster Witness Replacement UI – simpler method of changing the Witness host without having to disable the Stretched Cluster.
  • vSAN Cloud Analytics – pro-active, real-time support notifications and recommendations with real-time custom alerts through the vSAN health Service.
  • API enhancements – vSAN SDK updated to handle all new features, with additional enhanced PowerCLI support.
  • vSAN Config Assist / Firmware Update – Enhanced health monitoring and HCL checks using health-check assistant to ensure the vSAN hardware has the latest firmware and drivers installed.
  • Enhanced Performance and Health Monitoring – up to 50% higher all-flash IOPS performance per host.
  • New Hardware Support – support for Intel’s new Optane technology, NVMe SSDs and larger 1.6TB SSDs for cache drives.
  • Support for Photon Platform 1.1 as well as a Docker Volume Driver – great for customers (ie DevOps teams) who prefer working with micro-services/containers. This allows customers to use vSAN as storage for Docker VMs, giving them the ability to apply storage-based policies (such as FTT, QoS, access permissions, etc) to the VM. It also gives customers the ability to support persistent storage so that stateful container apps (such as DBs) can be built.

 

Data-at-Rest Encryption

EMC love calling this by the acronym D@RE…. But this hasn’t quite filtered down to the VMware team…. =)

VMware vSAN 6.6 introduces the industry’s first native HCI security solution with software-defined data-at-rest encryption within the hypervisor. Data-at-rest encryption is built right into the vSAN kernel and is enabled at the cluster level, allowing all vSAN objects to be encrypted (ie the entire vSAN datastore).

In my opinion this is one of the most important new features in vSAN 6.6 – we all know that security within IT has become top priority, featuring very high on a company’s risk register, but IT admins have always been reluctant to either deploy encryption at the OS level or let application owners encrypt their apps and data. Data-at-rest encryption takes away that decision by encrypting the data as it resides on your vSAN datastore.

It’s hardware-agnostic which means you can deploy the storage hardware device of your own choice – it doesn’t require the use of expensive Self-Encrypting Drives (SEDs)!

vSAN DARE

vSAN Encryption is available for both All-Flash and Hybrid configurations and integrates with KMIP 1.1 compliant key management technologies. When vSAN Encryption is enabled, encryption is performed using an XTS AES 256 cipher and occurs at both the cache and capacity tiers – wherever data is at rest – which means you can rest assured that if a cache or capacity drive is stolen, the data on it is encrypted! Plus vSAN Encryption is fully compatible with vSAN’s all-flash space efficiency features, like dedupe, compression and Erasure Coding, delivering highly efficient and secure storage: as data comes into the cache tier it’s encrypted, then as it de-stages it’s decrypted and any relevant dedupe or compression is applied to the data (4k blocks) before it’s re-encrypted as it hits the capacity tier (512b or smaller blocks). As this is encryption of data at rest, I believe vSAN traffic traversing the network may be sent in the clear, which means you will need to ensure vSAN traffic is protected accordingly.

It’s worth mentioning that whilst the cryptographic mechanics are similar to the VM encryption introduced in vSphere 6.5 (ie it requires a KMS and uses the same encryption modules), there is a vast difference in the way they’re implemented – VM encryption is per-VM (via the vSphere API for IO Filtering – VAIO), whilst vSAN encryption covers the entire datastore. You also get the space-saving benefits from vSAN encryption as previously mentioned. The other major difference is that vSAN encryption can carry on functioning if vCenter Server is lost or powered off, because the encryption keys are transferred to each vSAN host and, via KMIP, each host talks directly to the KMS, whereas VM encryption requires you to go through vCenter Server to communicate with the KMS. Not to mention VM encryption does have some performance impact and requires Enterprise Plus licenses.

Turning on vSAN encryption is as simple as clicking a checkbox within the settings of the vSAN cluster and choosing your KMS (which does need to be set up prior to enabling encryption). However, it’s worth noting that a rolling disk reformat is required when encryption is enabled, which can take a considerable amount of time – especially if large amounts of data residing on the disks must be migrated during the reformatting.

vsan-encrypt

With the enhanced API support, customers who like to automate their infrastructure will be able to set up an encrypted vSAN cluster with all the relevant KMS configuration via scripting – great for automating large-scale deployments!

 

Removal of Multicast

vSAN Multicast

Another big announcement with vSAN 6.6 is that VMware are switching from multicast to unicast as the communication mechanism. This obviously makes the networking a lot simpler to manage and set up, as customers won’t need to enable multicast on their network switches, or IGMP snooping, or even PIM for routing. It may even mean that customers could use cheaper switches (which may not handle multicasting very well).

Bit of background:

Typically IP Multicast is used to efficiently send communications to many recipients. The communication can be in the form of one source to many recipients (one-to-many) or many sources to many recipients (many-to-many).

vSAN used multicast to deliver metadata traffic among cluster nodes for efficiency and to optimise network bandwidth consumption for the metadata updates. This eliminated the compute resource and network bandwidth penalties that unicast imposes in order to send identical data to multiple recipients. vSAN depended on multicast for host discovery – the process of joining and leaving cluster groups – as well as other intra-cluster communication services.

While Layer 3 is supported, Layer 2 is recommended to reduce complexity. All VMkernel ports on the vSAN network subscribe to a multicast group using IGMP. IGMP snooping configured with an IGMP querier can be used to limit the multicast traffic to only the switch ports that the vSAN uplinks are connected to – this avoids unnecessary IP multicast floods within the Layer 2 segments.

One of the issues that could occur was when multiple vSAN clusters resided on the same Layer 2 network – the default multicast address should be changed on the additional vSAN clusters to prevent every cluster from receiving all of the multicast streams.
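As an aside, if you’re still on a multicast-based release (6.5 or earlier) and want to see which multicast group addresses a host is currently using (handy for spotting a clash when multiple clusters share a Layer 2 network), you should be able to pull them from the host CLI – the exact output fields vary a little between builds:

esxcli vsan network list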

I believe vSAN now relies on vCenter Server to determine cluster membership; however, I haven’t yet read about how the vSAN team have managed to implement unicast communication, as that information is still in limited supply. It’ll be interesting to understand how they have done it, considering multicast was an efficient and easy way of replicating instructions to multiple nodes within the vSAN cluster when a node needed to perform an action. One thing worth noting, though, is that unicast communication probably lends itself to cloud platforms far more easily than trying to implement a multicast solution!
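If you fancy poking around once you’re on 6.6, each host maintains a list of its unicast peers, which I believe can be viewed from the host CLI with the command below (check it’s present on your build):

esxcli vsan cluster unicastagent list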

 

Local Protection for Stretched Clusters

Stretched vSAN clusters were introduced back in vSAN 6.1 and built on the foundations of Fault Domains. It was basically a RAID-1 configuration of a vSAN object across two sites – which means a copy of the data in each site, with a witness site providing cluster quorum type services during failure events. The problem was that if one site failed you would only have a single copy left, and an additional failure could lead to data loss. It also meant that if a single host failed in either site, the data on that host would need to be resynced from the other site (to rebuild the RAID-1).

vSAN ESC

This new enhancement to Stretched Clusters now gives users more flexibility with regards to local and site protection. For example, you can now configure the local clusters at each site to tolerate two failures whilst also configuring the stretched cluster to tolerate the failure of a site! Brilliant news!

When enabling Stretched Clusters, there are now two protection policies – a “Primary FTT” and a “Secondary FTT”. Primary FTT defines the cross-site protection and is implemented as a RAID-1. It can be set to 0 or 1 in a stretched cluster – 0 means the VM is not stretched whilst 1 means the VM is stretched. Secondary FTT defines how it is protected within a site, and this can be RAID-1, RAID-5 or RAID-6.
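To put some rough numbers on it (my own back-of-the-envelope maths, not official sizing guidance): a 100GB VM with Primary FTT=1 and Secondary FTT=1 using RAID-1 locally holds two copies per site, ie four copies in total, so roughly 400GB of raw capacity is consumed across the stretched cluster. Switch the local protection to RAID-5 (all-flash only) and each site holds roughly 133GB, bringing the total down to around 266GB.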

One thing to note is that the witness must still be available in order to protect against the loss of a data site!

This new feature doesn’t increase the amount of traffic being replicated between sites as a “Proxy Owner” has been implemented per site, which means instead of writing to all replicas in the second site, a single write is done to the Proxy Owner and it’s then the responsibility of this Proxy Owner to write to all the replicas on that local site.

 

So that’s about it for now…. if you require more information then pop along to the following sites:

Duncan Epping (Chief Technologist in the Office of the CTO for the Storage & Availability BU at VMware) has created some great demos of vSAN 6.6 which can be found on his blog site: http://www.yellow-bricks.com

Things to Note

The underlying release for vSAN 6.6 is vSphere 6.5.0d which is a patch release for vSphere 6.5. For existing vSAN users upgrading to vSAN 6.6, please consult VMware Product Interoperability Matrices to ensure upgrading from your current vSAN version is supported.

Please note that for vSAN users currently on vSphere 6.0 Update 3 – upgrade to vSAN 6.6 is NOT yet supported.

The parent release of vSAN 6.6 is vSphere 6.5 and as shown by VMware Product Interoperability Matrices, an upgrade from 6.0 U3 to vSphere 6.5 (and hence vSAN 6.5) is NOT supported. Please refer to this KB Supported Upgrade Paths for vSAN 6.6 for further details.

 

p/s: I’ve always liked Rawlinson Rivera‘s Captain vSAN cartoon!! =)

vSphere/vCenter 6.5 released

So post VMworld, I wrote a long article about what’s new in vSphere 6.5 which I was hoping would be published on SearchVMware.com…. unfortunately I’m still waiting for it to be published; last I heard the article was too long and they were splitting it up into two articles! ¬_¬”

Anyways, whilst I wait for the article to be published, I’ll give a quick summary of things I’ve learnt about the new vSphere/vCenter 6.5 that was released 2 days ago.

  • New HTML5 vSphere Client
  • Fully Integrated vSphere Update Manager and AutoDeploy with vCenter Server Appliance
  • Native High Availability for the vCSA
  • Native backup/restore for vCSA
  • Built-in monitoring web interface for the vCSA
  • Over 2x increase in scale and 3x in performance
  • Easy to migrate from Windows vCenter to vCSA
  • Client Integration Plugin for the vSphere Web Client is no longer required
  • The vCSA deployment installer can be run on Windows, Mac and Linux
  • The installer now supports install, upgrade, migrate and restore
  • vSphere API Explorer
  • VM Encryption / Encrypted vMotion
  • Secure Boot (for ESXi host and VM)
  • VMware Tools 10.1 and 10.0.12 (for older guest OSes that are out of support)
  • Multi-factor authentication with Smartcard or SecurID
  • VMFS-6 (4k drive support in 512e mode – emulating 512-byte sectors)
  • Automatic Space Reclamation – VAAI UNMAP is now automatic and integrated into the UI
  • VVOLs 2.0 plus VASA 3.0
  • vSphere HA is now known as vSphere Availability, enhancements to Admission Control
  • HA Orchestrated Restarts (adding in dependencies when HA restarts a VM)
  • Proactive HA (when host components are failing they are put into a quarantine mode)
  • Enhancements to DRS (VM distribution, CPU Over-commit, Network aware)
  • Predictive-DRS if vRealize Operations 6.4 is deployed (forecasted trends will kick off DRS)
  • vSphere Replication enhancements (now 5min RPOs like vSAN)

 

To find out more information, head along to the following:

 

In addition to the GA of vSphere/vCenter 6.5 there were a load of other releases on the same day:

 

I’m still waiting on the launch of vRealize Automation 7.2 and NSX 6.3….. those should be imminent as well!

As always, all downloads are available via the My VMware Portal.

VMware vSphere ESXi and vCenter Server 6.0.0b Released

So the first minor release for vSphere ESXi 6.0 is out alongside the second minor release for vCenter Server 6.0.

https://www.vmware.com/support/vsphere6/doc/vsphere-vcenter-server-600b-release-notes.html
https://www.vmware.com/support/vsphere6/doc/vsphere-esxi-600b-release-notes.html

Looking through the release notes, I don’t think I’ve experienced any of those bugs that have been fixed – which is a good indication of a stable software release….. I’m guessing that the public beta of vSphere 6 actually ironed out a lot of bugs!

As always, read through the release notes prior to upgrading. =)

Comparing the Configuration of vCenter Server Appliance 5.5 and 6.0

Great White Paper here for those of you transitioning from 5.5 to 6.0 and want to know what the differences are:

http://www.vmware.com/files/pdf/products/vsphere/VMware-vsphere-60-vcenter-server-appliance-55-60-comparison.pdf

For me the major difference is that VMware have dropped the Virtual Appliance Management Interface (VAMI), which makes sense – why would you want to manage your virtual environment from one browser URL and administer your appliance from another! They’ve rolled all the configuration of the vCSA into the installation wizard, and all the administrative aspects into the admin section of the Web Client.

I always found it a pain to fire up Web Client at https://<vCenter Server>:9443/vsphere-client and then the VAMI at http://<vCenter Server>:5480

=)

No coredump target has been configured

So recently a number of customers have been experiencing a core dump target error after rebooting their ESXi hosts…. quite strangely I also recently experienced the same issue when my demo environment went down a few weeks ago due to a power failure.

coredump1

There isn’t really a clear explanation why this happens, but it seems to be a common occurrence with end-users… it’s also quite simple to fix, but the KB isn’t exactly the clearest of instructions: http://kb.vmware.com/kb/2004299

Firstly enable SSH on the host experiencing the error:
ssh1 ssh2

Next, open a putty session to the host and login as root.

Check to see if there is currently an active diagnostic partition using the following esxcli command:
esxcli system coredump partition get
Check to see if there are any available diagnostic partitions by running the following command:
esxcli system coredump partition list

It’s more than likely you would get a similar output as below:
coredump2

Usually the coredump partition is configured on the boot device. We now need to find the boot device and the diagnostic partition. Run the following command to list all the storage devices attached to the host.
ls /dev/disks/ -l
or ls /vmfs/devices/disks/ -l
Usually the boot device can be easily identified because it would be the only device with multiple partitions:
coredump3

(If you want to understand more about partitions that are created by ESXi, have a look at this KB: http://kb.vmware.com/kb/1036609)

Once you have the device ID, run the following command to display the partition table for the device:
partedUtil getptbl "/dev/disks/DeviceName"
coredump4

Usually the partitions will be labelled and you can easily identify the coredump partition – it’s labelled "vmkDiagnostic" and is quite often the 7th partition. If you’re unfortunate and don’t have labelled partitions, then you can usually identify the diagnostic partition from the GUID displayed – this is usually "9D27538040AD11DBBF97000C2911D1B8"

Once you’ve identified the partition, you will have to re-point the coredump target to this partition.

To configure and activate a specific partition, use the following commands:
esxcli system coredump partition set --partition="Partition_Name"
esxcli system coredump partition set --enable true

To automatically select and activate an accessible diagnostic partition, use the command:
esxcli system coredump partition set --enable true --smart

If the partition cannot be automatically set, you may have to deactivate the previous partition link and re-run the command, as follows:
coredump5

Once done, double check the core dump partition has been configured by running the following command:
esxcli system coredump partition get

If all is successful, reboot the host to complete the configuration and to ensure the partition setting persists after rebooting.

Installing/Upgrading vCenter Server Appliance 6.0

I’ve been itching to deploy vSphere 6.0 GA for weeks now (since it was launched last month – wanted to replace my vSphere 6.0 Beta environment) but due to work commitments I’ve had to put this pet-project on the back-burner….. really hate when vendors release new toys at the end of quarter as it means I can’t get to play with it for a month or so!! >_<”

Installing and upgrading vCSA 6.0 is significantly different from previous releases: it no longer gets distributed as an OVA, which means you don’t use the OVF import in the vSphere Client that we’re all so used to doing! Instead, vCSA 6.0 gets distributed as an ISO image – which is a bit weird for an appliance!

Hmm…. “So how do I deploy it?” is the most obvious question that most end-users will ask…. Well, you pretty much have to mount the ISO image onto your workstation/laptop/desktop/VM and then run the installation from the mounted drive…..

You may think that it’s a bit of a pain, but the installation process is quite simple and the wizard is very intuitive!

But why would VMware do away with the OVA package?!?
Well if I was to make an educated guess then this could be because they want to phase out the vSphere C# Client, and if you aren’t able to client onto your newly created host then how do you deploy an OVA?
For example, in a freshly installed ESXi host there’s no easy way to manage it without either a vSphere Client or a vCenter Server – at present you can’t open a web-client to the host in order to manage it (see below screenshot of the ESXi hosts’ landing page), so it makes sense to do away with the OVA deployment method and design it so you can mount the installation package for deployment of the vCSA without having to import the OVA via the soon-to-be-retired (maybe) vSphere client!
vcsa01

Now there are two ways you can install vCSA 6.0 – Guided or Scripted. For ease of deployment, I’m going to walk through the Guided approach using the installation wizard. The Scripted approach is aimed at people who wish to automate the deployment of (several) vCSAs.
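For anyone curious about the Scripted route: the ISO includes a vcsa-cli-installer directory containing a vcsa-deploy utility plus sample JSON templates, and the deployment is driven by feeding it a filled-in template. From memory the Windows invocation looks roughly like the line below – treat the drive letter, template name and flags as placeholders and double-check against the documentation for your build:

E:\vcsa-cli-installer\win32\vcsa-deploy.exe --accept-eula C:\temp\my-vcsa-template.json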

So before we get started, there are certain pre-requisites which must be completed prior to deploying the vCSA (in addition to what is listed in the documentation):

  1. Ensure that the hostname being assigned to the vCSA is in DNS, ideally with both forward and reverse lookups. This will help with the installation process – I won’t go into the reasoning or what happens, as several people have already posted online to mention that the installation can fail if no DNS entry is found. (See the quick nslookup check after this list.)
  2. Ensure you install the Client Integration Plug-in before running the installation – the installer will not run without it installed! (This is both for fresh installs and upgrades!)
    vcsa02
  3. Do not input more than one DNS server (even though the installer prompts that you can). This will cause the installer to fail – as pointed out in the Release Notes.
  4. Ensure you enter the network settings correctly, as there is no pre-check function available and any errors will lead to firstboot errors – again, as pointed out in the Release Notes!
    Especially watch out for VLAN configuration errors, ensure the vCSA is on the correct VLAN and it’s routable to the machine you’re deploying from (as well as the ESXi host itself).
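On the DNS point in step 1, it only takes a few seconds to sanity-check both lookups from the machine you’re deploying from before kicking off the installer (the hostname and IP below are made-up examples – substitute your own):

nslookup vcsa01.lab.local
nslookup 192.168.1.50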

Right, now you’re ready to mount the ISO on your deployment device (in my case, my Windows 7 laptop) and start the installation process! I’m using MagicDisc to mount the ISO.

First up, install the Client Integration Plug-in, which is found in the vcsa directory.
vcsa05 vcsa06

Next launch the setup via the vcsa-setup.html file:
vcsa04

This will open up a webpage which will prompt you to allow the client integration plug-in to run, the screens below are for Chrome (left) and IE (right):
vcsa07 vcsa08

Next hit the Install button:
vcsa09

Accept the EULA and enter the details of the ESXi host where you are going to deploy the vCSA, accepting any certificate warnings:
vcsa10 vcsa11

Enter the FQDN for the appliance and the new root password.
vcsa12

Next choose the deployment type. In my case I want to deploy the embedded PSC. I won’t go into the technicalities of what the PSC is and the different deployment scenarios – if you wish to learn more then head along to Derek Seaman’s site, which explains the PSC in more detail!
vcsa13vcsa14

Next enter the SSO password and domain details.
vcsa15

Select the appliance size based on your virtual environment (number of hosts and VMs)
vcsa16

Select the datastore you wish to deploy the appliance on
vcsa17

Choose whether to use the internal vPostgres DB or an external Oracle DB
vcsa18

Input the network configuration details, ensuring the FQDN is resolvable in DNS. Pay attention to the NTP server, especially if deploying/connecting to another PSC – if they’re out of sync, it could cause installation issues!
vcsa19

Review the configurations and click Finish to start the installation.
vcsa20

Once complete, the installation wizard will give you the details to connect to the web client, the URL will be https://fqdn/vsphere-client (no more port number required at the end of the url!!). Remember, if you’ve changed the SSO domain earlier, then the login user will be administrator@SSO-Domain
vcsa21 vcsa22

Now that the vCSA has been deployed, there is a new way of joining it to an Active Directory Domain, which will help you configure the Identity Sources for SSO. Log into the web client and then on the home page select System Configuration.
vcsa29

Under System Configuration, click Nodes and then select the vCenter Server and click the Manage tab.
vcsa25

Under Advanced, select Active Directory, and click Join. Type in the Active Directory details. Note: The user name must be in User Principal Name (UPN) format – eg joebloggs@acme.com.
vcsa26

Click OK to join the vCenter Server Appliance to the Active Directory domain. Now right-click the node you edited and select Reboot to restart the appliance so that the changes are applied.
vcsa27
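As an aside, the appliance uses Likewise under the covers for its AD integration, so if the Web Client route misbehaves I believe the same join can be performed from the vCSA shell before rebooting – the domain and account below are just examples:

/opt/likewise/bin/domainjoin-cli join acme.com joebloggs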

Now you can add the domain as an SSO Identity Source as you would usually do. However, you can choose Active Directory (Integrated Windows Authentication) and it should populate the domain details, picking up the information from when you joined the vCSA to the domain.
vcsa28

For more information, point your browsers to the vCenter Server 6.0 Deployment Guide.

vSphere 6.0 Launched

So Tuesday was quite an eventful day….. not only did it snow in my neck of the woods (South West London) and cause chaos to road traffic – which meant I had to walk just over a mile to the station in freezing weather as the buses weren’t going anywhere – it was also the launch event for VMware vSphere 6.0 and also EMC’s EVO:RAIL offering – VSPEX Blue.
So lets start with a blog on vSphere 6.0 (VSPEX Blue to follow)……

I had previously blogged about all the goodies that were talked about at VMworld 2014 last October and on Tuesday, Pat Gelsinger and Ben Fathi announced the eagerly awaited 6.0 to the world! If you missed the event, then you can still register to view the video recording here: http://www.vmware.com/now.html

Whilst there was no date mentioned for GA, you can probably expect it to be available by the end of Q1 2015.

There are over 650 feature improvements with vSphere 6.0, and frankly I don’t even know more than 10% of what those improvements are!!
Anyways, here are what I think are the most important improvements:

vSphere 6.0

  • Increased maximum configs:
    • 128 vCPUs and 4TB of vRAM per VM
    • 64 hosts and 8000 VMs per cluster
    • 480 CPUs and 12TB of memory per host (need to find a manufacturer who can make such a beast first!!)
  • New VM hardware version – v11
  • The long awaited Virtual Volumes (which I talked about previously in my VMworld 2014 update post here) – doing away with LUNs and filesystems and allowing VMs to write their VMDKs straight to the storage array.

vCenter Server 6.0

  • Linked Mode now supported on the vCenter Server Appliance (so no reason you can’t kiss goodbye to that Windows installation!)
  • Content Library – organising ISO images, templates, vApps, etc. in one location
  • Improved security, user administration and task/event logging.
  • Long Distance vMotion – as long as the latency isn’t greater than 100ms
  • Cross vSwitch vMotion – must be on the same L2 network (so between vSS, between vDS, or from vSS to vDS, but not supported from vDS to vSS)
  • Cross vCenter vMotion – removing the previous boundary so now you can change compute, storage, network and vCenter!
  • vMotion of MSCS VMs using pRDMs
  • multi-vCPU Fault Tolerance – currently up to 4 vCPUs per VM and 8 vCPUs in FT per host
    • FT no longer requires a shared disk, which means your secondary FT copy could be residing on a different storage array.
    • FT is integrated with the VADP APIs allowing FT VMs to be backed up (snapshot)
  • Platform Services Controller (SSO on steroids) – which contains SSO, license manager, a certificate authority service and certificate store (which makes creation and provisioning of SSL certificates a bit easier). Deployed as a separate vApp with its own native replication (to other PSCs).
  • vSphere HA Component Protection (protects VMs against mis-configurations and connectivity problems)
  • NFS 4.1 support
  • Instant Clone (Project Fargo) Capability – this enables a running VM to be cloned such that the new VM is created identical to the original, which means you can get a new, running, booted up VM in less than a second.
  • Web Client performance has been improved (yay) with faster login times! Plus there have been some usability improvements which means tasks are completed faster, performance charts actually plot properly, the VM remote console offers better console access and security.
  • The classic C# vSphere client is still with us (they haven’t quite got rid of it yet… probably because of the VUM plugin and also the only way you can access ESXi hosts) and now lets you view the new VM hardware versions (v10 and 11) but to edit you need to use the Web Client.
  • vSphere Replication enhancements allowing compression of replication traffic, faster syncing but still the same 15min RPO
    • Ability to isolate vSphere Replication traffic onto its own network
  • vSphere Data Protection now includes all of the Advanced functionalities:
    • Up to 8TB of deduped data per VDP Appliance
    • Up to 800 VMs per VDP Appliance
    • Application level backup and restore of SQL Server, Exchange, SharePoint
    • Replication to other VDP Appliances and EMC Avamar
    • Data Domain support (DD Boost)

Virtual SAN 6.0
(Obviously too good to be called 2.0)

  • All flash configurations – think ‘very’ cheap all-flash array!!
  • Fault Domain – which means you can plan your deployment to include several hosts in a domain (or even a whole rack)
  • Capacity planning – “What if scenarios”
  • Support for hardware-based check-summing/encryption
  • Virtual SAN Health Services plugin
  • Direct Attached JBODs for blade servers (only those on the HCL)
  • Greater scale
    • 64 hosts per cluster
    • 200 VMs per host
    • 62TB max VMDK size
    • New on-disk format enables fast cloning and snapshotting
    • 32 VM snapshots

PHEW……..

As you can see, that’s quite a hefty list of features – and it’s not even the complete list……. Anyways, like everyone else I’m itching to get my hands on the GA so that I can deploy it within MTI’s Solution Centre!

For more info pop along to: http://www.vmware.com/products/vsphere/

vCenter Site Recovery Manager – Shared Recovery Site

So a lot of customers use Site Recovery Manager as the tool to automate their Disaster Recovery policy. Usually it’s deployed in a one-to-one relationship – i.e. a single protected site and a single recovery site, where the SRM/vCenter Server on the protected site is paired to an SRM/vCenter Server on the recovery site.

Not many people are aware that since SRM 4.0 you could actually set up a Shared Recovery Site which allows you to ‘fan-in’ your recovery from several protected sites (ie N-to-1 relationship).
This could be very useful if an organization has several remote/branch offices and requires them to be protected by a single shared site. Another example is where a service provider offers business continuity services to multiple customers – DR-as-a-Service.

In a shared recovery site configuration, you install one SRM Server instance on each protected site. On the recovery site, you install multiple SRM Server instances – one to pair with each SRM Server instance on the protected sites (so if you had two protected sites, you would need to deploy two SRM Servers at the recovery site).
All of the SRM Server instances on the shared recovery site connect to the same vCenter Server instance.

Similar to how standard SRM is deployed, you can use either array-based replication or vSphere replication.

Image
In the diagram above, you can see that there are two field offices protected by their Head Office.

SRM Server A (Field Office 1) is paired to SRM Server C (HO)
SRM Server B (Field Office 2) is paired to SRM Server D (HO)

So what are the limitations of a Shared Recovery Site?

  • You can’t reconfigure an existing standard SRM deployment to use a shared recovery site; this is because a custom installer is used when deploying SRM, which allows you to set the SRM Identifier (which must be identical for the SRM pair).
  • Each instance of the SRM Server at the shared recovery site must be deployed on its own host machine, ie you can’t deploy multiple SRM servers on the same VM.
  • Each instance of the SRM server requires its own database.
  • You’re limited to a maximum of 10 protected sites per shared recovery site.
  • When accessing SRM through vCenter Server on the shared recovery site, you can view all the SRM extensions (and pairs) and hence see all the VMs, protection groups, recovery plans, etc.

One thing you need to be very careful with is how your solution is licensed if you’re thinking about using the Reprotect functionality of SRM (or bi-directional protection).
As with standard 1:1 SRM deployments, you can re-use your protected site SRM licenses at the recovery site (assuming they are no longer in use at the protected site) and invoke reprotect/failback.
However, if you use a new set of keys at the recovery site, then you need to ensure you have enough licenses to cover the VMs that need to be reprotected.

For example:
Site A – licensed to protect 20 VMs.
Site B – licensed to protect 10 VMs.
Shared Recovery Site – licensed to protect 25 VMs.

If you don’t transfer the licenses from the protected site to the shared recovery site, then you can only perform reprotect on 25 VMs. If you recover all of the VMs from sites A and B to the shared recovery site and attempt to perform reprotect, you have sufficient licenses to reprotect only 25 of the 30 VMs that you recovered!

The Operational Limits for SRM in a shared recovery site can be found here:
http://kb.vmware.com/kb/2008061

At some point before Christmas I’m hoping to get this all up and running in MTI’s demo centre…… =)