Dell EMC updates VxRail software to address Spectre

So Dell EMC have finally released the Spectre patches for their VxRail appliances. I know many of my customers have been asking about them, and in a way it's good the release was slightly delayed, given how many regular VMware customers experienced issues when patching and how one patch ended up being pulled by VMware!

The good thing about VxRail is that any software patches or updates released have been tried and tested by the Dell EMC CPSD engineering team, so they should be ready to roll out with minimal disruption!

Updates 4.0.401 and 4.5.150 are now available to download from Dell EMC’s support portal.

Release notes can be found here:
https://support.emc.com/docu80740_VxRail-Appliance-Software-4.0.x-Release-Notes.pdf?language=en_US
https://support.emc.com/docu86659_VxRail-Appliance-Software-4.5.x-Release-Notes.pdf?language=en_US

It's worth noting that at present these updates only contain two of the three fixes required from Intel to address the speculative execution vulnerability (Spectre – Meltdown doesn't really affect VMware, and hence VxRail). The third fix has not yet been released by Intel, and Dell EMC basically decided they couldn't wait any longer while Intel drag their heels!


Spectre & Meltdown Update

So it seems that the microcode patches released by VMware associated with their recent Security Advisory (VMSA-2018-0004) have been pulled….
https://kb.vmware.com/s/article/52345
So that’s ESXi650-201801402-BG, ESXi600-201801402-BG, or ESXi550-201801401-BG.

The microcode patch provided by Intel was buggy and there seem to be issues when VMs access the new speculative execution control mechanism (on Haswell and Broadwell processors). However, I can't seem to find much detail on what these issues actually are…

For the time being, if you haven't applied one of those microcode patches, VMware recommends not doing so and applying the patches listed in VMSA-2018-0002 instead.

If you have already applied the affected patches, you will have to edit the config file on each ESXi host to add a line that hides the new speculative execution control mechanism from guests, and then power-cycle the VMs on that host. Detailed information can be found in the KB above.
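If you're curious what that workaround actually looks like: from my memory of KB 52345 it boils down to appending a single CPUID mask line to /etc/vmware/config on each affected host (via SSH as root) and then powering the VMs off and on again. Please treat the exact mask below as an assumption from memory and verify it against the KB before using it:

# Sketch only – confirm the exact mask line against KB 52345 before applying.
# Hides the new speculative-execution capability bits (CPUID leaf 7, EDX) from guests.
grep -q "cpuid.7.edx" /etc/vmware/config || \
  echo 'cpuid.7.edx = "----:00--:----:----:----:----:----:----"' >> /etc/vmware/config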

 

Finally, William Lam has created a very handy PowerCLI script that will report on your existing vSphere environment and help identify whether you have hosts that are impacted by Spectre and this new Intel Sighting issue: https://www.virtuallyghetto.com/2018/01/verify-hypervisor-assisted-guest-mitigation-spectre-patches-using-powercli.html

HCIBench 1.6.2 – Testing vSAN performance

Over the past month or so I’ve been running a number of performance tests on VxRail and vSAN solutions.

HCIBench is a brilliant tool to help end-users understand the type of performance that they can achieve with their vSAN solution.

It's essentially an automation wrapper around the popular Vdbench tool. Vdbench is a utility specifically created to help engineers and customers generate disk I/O workloads for validating storage performance and storage data integrity. Vdbench is a complex beast to run, with lots of different variables that can be configured via the CLI… so the HCIBench wrapper helps simplify workload profiles and makes it so much easier to run benchmark tests!!

Please note, HCIBench is a VMware Labs Fling and so there’s limited support available and it shouldn’t be used in production environments (although the latter is just to cover themselves). If I’m honest, the creators of HCIBench are pretty good at replying to comments and feedback!

https://labs.vmware.com/flings/hcibench

It's definitely worth remembering that, as a benchmark tool, it can't quite simulate real-world workloads! However, if you understand how your workload behaves (i.e. block size, read/write ratio, etc.) then you can get pretty close to creating a workload profile that matches it (albeit running the test at maximum work rate rather than the bursty rate we see in real life).
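For a flavour of what sits underneath, here's roughly what a Vdbench parameter file for a 4k, 70% read, fully random profile looks like – parameter names are from memory, so treat this as illustrative rather than something to paste straight into a run (HCIBench builds the equivalent for you from its workload settings):

# Illustrative only – double-check parameter names against the Vdbench documentation.
cat > vdb-4k-70r.cfg <<'EOF'
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5
EOF
# ./vdbench -f vdb-4k-70r.cfg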

 

HCIBench was updated two days ago in response to the recent release of vSphere 6.5 U1, and in my opinion it's even cooler now that it can utilise the new vSAN Performance Diagnostics feature of vSAN 6.6.1 (API integration with the Performance Diagnostics part of vSAN Cloud Analytics).

You can now run an HCIBench test and view detailed results in Performance Diagnostics with supporting graphs. You're able to select a goal for the test ("Max IOPS", "Max Throughput" or "Min Latency") and then get details on potential issues found in the analysed data, which you can use to improve the workload profile you're running in HCIBench.

Point your browser here for more info:
https://blogs.vmware.com/virtualblocks/2017/07/31/what-to-expect-from-hcibench-1-6-2/

Note: You need to have the Customer Experience Improvement Program (CEIP) and the vSAN Performance Service turned on for this feature to be enabled.

More on vSAN Encryption

So not long after my article was published on SearchVMware, the guys at Virtual Blocks (VMware's own storage blog) released two articles that go into vSAN encryption in a bit more detail.

https://blogs.vmware.com/virtualblocks/2017/06/24/vsan-encryption-1/
https://blogs.vmware.com/virtualblocks/2017/06/24/vsan-encryption-2/

It's definitely worth noting that hardware encryption carries an overhead whenever you need to rekey (e.g. when every drive has to be rekeyed); because vSAN encryption sits within the hypervisor, this overhead is significantly reduced.

The first article simply goes over what vSAN encryption is all about; the second dives into more detail on how it's set up, the trust model of the KMS, and how the disk format is changed when vSAN encryption is enabled. I found the second article very informative for understanding how vSAN encryption works.

There's also a new KB that briefly goes over the difference between vSAN encryption and VM encryption: Understanding vSAN Datastore Encryption vs. VMcrypt Encryption

Enjoy…. =)

VMware vSAN 6.6 launched – so What’s New?

Earlier this year it was announced that vSAN had grown to over 7,000 customers since launch, which is a pretty decent number given the product went GA just over three years ago and we're on the sixth iteration! What's even more impressive is how quickly VMware are turning these updates around (we get an update of sorts almost every six months) – we only got vSAN 6.5 at VMworld last year, and six months later we now have version 6.6. What's funny is half my customers haven't even started implementing their 6.5 upgrade plan yet, and now they'll have to re-write that plan…. Lol… =)

In fact I see the number of customers growing quite significantly this year given the huge drive towards HCI – something that I’m seeing within my company’s customer-base (and in the market in general)!

Today sees vSAN 6.6 go GA, and it amazes me how many new features VMware have packed into this release – features that make vSAN faster, more cost-effective and much more secure! And to think that this is just a "minor" patch release! With vSAN 6.6, customers can now evolve their data centre without risk, control IT costs and scale to tomorrow's business needs (sorry, that was a marketing blurb that I just had to fit in somewhere as it sounded good).

vSAN features

(Note: I know that slide says “Not for distribution”. However, the vSAN vExperts have been given permission to use the material in their blogs)

The biggest features in my opinion are vSAN Data-at-Rest Encryption, Unicast communication and Enhanced Stretched Clustering with Local Protection, so these are the three features I'm going to concentrate on within this post; trying to expound on all the new features would involve writing a lengthy technical whitepaper! =)

That said, other new features are as follows:

  • ESXi Host Client (HTML-5) – management and monitoring functionality available on each host in the case where vCenter server is offline.
  • Simpler installation/configuration – The ability to create a single node vSAN datastore by using the vCSA installer and then allowing you to deploy vCSA/PSC onto that vSAN datastore.
  • Enhanced rebalancing – allowing large components to be split up during redistribution.
  • Site Affinity in Stretched Clusters – a new Affinity policy rule allows users to request where a VM gets deployed, although this is only applicable when the PFTT is set to 0. It's worth noting that DRS/HA rules should be aligned to data locality!
  • Always-On Protection – Enhanced repairs with Re-sync traffic throttling – allowing vSAN to respond to failed disks/nodes more quickly, intelligently and more efficiently. New Degraded Device Handling (DDH) intelligently monitors the health of drives and proactively evacuates data before failures can happen.
  • Maintenance Pre-Check – enhanced checks to ensure there are enough resources for vSAN when entering maintenance mode (or decommissioning vSAN nodes).
  • Stretched Cluster Witness Replacement UI – simpler method of changing the Witness host without having to disable the Stretched Cluster.
  • vSAN Cloud Analytics – pro-active, real-time support notifications and recommendations with real-time custom alerts through the vSAN health Service.
  • API enhancements – vSAN SDK updated to handle all new features, with additional enhanced PowerCLI support.
  • vSAN Config Assist / Firmware Update – Enhanced health monitoring and HCL checks using health-check assistant to ensure the vSAN hardware has the latest firmware and drivers installed.
  • Enhanced Performance – up to 50% higher all-flash IOPS per host, plus enhanced health monitoring.
  • New Hardware Support – support for Intel's new Optane technology, NVMe SSDs and larger 1.6TB SSDs for cache drives.
  • Support for Photon Platform 1.1 as well as a Docker Volume Driver – great for customers (i.e. DevOps teams) who prefer working with micro-services/containers. This allows customers to use vSAN as storage for Docker VMs, giving them the ability to apply storage-based policies (such as FTT, QoS, access permissions, etc.) to the VM. It also gives customers the ability to support persistent storage so that stateful container apps (such as DBs) can be built.

 

Data-at-Rest Encryption

EMC love calling this by the acronym D@RE…. But this hasn’t quite filtered down to the VMware team…. =)

VMware vSAN 6.6 introduces the industry's first native HCI security solution with software-defined data-at-rest encryption within the hypervisor. Data-at-rest encryption is built right into the vSAN kernel and is enabled at the cluster level, allowing all vSAN objects to be encrypted (i.e. the entire vSAN datastore).

In my opinion this is one of the most important new features in vSAN 6.6 – we all know that security within IT has become a top priority, featuring very high on a company's risk register, but IT admins have always been reluctant to either deploy encryption at the OS level or let application owners encrypt their apps and data. Data-at-rest encryption takes away that decision by encrypting the data once it resides on your vSAN datastore.

It’s hardware-agnostic which means you can deploy the storage hardware device of your own choice – it doesn’t require the use of expensive Self-Encrypting Drives (SEDs)!

vSAN DARE

vSAN Encryption is available for both All-Flash and Hybrid configurations and integrates with KMIP 1.1 compliant key management technologies. When vSAN Encryption is enabled, encryption is performed using an XTS AES 256 cipher at both the cache and capacity tiers – wherever data is at rest – which means you can rest assured that if a cache or capacity drive is stolen, the data on it is encrypted! vSAN Encryption is also fully compatible with vSAN's all-flash space efficiency features, like dedupe, compression and erasure coding, delivering highly efficient and secure storage: as data comes into the cache tier it's encrypted, then as it de-stages it's decrypted and any relevant dedupe or compression is applied to the data (4k blocks) before it's re-encrypted as it hits the capacity tier (512b or smaller blocks). As this is encryption of data at rest, I believe vSAN traffic traversing the network may be sent in the clear, which means you will need to ensure vSAN traffic is protected accordingly.

It's worth mentioning that whilst the cryptographic mechanics are similar to the VM encryption introduced in vSphere 6.5 (i.e. it requires a KMS and uses the same encryption modules), there is a vast difference in the way they're implemented – VM encryption is per-VM (via the vSphere API for IO Filtering – VAIO), whilst vSAN encryption covers the entire datastore. You also get the space-saving benefits with vSAN encryption, as previously mentioned. The other major difference is that vSAN encryption can carry on functioning if vCenter Server is lost or powered off, because the encryption keys are transferred to each vSAN host and, via KMIP, each host talks directly to the KMS, whereas VM encryption requires you to go through vCenter Server to communicate with the KMS. Not to mention VM encryption does have some performance impact and requires Enterprise Plus licensing.

Turning on vSAN encryption is as simple as clicking a checkbox within the settings of the vSAN cluster and choosing your KMS (which does need to be set up prior to enabling encryption). However, it's worth noting that a rolling disk reformat is required when encryption is enabled, which can take a considerable amount of time – especially if large amounts of data residing on the disks must be migrated during the reformatting.

vsan-encrypt

With the enhanced API support, customers who like to automate their infrastructure will be able to setup an encrypted vSAN cluster with all the relevant KMS configuration via scripting – great for automating large scale deployments!

 

Removal of Multicast

vSAN Multicast

Another big announcement with vSAN 6.6 is that VMware are switching from multicast to unicast as the communication mechanism. This obviously makes the networking a lot simpler to manage and set up, as customers won't need to enable multicast on their network switches, or IGMP snooping, or even PIM for routing. It may even mean that customers could use cheaper switches (which may not handle multicasting very well).

Bit of background:

Typically IP Multicast is used to efficiently send communications to many recipients. The communication can be in the form of one source to many recipients (one-to-many) or many sources to many recipients (many-to-many).

vSAN used multicast to deliver metadata traffic among cluster nodes for efficiency and to optimise network bandwidth consumption for the metadata updates. This eliminated the computing resource and network bandwidth penalties that unicast imposes in order to send identical data to multiple recipients. vSAN depended on multicast for host discovery – the process of joining and leaving cluster groups – as well as other intra-cluster communication services.

While Layer 3 was supported, Layer 2 was recommended to reduce complexity. All VMkernel ports on the vSAN network subscribed to a multicast group using IGMP. IGMP snooping configured with an IGMP querier could be used to limit the multicast traffic to only the switch ports that the vSAN uplinks were connected to, avoiding unnecessary IP multicast floods within the Layer 2 segments.

One of the issues that could occur was when multiple vSAN clusters resided on the same Layer 2 network – the default multicast addresses should be changed on the additional vSAN clusters to prevent every cluster from receiving all of the multicast streams.
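For reference, those multicast addresses could be changed per host with esxcli. I'm writing the flags below from memory, so please verify them with esxcli vsan network ipv4 set --help (and the relevant VMware KB) before running anything against a production cluster; the interface and addresses shown are just example values:

# Flag names from memory – verify with: esxcli vsan network ipv4 set --help
# Run on each host of the second cluster sharing the Layer 2 segment.
esxcli vsan network ipv4 set -i vmk2 -d 224.2.3.5 -u 224.1.2.4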

I believe vSAN now relies on vCenter Server to determine cluster membership; however, I haven't yet read about how the vSAN team have managed to implement unicast communication, as that information is still in limited supply. It'll be interesting to understand how they have done it, considering multicast was an efficient and easy way of replicating instructions to multiple nodes within the vSAN cluster when a node needed to perform an action. One thing worth noting is that unicast communication probably lends itself to cloud platforms a lot more easily than trying to implement a multicast solution!

 

Local Protection for Stretched Clusters

Stretched vSAN Clusters were introduced back with vSAN 6.1 and built on the foundations of Fault Domains. It was basically a RAID-1 configuration of a vSAN object across two sites – a copy of the data in each site, with a witness site providing quorum-type services during failure events. The problem was that if one site failed you would only have a single copy left, and an additional failure could lead to data loss. It also meant that if a single host failed in either site, the data on that host would need to be resynced from the other site (to rebuild the RAID-1).

vSAN ESC

This new enhancement to Stretched Clusters now gives users more flexibility with regards to local and site protection. For example, you can now configure the local clusters at each site to tolerate two failures whilst also configuring the stretched cluster to tolerate the failure of a site! Brilliant news!

When enabling Stretched Clusters, there are now two protection policies – a “Primary FTT” and a “Secondary FTT”. Primary FTT defines the cross-site protection and is implemented as a RAID-1. It can be set to 0 or 1 in a stretched cluster – 0 means the VM is not stretched whilst 1 means the VM is stretched. Secondary FTT defines how it is protected within a site, and this can be RAID-1, RAID-5 or RAID-6.
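To put some rough numbers on that: with PFTT=1 and SFTT=1 using RAID-1 within each site, every object ends up with two copies per site across two sites, so a 100 GB VMDK consumes roughly 400 GB of raw capacity cluster-wide (before witness components and metadata overhead). Using RAID-5 locally instead for SFTT=1 brings the per-site overhead down from 2x to about 1.33x, i.e. roughly 266 GB cluster-wide for that same 100 GB VMDK.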

One thing to note is that the witness must still be available in order to protect against the loss of a data site!

This new feature doesn't increase the amount of traffic being replicated between sites, as a "Proxy Owner" has been implemented per site. Instead of writing to all replicas in the second site, a single write is sent to the Proxy Owner, and it's then the responsibility of the Proxy Owner to write to all the replicas on its local site.

 

So that’s about it for now…. if you require more information then pop along to the following sites:

Duncan Epping (Chief Technologist in the Office of the CTO for the Storage & Availability BU at VMware) has created some great demos of vSAN 6.6, which can be found on his blog site: http://www.yellow-bricks.com

Things to Note

The underlying release for vSAN 6.6 is vSphere 6.5.0d which is a patch release for vSphere 6.5. For existing vSAN users upgrading to vSAN 6.6, please consult VMware Product Interoperability Matrices to ensure upgrading from your current vSAN version is supported.

Please note that for vSAN users currently on vSphere 6.0 Update 3 – upgrade to vSAN 6.6 is NOT yet supported.

The parent release of vSAN 6.6 is vSphere 6.5 and as shown by VMware Product Interoperability Matrices, an upgrade from 6.0 U3 to vSphere 6.5 (and hence vSAN 6.5) is NOT supported. Please refer to this KB Supported Upgrade Paths for vSAN 6.6 for further details.

 

p/s: I've always liked Rawlinson Rivera's Captain vSAN cartoon!! =)

vSphere/vCenter 6.5 released

So post VMworld, I wrote a long article about what's new in vSphere 6.5, which I was hoping would be published on SearchVMware.com…. Unfortunately I'm still waiting on it to be published; last I heard the article was too long and they were splitting it up into two articles! ¬_¬”

Anyways, whilst I wait for the article to be published, I’ll give a quick summary of things I’ve learnt about the new vSphere/vCenter 6.5 that was released 2 days ago.

  • New HTML5 vSphere Client
  • Fully Integrated vSphere Update Manager and AutoDeploy with vCenter Server Appliance
  • Native High Availability for the vCSA
  • Native backup/restore for vCSA
  • Built-in monitoring web interface for the vCSA
  • Over 2x increase in scale and 3x in performance
  • Easy to migrate from Windows vCenter to vCSA
  • Client Integration Plugin for the vSphere Web Client is no longer required
  • The vCSA deployment installer can be run on Windows, Mac and Linux
  • The installer now supports install, upgrade, migrate and restore
  • vSphere API Explorer
  • VM Encryption / Encrypted vMotion
  • Secure Boot (for ESXi host and VM)
  • VMware Tools 10.1 and 10.0.12 (for older guest OSes that are out of support)
  • Multi-factor authentication with Smartcard or SecurID
  • VMFS-6 (4K drive support in 512e mode – emulating 512-byte sectors)
  • Automatic Space Reclamation – VAAI UNMAP is now automatic and integrated into the UI
  • VVOLs 2.0 plus VASA 3.0
  • vSphere HA is now known as vSphere Availability, enhancements to Admission Control
  • HA Orchestrated Restarts (adding in dependencies when HA restarts a VM)
  • Proactive HA (when host components are failing they are put into a quarantine mode)
  • Enhancements to DRS (VM distribution, CPU Over-commit, Network aware)
  • Predictive-DRS if vRealize Operations 6.4 is deployed (forecasted trends will kick off DRS)
  • vSphere Replication enhancements (now 5min RPOs like vSAN)

 

To find out more information, head along to the following:

 

In addition to the GA of vSphere/vCenter 6.5 there were a load of other releases on the same day:

 

I’m still waiting on the launch of vRealize Automation 7.2 and NSX 6.3….. those should be imminent as well!

As always, all downloads are available via the My VMware Portal.

VMware vSphere ESXi and vCenter Server 6.0.0b Released

So the first minor release for vSphere ESXi 6.0 is out alongside the second minor release for vCenter Server 6.0.

https://www.vmware.com/support/vsphere6/doc/vsphere-vcenter-server-600b-release-notes.html
https://www.vmware.com/support/vsphere6/doc/vsphere-esxi-600b-release-notes.html

Looking through the release notes, I don’t think I’ve experienced any of those bugs that have been fixed – which is a good indication of a stable software release….. I’m guessing that the public beta of vSphere 6 actually ironed out a lot of bugs!

As always, read through the release notes prior to upgrading. =)

Comparing the Configuration of vCenter Server Appliance 5.5 and 6.0

Great White Paper here for those of you transitioning from 5.5 to 6.0 and want to know what the differences are:

http://www.vmware.com/files/pdf/products/vsphere/VMware-vsphere-60-vcenter-server-appliance-55-60-comparison.pdf

For me the major difference is that VMware have dropped the Virtual Appliance Management Interface (VAMI), which makes sense – why would you want to manage your virtual environment from one browser URL and administer your appliance from another? They've rolled all the configuration of the vCSA into the installation wizard, and all the administrative aspects into the admin section of the Web Client.

I always found it a pain to fire up Web Client at https://<vCenter Server>:9443/vsphere-client and then the VAMI at http://<vCenter Server>:5480

=)

No coredump target has been configured

So recently a number of customers have been experiencing a core dump target error after rebooting their ESXi hosts…. quite strangely I also recently experienced the same issue when my demo environment went down a few weeks ago due to a power failure.

coredump1

There isn’t really a clear explanation why this happens, but it seems to be a common occurrence with end-users… it’s also quite simple to fix, but the KB isn’t exactly the clearest of instructions: http://kb.vmware.com/kb/2004299

Firstly enable SSH on the host experiencing the error:
ssh1 ssh2

Next, open a PuTTY session to the host and log in as root.

Check to see if there is currently an active diagnostic partition using the following esxcli command:
esxcli system coredump partition get
Check to see if there are any available diagnostic partitions by running the following command:
esxcli system coredump partition list

It's more than likely you will get an output similar to the one below:
coredump2

Usually the coredump partition is configured on the boot device. We now need to find the boot device and the diagnostic partition. Run the following command to list all the storage devices attached to the host.
ls -l /dev/disks/
or ls -l /vmfs/devices/disks/
Usually the boot device can be easily identified because it would be the only device with multiple partitions:
coredump3

(If you want to understand more about partitions that are created by ESXi, have a look at this KB: http://kb.vmware.com/kb/1036609)

Once you have the device ID, run the following command to display the partition table for the device:
partedUtil getptbl "/dev/disks/DeviceName"
coredump4

Usually the partitions will be labelled and you can easily identify the coredump partition – it's labelled "vmkDiagnostic" and is quite often the 7th partition. If you're unfortunate and don't have labelled partitions, then you can usually identify the diagnostic partition from the GUID displayed – this is usually "9D27538040AD11DBBF97000C2911D1B8".

Once you’ve identified the partition, you will have to re-point the coredump target to this partition.

To configure and activate a specific partition, use the command:
esxcli system coredump partition set --partition="Partition_Name"
esxcli system coredump partition set --enable true
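For example, if your boot device was naa.600508b1001c34bb (a made-up device name for illustration) and vmkDiagnostic was the 7th partition, the first command would look like this:
esxcli system coredump partition set --partition="naa.600508b1001c34bb:7"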

To automatically select and activate an accessible diagnostic partition, use the command:
esxcli system coredump partition set --enable true --smart

If the partition cannot be automatically set, you may have to deactivate the previous partition link and re-run the command, as follows:
coredump5

Once done, double check the core dump partition has been configured by running the following command:
esxcli system coredump partition get

If all is successful, reboot the host to complete the configuration and to make sure the coredump partition setting persists after the reboot.

Installing/Upgrading vCenter Server Appliance 6.0

I’ve been itching to deploy vSphere 6.0 GA for weeks now (since it was launched last month – wanted to replace my vSphere 6.0 Beta environment) but due to work commitments I’ve had to put this pet-project on the back-burner….. really hate when vendors release new toys at the end of quarter as it means I can’t get to play with it for a month or so!! >_<”

Installing and upgrading the vCSA 6.0 is significantly different from previous releases. It no longer gets distributed as an OVA, which means you don't use the OVF import in the vSphere Client that we're all so used to! Instead, vCSA 6.0 gets distributed as an ISO image – which is a bit weird for an appliance!

Hmm…. “So how do I deploy it?” is the most obvious question that most end-users will ask…. Well, you pretty much have to mount the ISO image onto your workstation/laptop/desktop/VM and then run the installation from the mounted drive…..

You may think that it’s a bit of a pain, but the installation process is quite simple and the wizard is very intuitive!

But why would VMware do away with the OVA package?!?
Well, if I were to make an educated guess, it could be because they want to phase out the vSphere C# Client – and if you can't connect a client to your newly created host, how do you deploy an OVA?
For example, on a freshly installed ESXi host there's no easy way to manage it without either the vSphere Client or a vCenter Server – at present you can't open a web client to the host in order to manage it (see the screenshot below of the ESXi host's landing page). So it makes sense to do away with the OVA deployment method and design it so you can mount the installation package and deploy the vCSA without having to import an OVA via the soon-to-be-retired (maybe) vSphere Client!
vcsa01

Now there are two ways you can install vCSA 6.0 – Guided or Scripted. For ease of deployment, I'm going to discuss the Guided approach using the installation wizard. The Scripted approach is aimed at people who wish to automate the deployment of (several) vCSAs.

So before we get started, there are certain pre-requisites which must be completed prior to deploying the vCSA (in addition to what is listed in the documentation):

  1. Ensure that the hostname being assigned to the vCSA is in DNS, ideally with both forward and reverse lookups – see the quick check just after this list. This will help with the installation process (I won't go into the reasoning, as several people have already posted online to mention that the installation can fail if no DNS entry is found).
  2. Ensure you install the Client Integration Plug-in before running the installation – the installer will not run without it installed! (This is both for fresh installs and upgrades!)
    vcsa02
  3. Do not input more than 1 DNS server (even though the installer prompts that you can). This will cause the installer to fail – as pointed out in the Release Notes.
  4. Ensure you enter the network settings correctly, as there is no pre-check function available and any errors will lead to firstboot errors – again, as pointed out in the Release Notes!
    Especially watch out for VLAN configuration errors, ensure the vCSA is on the correct VLAN and it’s routable to the machine you’re deploying from (as well as the ESXi host itself).
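As mentioned in point 1, it's worth a ten-second DNS sanity check from the machine you're deploying from before kicking off the installer (the hostname and IP below are just example values – substitute your own):

nslookup vcsa01.lab.local   # forward lookup should return the IP you plan to assign
nslookup 192.168.1.50       # reverse lookup should return the vCSA's FQDN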

Right, now you're ready to mount the ISO on your deployment device (in my case, my Win 7 laptop) and start the installation process! I'm using MagicDisc to mount the ISO.

First up, install the Client Integration Plug-in, which is found in the vcsa directory.
vcsa05 vcsa06

Next launch the setup via the vcsa-setup.html file:
vcsa04

This will open up a webpage which will prompt you to allow the client integration plug-in to run, the screens below are for Chrome (left) and IE (right):
vcsa07 vcsa08

Next hit the Install button:
vcsa09

Accept the EULA and enter the details of the ESXi host where you are going to deploy the vCSA, accepting any certificate warnings:
vcsa10 vcsa11

Enter the FQDN for the appliance and the new root password.
vcsa12

Next choose the deployment type. In my case I want to deploy the embedded PSC. I won't go into the technicalities of what the PSC is and the different deployment scenarios – if you wish to learn more then head along to Derek Seaman's site, which explains the PSC in more detail!
vcsa13vcsa14

Next enter the SSO password and domain details.
vcsa15

Select the appliance size based on your virtual environment (number of hosts and VMs)
vcsa16

Select the datastore you wish to deploy the appliance on
vcsa17

Choose whether to use the internal vPostgres DB or an external Oracle DB
vcsa18

Input the network configuration details, ensuring the FQDN is resolvable in DNS. Pay attention to the NTP server, especially if deploying/connecting to another PSC – if they’re out of sync, it could cause installation issues!
vcsa19

Review the configurations and click Finish to start the installation.
vcsa20

Once complete, the installation wizard will give you the details to connect to the web client; the URL will be https://fqdn/vsphere-client (no more port number required at the end of the URL!!). Remember, if you changed the SSO domain earlier, then the login user will be administrator@SSO-Domain.
vcsa21 vcsa22

Now that the vCSA has been deployed, there is a new way of joining it to an Active Directory Domain, which will help you configure the Identity Sources for SSO. Log into the web client and then on the home page select System Configuration.
vcsa29

Under System Configuration, click Nodes and then select the vCenter Server and click the Manage tab.
vcsa25

Under Advanced, select Active Directory, and click Join. Type in the Active Directory details. Note: The user name must be in User Principal Name (UPN) format – e.g. joebloggs@acme.com.
vcsa26

Click OK to join the vCenter Server Appliance to the Active Directory domain. Now right-click the node you edited and select Reboot to restart the appliance so that the changes are applied.
vcsa27

Now you can add the domain as an SSO Identity Source as you usually would. If you choose Active Directory (Integrated Windows Authentication), it should populate the domain details automatically, picking up the information from when you joined the vCSA to the domain.
vcsa28

For more information, point your browsers to the vCenter Server 6.0 Deployment Guide.