vSphere Central – new resource centre

A little while back I caught the vSphere blog post about vSphere Central being launched and ended up bookmarking the portal to have a look at a later date. I had totally forgotten about it till today, when I needed to look up the PSC topology diagrams and Google sent me to the new vSphere 6.5 Topology and Upgrade Planning Tool (more on this later). Turns out this portal is exactly like Storage Hub (the resource portal for everything vSAN, SRM and storage related)!

Everything technical you need to know about vSphere and vCenter can be found on this portal:

  • How to install vCenter and vSphere
  • How to migrate to vCSA
  • How to upgrade vCenter and vSphere
  • vCenter and PSC architecture
  • SSL certificate management
  • PSC Deployment Types
  • Product Interoperability Matrix
  • All the new features in 6.5 explained (vCenter HA, Backup/Restore, etc)

It really is a great resource portal, and even better, you can download each section as a PDF! It beats the vSphere documentation site as it’s far easier to navigate!

The content comes in a range of formats: most of it is text taken from the technical PDF documents, but there are also videos and walkthrough demos scattered throughout the topics.

One of the things launched with vSphere Central was the vSphere 6.5 Topology and Upgrade Planning Tool.

This tool aims to help customers plan and execute both upgrades to vSphere 6.5 as well as new deployments. With this initial release, the tool is focused on the most common upgrade paths and deployments of vCenter Server 6.5. The tool works by asking a series of questions while providing some guidance along the way to help answer those questions eventually making some recommendations on topology and upgrade and deployment steps.
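To make the question-and-answer flow concrete, here’s a toy sketch (my own, not the tool’s actual logic) of the kind of decision flow described above: answer a couple of questions and get a topology recommendation back. The function name and the two questions are illustrative assumptions; the real tool asks far more (HA, load balancers, sites, upgrade source version).

```python
def recommend_topology(vcenter_count: int, enhanced_linked_mode: bool) -> str:
    """Toy decision logic loosely modelled on the planning tool's questions."""
    if vcenter_count == 1 and not enhanced_linked_mode:
        # A single standalone vCenter is the simplest case
        return "vCenter Server with embedded PSC"
    # Multiple vCenters joined via Enhanced Linked Mode needed an
    # external PSC in the 6.5 era
    return "vCenter Server(s) with external PSC"

print(recommend_topology(1, False))  # → vCenter Server with embedded PSC
print(recommend_topology(3, True))   # → vCenter Server(s) with external PSC
```

The real tool then goes further and walks you through the deployment/upgrade steps for the recommended topology.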

In the past I used to refer to the VMware KB on deployment topologies: https://kb.vmware.com/kb/2147672

Some of the guys in the vSphere technical marketing team then came up with the PSC Topology Decision Tree which was a large poster – https://blogs.vmware.com/vsphere/2016/04/platform-services-controller-topology-decision-tree.html

This tool was inspired by the Decision Tree poster and extends its capability.

What I especially like about the tool is that after answering a series of questions regarding how I’m planning to design the vCenter/PSC deployment it gives me a recommended Topology diagram and then explains the steps to go about deploying the solution:

topology

Anyways, it’s a great tool…. and the portal is a brilliant collection of resources! Go use it! Bookmark it now…! =)


Goodbye vCenter Server for Windows and Flash-based vSphere web client!

Hmm…. it’s not even VMworld yet and VMware decide to make 2 big-ish announcements.

Although tbh, since vSphere 6.5 was released these 2 announcements have been a long time coming!

Finally, after loads of speculation, VMware have announced that vCenter Server for Windows and the Flash-based vSphere Web Client are to be deprecated with the launch of the next version of vSphere. Updates to 6.5 will continue supporting both, but come vSphere 7.0 they will be no more….

https://blogs.vmware.com/vsphere/2017/08/farewell-vcenter-server-windows.html

 

“vCSA-exclusive capabilities such as file-based backup and restore, unified update and patching, native vCenter High Availability, and a significant performance advantage mean that the VCSA has become the platform of choice for vCenter Server.  Additionally, due to the integrated nature of appliance packaging, VMware is able to both better optimize and innovate vCenter Server at an accelerated pace.  Finally, with the VCSA, VMware can provide support for the entire vCenter Server stack including the vCenter Server application, the underlying operating system (Photon OS), and the database (vPostgres). By doing so, VMware can ensure that customers can focus on what matters most while having a single source for updates, security patches, and support.  The VCSA model is simply a better model for vCenter Server deployment and lifecycle management.”

That pretty much sums up why VMware are 100% behind the vCSA, although they miss out the whole “screw you Microsoft licensing!!” part! Plus, given that 6.5 ships with a migration tool that helps you move/upgrade from a Windows vCenter to an appliance vCenter, it’s no surprise that more and more people are moving over when upgrade time comes round!

In fact ever since 6.5 was released, I’ve not even deployed a single Windows vCenter Server – all my customers have been moved over to the vCSA.

https://blogs.vmware.com/vsphere/2017/08/goodbye-vsphere-web-client.html

With regards to the vSphere Web Client, loads of people found the Flash-based version frustratingly difficult to use: it was slow, it was notoriously prone to crashing, and frankly it was based on insecure Flash technology (not to mention that Adobe themselves are dropping Flash). HTML5 is the way to go baby!

So with those announcements in mind….. I may think about changing some of my VMworld sessions to jump on the vCSA and Web Client update sessions!!

 

RIP…..

VMware vSAN 6.6 launched – so What’s New?

Earlier this year it was announced that vSAN had grown to over 7000 customers since launch, which is a pretty decent number given the product went GA just over 3 years ago and we’re on the 6th iteration! What’s even more impressive is how quickly VMware are turning these updates around (almost every 6 months we get an update of sorts), we only got vSAN 6.5 at VMworld last year and 6 months later we now have version 6.6 – what’s funny is half my customers haven’t even started implementing their 6.5 upgrade plan yet and now they will have to re-write that plan…. Lol… =)

In fact I see the number of customers growing quite significantly this year given the huge drive towards HCI – something that I’m seeing within my company’s customer-base (and in the market in general)!

Today sees vSAN 6.6 go GA, and it amazes me how many new features VMware have packed into this release – features that make vSAN faster, more cost-effective and much more secure! And to think that this is just a “minor” patch release! With vSAN 6.6, customers can now evolve their data centre without risk, control IT costs and scale to tomorrow’s business needs (sorry, that was a marketing blurb that I just had to fit in somewhere as it sounded good).

vSAN features

(Note: I know that slide says “Not for distribution”. However, the vSAN vExperts have been given permission to use the material in their blogs)

The biggest features in my opinion are vSAN Data-at-Rest Encryption, Unicast communication and Enhanced Stretched Clustering with Local Protection – these are the 3 features I’m going to concentrate on within this post, trying to expound on all the new features would involve me writing a lengthy technical whitepaper! =)

That said, other new features are as follows:

  • ESXi Host Client (HTML-5) – management and monitoring functionality available on each host in the case where vCenter server is offline.
  • Simpler installation/configuration – The ability to create a single node vSAN datastore by using the vCSA installer and then allowing you to deploy vCSA/PSC onto that vSAN datastore.
  • Enhanced rebalancing – allowing large components to be split up during redistribution.
  • Site Affinity in Stretched Clusters – a new affinity policy rule allows users to request where a VM gets deployed to, although this is only applicable when the PFTT is set to 0. It’s worth noting that DRS/HA rules should be aligned to data locality!
  • Always-On Protection – Enhanced repairs with Re-sync traffic throttling – allowing vSAN to respond to failed disks/nodes more quickly, intelligently and more efficiently. New Degraded Device Handling (DDH) intelligently monitors the health of drives and proactively evacuates data before failures can happen.
  • Maintenance Pre-Check – enhanced checks to ensure there are enough resources for vSAN when entering maintenance mode (or decommissioning vSAN nodes).
  • Stretched Cluster Witness Replacement UI – simpler method of changing the Witness host without having to disable the Stretched Cluster.
  • vSAN Cloud Analytics – pro-active, real-time support notifications and recommendations with real-time custom alerts through the vSAN health Service.
  • API enhancements – vSAN SDK updated to handle all new features, with additional enhanced PowerCLI support.
  • vSAN Config Assist / Firmware Update – Enhanced health monitoring and HCL checks using health-check assistant to ensure the vSAN hardware has the latest firmware and drivers installed.
  • Enhanced Performance – up to 50% higher all-flash IOPS performance per host, plus enhanced health monitoring.
  • New Hardware Support – support for Intel’s new Optane technology, NVMe SSDs and larger 1.6TB SSDs for cache drives.
  • Support for Photon Platform 1.1 as well as a Docker Volume Driver – great for customers (ie DevOps teams) who prefer working with micro-services/containers. This allows customers to use vSAN as storage for Docker VMs, giving them the ability to apply storage-based policies (such as FTT, QoS, access permissions, etc) to the VM. It also gives customers the ability to support persistent storage, allowing stateful container apps (such as DBs) to be built.

 

Data-at-Rest Encryption

EMC love calling this by the acronym D@RE…. But this hasn’t quite filtered down to the VMware team…. =)

VMware vSAN 6.6 introduces the industry’s first native HCI security solution with software-defined data-at-rest encryption within the hypervisor. Data-at-rest encryption is built right into the vSAN kernel and is enabled at the cluster level, allowing all vSAN objects to be encrypted (ie the entire vSAN datastore).

In my opinion this is one of the most important new features in vSAN 6.6 – we all know that security within IT has become top priority, featuring very high on a company’s risk register, but IT admins have always been reluctant to either deploy encryption at the OS level or let application owners encrypt their apps and data. Data-at-rest encryption takes away that decision by encrypting the data once it resides on your vSAN datastore.

It’s hardware-agnostic which means you can deploy the storage hardware device of your own choice – it doesn’t require the use of expensive Self-Encrypting Drives (SEDs)!

vSAN DARE

vSAN Encryption is available for both All-Flash and Hybrid configurations and integrates with KMIP 1.1 compliant key management technologies. When vSAN Encryption is enabled, encryption is performed using an XTS AES 256 cipher and occurs at both the cache and capacity tiers – wherever data is at rest – which means you can rest assured that if a cache or capacity drive is stolen, the data on it is encrypted! Plus, vSAN Encryption is fully compatible with vSAN’s all-flash space efficiency features, like dedupe, compression and erasure coding, delivering highly efficient and secure storage: as data comes into the cache tier it’s encrypted; as it de-stages it’s decrypted and any relevant dedupe or compression is applied to the data (4K blocks) before it’s re-encrypted as it hits the capacity tier (512B or smaller blocks). As this is encryption of data at rest, I believe that vSAN traffic traversing the network may be sent in the clear, which means you will need to ensure vSAN traffic is protected accordingly.
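The ordering of that de-stage pipeline is the interesting bit, so here’s a minimal sketch of it. Big caveat: the XOR “cipher” below is a stand-in purely to make the sequence visible – real vSAN uses XTS-AES-256 – and all the function names are mine, not a vSAN API.

```python
import zlib

KEY = 0x5A  # toy key for the XOR stand-in cipher (NOT real encryption)

def toy_encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

toy_decrypt = toy_encrypt  # XOR is symmetric

def destage(cache_block: bytes) -> bytes:
    """Model the de-stage path: decrypt, dedupe/compress in the clear,
    then re-encrypt before the block lands on the capacity tier."""
    plaintext = toy_decrypt(cache_block)   # decrypt on de-stage
    reduced = zlib.compress(plaintext)     # space efficiency sees plaintext
    return toy_encrypt(reduced)            # re-encrypt at capacity tier

block = toy_encrypt(b"A" * 4096)           # encrypted as it enters the cache tier
capacity_block = destage(block)
assert zlib.decompress(toy_decrypt(capacity_block)) == b"A" * 4096
```

The point the sketch makes: compression/dedupe only works because the data is briefly decrypted during de-stage – encrypt first and the space-efficiency features would see random-looking bytes.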

It’s worth mentioning that whilst the cryptographic mechanics are similar to the VM encryption introduced in vSphere 6.5 (ie it requires a KMS and uses the same encryption modules), there is a vast difference in the way they’re implemented: VM encryption is per-VM (via the vSphere API for IO Filtering – VAIO), whilst vSAN encryption covers the entire datastore. You also get the space-saving benefits from vSAN encryption, as previously mentioned. The other major difference is that vSAN encryption can carry on functioning if vCenter Server is lost or powered off, because the encryption keys are distributed to each vSAN host and, via KMIP, each host talks directly to the KMS, whereas VM encryption requires you to go through vCenter Server to communicate with the KMS. Not to mention VM encryption does have some performance impact and requires Enterprise Plus licenses.

Turning on vSAN encryption is as simple as ticking a checkbox within the settings of the vSAN cluster and choosing your KMS (which does need to be set up prior to enabling encryption). However, it’s worth noting that a rolling disk reformat is required when encryption is enabled, which can take a considerable amount of time – especially if large amounts of data residing on the disks must be migrated during the reformatting.

vsan-encrypt

With the enhanced API support, customers who like to automate their infrastructure will be able to setup an encrypted vSAN cluster with all the relevant KMS configuration via scripting – great for automating large scale deployments!

 

Removal of Multicast

vSAN Multicast

Another big announcement with vSAN 6.6 is that VMware are switching from multicast to unicast as the communication mechanism. This obviously makes the networking a lot simpler to manage and set up, as customers won’t need to enable multicast on their network switches, or IGMP snooping, or even PIM for routing. It may even mean that customers could use cheaper switches (which may not handle multicasting very well).

Bit of background:

Typically IP Multicast is used to efficiently send communications to many recipients. The communication can be in the form of one source to many recipients (one-to-many) or many sources to many recipients (many-to-many).

vSAN used multicast to deliver metadata traffic among cluster nodes for efficiency and to optimise network bandwidth consumption for the metadata updates. This eliminated the computing resource and network bandwidth penalties that unicast imposes in order to send identical data to multiple recipients. vSAN depended on multicast for host discovery – the process of joining and leaving cluster groups – as well as other intra-cluster communication services.

While Layer 3 is supported, Layer 2 is recommended to reduce complexity. All VMkernel ports on the vSAN network subscribe to a multicast group using IGMP. IGMP snooping configured with an IGMP querier can be used to limit the multicast traffic to only the switch ports where the vSAN uplinks are connected to – this avoids unnecessary IP multicast floods within the Layer 2 segments.

One issue that could occur was when multiple vSAN clusters resided on the same Layer 2 network – the default multicast address had to be changed on the additional vSAN clusters to prevent multiple clusters from receiving each other’s multicast streams.

I believe vSAN now relies on vCenter Server to determine cluster membership; however, I haven’t yet read about how the vSAN team have implemented unicast communication, as that information is still in limited supply. It’ll be interesting to understand how they’ve done it, considering multicast was an efficient and easy way of replicating instructions to multiple nodes within the vSAN cluster when a node needed to perform an action. One thing worth noting, though, is that unicast communication probably lends itself to cloud platforms far more easily than trying to implement a multicast solution!
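The trade-off being made here can be sketched in a couple of lines. This is purely conceptual (not how vSAN counts packets internally): with multicast, one datagram reaches every group subscriber, whereas with unicast the sender must transmit one copy per peer, so sender-side cost grows linearly with cluster size.

```python
def sends_required(cluster_size: int, mode: str) -> int:
    """Datagrams a node must transmit to update every other cluster member."""
    peers = cluster_size - 1
    # multicast: one send to the group address reaches all subscribers;
    # unicast: one send per peer
    return 1 if mode == "multicast" else peers

print(sends_required(8, "multicast"))  # → 1
print(sends_required(8, "unicast"))    # → 7
```

Which is presumably why multicast was chosen originally – and why dropping it trades a little bandwidth for a lot of operational simplicity.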

 

Local Protection for Stretched Clusters

Stretched vSAN Clusters were introduced back in vSAN 6.1 and built on the foundations of Fault Domains – essentially a RAID-1 configuration of a vSAN object across two sites, meaning a copy of the data in each site with a witness site providing quorum-type services during failure events. The problem was that if one site failed you would only have a single copy left, and an additional failure could lead to data loss. It also meant that if a single host failed in either site, the data on that host would need to be resynced from the other site (to rebuild the RAID-1).

vSAN ESC

This new enhancement to Stretched Clusters now gives users more flexibility with regards to local and site protection. For example, you can now configure the local clusters at each site to tolerate two failures whilst also configuring the stretched cluster to tolerate the failure of a site! Brilliant news!

When enabling Stretched Clusters, there are now two protection policies – a “Primary FTT” and a “Secondary FTT”. Primary FTT defines the cross-site protection and is implemented as a RAID-1; it can be set to 0 or 1 in a stretched cluster, where 0 means the VM is not stretched and 1 means it is. Secondary FTT defines how the VM is protected within a site, and this can be RAID-1, RAID-5 or RAID-6.

One thing to note is that the witness must still be available in order to protect against the loss of a data site!

This new feature doesn’t increase the amount of traffic being replicated between sites as a “Proxy Owner” has been implemented per site, which means instead of writing to all replicas in the second site, a single write is done to the Proxy Owner and it’s then the responsibility of this Proxy Owner to write to all the replicas on that local site.
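The two-level policy and the proxy-owner write path described above can be sketched as follows. The function names and the returned field names are mine for illustration – they’re not the vSAN policy API – but the behaviour matches the post: PFTT of 0 or 1 across sites, SFTT within a site, and one write over the inter-site link regardless of how many local replicas exist.

```python
def failures_tolerated(primary_ftt: int, secondary_ftt: int) -> dict:
    """What a stretched-cluster policy tolerates, per the description above."""
    assert primary_ftt in (0, 1), "stretched clusters allow a PFTT of 0 or 1"
    return {
        "site_failures": primary_ftt,            # RAID-1 across sites
        "host_failures_per_site": secondary_ftt, # RAID-1/5/6 within a site
    }

def writes_across_isl(replicas_in_remote_site: int) -> int:
    # With a per-site proxy owner, a write crosses the inter-site link once;
    # the proxy owner then fans it out to the local replicas.
    return 1

print(failures_tolerated(1, 2))
print(writes_across_isl(3))  # → 1, regardless of replica count
```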

 

So that’s about it for now…. if you require more information then pop along to the following sites:

Duncan Epping (Chief Technologist in the Office of CTO for the Storage & Availability BU at VMware) has created some great demos of vSAN 6.6 which can be found on his blog site: http://www.yellow-bricks.com

Things to Note

The underlying release for vSAN 6.6 is vSphere 6.5.0d which is a patch release for vSphere 6.5. For existing vSAN users upgrading to vSAN 6.6, please consult VMware Product Interoperability Matrices to ensure upgrading from your current vSAN version is supported.

Please note that for vSAN users currently on vSphere 6.0 Update 3 – upgrade to vSAN 6.6 is NOT yet supported.

The parent release of vSAN 6.6 is vSphere 6.5 and, as shown by the VMware Product Interoperability Matrices, an upgrade from 6.0 U3 to vSphere 6.5 (and hence vSAN 6.5) is NOT supported. Please refer to the KB “Supported Upgrade Paths for vSAN 6.6” for further details.

 

p/s: I’ve always liked Rawlinson Rivera‘s Captain vSAN cartoon!! =)

VMware sells off vCloud Air to OVH

Hmm…. so that was an interesting announcement from VMware last week!….. although if I’m honest it makes perfect sense!

OVH Group has announced its intent to acquire the vCloud Air business from VMware: https://www.vmware.com/radius/vmware-cloud-air-evolves/

Last year when VMware announced their tie-up with AWS – VMware Cloud on AWS – many had already started wondering what that partnership would do to VMware’s own cloud offering. The talking point was made more real when VMware also announced their Cross-Cloud Architecture, which would allow a customer to choose which cloud platform to deploy their workloads onto – all from a single common operating environment. Then to make things worse, VMware announced VMware Cloud Foundation on IBM Cloud (or what was SoftLayer)… an SDDC stack running the VMware goodies on IBM Cloud compute!

That triple whammy pretty much made everyone think that vCloud Air’s time was up!!

I had a number of discussions at VMworld Europe last year where we talked about whether VMware would just shut down vCloud Air, or migrate it all onto AWS. The general consensus, though, was that maybe they would sell off/spin off that part of their business – after all, VMware is a software business and vCloud Air was always seen as a ‘weird’ sibling…. not to mention that it competed against all its vCAN (VSPP) partners who were offering their own cloud services built on VMware technology!

I guess there’s no shame in what VMware are doing, Cisco, Dell and HP tried and failed to do what Amazon and Google are doing well at… although surprisingly Microsoft have managed to get Azure up and running well!

In a way, VMware are getting rid of what they probably saw as a hefty investment on infrastructure and hosting for little returns (I doubt there were many customers using vCloud Air to justify the expense of keeping it). Makes more sense to sell it to an existing cloud provider who knows how to sell Public Cloud services and IaaS! Although, I kind of have to wonder what OVH will do given VMware hosted vCloud Air in Equinix/Telstra data centres around the world….. guessing they’ll run down the contract with those providers and bring it all back in house!

In my opinion, selling off vCloud Air is probably a smart move….. VMware’s vision is to enable a customer to run “Any Application on Any Cloud, accessed by Any Device”, and it was going to be difficult to be Cloud-Agnostic if they owned a Public Cloud service! The whole Cross-Cloud Architecture would have produced a conflict of interest if they kept vCloud Air…. now that they’re shot of it, they can concentrate on pushing out their vCloud stack onto Azure and maybe even GCP given that they’re well on their way with the AWS partnership. Why try and beat them at their own game? It’s far easier to embrace them and partner!!

VMware are positioning themselves to be the broker of cloud services…. a single management point that allows end users to decide which public cloud is best for their workloads! In a way it’s a clever move, firstly because it puts the decision-making back with the end user, and secondly it now means that VMware can state that it’s the only virtualisation company that doesn’t tie you into a single cloud vendor (much like how Microsoft tries to ram Azure down the throat of Hyper-V customers).

Interesting times ahead……

Opinion Piece on VMware Licensing

So over the past few months I’ve been seeing a lot of customers within the Public Sector and Education looking at transitioning off VMware vSphere and onto Microsoft Hyper-V! With tightening budgets or even budget cuts, IT admins in these industries are looking for quick wins in slashing their IT bills and many see dropping VMware for the ‘free’ Microsoft hypervisor as an obvious choice!

The problem is, you can argue about VM densities per host, resource scheduling, live migrations, DR, and other technical aspects of why vSphere trumps Hyper-V…. However, the reply is always the same…. “Well Hyper-V is Good Enough for our environment…. and it’s Free!!”

Yes, Hyper-V is good enough as a hypervisor… and yes it’s free…. but when you have a large estate, the density ratio impacts the number of servers you need to buy, and you still need to invest in System Center Virtual Machine Manager (SCVMM) if you want to effectively manage a cluster of Hyper-V hosts.

Unfortunately, I’m now of the impression that VMware advocates can no longer keep using the same argument when doing comparisons between vSphere and other hypervisors…. IT admins just don’t care any more…. “if the hypervisor is free and can virtualise my servers, then that’s the one I’m going for!!”

Anyways, I ended up sitting down and writing an opinion piece for SearchVMware.com on this topic….. you can view it here:

http://searchvmware.techtarget.com/opinion/Could-market-saturation-push-VMware-to-make-vSphere-Standard-free

VMware NSX – IOChain and how packets are processed within the kernel

During a meeting with a client, while I was going over how packets are processed within the IOChain between a VM and a vSwitch, I was asked a question that stumped me…. what happens at Slot 3?

It’s common knowledge that the first 4 and last 3 slots in the IOChain are reserved for VMware, and slots 4-12 are reserved for 3rd parties where services are inserted (or traffic redirected).

During my discussions I’ve only ever spoken about Slots 0-2 and 4-12…..

After much digging around and questioning the NSBU SEs, I was told that there was no real answer apart from it’s probably a VMware reserved slot for future use. =)

It’s also worth noting that Slot 15 used to be classed as a “reserved slot for future use” but is now intended to be used for Distributed Network Encryption when it becomes available (makes sense that encryption is the last thing that happens on the IOChain for packets leaving a VM, and decryption being the first for packets entering the VM).

Anyways, decided it’s probably worth blogging about IOChain slots. =)

 

So when a VM connects to a Logical Switch, each packet traverses several security services, which are implemented as IOChains processed within the vSphere kernel.

Slot 0: DVFilter – the Distributed Virtual Filter monitors ingress/egress traffic on the protected vNIC and performs stateless filtering and ACLs.

Slot 1: vmware-swsec – the Switch Security module learns the VM’s IP/MAC address and captures any DHCP ACK or ARP broadcasts from the VM, redirecting the request to the NSX Controller – this is the ARP suppression feature. This slot is also where NSX IP SpoofGuard is implemented.

Slot 2: vmware-sfw – this is where the NSX Distributed Firewall resides and where DFW rules are stored and enforced (so firewall rule and connection tables).

Slot 3: reserved for future use by VMware

Slot 4-12: 3rd party services – this is where traffic is redirected to 3rd party service appliances

Slot 13-14: reserved for future use by VMware

Slot 15: Distributed Network Encryption (when it becomes available)
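The slot list above can be modelled as a simple ordered chain. The slot numbers and service names come straight from the list; the chain-walk itself is just illustrative (it’s not how the VMkernel represents filters internally).

```python
IOCHAIN = {
    0: "dvfilter (stateless filtering/ACLs)",
    1: "vmware-swsec (ARP suppression, SpoofGuard)",
    2: "vmware-sfw (Distributed Firewall)",
    3: "reserved (VMware, future use)",
    **{s: "3rd-party service insertion" for s in range(4, 13)},
    13: "reserved (VMware, future use)",
    14: "reserved (VMware, future use)",
    15: "Distributed Network Encryption (when available)",
}

def egress_path() -> list:
    """A packet leaving a VM hits the slots in ascending order."""
    return [IOCHAIN[s] for s in sorted(IOCHAIN)]

print(egress_path()[0])   # dvfilter is first on egress
print(egress_path()[-1])  # encryption is last, as noted above
```

Ingress is simply the reverse walk, which is why decryption would be the first thing to happen to a packet entering a VM.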

vSphere 6.5 Product Interoperability – brain fade moment!

So it’s probably worth reminding everyone that there are still VMware products that are not yet supported on vSphere 6.5!

I unfortunately found out the hard way when I broke my work’s demo environment (or at least half of it).

Now, even though I’ve blogged about compatibility issues previously, eating too many mince pies and drinking too much bucks fizz over the Christmas and New Year festivities has obviously taken its toll on my grey matter, and coming back to work in the new year I decided it would be a nice idea to upgrade part of my work’s demo environment to vSphere 6.5 so that we could use it to demo to customers!

The problem was I upgraded the part of the lab running NSX, and when I got to the point of trying to push the NSX VIBs onto the ESXi hosts (when preparing the hosts to join the NSX cluster), it was having none of it and kept failing! After several unsuccessful attempts, it slowly dawned on me that NSX was one of those ‘unsupported’ products that doesn’t work with vSphere 6.5…..

Damn…..

Fortunately I didn’t destroy my old vCenter Server 6.0u2 appliance so was able to roll back by re-installing the ESXi hosts with 6.0.

 

Anyways, the products still not supported are:

  • VMware NSX
  • VMware Integrated OpenStack
  • vCloud Director for Service Providers
  • vRealize Infrastructure Navigator
  • Horizon Air Hybrid-Mode
  • vCloud Networking and Security
  • vRealize Hyperic
  • vRealize Networking Insight

 

Definitely worth keeping an eye on this VMware KB: Important information before upgrading to vSphere 6.5 (2147548)

And if you do end up upgrading to vSphere 6.5, then make sure you follow the recommended upgrade sequence in this VMware KB: Update sequence for vSphere 6.5 and its compatible VMware products (2147289)

What’s new with VMware vSAN 6.5?

Given that I’m a VMware vExpert for vSAN, I guess I’m kind of obliged to write about what’s new with the latest iteration of vSAN – 6.5….. =)

vSAN 6.5 is the 5th version of vSAN to be released, and it’s had quite a rapid adoption in the industry as end-users start looking at hyper-converged solutions. There are over 5000 customers now utilising vSAN for everything from production workloads through to test & dev, including VDI workloads and DR solutions! This is quite impressive considering we’re looking at a product that’s just under 3 years old… it’s become a mature product in a very short period of time!

The first thing to note is the acronym change…. it’s now little ‘v’ for vSAN in order to fall in line with most of the other VMware products! =)

So what are the key new features?

1. vSAN iSCSI

This is probably the most useful feature in 6.5 as it gives you the ability to create iSCSI targets and LUNs within your vSAN cluster and present these outside of the vSAN cluster – which means you can now connect other VMs or physical servers to your vSAN storage (this could be advantageous if you’re trying to run an MSCS workload). The iSCSI support is native within the VMkernel and doesn’t use any sort of storage appliance to create and mount the LUNs. At present only 128 targets are supported, with 1024 LUNs and a max LUN size of 62TB.

vsan-iscsi

It seems quite simple to set up (famous last words – I’ve not deployed 6.5 with iSCSI targets yet). The first thing is to enable the vSAN iSCSI Target service on the vSAN cluster; after that you create an iSCSI target and assign a LUN to it… that’s pretty much it!

The great thing about this feature is that because the LUNs are basically vSAN objects, you can assign a storage policy to them and use all the nice vSAN SPBM features (dedupe, compression, erasure coding, etc).
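Those maximums (128 targets, 1024 LUNs, 62TB per LUN) are worth baking into any automation you write around this. A hypothetical validation helper – the function and its names are mine, not a vSAN API – might look like:

```python
# Documented vSAN 6.5 iSCSI limits, per the post above
MAX_TARGETS = 128
MAX_LUNS = 1024
MAX_LUN_TB = 62

def validate_iscsi_config(targets: int, luns: int, largest_lun_tb: float) -> bool:
    """Check a planned iSCSI layout against the vSAN 6.5 maximums."""
    return (targets <= MAX_TARGETS
            and luns <= MAX_LUNS
            and largest_lun_tb <= MAX_LUN_TB)

print(validate_iscsi_config(10, 100, 62))    # within limits
print(validate_iscsi_config(200, 100, 10))   # too many targets
```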

2. 2-node direct connect for vSAN ROBO + vSAN Advanced ROBO

Customers find it quite difficult to justify purchasing a 10GbE network switch just to connect together a few nodes at a ROBO site. VMware have taken customer feedback on board and added a new feature which allows you to directly connect the vSAN ROBO nodes together using a crossover network cable.

In prior versions of vSAN, both vSAN traffic and witness traffic used the same VMkernel port, which prevented the use of a direct connection as there would be no way to communicate with the witness node (usually back in the primary DC where the vCenter resides). In vSAN 6.5 you now have the ability to separate vSAN and witness traffic onto separate VMkernel ports, which means you can directly connect your vSAN ports together. This is obviously great as you can then stick in a 10GbE NIC and get 10Gb performance for vSAN traffic (and vMotion) without the need for a switch!

vsan_2node_robo

The only minor issue is that you need to use the CLI to run some commands to tag a VMkernel port as the designated witness interface. Also, the recommended setup is to use 2 VMkernel ports per traffic flow in order to give you an active/standby configuration.

vsan-2node2nic

It’s also worth noting that the new vSAN Advanced ROBO licenses now allow end-users to deploy all-flash configurations at their ROBO site with the added space efficiency features!

3. vSAN All-Flash now available on all license editions

Yup, the All-Flash tax has gone! You can now deploy an All-Flash vSAN configuration without having to buy an Advanced or Enterprise license. However, if you want any of the space-saving features such as dedupe, compression and erasure coding then you still require at least the Advanced edition.

4. 512e drive support

With larger drives now coming onto the market, there have been requests from customers for 4K drive support. Unfortunately there is still no support for 4K native (4Kn) devices; however, there is now support for 512e devices (where the physical sector is 4K but the logical sector emulates 512 bytes).

More information on 4Kn or 512e support can be found here: https://kb.vmware.com/kb/2091600

5. PowerCLI cmdlets for vSAN

New cmdlets are available for vSAN allowing you to script and automate various vSAN tasks (from enabling vSAN to the deployment and configuration of a vSAN stretched cluster). The most obvious use will be using cmdlets to automatically assign storage policies to multiple VMs.

More info on the cmdlet updates is available here: http://blogs.vmware.com/PowerCLI/2016/11/new-release-powercli-6-5-r1.html

6. vSAN storage for Cloud Native Apps (CNA)

Integration with Photon means you can now use a vSAN cluster in a CNA environment managed by Photon Controller. In addition, now that vSphere Integrated Containers (VIC) is included with vSphere 6.5, you can use vSAN as storage for the VIC engine. Finally, the Docker Volume Driver enables you to create and manage Docker container data volumes on vSAN.

For more information about vSAN 6.5, point your browsers to this great technical website: https://storagehub.vmware.com/#!/vmware-vsan/vmware-vsan-6-5-technical-overview

VMware makes welcome changes in vSphere 6.5

So the 2nd and 3rd parts of my vSphere 6.5 articles have made it onto the SearchVMware.com website… you can read them here:

http://searchvmware.techtarget.com/tip/VMware-vSphere-65-puts-emphasis-on-security-applications

http://searchvmware.techtarget.com/tip/VMware-makes-welcome-changes-in-vSphere-65

 

You can read part 1 here: http://searchvmware.techtarget.com/tip/VMware-focuses-on-simplicity-in-vSphere-version-65