MTI Secure Hyper-Converged Infrastructure Webinar & Guide

At the end of February I presented a webinar with my colleague, Andrew Tang, on Key Challenges and Considerations for Securing Hyper-Converged Infrastructure.

The webinar has been uploaded for public consumption by the marketing team at MTI Technology.

As I mentioned previously on my blog, I don't really touch on specific products in this webinar, as the last thing customers want is to be shoehorned into a particular vendor's offering. Instead, I hope the webinar gives enough information about what HCI is in general, why customers should be looking at HCI during their next infrastructure refresh and, more importantly, what to consider when evaluating an HCI solution!

You can access the webinar recording here: https://mti.com/secure-hci-webinar-page/ (sorry, you have to fill in your details to gain access….)

Marketing has also finally released the HCI guide that Andrew and I put together; feel free to download it here: https://bit.ly/2qMY6qJ

Finally, if you’re interested in talking more about HCI then feel free to contact me or register for one of MTI’s HCI Discovery Workshops: https://bit.ly/2vQO3Gb

Spectre & Meltdown Update

So it seems that the microcode patches released by VMware associated with their recent Security Advisory (VMSA-2018-0004) have been pulled….
https://kb.vmware.com/s/article/52345
So that's ESXi650-201801402-BG, ESXi600-201801402-BG, and ESXi550-201801401-BG.

The microcode provided by Intel turned out to be buggy: there seem to be issues when VMs access the new speculative-execution control mechanism on Haswell and Broadwell processors. However, I can't find much detail on what these issues actually are…

For the time being, if you haven't applied one of those microcode patches, VMware recommends that you don't, and that you apply the patches listed in VMSA-2018-0002 instead.

If you have already applied the latest patches, you will have to edit the config file on each ESXi host to add a line that hides the new speculative-execution control mechanism from guests, and then reboot the VMs on that host. Detailed information can be found in the KB above.
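For reference, the edit itself is tiny. Here's a sketch of what it looks like from the ESXi shell, assuming the masking line takes the cpuid.7.edx form described in the KB – copy the exact mask string from KB 52345 rather than from here, as the line below just illustrates the format:

    # Append the line that hides the new speculative-execution control mechanism
    # from guests (take the exact mask from KB 52345), then reboot the VMs on the host
    echo 'cpuid.7.edx = "----:00--:----:----:----:----:----:----"' >> /etc/vmware/config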

 

Finally, William Lam has created a very handy PowerCLI script that will help provide information about your existing vSphere environment and help identify whether you have hosts impacted by Spectre and this new Intel Sighting issue: https://www.virtuallyghetto.com/2018/01/verify-hypervisor-assisted-guest-mitigation-spectre-patches-using-powercli.html

vCenter Server Migration Tool: vSphere 6.0 Update 2m

Last year I blogged about the vCS to vCSA converter tool that VMware Labs released as a Fling, and how I had used it to convert pretty much all of my lab vCenters (all bar one) to vCSAs. Since then I've been following the releases, and a few months ago I noticed the Fling had been deprecated (i.e. you can no longer download it). I didn't think much of it as VMworld 2016 was just around the corner, so I assumed it might be rolled into an impending vSphere/vCenter release. Unfortunately that never quite materialised in Las Vegas, and rumour has it that vSphere 6.5 might be released in Barcelona.

So I was pleasantly surprised when I got an email notification from VMware Blogs informing me that a new minor update of vSphere had been released specifically for migration purposes – vSphere 6.0 Update 2m (where the 'm' stands for migration).

vSphere 6.0 Update 2m is an automated, end-to-end migration tool that takes you from a Windows vCenter Server 5.5 (any update) to a vCenter Server Appliance 6.0 Update 2 – pretty much what the Fling used to achieve.

It's common knowledge that migrating from a Windows vCenter Server (with a SQL backend) to a vCenter Server Appliance was not an easy task – in fact I've told 90% of my customers to just start afresh rather than go through the pain of scripting a migration. So I'm really glad that VMware have rolled the converter Fling into an actual production release – we now have an end-to-end migration tool that takes all the pain out of the equation!

Those of you interested in migrating from a Windows vCenter Server 5.5 (any update) to a vCenter Server Appliance 6.0 Update 2 should download and use this release. The vSphere 6.0 Update 2m download is an ISO containing the Migration Tool and vCenter Server Appliance 6.0 Update 2, roughly 2.8GB in size.

Note: you cannot use this release to deploy a new installation of the vCSA! To do that, just use the standard vCSA 6.0 Update 2 installer.

What’s Supported:

  • Previous versions of Windows vCenter Server will need to be upgraded to vCenter Server 5.5 prior to migration.
  • The best thing is that all database types currently supported with vCenter Server 5.5 will be migrated to the embedded vPostgres database in the vCSA!
  • It’s worth noting that if VMware Update Manager is installed on the same server as the Windows vCenter Server 5.5, it will need to be moved to an external server prior to starting the migration process.
  • VMware and 3rd party extension registrations are migrated, but may need to be re-registered.
  • Both Simple and Custom vCenter Server 5.5 deployment types are supported.
  • Configuration, inventory and alarm data will be migrated automatically; migrating historical and performance data (stats, tasks, events) is optional.
  • If the source was a Simple vCenter Server 5.5 install (so SSO + vCS) then it will be migrated to a vCSA with embedded PSC.
  • If the source was a Custom vCenter Server 5.5 install (so separate SSO and vCS) then it will be migrated to a vCSA with external PSC.

Some things worth mentioning before starting a migration:

  • The migration preserves the personality of the Windows vCenter Server, including (but not limited to) IP address, FQDN, UUID, certificates and MoRef IDs.
  • Changing your deployment topology during the migration process is not allowed. For example, if your vSphere 5.5 Windows vCenter was deployed using the Simple deployment option, it will become a vCenter Server Appliance 6.0 with an embedded PSC.
  • During the migration process the source Windows vCenter Server will be shut down, so plan accordingly for downtime.
  • The migration tool is also performing an upgrade, so standard compatibility and interoperability checks still apply. Use the interoperability matrix to make sure all your VMware solutions are compatible with vSphere 6.0, and talk to your 3rd party solution vendors to confirm their products are compatible too.

 

The only annoying thing is that because I’ve used the fling previously to convert all my Windows vCenter Servers, I now don’t have anything I can test this migration tool on!! >_<”

I’m currently in the process of digging out an old vCenter Server 5.5 ISO so that I can deploy it and upgrade it using the new release!

 

Anyways, for those of you who haven't yet upgraded to vCenter Server 6.0 and moved to the appliance, there's now no reason why you can't, as you have a fully supported tool from VMware!

Best of all, they’re in the process of improving the migration tool so that it can be used to migrate from a Windows vCenter Server 6.0 install to a vCenter Server Appliance 6.0. One feature I hope they will also include is the ability to migrate from an existing vCSA to another vCSA.

vCenter Server 6.0 Update 2m links:

 

Installing vShield Endpoint (vCNS Mgr 5.5.4-3)

Very quick blog entry as I’m busy tying up loose ends before jetting off on my summer hols….

It's pretty easy to install vShield Endpoint as it's a wizard-based OVA deployment. I'm not going to step through the process as it's very simple (plus the install guide explains it very well). Once that's done, log into the console and run 'setup' to configure the IP address and DNS information.

After that, it’s a case of logging into vShield Manager and connecting to vCenter Server.

Once connected to the vCenter, you should see your datacenter and hosts in a hierarchical tree on the left menu. Select each host and install vShield Endpoint.

vShield Installation guide: http://www.vmware.com/pdf/vshield_55_install.pdf

However, I did encounter a few issues (due to prior deployments which hadn’t been cleaned up properly).

Error 1: VMkernel Portgroup present on incorrect vSwitch
This occurred because the hosts had a previous vSwitch labelled vmservice-vswitch, but the VMkernel port vmservice-vmknic-pg resided on a different vSwitch (previous deployment). To correct this I had to delete the old VMkernel port and recreate it on the correct vmservice-vswitch.
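For reference, here's a rough esxcli equivalent of that fix from the ESXi shell – the vmk number, the name of the wrong vSwitch and the IP address below are hypothetical placeholders, so substitute the values from your own environment (the install guide lists the expected defaults):

    # Remove the stale VMkernel port and the portgroup from the wrong vSwitch
    esxcli network ip interface remove --interface-name=vmk1
    esxcli network vswitch standard portgroup remove --portgroup-name=vmservice-vmknic-pg --vswitch-name=vSwitch1
    # Recreate the portgroup on vmservice-vswitch and re-attach the VMkernel port
    esxcli network vswitch standard portgroup add --portgroup-name=vmservice-vmknic-pg --vswitch-name=vmservice-vswitch
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vmservice-vmknic-pg
    esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=169.254.1.1 --netmask=255.255.255.0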

Error 2: VirtualMachine Portgroup present on incorrect vSwitch

Again, this was due to a misconfiguration in a previous deployment! What should happen is that once you've set up the vmservice-vswitch and created the vmservice-vmknic-pg portgroup and VMkernel port, the installer creates a new portgroup on that vSwitch called vmservice-vshield-pg. Like before, this was residing on the wrong vSwitch.

In the end I just deleted the wrong vSwitch and started again by creating the vmservice-vswitch and the vmservice-vmknic-pg. After that the installation of vShield Endpoint went swimmingly!


Which just goes to show that properly cleaning up old deployments in your demo environment can save you a lot of hassle later! =)

 

Known bug with upgrading vCSA via VAMI

So there's a known bug where upgrading the vCSA via the VAMI freezes at 70%. I was doing a mass upgrade of all the vCSAs in the demo environment at work, and every one of them got stuck at 70%.


After reading the Release Notes for 6.0U1b, it turns out it’s a known issue: http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-vcenter-server-60u1b-release-notes.html

In the vCenter Server Appliance Management Interface, the vCenter Server Appliance update status might be stuck at 70%
In the vCenter Server Appliance Management Interface, the vCenter Server Appliance update status might be stuck at 70%, although the update is successful in the back end. You can check the update status in the /var/log/vmware/applmgmt/software-packages.log file. After a successful update, a message similar to the following is seen in the log file:
Packages upgraded successfully, Reboot is required to complete the installation

Workaround: None.

Anyways, after checking software-packages.log I could see the 'Packages upgraded successfully' entry, so I just rebooted the vCSA. All up and working again!
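If you want to check for yourself before rebooting, it's a one-liner from the vCSA shell (the log path and success message are straight from the release notes):

    # If this returns the success entry, the update completed and a reboot finishes the job
    grep "Packages upgraded successfully" /var/log/vmware/applmgmt/software-packages.log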


If you want steps on how to upgrade your vCSA, then have a look at my previous blog entry: Upgrading vCenter Server Appliance to 6.0 update 1

Upgrading vRealize Operations to 6.2

Now that vRealize Ops 6.2 has been released, it’s time to upgrade your Ops Manager virtual appliance. So how do you do that? Well, it’s pretty simple actually!

Nearly all of VMware's virtual appliances have a simple upgrade process where you download an upgrade PAK file and upload it to the admin page of the appliance – once uploaded, it's just a simple 'click and install'!

  1. First up, download the 6.2 upgrade PAK files from the My VMware Portal. You will require TWO upgrade PAK files: one to upgrade the virtual appliance's OS, the other to upgrade the vROps product.
    For an OS upgrade, the file is: vRealize_Operations_Manager-VA-OS-xxx.pak
    For the product upgrade of virtual appliance clusters, the file is: vRealize_Operations_Manager-VA-xxx.pak
  2. Before starting the upgrade it’s probably best to either take a backup or a snapshot of your entire vRealize Operations cluster as a precaution.
    Note: The cluster can be online or offline when running the upgrade.
    Log into the master node administrator interface via your web browser:
    https://<master-node-FQDN-or-IP-address>/admin
  3. On the left navigation menu, click Software Update. Note the version that vROps is currently at (for me it was 6.1). Click Install a Software Update.
  4. First, perform the OS upgrade. This updates the OS on the virtual appliance and restarts each virtual machine. Follow the wizard to locate and install the OS PAK file.
    Note: If you have customised the content that vROps provides – such as alerts, symptoms, recommendations, and policies – and you want to install content updates, a best practice is to clone the content before performing the upgrade. You can then select the option to reset out-of-the-box content when you install the software update, and the update will provide new content without overwriting any customised content.
  5. Click Upload to stage the upgrade files.
  6. Once upload has completed, a summary of what the PAK file contains is listed. Click Next and accept the EULA, then click Finish to start the upgrade process.
  7. Once the upgrade is complete, vROps will restart and you'll need to log back into the admin page. Navigate to Software Update and you will see a message stating which software update was installed.
  8. Now repeat the upload and installation process for the Product upgrade PAK file.
  9. Once again, vROps will reboot after the Product upgrade PAK file has been installed. Log back in and navigate to Software Update; you should now see that vROps has been upgraded.

 

There you go… nice and simple!

If you encounter any issues, then head over to the vROps 6.2 Release Notes: http://pubs.vmware.com/Release_Notes/en/vrops/62/vrops-62-release-notes.html

Deploying VSAN 6.1 ROBO

One of the things I’m fortunate to have access to at MTI Technology is the Solution Centre which has all sorts of kit that can be used for demos and for consultants to play around with.

After coming back from VMworld, one of the things I really wanted to test out was how easy it would be to deploy VSAN 6.1 in a ROBO solution. Fortunately I had a pair of old Dell R810s lying around and managed to cobble together enough disks and a pair of SSDs in order to create two VSAN nodes!

VSAN ROBO allows you to deploy a 2-node VSAN cluster (rather than the standard 3 nodes) with a Witness Server located on another site – usually your primary data centre (as per the diagram below). It also allows several ROBO deployments to be managed from a single vCenter Server. VSAN ROBO uses the same concepts as VSAN Stretched Cluster, using Fault Domains to determine how data is distributed across the VSAN nodes. The Witness Server's sole purpose is to provide cluster quorum services during failure events and to store witness objects and cluster metadata, thereby eliminating the requirement for a 3rd physical VSAN node.

[Diagram: 2-node VSAN ROBO cluster with the Witness Server located at the primary data centre]

Note: Whenever you deploy any VMware product into a production environment, make sure you check the Hardware Compatibility List!
In my case, neither the server nor the storage controller in the R810 was supported for VSAN – but as this was only a demo environment it wasn't a top priority.

Before I go through how I configured VSAN ROBO, there are a few things I need to state upfront which I don't recommend doing in a production environment:

  1. Using the same subnet for the VSAN network – in my demo environment I only have 1 subnet, so I've had to stick everything on the same VLAN. Ideally you should separate the VSAN traffic from the management and VM traffic.
  2. Using an SSD from a desktop PC as the cache drive – ideally this should be an enterprise-grade SSD, as VSAN uses the SSD for caching and you really need one with a high endurance rating.

There are also a few limitations in the ROBO solution compared with standard VSAN:

  • No SMP-FT support.
  • The maximum value for NumberOfFailuresToTolerate is 1.
  • The number of Fault Domains is limited to 3 (2 physical nodes plus the witness server).
  • No All-Flash VSAN.
  • VSAN ROBO licensing is purchased in packs of 25 VMs, with 1 license per site. This means a maximum of 25 VMs can be licensed per site; however, 1 pack can be used across multiple ROBO sites (e.g. 25 VMs across 5 sites).

From a configuration perspective, setting up a VSAN cluster for ROBO is extremely simple as it is performed through a wizard within the vSphere Web Client. From a network perspective, the two VSAN cluster nodes must be configured on a single layer-2 network with multicast enabled. There are also a few requirements for the link to the Witness Server:

  • At least 1.5 Mbps of bandwidth between the nodes and the witness
  • No more than 500 milliseconds of latency (RTT)
  • Layer 3 network connectivity to the nodes in the cluster, with no multicast required

 

So for my demo environment I have 2x R810s, each with 1x Intel Xeon X6550 and 32GB RAM. For my SSD I found an old 240GB Micron M500 (MLC NAND flash) and stuck it into a Dell HD caddy, and for my HDDs I have 5x 146GB SAS drives. The Witness Server resides within my main VMware environment (which runs on UCS blades and a VNX5200).

I won't go into how I installed vSphere ESXi 6.0 U1; just remember that you'll need to install ESXi onto an SD card or USB drive, as you want all the local drives free for VSAN (in my case I installed ESXi onto an 8GB USB drive).

I created a new VMware cluster within my vCenter and added the 2 VSAN nodes. I then deployed the Witness Server, which is a nested ESXi host packaged as a virtual appliance. There are actually 3 sizes of Witness Appliance – Tiny, Medium and Large; I deployed a Medium appliance.

I won’t step through how to deploy the OVA as it’s pretty routine stuff. If you load up the console for the Witness server, you’ll be greeted with the familiar DCUI of vSphere ESXi.

Once it’s deployed and configured with the relevant IP address and hostname, you can add the Witness server into your vCenter Server as just another ESXi host.


One thing that's slightly different is that the Witness Server comes with its own vSphere license, so it doesn't consume one of yours. Note that the license key is masked so you can't use it elsewhere!

Once the Witness Server has been added to the vCenter Server you may find that there is a warning on the host which says “No datastores have been configured”

This occurs because the nested ESXi host does not have any VMFS datastores configured. The warning can be ignored, but if you're like me and hate exclamation-mark warnings in your environment, you can easily get rid of it by adding a small 2GB disk to the witness appliance VM (edit the hardware settings) and then creating a datastore on the new disk.

You'll notice that the icon for the witness appliance in the vCenter Server inventory is slightly different from your physical hosts – it's shaded light blue to differentiate it from standard ESXi hosts.

The next step is to configure the VSAN network on the witness server. There is already a pre-defined portgroup called witnessPg. Do not remove this portgroup, as it has special modifications that make the MAC addresses on the network adapters match the nested ESXi MAC addresses!
There should be a VMkernel port already configured in the portgroup; edit the port and tag it for VSAN traffic.

At this point, ensure that your witness server can talk to the VSAN nodes.

Note: Typically an ESXi host has a default TCP/IP stack, and as a result only a single default gateway – more often than not this default route is associated with the management network. In a normal deployment the VLAN for the management network would be isolated from the VSAN network, so there is no path between the two networks and no default gateway on the VSAN network. The way around this is to use static routes to define which path traffic between the witness server and the VSAN nodes should take. I won't go into configuring static routes in detail; you can find more information in the VSAN 6.1 Stretched Cluster Guide.
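That said, here's a minimal sketch of what such a static route looks like from the ESXi shell – every address below is a hypothetical placeholder for your own VSAN subnet and gateway:

    # On the witness (and similarly on each VSAN node for the return path)
    esxcli network ip route ipv4 add --gateway 192.168.109.254 --network 172.16.10.0/24
    # Confirm the route is in place
    esxcli network ip route ipv4 list
    # Then check the witness can actually reach a node's VSAN VMkernel IP
    vmkping -I vmk1 172.16.10.11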

Once your witness server is talking to the VSAN nodes, it’s time to configure the VSAN ROBO solution. This is as simple as creating fault domains.

I won't go into how to turn on the VSAN cluster and disk management as this is simple stuff and has been covered in numerous other VSAN blogs/guides. One thing I will mention is that because I have 2 very old servers, I had to configure each individual disk as a RAID-0 set, as the RAID controller in the server did not support pass-through. Once these were configured and detected by the ESXi host as storage devices, I then had to manually set the SSD device as a Flash Disk.

I also ended up manually claiming the disks for VSAN.

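For anyone who prefers the ESXi shell over the Web Client, here's a rough sketch of both of those steps – the device identifiers are hypothetical, so list your own with esxcli storage core device list first:

    # Tag the SSD as a flash device (the equivalent of 'Mark as Flash Disk' in the Web Client)
    esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.500xxxxxxxxxxxx1 --option="enable_ssd"
    esxcli storage core claiming reclaim --device=naa.500xxxxxxxxxxxx1
    # Manually claim the disks for VSAN (one cache SSD plus the capacity HDDs)
    esxcli vsan storage add --ssd naa.500xxxxxxxxxxxx1 --disks naa.500xxxxxxxxxxxx2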

Once the 2 nodes have been configured for VSAN, next comes the creation of the Fault Domains. As previously mentioned, VSAN ROBO works by creating 2 Fault Domains and a witness server – just like you would for a VSAN stretched cluster. However, in this case only 1 server is assigned to each fault domain.


Note: You've probably noticed that the wizard still states "VSAN Stretched Cluster" on all the screens; unfortunately VMware didn't write separate code for VSAN ROBO, so it's still classed as a stretched cluster.

Once VSAN ROBO has been deployed you can check the health of the VSAN by selecting the cluster and Monitor->Health.
The first warning is regarding the VSAN HCL, and shows that my server and its RAID controller are not listed in VMware's VSAN HCL. =)

Next, license the VSAN ROBO cluster, and note which features get switched off when licensing for VSAN ROBO.

There is already a default VSAN storage policy; creating a VM and assigning this policy gives a NumberOfFailuresToTolerate of 1. Viewing the Physical Disk Placement, you can see that the data is mirrored across the 2 VSAN nodes, with metadata stored on the Witness Server.

Something I found very useful was the "Proactive Tests" option for VSAN, which lets you perform a real-time test of cluster functionality and dependencies – creating a small VM, checking network multicast between hosts, and testing storage IO.


 

 

Voila…. a basic VSAN ROBO deployment…..

Don’t forget to download the Storage Management Pack for vROps so you can get an in-depth view of your VSAN deployment from within vROps:
https://solutionexchange.vmware.com/store/products/vrealize-operations-management-pack-for-storage-devices

Unable to connect to VAMI after upgrading the vCSA

One of the plus points of upgrading your vCenter Server Appliance to 6.0 update 1 is that VMware have re-introduced the Virtual Appliance Management Interface (VAMI). This was one of my bugbears with 6.0: any sort of administration/configuration work required you to access the vCSA shell!

Recently, after upgrading a customer's vCSA from 6.0 to 6.0 update 1, we couldn't access the VAMI to change the network and password policy settings. We rebooted the vCSA several times, but the VAMI remained inaccessible and Chrome just returned an error.

I couldn't work out why the VAMI service wasn't coming online. After several minutes of searching on Google, I came across the following VMware KB:
http://kb.vmware.com/kb/2132965

It turns out that there is a known bug with the VAMI web-service if you disable IPv6 within the vCSA console (which is what I had done as there was no requirement from the customer to use IPv6).

There is currently no resolution to this bug, and in order to solve the issue you have to edit the lighttpd configuration file.
(lighttpd is a light-weight open-source web server)

To work around the issue, set the server.use-ipv6 parameter to "disable" in /etc/applmgmt/appliance/lighttpd.conf:
  1. Connect to the vCenter Appliance or Platform Service Controller Appliance through SSH or console.
  2. Run this command to enable access to the Bash shell:
    shell.set --enabled true
  3. Type shell and press Enter.
  4. Open the lighttpd.conf file using a text editor:
    vi /etc/applmgmt/appliance/lighttpd.conf
  5. Search for the entry server.use-ipv6="enable".
  6. Change enable to disable:
    server.use-ipv6="disable"
  7. Start the VAMI service by running this command:
    service vami-lighttp start
  8. You should now be able to access the VAMI from a browser (https://vCSA_IP_address:5480 or https://vCSA_FQDN:5480).
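If you'd rather not edit the file by hand, the same change can be made in one shot from the Bash shell – a sketch, assuming the quoting in your lighttpd.conf matches the entry shown above:

    # Flip the IPv6 flag and restart the VAMI web service
    sed -i 's/server.use-ipv6="enable"/server.use-ipv6="disable"/' /etc/applmgmt/appliance/lighttpd.conf
    service vami-lighttp start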

Upgrading vCenter Server from 5.5 to 6.0u1

Now that VMworld Europe is over, I've had more time to sit down and look at MTI's Solution Centre, and decided to take the opportunity to upgrade my company's primary demo environment to vSphere 6.0. Previously I had held off doing the upgrade because we run a PernixData demo environment on our main ESXi cluster and were waiting for the new FVP to be released. Now that it has been (FVP 3.0), there was no reason to stick with an outdated environment!

So, like most guys who don't RTFM, I delved straight in and mounted the vCenter ISO to kick off the upgrade – the first thing it does is run a pre-upgrade check.

Unfortunately for my environment, the pre-upgrade check flagged an unsupported database version…

It turns out the lowest version of Microsoft SQL Server supported is 2008 R2 SP1, and the version I had deployed years ago was 2008 R2 RTM (no service packs).

To verify the SQL Server version, compatibility level, and edition you can execute a simple SQL query:

  1. Open SQL Server Management Studio and connect to the SQL Server that the vCenter Server database resides on.
  2. Run this query against the vCenter Server database to verify the version, level and edition:
    SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')

To find out what SQL server build you have, pop along to this great website: http://www.sqlsecurity.com/faqs-1/sql-server-versions/2008-r2

The Database Interoperability Matrix for VMware can be found here: http://www.vmware.com/resources/compatibility/sim/interop_matrix.php

So if you're in the same position as me, you pretty much have two options:

  1. Do a fresh install and lose all your historical data and other configurations from vCenter.
  2. Do a database migration to a supported DB.

Fortunately for me, you can easily migrate from SQL Server 2008 to 2012 – and again you have two options on how to do this:

  1. Do an in-place upgrade, where SQL Server is upgraded where it's currently installed.
  2. Do a database migration where the old SQL DB is migrated onto a new SQL Server environment.

In my case I decided the second option would be best, as I also wanted to upgrade the OS to Windows Server 2012. There are a number of migration methods available, but for me the easiest was to back up the old database and restore it onto the new SQL Server!

I won't go into how to deploy SQL Server 2012 as there are loads of tutorials online, so here's the process I used to back up and restore my DB:

Note: In order to transfer the backed-up database file from the old SQL Server 2008 R2 VM to the new SQL Server 2012 VM, I simply added a new vDisk to the 2008 VM, backed up the DB onto that vDisk, then attached it to the 2012 VM.
You will also need to know the user account assigned to the VCDB.
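If you're not sure which account that is, here's a quick way to list the users mapped into the VCDB – this assumes a trusted connection from the SQL VM itself, so adjust the authentication to suit your setup:

    rem List the database users (and their mapped logins) in the VCDB
    sqlcmd -S localhost -E -d VCDB -Q "EXEC sp_helpuser"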

  1. Before backing up the vCenter Database, ensure the vCenter Server Services are stopped.
  2. Back up the vCenter database from within SQL Server Management Studio: right-click the DB, then select Tasks and Back Up.
  3. Create a Full Backup and choose the destination (in my case a new disk which I will disconnect and add to the new SQL VM).
  4. Once the backup is complete, remove the vDisk from the VM, ensuring you choose the "Remove from virtual machine" option – DO NOT choose "… and delete files from disk".
  5. On the new SQL VM, create a new vDisk and select “Use an existing virtual disk”.
  6. Browse to the datastore containing the old SQL VM and select the vmdk file relating to the vDisk with the database backups.
  7. Once mounted, open a console to the new SQL VM and check that the DB backup files are there. Then open SQL Server Management Studio, right-click Databases and select Restore Database.
  8. Verify options are correct and restore.
    Restoring a database automatically creates the database files that are needed by the restoring database. By default, the files that are created by SQL Server during the restoration process use the same names and paths as the backup files from the original database on the source computer.
    Optionally, when restoring the database, you can specify the device mapping, file names, or path for the restoring database.
  9. When a database is restored on another system, the SQL Server login or Microsoft Windows user who initiates the restore operation becomes the owner of the new database automatically.
    Once the DB has been restored, there are a number of additional configurations required, one of which is to recreate the DB security users and SQL Agent Jobs.
  10. Create a new login on SQL Server 2012, making sure the new login matches the old one from SQL Server 2008. Assign the VCDB as the default DB and ensure the new user is the VCDB owner.
  11. Finally, change the DB compatibility level from 2008 (100) to 2012 (110), which allows use of the new SQL Server 2012 features. The following script can be used to automate the change (rather than going into each database's properties):
    USE [master]
    GO
    ALTER DATABASE [mydatabase] SET COMPATIBILITY_LEVEL = 110
    where [mydatabase] is the database whose compatibility level you want to change.
  12. Re-create all the SQL Server Agent jobs, for a complete list of the jobs that should be present, see:
    http://kb.vmware.com/kb/2033096
  13. Configure Microsoft SQL Server TCP/IP for JDBC and create a 64-bit ODBC DSN (see the command-line sketch after this list).
  14. Once the DB has been restored, you can remove the vDisk that was attached with the backup files.
  15. Complete the vCenter Server 6.0 installation (I won't go through the process here). For the demo environment we used an Embedded PSC deployment and, when prompted, selected the DSN pointing to the migrated VCDB, choosing to use the existing data rather than re-initialising the DB.
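As referenced in step 13, the 64-bit System DSN can also be created from an elevated command prompt rather than through the ODBC Administrator GUI. A sketch only – the driver name, server and database names are assumptions based on a typical SQL Server 2012 setup, so substitute your own:

    rem Create a 64-bit System DSN for the vCenter database (odbcconf ships with Windows)
    odbcconf CONFIGSYSDSN "SQL Server Native Client 11.0" "DSN=VCDB|SERVER=sql2012.lab.local|Database=VCDB"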