The Joys and Pains of Upgrades…..

So with the latest update releases across all the VMware products, I set about upgrading my demo kit last Friday……

Upgrading the vCenter Server Appliance was straightforward enough….

  • mount the update ISO to the VM (scriptable too – see the sketch below)
  • browse to the appliance’s management web page
  • click Update Repository, selecting Use CD-ROM Updates
  • follow the wizard and kick back and relax as it goes off and does its job…..

After 2hrs and a quick reboot – voila…. sorted!
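For anyone who’d rather script the ISO attach than click through the vSphere Client, a minimal pyVmomi sketch along these lines should do it – this is my own addition, and the host, credentials, VM name and datastore path are all placeholders (newer setups may also need an SSL context passed to SmartConnect):

```python
# Sketch: attach an update ISO to a VM's CD-ROM drive via the vSphere API.
# All names/credentials below are placeholders - adjust for your environment.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.demo.local',
                  user='administrator', pwd='password')
content = si.RetrieveContent()

# Find the appliance VM by name (walks the whole inventory - fine for a lab).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'vcsa-demo')
view.Destroy()

# Point the VM's existing CD-ROM device at the update ISO and connect it.
cdrom = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualCdrom))
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=cdrom)
change.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(
    fileName='[datastore1] iso/vcsa-update.iso')
change.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    connected=True, startConnected=True)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```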

Next up was an update of my demo DR vCenter Server, installed on a Windows VM with SQL Express…. again, simple and quick:

  • mount the ISO file, run the SSO installer (the separate one, not Simple Install, as that doesn’t work) and run through the wizard, making sure you use the SSO admin password! Reboot (as the installer changes the Windows services).
  • run the Web Client installer. Once complete, check SSO hasn’t blown up or lost its AD domains (see the quick service check below)! =)
  • run the Inventory Service installer.
  • run the vCenter Server installer.
  • run the vSphere Client installer.
  • an hour later and it’s all done (actually I was multi-tasking with other work, so I guess you’re talking 20 mins in total).
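One habit I’d add (not part of VMware’s official process): after all those reboots, script a quick check that the vCenter-related Windows services actually came back up. A rough Python sketch – the service names below are assumptions from my notes, so verify the exact names in services.msc first:

```python
# Sketch: confirm vCenter-related Windows services are running post-upgrade.
# Service names are assumptions - check the real names in services.msc.
import subprocess

SERVICES = [
    'vpxd',                 # vCenter Server (assumed service name)
    'vspherewebclientsvc',  # vSphere Web Client (assumed service name)
    'vimQueryService',      # Inventory Service (assumed service name)
]

for name in SERVICES:
    result = subprocess.run(['sc', 'query', name],
                            capture_output=True, text=True)
    state = 'RUNNING' if 'RUNNING' in result.stdout else 'NOT running!'
    print(f'{name}: {state}')
```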

At some point I’ll have to update the production vCenter Server, which I’m hoping will be the same pain-free and quick process – the only difference is we’re using SQL Server 2008 R2…. if I’m daring enough, maybe I’ll patch the SQL Server to SP2…. =)

The only issue I encountered was that after upgrading the VDP appliance, it no longer runs backup jobs!

There seems to be a general consensus in the community that VDP either works or it doesn’t! It either backs up VMs flawlessly, or it errors out and you spend hours trying to work out what caused the error, because the logging functionality is pretty pants!

The reason I jumped straight into the VDP upgrade to 5.1.10 was the known issue with backing up Windows 2008 R2 VMs – mentioned here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2035736

However, after a long and quite tedious upgrade process, the VDP appliance now fails to run any backup jobs and errors out with ‘no proxy available to service backup jobs’:

vdp: Failed to initiate a backup or restore for a virtual machine because no proxy was found to service the virtual machine

Pretty much stuck now, as I can’t find any mention of how to clear the error or re-attach the proxies…… *sigh*

It’s still early days for this update, and so far I can’t find anyone else online who has encountered the same problem!

May have to give VMware tech support a call……

 

Edit: Well, it looks like there are already problems with the upgrade process – mainly SSO (yet again) not working well with multiple identity domains and users associated with a large number of groups: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2050941


vSphere & vCenter 5.1 update

Been a bit quiet lately as I’ve been snowed under with work and training courses….. >_<”

Anyways, some good news from VMware: they’ve finally released an update to 5.1 that allows you to use Windows Server 2012 as the underlying OS for vCenter Server! Not to mention support for SQL Server 2008 R2 SP2 and 2012!

Yay…..

http://blogs.vmware.com/vsphere/2013/04/vmware-vcenter-server-5-1-update-1-released.html


10 most influential bosses in the storage arena

Quite an interesting article on The Reg regarding the 10 most influential bosses in the storage arena…..

Surprised at seeing the CEO of Dropbox in the top 10….. as well as the CEO of Amazon and Facebook’s Open Compute Project at 7 & 8….. I guess cloud computing and cloud storage are really coming to the fore now!

Dropbox is pretty much used by most people in the IT industry to share large presentations and software…. a great, handy tool!

Amazon have pretty much commercialised ‘cloud computing’, which explains why they’re there…..

And Facebook are pretty much leading the way with non-vendor-specific hardware (although I’m quite surprised Google aren’t listed, as I’m sure they’re in the same boat as Facebook, building their own piecemeal hardware, and they have vast storage requirements).

Violin Memory’s inclusion over more established storage vendors like IBM and NetApp is really interesting – especially since they mainly sell all-flash arrays. Then again, I suppose they’re currently one of the ‘innovators’ in the flash-array market and have been making big waves in flash-array tech…..

Not surprised that NetApp aren’t on there, as frankly they haven’t innovated for a long time and, in my opinion, haven’t really impressed the market with some of their ONTAP products (ie Clustered ONTAP). But I’m really surprised at seeing HP at No. 6, given how they let their EVA and StorageWorks portfolio decline over the years (although I believe they’re trying to rescue the division with the acquisition of 3PAR). Again, I would have thought IBM would be on there, considering their new Storwize V7000 arrays have been receiving decent praise (then again, we don’t go up against IBM much – if a client is pretty much a ‘Big Blue’ house, they tend to be pretty closed off to other vendors!).

No surprise to see Joe Tucci – CEO of EMC – at No. 1….. no other CEO has overseen the recovery of such a giant and gone on to make such clever acquisitions (Data Domain, Avamar, VMware, Isilon, XtremIO, Greenplum). I wonder what the future holds for EMC… they’re already No. 1 in so many different product areas (data dedupe, storage, virtualisation, etc)… You have to wonder who’s next on their acquisition trail!

Another name missing from this list is Samsung…… Aren’t they the leading manufacturer of flash memory and SDRAM? I’m pretty sure I read that they’re looking at entering the PCIe flash-storage card market soon!

 

Anyways, interesting times ahead in the storage arena……. I guess the next big move would be to link server-based PCIe flash storage with back-end flash-based storage arrays!

Big Data & Hadoop

So one of the things I’ve had to pick up very quickly since joining MTI is what Big Data is all about and how different vendors are approaching the management of ‘Big Data’….

If you Google “Big Data” you get so many websites blasting you with technical gobbledegook…. all the big infrastructure companies (IBM, Microsoft, EMC) and even the consulting companies (Accenture, McKinsey) are guilty of trying to ‘over-explain’ the concept!! It was quite frustrating that I couldn’t find one major player in the market that could explain what Big Data is in layman’s terms!! Not even a single simple paragraph without any of their marketing/technical bullsh!t spin….

It’s interesting that the best explanation of Big Data came from Intel…… who aren’t a storage vendor (after all, you need storage for ‘big data’), aren’t a server manufacturer (ok, so not totally true, but I’m talking about the HPs/Dells of this world) and aren’t a consulting company trying to sell loads of ‘Professional Services’……. Have a read of their whitepaper here: http://www.intel.co.uk/content/www/uk/en/big-data/unstructured-data-analytics-paper.html

 

So, Big Data…… what is it?!? Well, from what I can gather, it’s a general term describing the explosion of information that has occurred over the years with the greater use of the internet, social media, electronic communication, data gathering, etc….. A vast amount of unstructured information of different varieties that companies are having trouble connecting together to make any business use of – ie an asset that they can’t utilise or analyse!

Big Data is characterised by the 3 Vs: Volume, Variety and Velocity……. the best explanation I found of these Vs was on the SAS website (http://www.sas.com/big-data/):

  • Volume – Many factors contribute to the increase in data volume – transaction-based data stored through the years, text data constantly streaming in from social media, increasing amounts of sensor data being collected, etc. In the past, excessive data volume created a storage issue. But with today’s decreasing storage costs, other issues emerge, including how to determine relevance amidst the large volumes of data and how to create value from data that is relevant.
  • Variety – Data today comes in all types of formats – from traditional databases to hierarchical data stores created by end users and OLAP systems, to text documents, email, meter-collected data, video, audio, stock ticker data and financial transactions. By some estimates, 80 percent of an organization’s data is not numeric! But it still must be included in analyses and decision making.
  • Velocity – According to Gartner, velocity “means both how fast data is being produced and how fast the data must be processed to meet demand.” RFID tags and smart metering are driving an increasing need to deal with torrents of data in near-real time. Reacting quickly enough to deal with velocity is a challenge to most organizations.

This is where Big Data analytics comes into play……. it’s a technology-enabled strategy for gaining a better understanding of the data held by a company – a more accurate insight into a customer/partner/business – and ultimately gaining competitive advantage. By having the ability to process and analyse real-time data (or stored data), companies can uncover hidden patterns, unknown correlations and other useful information in order to make decisions faster, monitor emerging trends, rapidly change direction and jump on new business opportunities!

 

Ok, so that pretty much sounded like marketing bullsh!t, so I have to apologise for writing all that……. but essentially it’s all about tapping into the large amount of information that people are able to get their hands on and analysing it in order to extrapolate some form of useful insight that will benefit you! It’s amazing how many jobs there are in the market for ‘big-data analysts’ or ‘data scientists’, not to mention the number of vendors jumping on the bandwagon!

One of the articles I read on Intel’s whitepaper mentioned a very interesting fact about data growth…. that it took “from the dawn of civilization to 2003 to create 5 exabytes of information, we now create that same volume in just two days! By 2012, the digital universe of data will grow to 2.72 zettabytes (ZB) and will double every two years to reach 8 ZB by 2015.”
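Out of curiosity I sanity-checked the doubling claim – three years at a two-year doubling period does indeed land at roughly the quoted 8 ZB:

```python
# Quick sanity check of the data-growth figures quoted above.
zb_2012 = 2.72               # zettabytes in 2012
doubling_period = 2          # years per doubling
years = 2015 - 2012          # 3 years -> 1.5 doublings
zb_2015 = zb_2012 * 2 ** (years / doubling_period)
print(round(zb_2015, 2))     # ~7.69 ZB, i.e. roughly the quoted 8 ZB
```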

 

One name that keeps getting mentioned is Hadoop……. What the hell is Hadoop?? Fortunately, Googling Hadoop gave better results that were easier to digest!

(Interesting fact: Hadoop is actually named after a toy elephant belonging to the lead programmer’s son!)

The Apache Hadoop project is an open-source software framework (written in Java) that supports data-intensive distributed applications…. it enables reliable, scalable, distributed computing, where an application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster.

Basically, the Hadoop stack is fast becoming the go-to approach to unstructured data analytics. The complete technology stack includes common utilities, a distributed file system, analytics and data storage platforms, and an application layer that manages distributed processing, parallel computation, workflow and configuration management.
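To make the ‘small fragments of work’ idea concrete, here’s the classic word-count example written for Hadoop Streaming (streaming lets you write the map and reduce steps in any language – Python here; this isn’t from Intel’s paper, just the standard illustration):

```python
#!/usr/bin/env python3
# Classic word-count for Hadoop Streaming: run with 'map' or 'reduce' as the
# argument. Hadoop shuffles and sorts the mapper output between the phases.
import sys

def mapper():
    # Emit '<word>\t1' for every word - each input split is one work fragment.
    for line in sys.stdin:
        for word in line.split():
            print(f'{word}\t1')

def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit('\t', 1)
        if word != current:
            if current is not None:
                print(f'{current}\t{count}')
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f'{current}\t{count}')

if __name__ == '__main__':
    mapper() if sys.argv[1] == 'map' else reducer()
```

You’d submit it with something like hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper ‘wordcount.py map’ -reducer ‘wordcount.py reduce’ -file wordcount.py (the streaming jar’s path and filename vary by Hadoop version).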

If you want more information, the best sources are Hadoop’s own website: http://hadoop.apache.org/ and Intel’s whitepaper: http://www.intel.co.uk/content/www/uk/en/big-data/cloud-builders-xeon-apache-hadoop-guide.html?wapkw=cloud+builder+hadoop

(Another interesting fact: supposedly – and I guess not surprisingly – one of the biggest Hadoop clusters in the world is at Facebook; they have over 100PB of data!)

 

So given that VMware’s motto is to “virtualise everything” in a “software-defined datacentre”, it comes as no surprise that they’re trying to get people who are looking at the Hadoop stack to stick it on VMware. And it does make sense in some ways to stick a Hadoop cluster into a virtualised environment…. companies don’t need to data-crunch every hour of the day (ok, some do), and sticking it on VMware allows you to use ‘elastic scaling’ on the Hadoop cluster as and when more resources are required to crunch through the data! It makes use of a cloud-computing model that allows self-service consumption!

In addition, the ability to share the infrastructure with non-big-data resources makes sense – because VMs are isolated from each other, you can run your Hadoop cluster alongside your other business application workloads…..

http://www.theregister.co.uk/2013/04/02/vmware_serengeti_hadoop_update/

http://cto.vmware.com/expanding-the-virtual-big-data-platform/

 

Anyways, I’m still learning on the job…… but at least I now know enough to talk my way out of a situation if a client ever asks the same questions I asked at the start of this post! =)