Wednesday 15 February 2012

VASA and Storage Profiles

Note:
This article started off as just a bit of news, but as I went on I found myself writing more and more about VASA and Storage Profiles in vSphere 5. So it could be of interest to someone who doesn't even use NetApp storage…

This one kind of slipped under my radar. I'm doing some VCP prep work, looking at VASA and Storage Profiles, and it occurred to me that I hadn't spoken to my storage partners (Dell, NetApp, EMC) for a while on this subject. I'd been so focused on SRM 5.0 last year that I really hadn't kicked the tires on the new vSphere 5 features yet…

I pinged NetApp to ask if they had a drop of their VASA plug-in. Normally with this sort of thing I have to approach my storage partner directly to get access to code, because I kinda fall through the cracks – I'm neither an employee, a customer nor a partner in the regular sense of the word.

Anyway, it turns out that NetApp have some public betas of their vSphere 5 integration pieces.

Their VASA Provider 1.0 Beta is available to download here: https://communities.netapp.com/groups/vasa-provider-10-public-beta

Their VSC 4.0 Beta is here: https://communities.netapp.com/groups/vsc-40-external-beta

and their System Manager 2.0 for Mac OSX is here: https://communities.netapp.com/message/64401

Kudos to NetApp for making these public betas available, and for making them so easy to download – all you need is a forum account…


In case you don't know, VASA is the sister to the VAAI API. Put together, VASA and VAAI give you the best storage integration with vSphere 5. VASA stands for vStorage APIs for Storage Awareness – the API allows the storage vendor to communicate information back to the vSphere environment. In contrast, VAAI (vStorage APIs for Array Integration) in most cases sends instructions to the storage array – such as "copy this VM" – and offloads the cloning process. VAAI is enabled by default, and all that is needed is a supported array with supported firmware. VASA, on the other hand, requires that you install/enable the storage vendor's provider in the "Storage Providers" area of vSphere. Most vendors have chosen to automatically register their VASA provider for you, or provide a utility that does it once the provider has been installed.
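
If you want to double-check that VAAI really is switched on, the primitives show up as host advanced settings. Here's a quick PowerCLI sketch – the vCenter name is made up, and you'll need a reasonably recent PowerCLI build:

  Connect-VIServer -Server vcenter.lab.local   # hypothetical vCenter name

  # The three VAAI primitives are host-level advanced settings; a value of 1
  # means enabled, which is the default on hosts with a supported array.
  $vaaiSettings = 'DataMover.HardwareAcceleratedMove',
                  'DataMover.HardwareAcceleratedInit',
                  'VMFS3.HardwareAcceleratedLocking'

  foreach ($vmhost in Get-VMHost) {
      foreach ($setting in $vaaiSettings) {
          Get-AdvancedSetting -Entity $vmhost -Name $setting |
              Select-Object @{N='Host';E={$vmhost.Name}}, Name, Value
      }
  }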

At a basic level VASA will report status, info, topology and usage back to the vSphere Client. Where things get interesting is when you start using it alongside Storage Profiles, Datastore Clusters and Storage DRS. Storage Profiles allow you to categorize your datastores by any method you like (the Gold, Silver, Bronze analogy seems to predominate). This sort of categorization is known as "user-defined"; in contrast, "system-defined" categories are generated by the storage vendor – using the VASA provider. This functionality also plugs into Storage DRS, because vSphere can use the information from the VASA provider (together with Storage Profiles) to make judgement calls on where to place a VM, and also when/where to move the VM if it finds the datastore it lives on is running out of space or running out of IOPS.
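
If you're pairing this with Storage DRS, the space/latency thresholds that drive those "move the VM" decisions live on the datastore cluster itself. A rough PowerCLI sketch – the cluster name is made up, and I'm assuming the threshold parameters are exposed as shown (check Get-Help Set-DatastoreCluster on your build):

  # Hypothetical cluster name; the threshold parameter names are my assumption –
  # verify against Get-Help Set-DatastoreCluster on your PowerCLI build.
  Set-DatastoreCluster -DatastoreCluster (Get-DatastoreCluster -Name 'Gold-Tier') `
                       -SdrsAutomationLevel FullyAutomated `
                       -SpaceUtilizationThresholdPercent 80 `
                       -IOLatencyThresholdMillisecond 15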

Different vendors have different ways of getting their VASA provider into the vSphere environment. For example, with the Dell Host Integration Tools for VMware Edition (HIT-VE), Dell have chosen to put ALL of their vSphere integration into a virtual appliance that you download and point at the various components – vCenter, View and so on. By going through this configuration, the Dell EqualLogic VASA Provider is automagically added to the Storage Providers management page.

You will see the evidence of VASA on the properties of the datastore in the Datastores & Datastore Clusters view, and also when you enable Storage Profiles and click the Manage Storage Capabilities button.

In contrast, the NetApp VASA provider is a .EXE install. Critically, it should not be installed on the vCenter server (that's not the case with their VSC). So I took a spare Windows system (my SRM host for NYC actually – I'm going to be rebuilding my lab soon, so this server won't live for very long…). After the install you'll be asked if you want to run the NetApp VASA configuration utility. There are four pieces of configuration to do:

  • Username/Password to "register" the VASA with vCenter – Click Save
  • Name/IP of NetApp Array together with username/password – Click Add
  • Set your thresholds – These allow you to set the parameters for when VASA thinks a volume or aggregate is full or nearly full.
  • Name of vCenter, and username/password to authenticate with it – Click Register
  • Click OK

This will add NetApp to the Storage Providers management page, and also update the Storage Profiles "Manage Storage Capabilities" dialog.
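
If you'd rather script the registration (or just verify it) instead of clicking through the vendor utility, later PowerCLI builds have VASA provider cmdlets. A hedged sketch – cmdlet availability depends on your PowerCLI version, and the provider URL, port and credentials below are placeholders:

  # List what's already registered under Storage Providers
  Get-VasaProvider

  # Registering by hand is also possible; use whatever endpoint and credentials
  # the vendor's install actually gives you – these are made up.
  New-VasaProvider -Name 'NetApp-VASA' -Username 'vasa-admin' -Password 'VMware1!' `
                   -Url 'https://vasa-host:8143/vasa/version.xml'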

Back in the "Manage Storage Capabilities" dialog, as you can see, the name/description kind of works through all the combinations of configuration you could have. Interestingly, there's no "vendor identifier" here, so when I enable multiple storage vendors' providers this dialog is going to look a bit complicated. It might be tricky to ID the right storage vendor if you work with multiple storage vendors in a single instance of vCenter. Remember, these attributes are later selected and made part of a storage profile – that storage profile gets attached to the VMs – so the right VMs go on the right storage. I know from experience that all the entries that begin with "RAID" are coming from the Dell VASA provider, and that all the entries below them that begin with "NFS" or "VMFS" are coming from the NetApp VASA provider.

I'm not surprised to see this really. I found a similar situation arises when I use the vendors' own storage plug-ins like EMC's VSI, NetApp's VSC and Dell's HIT-VE. The plug-ins kind of assume you work with just one storage vendor. So occasionally the EMC VSI will hit one of my NetApp NFS volumes – and in its "Storage Views" wonder what the heck this is. It's an irritation, not a show stopper – but if you work with multiple vendors like I do, it does require you to work out which data applies and which doesn't…

Anyway, to test the VASA provider, I created a new storage profile that I called "NetApp and SRM" – and selected the system-defined attribute of NFS; Performance; Replication. I knew to select this particular system-defined attribute not because I'm some kind of storage brain-box/guru, but by looking at some of my datastores which were already set up for SnapMirror, with Protection Groups backing them in VMware Site Recovery Manager.

I repeated this configuration for the Dell EqualLogic VASA provider too – checking the "Name" on one of the replicated datastores, and then creating a storage profile called "Dell and SRM"…

Where the VASA configuration shows itself immediately is when you go to create a brand new VM. When you get to the part where you locate the VM on storage, you can select the preferred profile – and then the dialog will filter the datastores. So when I selected the "NetApp and SRM" storage profile, it correctly filtered out all the datastores that weren't NFS, Performance and Replicated and stuck them in the "Incompatible" area; when I selected the "Dell and SRM" storage profile, it did the same for the Dell datastores.
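
You can do the same filtering the wizard does from PowerCLI as well – a sketch, assuming the SPBM cmdlets are available in your build:

  foreach ($profileName in 'NetApp and SRM', 'Dell and SRM') {
      $policy = Get-SpbmStoragePolicy -Name $profileName
      # Datastores that satisfy the profile; everything else is what the
      # wizard dumps into the "Incompatible" area
      Get-SpbmCompatibleStorage -StoragePolicy $policy |
          Select-Object @{N='Profile';E={$profileName}}, Name, FreeSpaceGB
  }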

Conclusions & Thoughts

The first thing you should know about storage profiles – simply selecting one does NOT stop an end-user selecting an incompatible datastore. So it's more of a "guide" to the user creating the VM than an enforced policy. That's by design, and not a bug incidentally. I'm wondering if Storage Profiles might have more "value" to VMware customers if they were a policy, not a profile…

Secondly, even though you have a VASA provider installed – it doesn't make the Storage Profile for you. You have to do that. I guess storage can have MANY attributes, and therefore you might only want to use a handful of the system-defined attributes available.
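
A quick way to see what the providers are actually surfacing before you decide which handful to build profiles from – again a sketch that assumes the SPBM cmdlets are available, and that your array surfaces a boolean "Replication"-style capability (mine is an assumption):

  # Everything the registered VASA providers advertise – pick from this list
  # when building your profiles
  Get-SpbmCapability | Sort-Object Name

  # And a sketch of building a profile around one of them. The capability name
  # and the boolean value are assumptions – match them to whatever your
  # provider actually reports.
  $cap = Get-SpbmCapability -Name '*Replication*' | Select-Object -First 1
  New-SpbmStoragePolicy -Name 'Replicated datastores' `
      -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules (New-SpbmRule -Capability $cap -Value $true))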

Thirdly, remember system-defined and user-defined attributes are assigned to the datastore (automatically in the first case, and manually in the second). Storage Profiles are assigned to VMs – either to all of the VM's files, or with different profiles for the different virtual disks that make up the VM. That's why, in the UI, they are actually called "VM Storage Profiles".

Fourthly, although storage profiles are pretty funky, they don't get automagically applied to VMs that have already been created. Instead, existing VMs need their storage profile configured for them. This allows the "compliance" feature in Storage Profiles to report clearly whether a VM meets the profile or not. If profiles were just applied automagically, you would get garbage in = garbage out in the reports – it would be to assume that every VM was already on the right type of storage. That would be a rather big assumption…

To assign a storage profile to an existing VM, edit the settings of the VM and set it under the "Profile" tab.

I'm sure that caped-crusader @PowerCLIman would be able to automate this for us… though I think that would mean we would have to be VERY consistent in our VM builds. Every VM would need the correct type of data assigned to each disk (disk1=OS, disk2=swap, disk3=data, disk4=logfiles, for example).
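
Something along these lines is what I have in mind – a hedged PowerCLI sketch that assumes exactly that disk consistency (disk 1 = OS, disk 2 = swap), that the SPBM cmdlets are in your build, and that both profile names exist (the "Windows Swap File" one is something I get to in a minute):

  $osPolicy   = Get-SpbmStoragePolicy -Name 'NetApp and SRM'
  $swapPolicy = Get-SpbmStoragePolicy -Name 'Windows Swap File'   # hypothetical profile

  foreach ($vm in Get-VM -Name 'DB*') {
      # VM home/config files follow the OS profile
      Get-SpbmEntityConfiguration -VM $vm |
          Set-SpbmEntityConfiguration -StoragePolicy $osPolicy

      foreach ($disk in Get-HardDisk -VM $vm) {
          # Disk 2 is the relocated Windows page file in my builds
          $target = if ($disk.Name -eq 'Hard disk 2') { $swapPolicy } else { $osPolicy }
          Get-SpbmEntityConfiguration -HardDisk $disk |
              Set-SpbmEntityConfiguration -StoragePolicy $target
      }
  }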

The Check Compliance Now button & view allows you to confirm that your VMs are located on the right datastores…
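
The same view is scriptable too – a minimal sketch, again assuming the SPBM cmdlets; the output lists each VM/disk, its profile and its compliance status:

  # Per-VM and per-disk compliance, the scripted equivalent of the
  # "Check Compliance Now" view
  Get-VM | Get-SpbmEntityConfiguration | Format-Table -AutoSize
  Get-VM | Get-HardDisk | Get-SpbmEntityConfiguration | Format-Table -AutoSize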

In that compliance view you can see that "Hard Disk 2" on DB01 is "non-compliant". Why is that? Well, this virtual disk is on ordinary non-replicated storage. Why? Because it holds the Windows page file, which has been relocated out of C: to prevent it being replicated – and thus save bandwidth. What I really need is a "user-defined" storage capability called "Windows Swap File"… attach that to the datastores that meet that specification, then create a Storage Profile and update the settings behind the VM to indicate that "NetApp and SRM" is used for "Hard Disk 1" and "Windows Swap File" is used for "Hard Disk 2".

That's quite a bit of work… First create the definition in "Manage Storage Capabilities"

Then find the datastores that I think are suitable for this type of data…

Then create a storage profile that use the new user-defined attribute…

And finally update the settings of my existing VMs to indicate the right storage profile to be used with the right disk…
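
For what it's worth, the whole four-step routine scripts up reasonably well. A hedged sketch: in vSphere 5.0 the user-defined capability is created in the "Manage Storage Capabilities" dialog, whereas the cmdlets below model it as a datastore tag – so treat this as the later-PowerCLI way of doing the same job, with made-up datastore names:

  # 1. The "user-defined capability", modelled here as a datastore tag
  $cat = New-TagCategory -Name 'UserStorageCapability' -EntityType Datastore
  $tag = New-Tag -Name 'Windows Swap File' -Category $cat

  # 2. Attach it to the datastores that hold the non-replicated swap disks
  #    (datastore names are made up)
  Get-Datastore -Name 'nyc_swap_*' | New-TagAssignment -Tag $tag

  # 3. A profile built on the new user-defined capability
  $swapPolicy = New-SpbmStoragePolicy -Name 'Windows Swap File' `
      -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules (New-SpbmRule -AnyOfTags $tag))

  # 4. Point "Hard disk 2" on DB01 at it
  Get-HardDisk -VM 'DB01' | Where-Object { $_.Name -eq 'Hard disk 2' } |
      Get-SpbmEntityConfiguration |
      Set-SpbmEntityConfiguration -StoragePolicy $swapPolicy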

Once I "check for compliance" again, I get the lovely green ticks I want that tell me all is right in the world…

So we're all good now…

Personally, I quite like VASA & Storage Profiles – although it's a bit of legwork to get it all up and running. It would have been nice to have a more efficient way of applying the right storage profile to existing VMs – as ever, vCenter doesn't cut the mustard when it comes to "bulk administration".

It's also made me wonder how this feature fits in with SRM and Datastore Clusters.

It works quite well with SRM I think – I can filter which datastores are replicated and which are not… but something tells me I would end up creating user-defined storage profiles that represented my RPOs – so I could filter volumes by sync/async or by the replication cycle – 15min, 30min, 1hr and so on… The vendor-based information is great, don't get me wrong. It's just that it doesn't offer the right level of information to make a decision on what this VM's RPO is. I guess that might be something that will come in future revisions of VASA…

I'm not really sure how storage profiles would "add value" to Datastore Clusters. Normally, when we talk about datastore clusters, we're creating a grouping of N datastores to make a single object (the Datastore Cluster) – so, for example, 4x 500GB datastores could be brought together to make a single 2TB Datastore Cluster. The datastores that make up the cluster will generally have the same IO properties, but they can reside on different arrays. When a new VM gets created it is dropped on the "cluster", but ultimately it must reside on one of the datastores that make up the cluster – just like when you place a VM on an HA/DRS cluster, fundamentally it must execute on one of the ESX hosts in the cluster. Normally, when DS clusters are discussed, folks talk about them having names like "Gold, Silver & Bronze". So I'm wondering if the future will be DS Clusters called "Gold – RPO 15mins", "Silver – RPO 30mins" and "Bronze – No Protection". If we do that, it makes me wonder what the value of Storage Profiles will be – when the datastore or Datastore Cluster name already gives the person creating the VM the guidance they need.
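
If that's the direction things go, the plumbing is simple enough – a sketch with made-up datacenter/datastore names, assuming New-DatastoreCluster and Move-Datastore are in your PowerCLI build:

  $dc  = Get-Datacenter -Name 'NYC'                         # hypothetical datacenter
  $dsc = New-DatastoreCluster -Name 'Gold - RPO 15mins' -Location $dc

  # Drop the 15-minute SnapMirror volumes into it – datastore names are made up
  Get-Datastore -Name 'netapp_repl15_*' | Move-Datastore -Destination $dsc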

Also, I'm wondering from an SRM perspective how Datastore Clusters would change the way I present SRM. I normally create datastores with a server type or application in mind. Should I really be creating datastores/clusters based on their attributes…?

Mike Yallits
mikeyallits@westco.ca
