Monday, May 13, 2013

Array pair issues with NetApp NFS mounts

Where are my array pairs, NetApp?!


I have seen this issue a few times now, and it always seems to be with the NetApp arrays. You get everything configured, your array pairs are enabled, and you can see replicated devices, but you go to make a protection group and... no array pairs. I have seen this on both NFS and FC arrays, but in every case it presented and was fixed the same way. Here are the fixes I have seen work.

The "Include" List


This is the most obvious fix and is actually in the array documentation. Especially if you are using NFS mounts, you will need to go to Array Managers, click the array, click the "Edit Array Manager" link in the Summary tab, click Next until you reach the options page, and add the datastore names there. For NFS, you need to add the name that comes AFTER the mount host. For example, if you mounted the datastore with:

/NetAppArray/NFSDatastore

where /NetAppArray is the FQDN of the server exporting the share and /NFSDatastore is the mount point, you would add the datastore to the include list as just "NFSDatastore". Remember that this entry is CaSe SeNsItIvE, so be careful typing it in!
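If you want a quick sanity check on exactly which piece of the path to type, here is a minimal sketch (plain Python, nothing SRM- or NetApp-specific; the path is just the example above) that pulls out the mount-point name that belongs in the include list:

# Sketch only: derive the include-list entry from an NFS mount path.
# "/NetAppArray/NFSDatastore" is the example mount from this post.
mount = "/NetAppArray/NFSDatastore"

# Keep only the last path component (the mount point), dropping the server part.
include_entry = mount.rstrip("/").split("/")[-1]

print(include_entry)  # -> NFSDatastore (case-sensitive in the include list)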

IPs vs FQDN


This one is ANNOYING! Let's say you have your Prod site and a DR site. On the Prod site, you have the following datastore mounted and replicated:

/ProdNetAppArray/Prod_NFSDatastore

where /ProdNetAppArray is the FQDN of the share and /Prod_NFSDatastore is the mount point. This datastore is replicated to the following datastore at the DR site:

/10.10.20.100/DR_NFSDatastore

where /10.10.20.100 is the IP address of the share and /DR_NFSDatastore is the mount point. I have seen this mismatch cause issues even when the datastores are in the include list correctly. The fix here was to mount both sides the same way: either both by IP or both by FQDN. I have also seen cases where both were mounted by IP and changing them both to FQDN fixed the issue, and vice versa.
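If you want a quick way to spot that kind of mixed addressing, here is a rough sketch (plain Python; the two hosts are just the example values above, and it only classifies the strings, it doesn't touch the arrays or vCenter):

# Sketch only: flag mixed IP/FQDN addressing between the protected-site and
# recovery-site NFS mounts. The two share hosts are the example values from this post.
import ipaddress

def addressing_style(share_host):
    # Return "IP" if the share host is an IP literal, otherwise "FQDN/hostname".
    try:
        ipaddress.ip_address(share_host)
        return "IP"
    except ValueError:
        return "FQDN/hostname"

for host in ("ProdNetAppArray", "10.10.20.100"):
    print(host, "->", addressing_style(host))

# If the two sites come back with different styles, that is the mismatch described
# above; remount so both sites use the same style (both IP or both FQDN).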

vCenter is set via IP, not FQDN


This is the last fix I have heard of but haven't seen personally. The issue occurred because, during the SRM install, vCenter was entered by IP address instead of by FQDN. I'm not sure why this would cause the problem, but the fix was to do a modify install and, when defining the vCenter Server, use the FQDN instead of the IP address. This MAY also work the opposite way, i.e. vCenter was set by FQDN and changing it to IP fixes it. As I stated before, I haven't seen this one personally, but another TSE here said this was the fix.

Well that wraps this one up! Hope this helps somebody and saves them the days of troubleshooting I did on it! Want the SRM findings of a TSE in the trenches? Follow me on Twitter! @SRM_Guru




**********************************************Disclaimer**********************************************
This blog is in no way sponsored, supported or endorsed by VMware. Any configuration or environmental changes are to be made at your own risk. Casey, VMware, and any other company and/or persons mentioned in this blog take no responsibility for anything.

1 comment:

  1. Just to add the below requirements for NetApp NFS mounts:

    1. The hostname referenced in snapmirror.conf must match the actual hostname. If it does not (as in our case, where the hostname is uppercase and in snapmirror.conf it's lowercase), the SRA will have a problem discovering the array device (a rough check is sketched below this list).

    2. Ensure that SnapMirror relationships are not in a transferring state. If any are, the transfers need to complete to confirm that the SRA is able to pair the devices.

    3. Remove unneeded export statements for read-only SnapMirror destinations. Having these export statements may cause issues when attempting a Recovery operation, in that the destination hosts may not be able to mount the volume due to the existing export statements. It's best to remove these and allow the SRA to create them during the Recovery operation.
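For point 1 above, here is a rough sketch (plain Python; the snapmirror.conf line format and every hostname in it are assumptions for illustration only) of comparing the filers' actual hostnames against the system names written in snapmirror.conf:

# Rough sketch: compare the actual filer hostnames against the source/destination
# system names in snapmirror.conf. The line format ("src:vol dst:vol args schedule")
# and all hostnames below are assumed example values, not taken from a real system.
actual_hostnames = {"PRODFILER01", "DRFILER01"}   # assumed real filer hostnames

snapmirror_conf = "prodfiler01:vol_prod drfiler01:vol_dr - 0 * * *\n"

for line in snapmirror_conf.splitlines():
    if not line.strip() or line.startswith("#"):
        continue
    src, dst = line.split()[:2]
    for system in (src.split(":")[0], dst.split(":")[0]):
        if system not in actual_hostnames:
            print("Name/case mismatch:", system, "is not one of", sorted(actual_hostnames))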
