I'm running a VNX5300 Unified system, File OE 7.0.14, Block OE 05.31.000.5.011. It has dual Control Stations, two Data Movers, and around 120 disks. We use it for Block storage only at the moment, with a number of ESX hosts fibre-attached.
I'm trying to upgrade the File & Block OEs to 7.0.35 & 31.502. I'm following the DVD ISO loop-mount upgrade procedure to upgrade the File OE on each CS. I kicked off "install_mgr -m upgrade" on CS0, and it starts out by performing a Pre Upgrade Health Check (PUHC), which failed on the "checking if symapi data is present" test and provided me with a procedure just like the one in emc209975. There were a number of orphaned disks, which I successfully deleted by following that procedure.
When I re-ran the install, the PUHC failed again, this time with "Error 5026: CKM00112500866 d7, RAID5, doesn't match any storage profile". nas_disk -l reports that d7, d10 and d16 do not have any Data Movers associated with them. So I went through the "check if they are orphaned disks" procedure as per emc209975, but it appears that the LUN IDs associated with each of these disks correspond to real LUNs on the block side, so I did not want to delete these disks.
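For anyone hitting the same check, this is roughly how I inspected the suspect disks. It's a sketch of the commands run on the Control Station; only nas_disk -l comes from my actual session, and the grep/awk filtering is just an illustrative way to pull out disks with no Data Mover in the "servers" column (column positions may differ on your File OE release, so eyeball the raw nas_disk -l output first):

```shell
# List all File-side disk volumes; the "servers" column shows which
# Data Movers (e.g. 1,2) each disk is attached to.
nas_disk -l

# Illustrative filter: show only in-use disks whose servers column is
# empty, i.e. disks not associated with any Data Mover. Adjust the
# field numbers to match your nas_disk -l layout before trusting it.
nas_disk -l | awk 'NR > 1 && $2 == "y" && NF < 6 {print $0}'
```

Before deleting anything, cross-check each flagged disk's LUN ID against the Block side (Unisphere or naviseccli) as per emc209975; in my case the LUNs were real, which is why I stopped and raised an SR instead of deleting them straight away.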
Re: VNX5300 File OE Pre Upgrade Health check fails
Problem solved! I logged an SR and sent them all the diagnostics, logs, spcollects etc. from the VNX. They determined that d7, d10 and d16 must have been allocated at one stage and then somehow unallocated. They're not assigned to either DM, and we haven't configured any NAS filesystems, NFS or CIFS servers/shares, so I was able to run nas_disk -d <dx> -perm to remove these three disks, and the File & Block OE upgrades then worked fine.
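For completeness, the fix boiled down to the sequence below. This is a sketch reconstructed from my session, not a copy-paste transcript; the disk names are the ones from my array, and you should only do the permanent delete after EMC support (or the emc209975 checks) confirms the disks are genuinely unused, since -perm removes the disk volume for good:

```shell
# Permanently delete the three unassociated File-side disk volumes
# (d7, d10, d16 on my system; substitute your own disk names).
nas_disk -d d7 -perm
nas_disk -d d10 -perm
nas_disk -d d16 -perm

# Confirm they are gone before retrying the upgrade.
nas_disk -l

# Re-run the File OE upgrade; the PUHC should now pass.
install_mgr -m upgrade
```

After this, the PUHC completed cleanly and both the File and Block OE upgrades went through without further errors.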