
September 1st, 2017 15:00

Less space showing in VMware after unmap

I had a volume set to thin on my EQL 6210XS and got a warning about used space being over 90%. In VMware the datastore was reporting about 1 TB free (out of 3 TB), so a pretty big mismatch. It seems the EQL arrays don't support automatic unmap on ESXi 6.5 with VMFS6, so I ran the manual unmap (from a host in the cluster that I had put in maintenance mode). Now the EQL side is looking better (well below 90%), but my free space in VMware has gone down, which makes no sense to me.
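For reference, the manual unmap I ran is the standard ESXi CLI command; the datastore label and reclaim-unit count below are just my values, adjust for your environment:

```shell
# Manually reclaim free space on a VMFS datastore (ESXi 5.5 and later).
# "MyVol" is the datastore label; -n 200 is the optional number of
# VMFS blocks reclaimed per iteration (the default is 200).
esxcli storage vmfs unmap -l MyVol -n 200

# Progress shows up in /var/log/vmkernel.log on the host running it.
```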

If I sum up the used space of all the VMs, I should be left with a little more than 1 TB of free space, but after the unmap I seem to have lost a good 250-300 GB. I did notice a leftover .asyncUnmapFile at the root of the datastore, but I'm not sure if I can safely delete it (when I google .asyncUnmapFile I get one solution, 3020 from VMware, but it says the article is not applicable since I'm on 6.5 and VMFS6). Part of the vmkernel.log from when the unmap ends is shown below.

Anyone know what's going on or do I need to call VMware/EQL support?

2017-09-01T21:20:36.015Z cpu52:68605 opID=775d14d4)WARNING: Res3: 4314: [type 1] resource 64 (cluster 1309) on volume labeled 'MyVol' already freed by another host: This may be a non-issue
2017-09-01T21:20:36.216Z cpu52:68605 opID=775d14d4)WARNING: Res3: 4314: [type 1] resource 64 (cluster 1309) on volume labeled 'MyVol' already freed by another host: This may be a non-issue
2017-09-01T21:20:36.428Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.463Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.497Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.531Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.570Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.617Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.662Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.711Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.746Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.785Z cpu52:68605 opID=775d14d4)Fil3: 14292: MyVol: Converting 0 LFBs to SFB clusters: Success
2017-09-01T21:20:36.789Z cpu52:68605 opID=775d14d4)Fil3: 8210: Max no space retries (10) exceeded for caller Fil3_SetFileLength (status 'No space left on device')
2017-09-01T21:20:36.799Z cpu52:68605 opID=775d14d4)WARNING: Res3: 4314: [type 1] resource 64 (cluster 1309) on volume labeled 'MyVol' already freed by another host: This may be a non-issue
2017-09-01T21:20:36.811Z cpu52:68605 opID=775d14d4)WARNING: Res3: 4314: [type 1] resource 64 (cluster 1309) on volume labeled 'MyVol' already freed by another host: This may be a non-issue
2017-09-01T21:20:36.818Z cpu52:68605 opID=775d14d4)WARNING: Res3: 4314: [type 1] resource 64 (cluster 1309) on volume labeled 'MyVol' already freed by another host: This may be a non-issue
2017-09-01T21:20:36.818Z cpu52:68605 opID=775d14d4)Fil3: 7022: Truncating failed: Not found


September 1st, 2017 15:00

Hello, 

You are correct that EQL does not support VMFS6 automatic unmap. VMware designed that feature to work only with an UNMAP granularity of 1 MB or less; on EQL it's 15 MB, so you have to use the manual process.

When you do that, it creates a file, deletes it, then issues the UNMAP command on those LBAs. Since it appears that file was left over, you *should* be able to delete it without issue, but I would check with your VMware support team first. You also don't need to put the node in maintenance mode, since the VMs are still running on other nodes connected to that same volume.
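If you do decide to remove it after checking with support, something like the following should work from the ESXi shell (the datastore path is an assumption based on the volume label in your logs):

```shell
# Confirm the leftover unmap scratch file is present on the datastore.
ls -la /vmfs/volumes/MyVol/ | grep asyncUnmap

# Only if VMware support confirms it is orphaned, remove it:
rm /vmfs/volumes/MyVol/.asyncUnmapFile
```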

BTW, the mismatch is completely normal, and it will happen again. As long as you have the physical space, the EQL volume could show 100% in use while the VMware side shows 0% in use (e.g. if you moved off all the VMs). The VMFS filesystem keeps track of which blocks are free to use (or reuse), so it won't cause any problems if you allocate all the blocks on a volume.
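To illustrate with toy numbers (these are assumptions, not the figures from this thread): the array counts every block ever written until an UNMAP tells it otherwise, while VMFS counts only the blocks currently allocated to files, so the two views drift apart as VMs are deleted or moved.

```shell
# Toy illustration of why the array-side and VMFS-side numbers differ.
DATASTORE_GB=3072          # volume size
VMFS_USED_GB=2048          # blocks VMFS currently allocates to files
EVER_WRITTEN_GB=2900       # blocks the array has seen writes to

VMFS_FREE=$((DATASTORE_GB - VMFS_USED_GB))
ARRAY_FREE=$((DATASTORE_GB - EVER_WRITTEN_GB))
RECLAIMABLE=$((EVER_WRITTEN_GB - VMFS_USED_GB))

echo "VMFS free: ${VMFS_FREE} GB, array free: ${ARRAY_FREE} GB"
echo "UNMAP could return ${RECLAIMABLE} GB to the pool"
```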

 Regards,

Don 


September 8th, 2017 09:00

Hello, 

Thank you for the update. I too have never seen this occur, nor had any other customer report it.

Re: thick vs. thin EQL volumes. That would make no difference. Thin provisioning on the EQL side only controls whether space is fully allocated up front. There's no point in running UNMAP on a thick volume, since the freed space won't be returned to the free space of that pool; only the in-use amount shown will change. You should go by what the OS says for in-use space rather than the block storage.

If you haven't already done so, I would make sure your EQL firmware is up to date along with ESXi.

Also make sure the ESXi nodes are following the Dell best practices as well.

http://en.community.dell.com/techcenter/extras/m/white_papers/20434601/download
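A couple of quick checks from the ESXi shell can confirm the versions and VAAI status before digging further (the device ID below is a placeholder):

```shell
# Show the ESXi version and build to compare against current patches.
esxcli system version get

# Show VAAI primitive support per device, including Delete (UNMAP) status.
# Replace naa.xxx with your EQL volume's device identifier.
esxcli storage core device vaai status get -d naa.xxx
```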

Regards, 

Don 


September 8th, 2017 09:00

Small update:

I opened a case with VMware. It should be OK to delete the file, but it doesn't actually delete; I get an error saying the directory or file doesn't exist.

After a little investigation on the VMware side, checking other cases, I was told there were similar cases where VMDK files wouldn't delete with the same error. A tool VMware has to try to fix VMFS issues didn't help those people, and the only solution was to move the VMs off and delete the datastore. In my case the support engineer wasn't able to find any other case mentioning the asyncUnmapFile not being deleted properly, so I assume that even though the unmap finishes, it's actually not finishing correctly, hence the leftover file.

I did a quick test on some small LUNs but couldn't reproduce the issue. I ended up making another LUN and using Storage vMotion to move the VMs. I created two extra VMs, deleted them, and ran unmap on this new LUN with no issue.

I checked a couple of other LUNs and found the same issue. This time I paid more attention to the space, and indeed, in vCenter I lost 300 GB of space (which makes no sense) but gained 300 GB on the EQL side. I also found the problem on a LUN created a few weeks ago, so I'm very curious what's causing this issue. A VMFS5 datastore (fully provisioned on the EQL side) doesn't seem to have this issue.
