Solved!

1 Rookie • 28 Posts

February 15th, 2022 22:00

Disk Tier Low RAID 5-9 has insufficient space to handle demand - Dell EMC

Hi Team,

I have noticed that the Tier 3 RAID has filled up and there is no more free disk space.

Prometheus_0-1644991923831.png

I have also noticed that the snapshots on my volumes might be bigger than normal. If I delete these huge 8 TB snapshots, would this free up some space on Tier 3?

Prometheus_1-1644991974581.png

 

4 Operator • 1.7K Posts

February 16th, 2022 13:00

Again... you do not have an 8 TB snapshot. That's your DATA! All the entries above are the snapshots, and they are tiny.

If this is a VMFS datastore with ESXi servers: do you use VM snapshots from time to time, delete VMs, or use Veeam or similar backup tools? If you answer yes, when did you last run a SCSI UNMAP?

Regards,
Joerg

Moderator • 6.9K Posts

February 16th, 2022 08:00

Hello Prometheus,

Here is a link to a KB article that may be of assistance.

https://dell.to/3uWiVC8

4 Operator • 1.7K Posts

February 16th, 2022 10:00

These 8 TB are your base data. Your snaps total ~560 GB, which is less than 6% overall and well under the normal 10% change ratio.

Do your servers ever delete data on that volume? I ask because the space usage from the host perspective is the same as the SC has recorded, which indicates either that a SCSI UNMAP has never been run or that the hosts have never deleted data on that volume.

The SC3020 supports compression. You can select whether all data or only frozen blocks are compressed. We always use "all blocks".

Regards,
Joerg

1 Rookie • 28 Posts

February 16th, 2022 13:00

Thanks Origin3k,

There are snaps on the other 2 volumes that show as 6 TB and 4 TB snapshots. My snapshot profile is set to expire snapshots after 1 week, so I believe the 8 TB snapshot should already have been deleted or expired, but it is still there.

I am not sure whether the servers delete data on that volume. I would like to run a SCSI UNMAP but am not sure how to go about it or how long the cleanup will take. I have 3 volumes of about 10 TB each, for a total of 30 TB. I use a lot of vMotion, and Veeam backup periodically creates VM snapshots, so SCSI UNMAP might be a good process to run.

1 Rookie • 28 Posts

February 16th, 2022 13:00

Thanks DELL-Sam L,

Following that link, I have checked the recycle bin and it is empty. I would like to identify and expire the snapshots, but I am not sure whether this might cause me to delete production VMs. The current 8 TB snapshot is the one I want to delete. My snapshot profile keeps snaps for 1 week, but the 8 TB snapshot looks like it has been there for 1 week. So will deleting the snapshots affect the current production VMs?

4 Operator • 1.7K Posts

February 16th, 2022 14:00

I have marked your snapshot data...

snapshot.png

You either activated the snapshot profile on 2/11 or created this volume on that date, because that's the date of the blocks.

1 Rookie • 28 Posts

February 16th, 2022 14:00

Origin3K,

Where have you marked the snapshot data?

All i see is the image with the yellow triangle.

Prometheus_0-1645051551644.png

 

 

4 Operator • 1.7K Posts

February 16th, 2022 15:00

You have to wait until a forum moderator or some AI has approved the uploaded image... it takes some time.

1 Rookie • 28 Posts

February 16th, 2022 16:00

Thanks Origin3k,

I have not run a SCSI UNMAP at all; I don't think there was ever a time I ran one.

How do I run a SCSI UNMAP? Could this be a process that is already running?

1 Rookie • 28 Posts

December 23rd, 2022 15:00

Thank you. Running SCSI UNMAP resolved the disk storage issue.

1 Rookie • 28 Posts

May 6th, 2023 21:00

Hi,

Could you please show me the instructions for running SCSI UNMAP on my 4 hosts?

 

Thanks

4 Operator • 1.7K Posts

May 8th, 2023 01:00

It is enough to run it from ONE single host, assuming that all volumes are mapped to all hosts.

[root@esx-node-94:~] cd /vmfs/volumes/
[root@esx-node-94:/vmfs/volumes] ls | grep -i "san-05"
san-05-001
san-05-002
san-05-003
san-05-004
[root@esx-node-94:/vmfs/volumes] for i in `ls | grep -i "san-05"`; do echo "running .... $i" && esxcli storage vmfs unmap -l $i -n 1024; done
running .... san-05-001
running .... san-05-002
running .... san-05-003
running .... san-05-004
[root@esx-node-94:/vmfs/volumes]

Take a look at the usage diagrams within DSM and you will see a massive increase in bandwidth on your volumes while the unmap is running. You need to refresh DSM to see the volume usage after you run it. If you have snapshots, it may take some time, because the snaps need to expire first.
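If you are unsure whether the array accepts UNMAP at all, you can check the VAAI status first. The naa identifier below is a placeholder for one of your own SC volumes, and the reclaim config command assumes ESXi 6.5 or later with a VMFS6 datastore:

# List VAAI primitive support for all devices; the "Delete Status" line
# shows whether the array accepts SCSI UNMAP for that LUN.
esxcli storage core device vaai status get

# Narrow it to a single device (substitute your own naa identifier):
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

# On VMFS6 datastores (ESXi 6.5+) space reclamation can run automatically;
# check the per-datastore setting (volume label is an example from this thread):
esxcli storage vmfs reclaim config get --volume-label san-05-001

If "Delete Status" shows "unsupported", the manual unmap loop will not reclaim anything and the mapping/configuration should be checked first.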

 

Regards,
Joerg

1 Rookie • 28 Posts

May 9th, 2023 19:00

Hi Origin3k,

 

Thank you for that. I have 3 other hosts mapped to the same volumes. Do I need to turn off all the other hosts and virtual machines before running the unmap?

4 Operator • 1.7K Posts

May 9th, 2023 21:00

There is no need to shut down the VMs or other hosts. Just run the SCSI UNMAP on one of your hosts.
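If you prefer to do it one datastore at a time, the same command works with a single volume label (the label is taken from the earlier listing; -n sets how many VMFS blocks are reclaimed per iteration, not a size threshold):

esxcli storage vmfs unmap -l san-05-001 -n 1024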

1 Rookie • 28 Posts

May 14th, 2023 22:00

@Origin3k Thanks for the command to run the unmap. I managed to run it on all 4 hosts and used -n 200 instead of 1024. I suspect this freed up blocks of 200 MB and larger. The operation only managed to free up 7%. I suspect there are many dead blocks that still need freeing but are perhaps smaller than 200 MB. How do I check whether there are still dead blocks that need unmap to run?

 

 
