November 29th, 2017 23:00

What are the "Total Disk Space" and "Total Active Space on Disk" in SC Series

Dear all,

The picture below shows the current environment of the customer's SCv2020. I have to prepare an explanation for the customer: what are the "Total Disk Space" and "Total Active Space on Disk" in SC Series?

In the picture below, the actual data on the volume is 1.61TB, the RAID overhead is 1TB, plus the snapshots at 100GB. The total should be 2.71TB, which I believe is the "Total Disk Space". Please correct me if I am wrong.

I just need an explanation of "Active Space on Disk". As you may see in the picture, it is 3.99TB, so I need to explain to the customer what the extra 1.28TB allocated beyond 2.71TB is, and how we can get rid of it, or whether it will be allocated forever.

Best regards

Natthaphol.


November 30th, 2017 12:00

Hello Natthaphol,

 

I have a link below with a bit more info on how space works with SCOS; it can be a bit tricky to understand. Page 11, section 2.2 shows RAID efficiency, but it doesn't walk through the math behind the actual RAID overhead, so I'll break it down a bit at the system level before drilling down to the volume.

 

System Space:

Say your array has 6.5TB of Available Space. This does not mean you can use all of it, because RAID overhead is not factored in, and it is tough to say in advance how much RAID overhead there will be. Let's break it down. Say you have a total of 7 HDDs, 1 of which is a spare; you are left with 6 usable disks. The system will allocate space as "Redundant" or "Dual Redundant" depending on the size of the disks (see DSM Admin Guide page 60, which discusses disk size and redundancy levels). For this example, say you have 7x 1.2TB disks. Just like any HDD, you will not get the full 1.2TB; after provisioning you will have 1.09TB of space the system can use. Multiply this by 6 disks = 6.54TB, which is your total Available Space. Remember, this does not factor in RAID overhead.
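The capacity arithmetic above can be sketched as follows (the values are the example's own; the 1.09TB usable-per-disk figure is the post-provisioning size from this example, not a general constant):

```python
# Sketch of the available-space math above (illustrative only, not an SCOS API).
RAW_PER_DISK_TB = 1.2      # advertised disk size
USABLE_PER_DISK_TB = 1.09  # usable size after provisioning, per the example
total_disks = 7
spares = 1

usable_disks = total_disks - spares               # 7 - 1 = 6 usable disks
available_tb = usable_disks * USABLE_PER_DISK_TB  # 6 * 1.09 = 6.54 TB

# This is "Available Space" before any RAID overhead is applied.
print(f"{available_tb:.2f} TB available (RAID overhead not yet factored in)")
```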

 

RAID Stripes: you have 6 usable disks at a size of 1.2TB, which means the system will default to dual redundant, so you will see RAID10-3, RAID6-6, or RAID6-10. The number of usable disks dictates whether you will be using a RAID6-6 stripe or a RAID6-10 stripe; if you have 10 or more usable disks, RAID6-10 is used. In this case, with 6 usable disks, you have RAID10-3 and RAID6-6 allocations. In traditional RAID 10 there is 100% overhead, meaning it is only 50% efficient. Your system is dual redundant, so it writes 3 copies of the data rather than 2, which helps with data protection but carries a 200% overhead. This means that if you have 1TB of data to transfer to this array, it will consume 3TB of disk space; this is where many customers get into trouble when migrating data onto the array. Through Data Progression, that data will eventually move to RAID6-6, which carries a 50% overhead, so the 1TB you just dropped on the array will then use 1.5TB of disk space. If you were to add 4 more HDDs to the array, this would allow the system to create RAID6-10, which would reduce the overhead to 25%.
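The overhead figures above can be turned into a quick sketch showing how much disk space 1TB of data consumes in each stripe (illustrative math only; the overhead percentages are the ones stated in this post):

```python
# Disk space consumed per TB of data, using the overhead figures above.
# (Illustrative; stripe names are SCOS RAID tiers, values from this post.)
overhead = {
    "RAID10-3": 2.00,  # dual-redundant mirror: 3 copies of the data, 200% overhead
    "RAID6-6":  0.50,  # 50% overhead (after Data Progression)
    "RAID6-10": 0.25,  # 25% overhead (needs 10+ usable disks)
}

data_tb = 1.0
for stripe, oh in overhead.items():
    consumed = data_tb * (1 + oh)
    print(f"{data_tb} TB of data in {stripe} consumes {consumed:.2f} TB on disk")
```

This prints 3.00TB, 1.50TB, and 1.25TB respectively, matching the 1TB example in the text.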

 

Volume Space:

So you have ~6.54TB, and you created a 6TB volume, which means you are overprovisioned. I would suggest remedying this before you get into a situation like emergency mode and have to delete the volume or add disks; I would recommend adding 4 more disks for better RAID efficiency. The screenshot shows you are using the "Balanced Profile", which means any new write goes to RAID10-3, and then, through snapshots and Data Progression, scoots down to RAID6-6. Assuming the server used the full 6TB it was allocated and all of that space was in RAID6-6, that 6TB would consume 9TB of disk space. Since you cannot shrink a volume, I would recommend migrating the data off, deleting the volume, and starting with a smaller volume, say 3TB. You can then choose the "Maximize Efficiency" RAID profile, which means new writes will not use RAID10-3 (although there will still be some allocation to R10-3, as that is where metadata is stored), so most of the data will reside in R6-6. If that 3TB volume fills up, it will consume 4.5TB of disk space, plus whatever disk space snapshots use, plus the R10-3 allocation mentioned above. It is easier to start small and expand the volume than to be stuck overprovisioned.
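The volume-sizing comparison above reduces to the same 50% overhead factor applied to the volume size (a sketch using this post's figures; it assumes the entire volume has progressed to RAID6-6 and ignores metadata and snapshot space):

```python
# Disk consumed if a volume fills entirely in RAID6-6 (50% overhead).
# Illustrative only; ignores R10-3 metadata and snapshot space.
R6_6_FACTOR = 1.5  # 1 TB of data occupies 1.5 TB on disk in RAID6-6

for volume_tb in (6.0, 3.0):
    on_disk = volume_tb * R6_6_FACTOR
    print(f"A full {volume_tb:.0f} TB volume in RAID6-6 uses {on_disk:.1f} TB on disk")
```

This reproduces the 9TB and 4.5TB figures for the 6TB and 3TB volumes in the text.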

 

Currently the system has ~3.6TB allocated to RAID 10-3 and ~3TB allocated to R6-6, so this means you can have ~1.2TB of data in R10-3 and ~2TB of data in R6-6, giving you ~3.2TB of usable space, but you would technically still be in emergency mode if you used all that space.
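Working backwards from allocations to usable data capacity is the inverse of the overhead math: divide the allocated disk space by the copies/overhead factor. A sketch using the ~3.6TB and ~3TB figures above:

```python
# Usable data capacity implied by the current allocations (post's figures).
# Illustrative only; real numbers vary with metadata and snapshot usage.
r10_3_alloc_tb = 3.6  # disk space allocated to RAID10-3
r6_6_alloc_tb = 3.0   # disk space allocated to RAID6-6

r10_3_data_tb = r10_3_alloc_tb / 3.0   # 3 copies -> divide by 3  => ~1.2 TB
r6_6_data_tb = r6_6_alloc_tb / 1.5     # 50% overhead -> divide by 1.5 => ~2.0 TB
total_usable_tb = r10_3_data_tb + r6_6_data_tb

print(f"~{total_usable_tb:.1f} TB of usable space across both tiers")
```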

 

Getting back to your initial question: 1.61TB is the space used by the OS, and the 2.71TB appears to be the disk space used as "active space". The discrepancy, I believe, comes from snapshots, or from the server needing to run TRIM/SCSI UNMAP. The Statistics tab will give a little more insight into where the data resides. Keep in mind that data held in snapshots takes up space even after it has been deleted from the server, and the server may need to run TRIM/SCSI UNMAP to make these numbers match up a bit better.

 

If you have further questions, please get in contact with your regional support team. I hope some of this helped.

 

-Dave

 

http://en.community.dell.com/cfs-file/__key/telligent-evolution-components-attachments/13-4491-00-00-20-44-20-59/Understanding_2D00_RAID_2D00_with_2D00_SC_2D00_Series_2D00_Storage_2D00_Dell_2D00_2016_2D002800_3104_2D00_CD_2D00_DS_2900_.pdf  


November 30th, 2017 19:00

Dear Dave,

Thank you for your excellent explanation.

Correct me if I'm wrong: the discrepancy may come from the deletion of files on the hosts or VM guests, so we need to run TRIM/SCSI UNMAP on the hosts and VM guests.

Based on the information from the customer, no files have been deleted on the host or in the VM guests. I will check the statistics again when I am onsite today.

Please advise what I should look for in the statistics to tell whether TRIM/SCSI UNMAP is needed.

Apart from TRIM/SCSI UNMAP, the space used by snapshots is only around 100GB. Could you advise on any other factors which may cause the discrepancy?

Please find the snapshot space in the picture below.

I have found the information in online manual www.dell.com/.../GUID-A5A94C75-37AF-4A50-8B73-782A46B7FB0F.html

It explains:

  • Active Space on Disk: Displays the amount of data that has not been frozen by snapshots. This data is accessible from the server.

What does this actually mean?

Thank you and best regards,

Natthaphol.


December 1st, 2017 04:00

Just on the surface, it could be that DSM is misreporting "Active Space on Disk". I would recommend updating to the latest version (DSM 2016 R3.20), trying to set TRIM to run daily if using a Windows host, and if the issue still persists, calling in and creating a case.


July 6th, 2018 01:00

What exactly is meant by "Active Space on Disk"? We can't find any definition. Dell Support is clueless...

We have lots of volumes where "Active Space on Disk" is many times higher than the "Total Disk Space" and "Active Space".


July 13th, 2018 15:00

Okay, there's a bug in DSM.

A sequential reboot of the controllers resets the strange values to sensible ones.

March 28th, 2019 07:00

I would like to have access to this document.
