
Unsolved



April 7th, 2016 11:00

SSD tier-1 is full

I just saw that storage tier 1 is 99% full. Here is a screenshot for your understanding.

I also looked at the storage tier configuration, and it clearly shows some free space. The 2nd and 3rd storage tiers also have lots of free space, as you can see in this image.

Tier 1 RAID 5-5 is almost full, whereas the RAID 10 portion still has enough space. Can anyone elaborate on how the two RAID levels in tier 1 work together? I assume RAID 10 is for writes and RAID 5-5 is for reads.

Correct me if I am wrong, and please suggest the best possible way to get out of this current state. Thanks.

28 Posts

April 26th, 2016 15:00

Without knowing what storage profile you have configured and are actually using, I'm going to assume that you are using the recommended storage profile. The default storage types and profile on a Compellent will configure Tier 1 with RAID 10 and either RAID 5-5 or RAID 5-9 (depending on how many physical disks you have installed in that tier). The same is true for tiers 2 and 3.

From the pictures you posted, the Compellent has allocated pretty much 100% of tier 1 storage to the 2 different RAID types, 10 and 5-5. It appears the Compellent has determined you need more RAID 10 than 5-5 on tier 1, which is normally the case, especially when your tier 1 is SSD.

Now, depending on what your replay schedule is and your SCOS version, your Tier 1 RAID 10 data will be progressed down to lower tiers. If you are writing more than about 760GB of data between your replays, which I'm guessing you are going by your storage tiers, then you will end up writing to Tier 2 RAID 10. (We had a similar issue, as our Tier 1 was not big enough to handle all the new writes between data progression runs.)
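A rough way to check this for yourself (a minimal sketch; the 766GB usable RAID 10 figure and the daily write volume are example assumptions, so substitute your own numbers):

```python
# Back-of-the-envelope check: will the data written between replays
# fit into Tier 1 RAID 10? The capacity and write-rate figures below
# are example assumptions, not values read from any array.

t1_raid10_usable_gb = 766      # approx. usable Tier 1 RAID 10 space
daily_writes_gb = 900          # example: new/changed data written per day
replays_per_day = 1            # how often replays freeze the new writes

writes_between_replays_gb = daily_writes_gb / replays_per_day

if writes_between_replays_gb > t1_raid10_usable_gb:
    print("Writes between replays exceed Tier 1 RAID 10 -> new writes spill to Tier 2 RAID 10")
    # Options: add SSDs to Tier 1, or take replays more often, e.g.:
    needed = -(-daily_writes_gb // t1_raid10_usable_gb)   # ceiling division
    print(f"Roughly {int(needed)} replays per day would keep writes inside Tier 1")
else:
    print("Tier 1 RAID 10 can absorb the writes between replays")
```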

There are a couple of ways to resolve this issue. I believe on the latest SCOS firmware, data progression will move data down before the scheduled data progression task starts if it's needed (you might want to check with Co-Pilot to confirm this). I'm not sure if your replays will have to be more frequent for this to happen. Another option is to install more SSDs into Tier 1 to accommodate your daily writes.

Hope this has helped a little. If in doubt, speak to Co-Pilot. They have always been helpful with any issues we have experienced on our system.

52 Posts

August 25th, 2016 04:00

Tier 1: 6 disks x 372GB = 2.2TB = (RAID 10) + (RAID 5-5) = (2 x 766GB) + (560GB + RAID 5 overhead) = 1.5TB + 700GB
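Spelled out, that arithmetic looks like this (a quick sketch; the disk count and the 766GB/560GB usable figures come from the line above, while the RAID efficiency factors are the standard mirror and 4-of-5 parity ratios):

```python
# Quick sanity check of the Tier 1 capacity split quoted above.
# Disk count/size and the 766GB / 560GB usable figures come from the
# line above; the RAID efficiency ratios are the usual textbook ones.

disks = 6
disk_gb = 372
raw_gb = disks * disk_gb                  # 2232 GB, i.e. ~2.2 TB of raw Tier 1 space

# RAID 10 keeps two copies of every page, so ~766 GB of usable
# RAID 10 space consumes roughly twice that much raw capacity.
raid10_raw_gb = 766 * 2                   # ~1532 GB, i.e. ~1.5 TB

# RAID 5-5 (5-wide stripe, one parity) is 4/5 efficient, so 560 GB
# of usable space costs 560 / 0.8 = 700 GB of raw capacity.
raid5_raw_gb = 560 / (4 / 5)              # 700 GB

print(f"raw disk capacity      : {raw_gb} GB")
print(f"RAID 10 + RAID 5-5 raw : {raid10_raw_gb + raid5_raw_gb:.0f} GB")  # ~2232 GB, matches
```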

Allocated space <> used space

If I am not mistaken:

Tier 1 RAID 10 is the place where incoming data is written and where active (not frozen) data is read from.

After a replay is taken, the data in Tier 1 RAID 10 becomes frozen (but still accessible) and is eligible for data progression to Tier 1 RAID 5-5.

Data progression moves eligible data from T1R10 to T1R5-5 (and also between tiers, from higher to lower or from lower to higher) every 24 hours, or on demand if a tier is near full: a replay is taken immediately and data progression moves the data.

After the data progression process, the frozen data sitting in Tier 1 RAID 5-5 remains accessible for reads. New writes go to Tier 1 RAID 10.

If the amount of incoming data (within 24h) is bigger than the tier capacity and a given tier fills up to 100% (allocated = used), additional replays are taken between the main Data Progression runs and on-demand data progression moves the frozen data to the lower tier.

If a given tier is near 100% capacity, you can also plan and schedule additional volume replays yourself. After each replay, data progression will move eligible data to the lower tier.
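To make the write/replay/progression cycle concrete, here is a toy model of the flow described above (the capacities and daily write volume are made-up illustration values, and this deliberately simplifies what SCOS actually does):

```python
# Toy model of the Tier 1 write / replay / data-progression cycle
# described above. Capacities and the daily write volume are
# illustrative only; real SCOS behaviour is more involved.

T1_RAID10_GB = 766      # usable Tier 1 RAID 10 (new writes land here)
T1_RAID5_GB = 560       # usable Tier 1 RAID 5-5 (frozen data is read from here)

active_gb = 0.0         # unfrozen data sitting in T1 RAID 10
frozen_r5_gb = 0.0      # frozen data already progressed to T1 RAID 5-5

def take_replay_and_progress():
    """A replay freezes the active RAID 10 data; data progression then
    moves the frozen pages to RAID 5-5 (or spills them to lower tiers)."""
    global active_gb, frozen_r5_gb
    frozen_r5_gb += active_gb
    active_gb = 0.0
    spill = max(0.0, frozen_r5_gb - T1_RAID5_GB)
    frozen_r5_gb -= spill
    return spill            # GB that had to progress down to Tier 2/3

def write(gb):
    """New writes go to T1 RAID 10; if it is near full, force an
    on-demand replay + progression instead of overflowing."""
    global active_gb
    active_gb += gb
    if active_gb > T1_RAID10_GB:
        overflow = active_gb - T1_RAID10_GB
        active_gb = T1_RAID10_GB
        spilled = take_replay_and_progress()
        active_gb = overflow
        print(f"on-demand progression: {spilled:.0f} GB pushed to a lower tier")

# One day: 900 GB written, then the scheduled daily replay/progression.
write(900)
spilled = take_replay_and_progress()
print(f"daily progression: {spilled:.0f} GB moved below Tier 1")
```

The point of the toy model is simply that Tier 1 RAID 10 only has to hold what is written between replays, not the whole working set, which is why taking replays more often (or adding SSDs) relieves the pressure.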
