
February 7th, 2013 09:00

PowerEdge 1900 + PERC 5/i + 8TB RAID 5 Array + CentOS = Accessible?

We recently acquired a PowerEdge 1900 with a PERC 5/i controller from an old client.  We decided to upgrade it, put CentOS on the machine, and use it as a VM server and file server.

The question is, the Boss Man wants to push this to 8TB (6 x 2TB w/ hot spare).  We have CentOS on it right now with a 4TB RAID 5 array (3 x 2TB), and she took CentOS like a champ.  The only issue is, she is barren (nothing on the drive yet), and I saw a relatively concerning forum post here that mentioned data beyond 2TB not being accessible.  Does this still apply?

I feel like I am missing something.  The array is partitioned with GPT, the filesystem is ext4, and the OS is 64-bit and supports all of the above.  Should we not then have access to the full space?  Well, not really the full space, but the partitions over 2TB?

Help me out here.  This is a conceptual thing, as we may not fill to that size for QUITE some time; I just want to know what will happen when that time comes...

7 Technologist • 16.3K Posts

February 7th, 2013 09:00

"I saw a relatively concerning forum post here that mentioned over 2TB and the data was not accessible.  Does this still apply?"

I didn't read through the whole thread ... not sure exactly what your concern is (from that thread).  The PERC 5 (with the latest firmware) can take disks up to 2TB (NOT larger), and it will support VDs (arrays) much larger than that.

"Should we not then have access to the full space ... the partitions over 2TB?"

As long as it is not the BOOT volume.  The 1900 does NOT support UEFI, so you cannot have an array over 2TB that you BOOT from.  

For example:  

If you have 6x2TB disks and configure them in a single RAID 5 (~10TB), it has to be GPT, but you can't boot to GPT without UEFI, and since the 1900 doesn't support UEFI, you will not be able to install/boot an OS on the 10TB RAID 5.  In this scenario, you have two options:

1) Multiple VDs.  Each VD is seen by the OS as a different "disk".  You would create a RAID 5 VD of, say, 100GB to install/boot the OS to, then create a RAID 5 VD with the remaining space (~9.9TB), convert it to GPT, and partition as you like (see the sketch after these options).

2) Separate disks.  Create a 2TB RAID 1 with 2x2TB (or smaller if you wish) to install/boot the OS to, then create a 4x2TB RAID 5 (6TB) with the remaining disks.
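I'm not really a Linux guy, so I can't vouch for the exact commands, but my understanding is the GPT setup on the big data VD in option 1 would look roughly like this from the OS side.  Purely a sketch: it assumes the data VD shows up as /dev/sdb (check with "parted -l" first), and it destroys anything already on that device.

# parted /dev/sdb mklabel gpt

# parted /dev/sdb mkpart primary ext4 0% 100%

# mkfs.ext4 /dev/sdb1

The first command writes a GPT label, the second creates a single partition spanning the whole VD, and the third puts an ext4 filesystem on it.  One of the Linux folks can correct me if that's off.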

7 Technologist • 16.3K Posts

February 7th, 2013 10:00

Thanks Jonathan ... had not heard of specific Linux versions that were capable.

4 Posts

February 7th, 2013 10:00

Close.  I am using CentOS 6.3, I believe, and yes, it boots; Anaconda actually set up the partitions for me with no problem (on a blank GPT array).  So booting is not an issue.  I just want to be sure that with the current setup I don't push 2.1TB of data in and lose anything.

Sounds like I won't, but I will run a test to be sure.  I wonder how long it would take to fill a 4TB partition using a drive tester of some kind...
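Quick back-of-the-envelope math, assuming the array can sustain something like 100 MB/s of sequential writes (just a guess on my part): 4TB is roughly 4,000,000 MB, and 4,000,000 MB / 100 MB/s = 40,000 seconds, or about 11 hours.  So at best an overnight run, and if whatever generates the test data (e.g. /dev/urandom) is slower than the disks, considerably longer.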

4 Posts

February 7th, 2013 10:00

At the moment CentOS is booting from a 50GB partition on the currently active ~4TB array.  From the above I gather I cannot boot from a >2TB array, which this is.  Do you mean a >2TB partition?  Or is this the situation where I will not be able to access more than 2TB of the array?

February 7th, 2013 10:00

Hi there,

I'm not sure which version of CentOS you are using, but 5.x cannot boot from a GPT partition (Source: access.redhat.com/.../15224), so the MBR boot volume is limited to 2TB. If you're booting from the 4TB RAID-5, I assume you are using 6.x and will be OK here.

• The PERC 5/i supports Long LBA commands as specified by SPC-2, establishing a theoretical upper limit on virtual disk size of 2097152 Exabytes

• CentOS 5 and 6 can both handle 1-exabyte ext3 and ext4 partitions, though you may need to use an alternate e4fsprogs build to actually grow one that large (wiki.centos.org/.../Product and access.redhat.com/.../newfilesys-ext4.html), which is not supported by RHEL or CentOS.

• If you do a fresh reinstallation you may need to create the partition at the maximum size handled by Anaconda and then grow it after first boot (a rough sketch of the grow step follows below).
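For reference, the grow step after first boot would look roughly like the following.  This is only a sketch: it assumes the data filesystem ends up on /dev/sdb1 mounted at /mnt/data (both hypothetical names, so adjust to your layout) and that the partition itself has already been extended, e.g. by recreating it with the same start sector in parted or gdisk.

# umount /mnt/data

# e2fsck -f /dev/sdb1

# resize2fs /dev/sdb1

# mount /dev/sdb1 /mnt/data

resize2fs needs a clean filesystem when run offline (hence the e2fsck), and with no size argument it grows the filesystem to fill its partition.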

I hope that clears some things up, but if there are more questions don't hesitate to let us know!

7 Technologist • 16.3K Posts

February 7th, 2013 10:00

I'm not a Linux guy, so maybe Linux does not need UEFI to boot to a GPT disk, but this should be easy enough for you to test ... can you access/see over 2TB from Linux?  With Windows, you would not see more than 2TB if not GPT and you could not install on GPT unless in UEFI mode ... thought that was pretty universal, but maybe there are some variations with Linux.  If you cannot access/see the full 4TB from your version of Linux, then you will need to choose one of the options presented above.

7 Technologist • 16.3K Posts

February 7th, 2013 11:00

No sweat ... not being a Linux guy myself, it is nice to have someone who is that can chime in ;)

February 7th, 2013 11:00

Thanks for helping out as well, Flash! I didn't mean to post after/over you, I just got caught up in finding all the links to specific details. GPT is actually part of the EFI standard, and RHEL/CentOS 5.x can handle non-boot GPT volumes OK.
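Before going to the trouble of a full fill test, a quicker sanity check is to confirm that the kernel sees the entire virtual disk and partition in the first place.  A sketch, assuming the array shows up as /dev/sdb (substitute your actual device):

# parted /dev/sdb print

This prints the disk size and partition table as parted sees them; for your setup the disk size should read around 4TB rather than 2TB.

# blockdev --getsize64 /dev/sdb

This prints the raw size of the device in bytes, which should likewise be on the order of 4,000,000,000,000 rather than capped near 2,199,023,255,552 (the 2TiB ceiling).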

To test that you can use all the bytes on the disk, you might try to fill the disk from /dev/urandom and then check that you can read it back. I am doing an example here where a very small disk image is mounted at "/mnt/old":

1. Create an empty file (since this takes up one or more blocks):

# touch /mnt/old/hash_me

2. Find out how many blocks are now available:

# df

Filesystem                               1K-blocks       Used Available Use% Mounted on

...

/dev/loop0                                   99150       5647     88383   7% /mnt/old

3. Fill up the available blocks:

# dd if=/dev/urandom bs=1K count=88383 | tee /mnt/old/hash_me | md5sum

6d0dac55eeb5b7f02ab030ed7844ab6c  -

88383+0 records in

88383+0 records out

90504192 bytes (91 MB) copied, 10.0285 s, 9.0 MB/s

This computes the hash of the bytes as-we-go so we can read them back as a check.

4. Sync, just in case:

# sync

5. Check the hash of the file:

# md5sum /mnt/old/hash_me

6d0dac55eeb5b7f02ab030ed7844ab6c  /mnt/old/hash_me

6. Clean up:

# rm /mnt/old/hash_me

Since the hashes are equal and we sync'd, I think it is reasonable to assume all the bytes were written and read.

If you want to try this just replace "/mnt/old" with the mount point of your large filesystem (while it is mounted of course!). I would also caution you this is not an ordinary procedure, nor is it officially recommended by any organization or entity, but it worked for me to demonstrate the concept in a controlled test environment and you are welcome to attempt it at your own risk.

I'm not very clear on the internals of ext4, so you may want to choose a slightly smaller number than the total number of blocks available; it would probably be enough to just pick a number comfortably greater than 2TB or so. In 1K blocks that would be 2147483648 blocks (i.e. the count argument of dd).
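Concretely, if you just want to write past the 2TB mark rather than fill the whole 4TB, the dd step above becomes something like the following (same procedure and caveats as before, with /mnt/old replaced by your real mount point; keep in mind /dev/urandom is slow, so even this much data will take many hours):

# dd if=/dev/urandom bs=1K count=2147483648 | tee /mnt/old/hash_me | md5sum

You could bump the count up a bit if you want to be comfortably past any 2TB/2TiB boundary rather than sitting right at it.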

Let us know how it goes and if you need help with anything else. Good luck!
