
October 26th, 2010 10:00

Two LUNs on Raid 1/0 or One for Celerra File System

I will create four RAID 1/0 RAID groups (2+2) from 300 GB 15K drives.

This will be used as a replica target for a 1.5 TB system, so I will use all of the drives.

My question is: should I carve two LUNs per RAID group and balance them across the Storage Processors, or just create one LUN per RAID group and balance those per SP?

Thanks,

Damian

366 Posts

October 26th, 2010 11:00

Hi,

RAID 1/0 on Celerra is only supported with 2 disks (1+1).

Gustavo Barreto.

40 Posts

October 26th, 2010 11:00

Even with CX4 storage? I was under the impression we could create 2+2. I created one for testing purposes and it came up as RAID 10 on the FC disks rather than RAID 1.

40 Posts

October 26th, 2010 12:00

FYI: This is a gateway.

Here's one:

id        = 72
name      = d72
acl       = 0
in_use    = False
size (MB) = 274811
type      = CLSTD
protection= RAID10(4)
stor_id   = APM00090301890
stor_dev  = 0C95
volume_name = d72
storage_profiles = clar_r10
virtually_provisioned = False
mirrored  = False
servers   = server_2,server_3
   server = server_2          addr=c112t6l4
   server = server_2          addr=c144t6l4
   server = server_3          addr=c128t6l4
   server = server_3          addr=c96t6l4

366 Posts

October 26th, 2010 12:00

Yes...

The FC disk devices on Celerra systems with captive CLARiiON CX4 storage are configured into either 8+1 RAID5, 4+1 RAID5, 4+2 RAID6, 6+2 RAID6, 12+2 RAID6 or RAID1/0 (2 disk) disk groups.

If you created a RAID 1/0 group with 4 drives and the Celerra was able to recognize it and create dvols, that sounds a little strange to me. The newer code levels should have prevented it.

Could you please paste the output of these two commands?

$ nas_disk -i d12    (use your dvol number)
id        = 12
name      = d12
acl       = 0
in_use    = True
size (MB) = 68427
type      = CLSTD
protection= RAID1
stor_id   = APM00073700689
stor_dev  = 000D
volume_name = d12
storage_profiles = clar_r1
virtually_provisioned = False
mirrored  = False
servers   = server_2,server_3
   server = server_2          addr=c0t1l9
   server = server_2          addr=c16t1l9
   server = server_3          addr=c0t1l9
   server = server_3          addr=c16t1l9

$ /nas/sbin/navicli -h spa getlun 13 -type -disk    (13 is the "stor_dev" value above converted from hex to decimal)
RAID Type:                  RAID1

Bus 0 Enclosure 0  Disk 12

Bus 0 Enclosure 0  Disk 13
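
For reference, the hex-to-decimal conversion can be done right on the Control Station (a minimal sketch, assuming a bash shell); the 0C95 stor_dev from the d72 listing above works out to LUN 3221:

echo $((0x000D))    # prints 13
echo $((0x0C95))    # prints 3221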

Gustavo Barreto.

40 Posts

October 26th, 2010 12:00

RAID Type:                  RAID1/0

Bus 0 Enclosure 0  Disk 5

Bus 0 Enclosure 0  Disk 6

Bus 0 Enclosure 0  Disk 7

Bus 0 Enclosure 0  Disk 8

40 Posts

October 26th, 2010 12:00

Sure thing:

Let me recreate

366 Posts

October 26th, 2010 12:00

I am sorry for not asking this first, but what's your NAS code version?

Also, please run the following script:

###########

for i in `nas_pool -query:"IsUserDefined==False" -fields:Members -format:" %L"`
do
  for j in `nas_volume -query:"name==$i" -fields:DiskNames -format:" %L"`
  do
    nas_disk -query:Name==$j -fields:Name,StorageProfile,Protection -format:"%s|%s|%s\n"
  done
done

###########
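
For what it's worth, that loop just prints one name|profile|protection line per system dvol. Based on the listings already posted in this thread, a standard device would come back like the first line below, while your 2+2 group would show up like the second (illustrative output only):

d12|clar_r1|RAID1
d72|clar_r10|RAID10(4)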

40 Posts

October 26th, 2010 13:00

5.6.45-5

40 Posts

October 26th, 2010 13:00

I ran the commands; the output was just "Done" (on both commands).

366 Posts

October 26th, 2010 13:00

Hi,

Please, two more command outputs:

# nas_storage -c -all

# server_devconfig server_3 -c -s -a

8.6K Posts

October 26th, 2010 15:00

My advice would be to use a supported config of four 1+1 RAID 1 groups and stripe across them the same way that AVM would; see the sketch below.

Even if your NAS code right now allows you to use 4+4, it doesn't mean future codes will. Also, it's always better to use configs that lots of other customers use.
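
A rough sketch of that manual striping, assuming the four 1+1 dvols show up as d20 through d23 (placeholder names; check nas_disk -list) and using 262144 (256 KB) as a placeholder stripe size (match whatever AVM would use on your code level):

# stripe across the four 1+1 dvols, then build a metavolume and file system on top
# d20,d21,d22,d23 and the 262144 stripe size are placeholders for this example
nas_volume -name stv_repl -create -Stripe 262144 d20,d21,d22,d23
nas_volume -name mtv_repl -create -Meta stv_repl
nas_fs -name fs_repl -create mtv_repl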
