nandas
4 Operator • 1.5K Posts
April 23rd, 2008 15:00
In the Celerra Manager GUI - what is the size of the clar_r5_performance pool under Storage-Pools? I suspect it is 1.5 TB, and in that case not all of the disks are included in the system-defined clar_r5_performance pool.
I see you have empty disk slots on both enclosures, which may be the cause of this issue, since the layout does not match any system-defined storage configuration template. I understand the backend configuration was done using custom templates - can you please confirm?
If all the disks (D7 - D14) are RAID 5 FC disks and the EMC recommendations were followed (LUN ownership balanced across the SPs, host IDs set properly, etc.), you may need to extend the current clar_r5_performance pool to include all of these disks. It appears the system-defined clar_r5_performance pool currently contains only some of the disks. To confirm, please provide the output of the command -
nas_pool -info id=3
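A couple of related commands may also help narrow it down (run from the Control Station; exact options can vary slightly by DART release, so treat this as a suggestion rather than a verified procedure):
nas_pool -list        # list all storage pools and their IDs
nas_pool -size id=3   # report total / used / available capacity for the pool
nas_disk -list        # show all disks and whether they are already in use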
Lastly - it may be wise to contact EMC support and they can dial in and ensure things are all done correctly before we conclude.
Hope this helps.
Thanks,
Sandip
Gertjan_OL
10 Posts
April 24th, 2008 00:00
In the Celerra Manager GUI - what is the size of the clar_r5_performance pool under Storage-Pools? I suspect it is 1.5 TB, and in that case not all of the disks are included in the system-defined clar_r5_performance pool.
No, the weird thing was that the clar_r5_performance pool under Storage -> Pools was 4 TB (the sum of all available volumes). But when I tried to create a filesystem, the pool was listed as only 1.5 TB.
I contacted EMC Support yesterday about what appeared to me to be another issue. As it turned out, that issue caused this behaviour, and also the behaviour I described in another thread (threadID 75524). According to EMC Support, the active Data Mover wasn't seeing all the volumes, but the standby Data Mover was. Having server_devconfig discover the storage was the solution.
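For anyone who runs into the same thing: the rediscovery is basically a server_devconfig probe/create against the Data Mover, something along these lines (server_2 here stands for the active Data Mover; exact syntax may differ between DART releases):
server_devconfig server_2 -probe -scsi -all    # scan the backend for devices the Data Mover can see
server_devconfig server_2 -create -scsi -all   # save the discovered devices into the Data Mover's configuration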
I see you have empty disk slots on both enclosures, which may be the cause of this issue, since the layout does not match any system-defined storage configuration template. I understand the backend configuration was done using custom templates - can you please confirm?
No, all storage is configured using the CX_All_4Plus1_Raid_5 template. That template is smart enough to skip the empty slots where hot spares would have been and continue with the next enclosure. Otherwise we would have been stuck with 9 idle devices out of 24 (5 hot spares, 4 unusable). Not a very economical solution.
Anyway, I think I can continue testing now.
Thanks for your reply!