Unsolved
engelw
9 Posts
0
September 6th, 2010 05:00
How to delete disks from a single DM
I'm in the process of extending our Celerra. Currently we have 3287 LUNs on each of our 8 DMs. In order to extend it, I would like to delete these disks from server 8 and 9. This would allow me to add another ~860 LUNs to server 8 and 9 without running over the limit of 4096 LUNs per DM.
Is there a nas_disk option to specify which DM should clear its bindings and tables?
Our DART code is version 5.6.48-7.
Thanks for your help.



gbarretoxx1
366 Posts
1
September 6th, 2010 05:00
Hi,
All Data Movers must be able to reach the same volumes in order to fail over properly.
If you remove disks from only some DMs, you will have issues during failover.
But to answer your question: there is no option to remove the disks from a single DM.
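For reference, the nas_disk commands always operate on the whole cabinet; roughly like this (d307 is just an example disk name, and the exact options should be checked against the nas_disk man page on your Control Station):

    # list the disk table for the whole cabinet -- there is no per-DM variant
    nas_disk -list
    # delete an *unused* disk entry; this removes it for all Data Movers at once
    nas_disk -delete d307 -perm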
Why don't you use bigger LUNs to minimize the number of LUNs on the backend?
Gustavo Barreto.
engelw
9 Posts
0
September 6th, 2010 06:00
Hi,
Thank you very much for your answer. The issue is that we already use large LUNs (55 GB). We currently have 6 Data Movers in production; Server_7 and Server_8 are not in production yet.
If we completely separate these two servers, one for production and the other as a failover, we can still dedicate a healthy number of LUNs to allow for growth of this Celerra. Unfortunately, server 8 and 9 do see, and have created, the disks just as all the other 6 servers did. This prevents the 'splitting' of the servers into two independent server groups.
Question: if we unzone all the disks from Server_7 and Server_8, does that clean up the tables?
engelw
9 Posts
0
September 6th, 2010 06:00
We are using DMX-4 systems. Our implementation specialist suggested not to use any metaframe. That's where we are currently, with 1680 TB worth of storage allocated to this NSX. If you say it is not recommended, I accept that, since we have already had the requested configuration in the past. We could live with that.
However, what do you mean by a manual cleanup of the tables? Aren't there any commands?
engelw
9 Posts
0
September 6th, 2010 06:00
Pardon, I have two typos: first, metadevice instead of metaframe, and second, we have 168 TB, not 1680.
Regards
Willi
gbarretoxx1
366 Posts
1
September 6th, 2010 06:00
Hi,
Technically you could separate these two DMs, but it's not recommended...
Also, if you remove the zoning on the switch, the bind tables will still be there.
A manual cleanup would be needed.
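Just to give an idea of the direction such a cleanup would take (the server names are examples, and I would only do this with support on the line):

    # rescan and save the SCSI device table on the DMs that were unzoned
    server_devconfig server_8 -create -scsi -all
    server_devconfig server_9 -create -scsi -all
    # then check which disk entries are left over before removing them
    nas_disk -list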
Another thing: 55 GB LUNs are not really big. If your NAS code version is below 5.6.44, we support LUNs up to 2 TB. If you are on newer code, the limit is 16 TB.
We usually split the RAID group into two LUNs, so we use larger LUNs than you are using.
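To put rough numbers on it, using the 168 TB you mentioned: at 55 GB per LUN that works out to around 3,000 LUNs, while at 2 TB per LUN the same capacity would need only 84.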
Gustavo Barreto.
gbarretoxx1
366 Posts
0
September 6th, 2010 07:00
Hi,
Metavolumes should be avoided when using DMX on the backend, but this type of environment is one of the exceptions.
It's better to use metavolumes than to split the configuration like that.
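If you go that way, forming a striped metadevice is done through symconfigure. A rough sketch (the SID and device IDs are made-up examples; check the Solutions Enabler array controls documentation for the exact syntax on your code level):

    # meta.cmd -- form a striped meta from four hypervolumes
    form meta from dev 0100, config=striped;
    add dev 0101:0103 to meta 0100;

    # preview first, then commit against the DMX
    symconfigure -sid 1234 -file meta.cmd preview
    symconfigure -sid 1234 -file meta.cmd commit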
Gustavo Barreto.
engelw
9 Posts
0
September 6th, 2010 08:00
This question seems to be hard to answer, since there is no customer-applicable solution for it.
engelw
9 Posts
0
September 6th, 2010 08:00
Gustavo,
Many thanks for your answers. You added some very helpful hints.
Kind regards
Willi