4 Operator


8.6K Posts

September 8th, 2007 03:00

that's easy :-)

just create a Virtual Data Mover, create your new CIFS server on that VDM, and mount its data file systems under that VDM.

This way the CIFS server is constrained to that VDM's file systems, and C$ can only access what's below.

You can even give that CIFS server's Administrator account to your Windows guys, and they can only create new shares within that VDM.

I think there is also a procedure for moving a CIFS server from a DM to a VDM, but you would have to ask support about it.
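If it helps, the basic flow on the Celerra CLI looks roughly like this (a sketch only - the VDM name vdm1, file system fs01, interface cge0, and domain are placeholders, and exact option syntax may vary by DART code level, so check the command reference):

```shell
# Create a Virtual Data Mover hosted on physical Data Mover server_2
nas_server -name vdm1 -type vdm -create server_2

# Create a mount point on the VDM and mount the data file system under it
server_mountpoint vdm1 -create /fs01
server_mount vdm1 fs01 /fs01

# Create the CIFS server inside the VDM, bound to a specific interface
server_cifs vdm1 -add compname=cifs1,domain=example.com,interface=cge0

# Join it to the domain (prompts for the domain admin credentials)
server_cifs vdm1 -Join compname=cifs1,domain=example.com,admin=administrator
```

With the CIFS server scoped to vdm1 like this, its C$ only exposes the file systems mounted under that VDM's root.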

9 Legend


20.4K Posts

September 8th, 2007 06:00

Is moving to a VDM my only option? Can I do something with NTFS permissions? If I connect to the CIFS server using "Computer Management", click on Shares and select Properties on the C$ share, I get an error message: "This has been shared for administrative purposes. The share permissions and file security cannot be set"

Moderator


285 Posts

September 8th, 2007 07:00

Unfortunately, it is. On Celerra, C$ shows all filesystems from the Data Mover root, and there is not any way to hide them. Your only option is to create a VDM. That way, when you enter C$ on the VDM, you will only be able to see the filesystems mounted to the VDM.

9 Legend


20.4K Posts

September 8th, 2007 07:00

Off the top of your head, can you think of limitations that apply to a VDM but not to a Data Mover? For example, right now I use NDMP, tree quotas, and multiple CIFS servers on one physical network interface. How many VDMs per Data Mover can I have? Can I still grow file systems on the fly using AVM?

I know these are elementary questions... your comments/suggestions are much appreciated.

4 Operator


8.6K Posts

September 9th, 2007 18:00

Please see the "Configuring Virtual Data Movers for Celerra" manual for a full description.

It does list:

Restrictions
This section lists restrictions for Virtual Data Movers.

◆ In addition to CIFS servers created within a VDM, a global CIFS server is
required for antivirus functionality. A global CIFS server is a CIFS server
created at the physical Data Mover level.

◆ A default CIFS server and CIFS servers within a VDM(s) cannot coexist on the
same Data Mover. A default CIFS server is a global CIFS server assigned to all
interfaces, and CIFS Servers within a VDM require specified interfaces. If a
VDM exists on a Data Mover, a default CIFS server cannot be created.

◆ VDM supports CIFS and Security=NT mode only.

◆ The VDM feature does not support NFS, UNIX and SHARE mode, iSCSI,
Celerra Data Migration Service (migration file systems mounted within VDMs),
or resource allocation (CPU, memory, etc.) per VDM.

◆ A full path is required to back up VDM file systems with NDMP backup and
server_archive. A server_archive example is:
server_archive -w -f /dev/clt4l0/ -J /root_vdm1/ufs1
and an NDMP example path is /root_vdm1/ufs1.



Not really many restrictions there - but a lot of benefits.

The only annoying thing is that when you NFS-export a file system that is mounted on a VDM using the Celerra Manager GUI, it won't be listed in the drop-down box - you have to enter the correct path manually, like /root_vdmX/....

The official (soft) limit for the number of VDMs per DM is 29.

9 Legend


20.4K Posts

September 9th, 2007 19:00

Good stuff... I have a question about this statement:
****************************
A default CIFS server and CIFS servers within a VDM(s) cannot coexist on the
same Data Mover. A default CIFS server is a global CIFS server assigned to all
interfaces, and CIFS Servers within a VDM require specified interfaces. If a
VDM exists on a Data Mover, a default CIFS server cannot be created.
***************************

How do I identify the default CIFS server? I ran server_cifs server_2 and see nothing that has to do with a default CIFS server. Does it mean that somebody could create a default server that would use all network interfaces?

*************************
The VDM feature does not support NFS, UNIX and SHARE mode, iSCSI,
Celerra Data Migration Service (migration file systems mounted within VDMs),
or resource allocation (CPU, memory, etc.) per VDM.
*************************

Does it mean that I cannot export NFS from a VDM? How about fs_copy - can it be used on file systems being utilized by VDMs?

Thanks

4 Operator


8.6K Posts

September 10th, 2007 02:00

Anything you could restrict, someone with Administrative rights on that CIFS server could undo.

Even if you were to delete the C$ share or restrict it, they could just create themselves another C$-like share using MMC.

4 Operator


8.6K Posts

September 10th, 2007 02:00

good stuff ...i have a question about this
statement:

how do i identify default CIFS server, i ran
server_cifs server_2 ..and see nothing that has to do
with default CIFS server ? Does it mean that somebody
could create a default server that would use all
network interfaces ?


Yes - it depends on how you created your (first) CIFS server.

Default CIFS Server:
The CIFS server created when you add a CIFS server and do
not specify any interfaces (with the interfaces= option of the server_cifs
-add command). The default CIFS server uses all interfaces not assigned to other
CIFS servers on the Data Mover.
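To illustrate the difference the manual describes, the two forms look roughly like this (hypothetical names; verify the exact option spelling against your DART version's command reference):

```shell
# Default (global) CIFS server: no interface specified, so it takes
# every interface not already assigned to another CIFS server
server_cifs server_2 -add compname=cifsdefault,domain=example.com

# VDM-hosted CIFS server: an interface MUST be specified
server_cifs vdm1 -add compname=cifs1,domain=example.com,interface=cge0
```

Per the restriction quoted above, once a VDM exists on the Data Mover, the first form (no interface given) is no longer allowed.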


does it mean that i cannot export NFS from a VDM ?


No - that wording is just a bit confusing.

It just means that the CIFS/VDM special features aren't available for NFS.

For example, if you run a showmount -e you would still see all the exports for the whole DM listed, not just those of the VDM
(unless you are using NFS export by VLAN, but that doesn't have anything to do with VDMs).

Also, the NFS exports are still kept at the DM level - so if you do a VDM failover you would have to manually (re)export on the remote side.

Plus, you have to manually enter the NFS export path in the GUI.

How about fs_copy can it be used on file systems being utilized by VDMs?


Yes, of course you can.

As a bonus, if you want a CIFS move or restart, just fs_copy the VDM root file system as well.
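A rough sketch of what that could look like (the file system names are placeholders, and exact fs_copy options vary by code level, so check the command reference before running anything):

```shell
# Copy the data file system to its target
fs_copy -start fs01 fs01_copy

# Copy the VDM root file system too, so the CIFS config
# (servers, shares, local groups) travels with the data
fs_copy -start root_fs_vdm1 root_fs_vdm1_copy
```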

A VDM is just a container that holds some CIFS config info (servers, shares, local groups and users, ...) that would otherwise be kept at the DM level.

It was primarily designed to be used with Celerra Replicator, so that you can replicate not just the data but also the CIFS config.

If you are using, for example, two Celerras with Replicator and VDMs, you can just fail over the VDM and its data file systems. When the VDM comes up on the other side it will have all the CIFS config it needs. It will register itself with DNS/WINS using the original CIFS server name (but a new IP address), so Windows clients just have to connect again to work on the failover hardware.

Take a look at the concepts section of the VDM and CIFS manuals - in a few pages they explain it better than I could.

4 Operator


8.6K Posts

September 10th, 2007 03:00

I can't think of any drawbacks to using VDMs.

They are quite a nice feature that other vendors can't offer, or charge extra license fees for :-)

see attached for the concepts

1 Attachment

9 Legend


20.4K Posts

September 10th, 2007 08:00

Thanks for all the feedback. I have to agree that other vendors like NetApp charge for their version of a VDM (NetApp calls it MultiStore), but they don't have idle "standby" Data Movers wasting power, cooling, and floor space ;)
...so it all depends on your perspective.

Moderator


285 Posts

September 10th, 2007 09:00

Just to clarify NFS usage on VDMs.

"VDMs do not support NFS" just means that we cannot treat a VDM as a physical Data Mover and explicitly export filesystems from it. The VDM doesn't run any programs and is unable to process any job by itself, so the VDM cannot parse RPC calls from the Unix/Linux client. That's why VDMs do not support NFS.

VDMs are designed to enhance CIFS management. The concept behind the VDM is not "splitting physical resources" like CPU and memory into each VDM (like a VM does), but splitting the CIFS configuration information away from the base Data Mover level. More specifically, the common information that needs to be shared by all the VDMs is placed under the root filesystem of the physical Data Mover, while the local information that is only required by a single VDM is placed under the root filesystem of that VDM.

When you export the filesystem by running "server_export server_3 -P nfs -n fs01 /root_vdm_1/fs01", you are actually bypassing the VDM and operating at the physical Data Mover level. The physical Data Mover simply treats /root_vdm_1/fs01 as a path name, while at the same time *not* treating /root_vdm_1 as the root filesystem of a VDM.

What all this means is that you can still access the filesystems mounted to a VDM via NFS, but NFS is not supported by the VDM, and if the VDM is moved or failed over, it will not carry the NFS configuration with it, since that information is at the physical Data Mover level. So after a replication failover operation, NFS exports will need to be rebuilt. The base configurations of the two Data Movers (source and target) should also be similar to facilitate access once you fail over.
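In practice that means keeping a note (or script) of the exports so they can be recreated on the target Data Mover after failover - something along these lines (the server names are placeholders, and the export form simply mirrors the server_export example above):

```shell
# On the source, list the current exports - they live at the DM level,
# not inside the VDM, so they will not follow a VDM failover
server_export server_3

# After failing the VDM over, recreate each export on the target
# Data Mover using the same full VDM path
server_export server_5 -P nfs -n fs01 /root_vdm_1/fs01
```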

HTH
-bill

9 Legend


20.4K Posts

September 10th, 2007 09:00

Thanks Bill. So is there any practical reason to use NFS with a VDM, or, as you pointed out, is CIFS the real beneficiary of using a VDM?

Thanks

4 Operator


8.6K Posts

September 10th, 2007 09:00

If you are not using CIFS, there is no reason to use VDMs.

9 Legend


20.4K Posts

September 10th, 2007 10:00

Great... btw, how did you attach a file to a post?

Moderator


285 Posts

September 10th, 2007 11:00

You're talking about two different backends. "symm_std" is on a Symmetrix backend, while "clar_r5_performance" is on a CLARiiON. You can use either one of those to build your VDM on.

Even though it's "just a VDM," I still, in good conscience, would not recommend putting your VDM on "clarata_archive."