Unsolved
3 Posts
0
5758
January 19th, 2012 01:00
vnx DM, interface, device, VDM confusion
I have built several devices ("FSN") trying to get the hang of this box. This file stuff is new to me. I have successfully built a CIFS server, joined it to the domain, and exported CIFS shares.
I have since torn everything down and built a default CIFS server in the hopes of getting CAVA working. I set up a VDM (not sure why, except that it seems to be best practice), but I am unable to create a CIFS server on it. It says that there are no devices available on server_2. I deleted the default CIFS server, the FSN device, and the interface to try to build a CIFS server directly on the VDM, and ran into the same problem. The whole interface/device thing has me stumped.
My topology is as follows. My 10 Gig ports on my slots are wired to alternating UCS 10G appliance ports on my UCS. So the left DM slot port 1 goes to UCS fabric A and port 2 goes to UCS Fabric B. The other DM slot is split like that also.
I don't really understand what happens to the 10G ports on the standby data mover during normal production.
I would like some advice from some of you brilliant people.
Am I wired up correctly?
Would you use FSN in this situation?
Am I correct in creating the CIFS server directly on the DM? If so, what device should I tie its interface to?
How do I go about assigning an interface to a VDM?


dynamox
11 Legend
•
20.4K Posts
•
87.4K Points
0
January 19th, 2012 04:00
You are wired correctly. Do you have vPC between the two UCS fabrics? If you do, you could take your two 10G interfaces and create a trunk interface, essentially creating a two-lane 10G highway. If the two fabrics do not have vPC, then you will need to set up an FSN, just as you did previously. The primary and standby data movers have to be wired and configured identically.
So right now you are just trying to create a CIFS server inside a VDM? Can you SSH into the Control Station and post the output of this command:
server_ifconfig server_2 -all
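For reference, a minimal sketch of both options from the Control Station. The device and virtual-device names here (fxg-1-0, fxg-1-1, trk0, fsn0) are examples assumed for this setup, and the exact -option syntax can vary by OE release, so check the CLI reference first:

```shell
# With vPC on the UCS fabrics: aggregate the two 10G ports into an LACP trunk
server_sysconfig server_2 -virtual -name trk0 -create trk \
    -option "device=fxg-1-0,fxg-1-1 protocol=lacp"

# Without vPC: pair the ports in a Fail-Safe Network device instead
server_sysconfig server_2 -virtual -name fsn0 -create fsn \
    -option "device=fxg-1-0,fxg-1-1"

# List the virtual devices that were created
server_sysconfig server_2 -virtual
```

Either virtual device (trk0 or fsn0) can then be used as the Device when you create your interfaces, instead of the raw fxg ports.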
Rainer_EMC
6 Operator
•
8.6K Posts
0
January 19th, 2012 07:00
Which VNX OE version are you using?
adamembrey
3 Posts
0
January 19th, 2012 10:00
7.0.40-1
adamembrey
3 Posts
0
January 19th, 2012 10:00
I don't think they have vPC between them.
So, since you can't configure the standby data mover, you are just talking about cabling, right?
I have created a CIFS server on the physical DM. I have created three interfaces tied to the same device. All of them keep getting tied to the CIFS server on the physical DM. I have tried creating a new CIFS server on the VDM, but it tells me there are no interfaces.
server_2 :
el30 protocol=IP device=mge0
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:39:87:31 netname=localhost
el31 protocol=IP device=mge1
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:39:87:32 netname=localhost
loop6 protocol=IP6 device=loop
inet=::1 prefix=128
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
new protocol=IP device=fxg-1-0
inet=10.187.150.66 netmask=255.255.254.0 broadcast=10.187.151.255
UP, Ethernet, mtu=1500, vlan=150, macaddr=0:60:16:32:32:40
itfs protocol=IP device=fxg-1-0
inet=10.187.150.65 netmask=255.255.254.0 broadcast=10.187.151.255
UP, Ethernet, mtu=1500, vlan=150, macaddr=0:60:16:32:32:40
default protocol=IP device=fxg-1-0
inet=10.187.150.64 netmask=255.255.254.0 broadcast=10.187.151.255
UP, Ethernet, mtu=1500, vlan=150, macaddr=0:60:16:32:32:40
Rainer_EMC
6 Operator
•
8.6K Posts
1
January 19th, 2012 10:00
It's not that difficult. A device is Layer 2: either a physical Ethernet port like cge0 or a trunk/FSN. It doesn't have an IP configuration.
On a device you can create one or more interfaces; an interface is basically an IP configuration that uses the underlying device.
Usually they would be in the same subnet/broadcast domain, unless you have set up VLAN tagging on your device and switch.
A CIFS server has to have at least one interface.
An interface has to be associated with one CIFS server; if you need multiple CIFS servers, you need multiple interfaces.
A CIFS server can use multiple interfaces, though; just not vice versa: an interface can only be configured in one CIFS server (consider it a CIFS endpoint).
Most customers use just one CIFS server.
Reasons for multiple CIFS servers are:
- Multiple AD domains
- Multiple customers or tenants, if they shouldn't "see" each other's data; typically combined with VDMs
- Migration from multiple Windows servers where you want to preserve the names, especially with shares that have the same name but different content
- Need for different localgroup/user or other per-CIFS-server settings, such as AV scanning
- Multiple active data movers
The networking, HA networking, CIFS and VDM manuals have more details.
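A hedged sketch of how this fits together for a VDM (the interface name, IPs, VLAN, computer name, and domain below are examples, not values from this thread, and syntax may differ slightly by OE release): interfaces are created on the physical data mover, and a VDM effectively "owns" an interface once a CIFS server inside the VDM claims it.

```shell
# 1. Create an interface on the physical DM, on the 10G device
server_ifconfig server_2 -create -Device fxg-1-0 -name vdm_int1 \
    -protocol IP 10.187.150.67 255.255.254.0 10.187.151.255
server_ifconfig server_2 vdm_int1 vlan=150   # tag to match the switch port

# 2. List your VDMs, then create the CIFS server inside the VDM,
#    naming the interface it should use
nas_server -list -vdm
server_cifs <vdm_name> -add compname=cifs02,domain=mydomain.local,interface=vdm_int1
```

Note that a default CIFS server (one created without an interface= list) grabs every otherwise-unassigned interface on the data mover, which may be why all your interfaces kept getting tied to the CIFS server on the physical DM while it existed.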
Rainer