Unsolved


1 Rookie

 • 

29 Posts


February 14th, 2012 11:00

Why does Data Mover even care about DNS?

A little background first.  My company purchased two VNX 5300s that were installed by a consultant.  He recommended that we isolate disk I/O traffic on its own dedicated network, which we did.

After everything was set up and configured, both boxes started kicking off errors every week that the configured DNS servers were not accessible, which they shouldn't be if the boxes are attempting to reach them from the isolated, dedicated disk I/O network.  We ignored these for a while, but I got burned when a legitimate error was posted and not noticed immediately, because it was assumed to be the old DNS error.

I am still pretty green with EMC SANs, and I find some of the terminology vague and contradictory, so forgive me if I butcher some of these questions.

  • When I log into Unisphere and go to the network Settings For File page, the DNS tab references my Data Mover.  Why this component, when I would have expected the Control Station?

  • Given that the Data Mover is failing to reach our DNS server, have we possibly misconfigured a component or not set up our dedicated I/O network correctly?

  • Which piece of the VNX fabric even utilizes DNS resolution, and why?  Email notifications?

I realize I am rambling a bit, but I appreciate any clarification that some of the veterans can provide.

Thanks.

1 Rookie

 • 

29 Posts

February 14th, 2012 12:00

Of course... how could I forget about CIFS.  We are only using NFS/iSCSI at the moment, so that option totally slipped my mind.  Makes total sense.

Ernes, would you mind expanding a bit when you mention:

"You create an interface on the data mover ports and that interface needs to be able to get to the DNS."

My Data Mover contains several blades.  One is a four-port copper blade that is dedicated to our NFS traffic, while the other appears to contain three fiber ports and one copper port, where only two of the fiber ports run to our Storage Processors.  Are you saying it would be possible to dedicate one of those interfaces to our production network where our DNS resides, or were you referring to some internal logical interface?

Thanks again for the very helpful clarifications.

1K Posts

February 14th, 2012 12:00

The Control Station is used just for management purposes.  In order for you to manage the Data Movers, you need a Control Station.  Everything else, besides management, is handled by the Data Movers.  If you are creating a CIFS server, then the Data Mover needs to be able to reach your DNS servers.  You create an interface on the Data Mover ports, and that interface needs to be able to get to the DNS.
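
If it helps to see it concretely, here is a minimal sketch of those two steps from the Control Station CLI; the device name, interface name, domain name, and all IP addresses are hypothetical placeholders, so substitute your own:

# create an IP interface on an existing Data Mover device (physical port or trunk)
server_ifconfig server_2 -create -Device cge0 -name dns_if -protocol IP 192.168.2.2 255.255.255.0 192.168.2.255

# point the Data Mover at DNS servers it can reach through that interface
server_dns server_2 mydomain.local 192.168.2.10,192.168.2.11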

9 Legend

 • 

20.4K Posts

February 14th, 2012 12:00

What Ernes said, plus: you need properly working DNS in order to join your CIFS server to Active Directory.  The Data Mover will use DNS to find a local domain controller to create a computer account for your CIFS server, and also to update DDNS (if your Active Directory allows for that).
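
As a hedged example of that join step (the computer name, domain, interface name, and admin account below are made up; the join prompts for the admin password):

# define the CIFS server on the Data Mover, bound to an interface that can reach DNS/AD
server_cifs server_2 -add compname=cifs01,domain=mydomain.local,interface=dns_if

# join it to Active Directory; the computer account is created via a DC found through DNS
server_cifs server_2 -Join compname=cifs01,domain=mydomain.local,admin=Administrator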

1 Rookie

 • 

29 Posts

February 14th, 2012 14:00

server_2 : PCI DEVICES:

On Board:

  PMC QE8 Fibre Channel Controller

    0:  fcp-0-0  IRQ: 20 addr: 5006016046e05c67

    0:  fcp-0-1  IRQ: 21 addr: 5006016146e05c67

    0:  fcp-0-2  IRQ: 22 addr: 5006016246e05c67

    0:  fcp-0-3  IRQ: 23 addr: 5006016346e05c67

  Broadcom Gigabit Ethernet Controller

    0:  cge-1-0  IRQ: 26

    speed=auto duplex=auto txflowctl=disable rxflowctl=disable

    Link: Up

    0:  cge-1-1  IRQ: 27

    speed=auto duplex=auto txflowctl=disable rxflowctl=disable

    Link: Up

    0:  cge-1-2  IRQ: 28

    speed=auto duplex=auto txflowctl=disable rxflowctl=disable

    Link: Up

    0:  cge-1-3  IRQ: 29

    speed=auto duplex=auto txflowctl=disable rxflowctl=disable

    Link: Up

Slot: 4

  PLX PCI-Express Switch  Controller

    1:  PLX PEX8648  IRQ: 10

Output 2:

server_2 :

Virtual devices:

trunk    devices=cge-1-0 cge-1-1 cge-1-2 cge-1-3  :protocol=lacp

fsn    failsafe nic devices :

trk    trunking devices : trunk

9 Legend

 • 

20.4K Posts

February 14th, 2012 14:00

can you please post output from these two commands:

server_sysconfig server_2  -pci

server_sysconfig server_2  -virtual

9 Legend

 • 

20.4K Posts

February 14th, 2012 15:00

It looks like you have 4 x 1G ports and 4 Fibre Channel ports.  Two of those FC ports are used for connectivity to SPA and SPB, and the other two are the aux (auxiliary) ports?

1K Posts

February 14th, 2012 16:00

You could create a new interface in order to get out to where your DNS server resides.  If that's on a different VLAN, then when you create the new interface you can specify a VLAN ID (i.e., VLAN tagging) if that is required in your network.

For example, let's say you created an LACP trunk on the cge0 and cge1 ports.  You create an interface called NFS on that trunk and assign it the 192.168.1.2/24 address.  If your DNS is on the 192.168.2.x/24 network, then that interface will not be able to get to your DNS server unless you configured the LACP trunk on your switch to allow it.  What you can do is create another interface on that same trunk with the IP address 192.168.2.2, but specify a VLAN ID when you create that interface, if VLAN tagging is enabled on the switch for the LACP trunk.  Now you will be able to get to your DNS server, and you can use that interface for CIFS if you want to.
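
A rough sketch of that second, tagged interface (the trunk device name, interface name, addresses, and VLAN ID are illustrative only; verify the vlan= form against the server_ifconfig man page on your DART release):

# second interface on the same LACP trunk, on the 192.168.2.x subnet
server_ifconfig server_2 -create -Device trk0 -name cifs_if -protocol IP 192.168.2.2 255.255.255.0 192.168.2.255

# tag it so the switch carries it on the production VLAN
server_ifconfig server_2 cifs_if vlan=200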

Hope this helps.

4 Operator

 • 

2K Posts

February 14th, 2012 18:00

Probably just a reminder, but with regard to communication originating from the Data Mover, it is also likely you will need to supply default/static route(s).  In Unisphere this is configured via the following breadcrumb trail:

Settings -> Settings For File -> Routes (tab)
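
The CLI equivalent is server_route; here is a minimal sketch, assuming a hypothetical gateway address on the subnet where your DNS lives:

# add a default route for traffic originating on the Data Mover
server_route server_2 -add default 192.168.2.1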

From the output regarding the port configuration on the Data Movers:

[...]

0:  fcp-0-0  IRQ: 20 addr: 5006016046e05c67

0:  fcp-0-1  IRQ: 21 addr: 5006016146e05c67

0:  fcp-0-2  IRQ: 22 addr: 5006016246e05c67

0:  fcp-0-3  IRQ: 23 addr: 5006016346e05c67

[...]

As seen physically on the back of the Data Mover, from bottom to top, fcp-0-0 and fcp-0-1 are the Data Mover initiator ports that are directly connected to the (on-board) SP A and B front-end ports.  You can also see these Data Mover initiator WWPNs (logged in and registered) under "Connectivity Status":

1) Hosts -> Connectivity Status

2) Under "Host Initiators", expand the corresponding host object: "Celerra_" to expose the individual initiator records

3) You should see both the WWPNs for fcp-0-0 and fcp-0-1 (as well as fcp-0-0 and fcp-0-1 for Data Mover 3)

Remember... from the captive array's perspective, on the integrated models, the Data Movers are simply (custom) Linux hosts directly attached to the SPs.  Finally, if you ran the same commands for server_3 (or ALL), you would see under the registered host object in "Connectivity Status" the corresponding initiator records for Data Mover 3.


> My DataMover contains several blades.  One is a four port copper blade that

> is dedicated to our NFS traffic, while the other appears to contain 3 fiber and

> one copper where only two of the fiber run to our Storage Processor.  Are you

> saying it would be possible to dedicate one of those interfaces for our production

> network where our DNS resides, or were you referring to some internal logical interface?

As for the remaining ports, fcp-0-2 and fcp-0-3: as Dynamox mentioned, these are auxiliary FC ports available for connectivity to tape libraries to support (2-way) NDMP configurations.  Also, depending on need and model, and requiring Engineering approval via an RPQ, the only other connectivity that I am aware of being supported for the auxiliary ports would be additional direct connections to the SPs, if the additional bandwidth was needed and approved.  As you've seen by now, each Data Mover by default on the VNX has two 8Gbps connections to the captive array: 1x (fcp-0-1) to a port on SPA and 1x (fcp-0-2) to a port on SPB, which should suffice for most environments.

4 Operator

 • 

2K Posts

February 14th, 2012 18:00

> As you've seen by now, each datamover by default on the VNX has two 8Gbps

> connections to the captive array: 1x (fcp-0-1) to a port on SPA and 1x (fcp-0-2)

> to a port on SPB which should suffice for most environments.

Oops... a small typo, but it is represented correctly throughout except in the last line.  It should be fcp-0-0 and fcp-0-1 respectively.

1 Rookie

 • 

29 Posts

February 15th, 2012 06:00

Wow, Christopher, thank you for the amazingly thorough response.  That helps to clear up some of my confusion concerning interfaces immensely.

Considering that the four-port copper blade has been fully dedicated to NFS, would it perhaps be easiest to add a second blade and dedicate it, at least partially, to CIFS?

I sincerely appreciate everyone's time on this.  All of you have been a huge help.

1K Posts

February 15th, 2012 07:00

The only reason you would do this is if you are concerned about the network performance impact CIFS and NFS might have on the same trunk.  It really depends on how heavily that trunk is being utilized by NFS.  Most people I work with decide to use the same trunk for CIFS and NFS because they know that CIFS will not be utilizing a lot of network resources, and thus it is OK to use the same trunk.  Some people, on the other hand, like to use separate devices (i.e., cge ports) for CIFS and NFS since they don't want to take any chances.

You could definitely purchase a new blade and partially dedicate it to CIFS.  You can also deploy CIFS on the current trunk you have and test it to see if it's going to cause any NFS disruptions before you fully deploy CIFS on the same trunk as NFS.

1 Rookie

 • 

29 Posts

February 15th, 2012 08:00

Ernes Taljic wrote:

You can also deploy CIFS on the current trunk you have and test it to see if it's going to cause any NFS disruptions before you fully deploy CIFS on the same trunk as NFS.

OK, this is where I might be getting confused again.  The interfaces for NFS are distributed across a pair of switches that exist in an isolated, dedicated network, separate from our production network.  My understanding is that each pair of copper connections is set up with LACP to those switches.

Would I not have to:

  • Steal one of the interfaces currently being used for NFS?

  • Expose our dedicated NFS network to the production network in order for DNS to function?

1K Posts

February 15th, 2012 08:00

Is this how your current network is set up?

- LACP trunk on the cge-1-0, cge-1-1, cge-1-2, and cge-1-3 ports?  Are all of those ports in one LACP trunk?

- How is FSN configured in your environment?

I'll reply to your question once I know your current networking configuration on the Data Movers.

1 Rookie

 • 

29 Posts

February 15th, 2012 09:00

It does appear that all four ports are members of the LACP trunk.  The four ports are divided between two stacked switches.

Concerning FSN, I have to admit that I had to look it up, but after some reading I don't see that we have FSN configured at all.

./server_sysconfig server_2 -virtual

server_2 :

Virtual devices:

trunk    devices=cge-1-0 cge-1-1 cge-1-2 cge-1-3  :protocol=lacp

fsn    failsafe nic devices :

trk    trunking devices : trunk
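
From the reading I did, an FSN device would list member devices on that fsn line, and would have been created with something roughly like the following (the device names here are made up from a doc example, and I'd have to check the server_sysconfig man page for the exact -option syntax):

# create a failsafe network device with a primary and a standby member
server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=cge0 device=cge0,cge1"

Nothing like that appears to have been run on our box.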

Man do I feel like I am opening up a can of rabid worms.

1K Posts

February 15th, 2012 10:00

Take a screenshot of the network in Celerra Manager and post it here.  On the left-hand side, select Networking, and take a screenshot of the Interfaces tab on the right-hand side.  Also select the Devices tab on the right-hand side and take a screenshot of that too.

I want to compare the screenshots with the output of server_sysconfig.
