Can't get 10Gb NAS connectivity
We are preparing to migrate our primary NAS file system from a VNX7500 to a newer Isilon cluster. VNX file version is 7.1.76-4 and the array is no longer under EMC support.
Current NAS interface connections are just the 1Gb copper Ethernet ports. The datacenter deployed 10Gb relatively recently. To expedite the 30TB data migration, I would like to enable the 10Gb fiber ports this VNX has. However, we are having trouble getting the network to see the VNX 10Gb ports.
For simplicity, it is a standard configuration: Device fxg-1-0 (on both the A and B Data Mover sides) with the same interface IP network configuration as the 1Gb connections. No virtual device configurations (FSN or trunking) and no VLAN tagging.
There are link lights, the Network team has verified the switch port configurations, and there are no port errors indicating a weak signal. However, the switches never see the VNX 10Gb port, and it does not register in the ARP table.
We have referenced "Configuring and Managing Networking on VNX" for this configuration, and searched community discussions. We have tested with Device fxg-1-0 and fxg-1-1 (A and B sides), and even a different IP network.
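For reference, the interface was created along these lines from the Control Station (a sketch only; the Data Mover name and IP addresses below are examples, not our actual values):

```
# Verify the fxg (10Gb) devices are visible on the A-side Data Mover
server_sysconfig server_2 -pci

# Create an IP interface on the 10Gb device (example addresses)
server_ifconfig server_2 -create -Device fxg-1-0 -name fxg-1-0 \
    -protocol IP 192.168.10.50 255.255.255.0 192.168.10.255

# Confirm the interface shows as UP
server_ifconfig server_2 -all
```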
I would appreciate any suggestions as to what I could have missed or misconfigured in this.
Thank you,
Bryan
bryan_washburn
March 9th, 2018 12:00
So my previous post was completely wrong. Using the 10Gb and 1Gb ports at the same time is supported and works fine.
It turns out that the ports were just "hung". We simply had to reset them. (Frustrating!)
These are now responding and supporting our CIFS server.
Thank you for your review and input on this.
Rainer_EMC
March 5th, 2018 03:00
check that you use the correct cables - we only support multi-mode
wrong cables are a common mistake
if you open an SR, customer service should be able to provide you with the .server_config command for fxg status
bryan_washburn
March 5th, 2018 11:00
Thank you for the suggestion.
We use only multi-mode in our fabric. However, I did just replace the cables with brand new ones. No change.
Since the array is no longer under maintenance (as mentioned above), I don't believe opening an SR will be of use.
However, I will see about getting a network configuration export posted for community review a little later.
Thank you,
Bryan
bryan_washburn
March 7th, 2018 21:00
Please correct me if I have details wrong, but as I understand it, it is a “one or the other” port group configuration.
The 10Gb ports cannot just be brought online along with the 1Gb ports.
The data movers need to be migrated from the 1Gb to the 10Gb interfaces.
This is a disruptive process as well.
As this is beyond my skill set with this array, we will pull in professional services to support this process.
Bryan
Rainer_EMC
March 8th, 2018 01:00
yes, there is a fixed number of network configs supported
BUT if the interface gets recognized and shown in server_devconfig and server_ifconfig, then it needs to work
if you watch the DM log using server_log and remove/attach the cable, you should see link up/down messages
It's also worth looking at the KB for troubleshooting info and commands
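The checks above can be sketched as Control Station commands (server_2 used as an example Data Mover; adjust for your setup):

```
# List the interfaces the Data Mover knows about and their state
server_ifconfig server_2 -all

# Follow the Data Mover log while unplugging/replugging the fiber;
# link up/down messages for fxg-1-0 should appear if the port is alive
server_log server_2 -f
```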
Rainer_EMC
March 12th, 2018 02:00
didn't a Data Mover reboot reset them?
bryan_washburn
March 12th, 2018 11:00
Believing the problem was in the configuration, we did not realize the 10Gb ports were just unresponsive. We did not consider rebooting data movers, as everything else appeared to be working fine.
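For anyone finding this thread later: a Data Mover reboot (which would likely have cleared the hung ports) can be issued from the Control Station. It is disruptive to clients served by that Data Mover, and you should verify the syntax against your documentation first, but the standard form is:

```
# Reboot the A-side Data Mover and monitor it coming back up
server_cpu server_2 -reboot -monitor now
```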