
April 22nd, 2018 22:00

Problems initializing two newly bought SANs (fail to initialize eth0)

Hi,

here is my current setup:

3 SANs in one pool (PS6000, PS4100, PS4210) and two dedicated PowerConnect 7024 switches.

I had the opportunity to buy two other SANs from another company: PS6110X, PS4110X.

I tried to initialize both using the Remote Setup Wizard and a serial cable. The Remote Setup Wizard doesn't find the units. Over the serial cable through PuTTY I get to the point where you enter the name and IP address you want to use, then it fails, saying that it cannot initialize eth0.

I'm currently waiting for a quote to add support for those units, but in the meantime any suggestions would be really appreciated.

Thanks

April 22nd, 2018 22:00

When you get that error (eth0 can't be initialized), one possible reason is that eth0 is not connected to the switch or the link is down. Check the switch to make sure that the link is in an up state. Depending on your hardware model, you may have the option to enter eth1 instead of the default eth0 (assuming you have a second SAN port on that hardware).


April 23rd, 2018 06:00

Hello,

The 7024 is not a 10GbE switch, as I recall, and the 6110/4110s don't negotiate down to GbE. So I suspect that's your issue. You also can't use the 10GbE uplink ports on a GbE switch for them; those uplink ports aren't designed for iSCSI traffic.

You don't want to mix 10GbE-only arrays with GbE arrays in the same group either, or at least not in the same pool.

Lastly, until you get the support contract in place, you can't mix supported and non-supported arrays in the same group.

Once you do get a 10GbE switch, I would suggest creating a new group just for 10GbE.

 Regards,

Don

 

April 23rd, 2018 07:00

Oh thanks for the explanation. I will get a 10GbE switch and build a second group or pool.

 

Can a member be transferred from one pool to another?

I guess that my hosts will need 10GbE NICs as well, or will they still be able to communicate with the 10GbE SANs?

Is there any literature available on best practices for a setup similar to mine?


April 23rd, 2018 08:00

Hello,

You don't have to have 10GbE hosts for 10GbE arrays, but it is best practice. Much depends on the buffer capacity of the switch, as a 10GbE array can overrun a GbE host. Once buffers are saturated you can start dropping packets and forcing retransmits.

It is possible to build a mixed 10GbE and GbE group; there is a PDF for that. However, in my experience I found it much easier to create different groups. The exception is when you are moving an entire group to 10GbE; then being able to move volumes to the new 10GbE group is very helpful.

http://en.community.dell.com/techcenter/extras/m/white_papers/20384458/download

Re: member. You can transfer a member to a new pool within a group as long as there is enough free space to hold the data for the member being moved. You can't move a member and keep the data on it; data stays in the group.

Did you mean move a member between groups? If so, then no. You can replicate volumes between groups, or if you have VMware you can use Storage vMotion to do it live (or Hyper-V Live Migration).

If you do decide to create a mixed group, make sure you have enough inter-switch bandwidth between the GbE and 10GbE switches.

 Regards,

Don

 

 

 

April 24th, 2018 08:00

Ok so right now, my setup is:

3 hosts with 1Gb NICs

2 dedicated 1Gb iSCSI switches, stacked

3 SANs connected to the 1Gb switches

 

I have another switch to which the hosts and the SANs' management ports are connected; that is my admin network.

 

I bought two SANs that cannot run at 1Gb.

Can I do it like this:

Add a 10Gb switch dedicated to my 10Gb iSCSI traffic.

Connect the two new 10Gb SANs to this switch.

Add a 10Gb NIC to each of my hosts and connect the 10Gb ports to the new switch.

 

I would set up a new group or a new pool for the 10Gb SANs (I'm not sure I understand the difference properly).

 

Would that setup work? Would it allow me to keep using the 1Gb setup as it is currently, and add the 10Gb SANs hosting separate volumes?

 

Or should I replace the 1Gb switches and run everything on 10Gb switches? The first option looks easier to set up to me, with no real downtime except while adding the 10Gb NIC to the hosts.


April 24th, 2018 09:00

Hello,

 A group is the starting point, it provides the connection and management point.  Within a Group you can have up to four POOLS.  A single member can only be in ONE pool at a time.  A member can't be split across pools.

To keep things clean and manageable, I would strongly suggest creating a new group with the 10GbE arrays on that 10GbE switch. They should be on a different IP subnet as well. Then you can have both GbE and 10GbE NICs in a server and access both arrays correctly. You need to set up your volume ACLs so that they use either IP addresses or different CHAP accounts; otherwise you might have 10GbE NICs connecting to GbE arrays.

If you don't need your 10GbE servers to connect to the older GbE arrays, that would be better; it prevents any chance of overrunning the GbE arrays. But it sounds like you do need both.

So if you carefully manage access, you can do that configuration. It also keeps supported and unsupported arrays from being in the same group. Once you have a contract in place you can upgrade array and drive firmware (if applicable) and use all the bundled software.

 If you haven't yet installed SANHQ I suggest you do so.  It's a great monitoring tool for EQL arrays and a great help to support for triage. It can also periodically send diagnostic data to Dell and create cases automatically for failed drives, controllers, etc...

 Regards,

Don

 

 


April 24th, 2018 10:00

Hello,

Re: VMware. That makes things a little more 'interesting'. What version of ESXi are you running? If not 6.5+, then you will have to use the Broadcom iSCSI HBAs for the 10GbE NICs. ESXi 5.x won't route iSCSI once you bind the VMkernel ports, which you have to do for EQL in order to get MPIO to function. So putting all the NICs on the SW iSCSI adapter causes adapter rescans to take a LONG time. It also tends to greatly increase boot time.

With a HW iSCSI adapter you can put the arrays on different IP subnets: the SW iSCSI adapter for the GbE arrays and the HW iSCSI adapter for the 10GbE. Make sure you update the firmware on the NICs as well.
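For reference, you can check which iSCSI adapters a host actually has and whether they are software or hardware initiators (a quick sketch; adapter names like vmhba64 vary per host):

```shell
# List all iSCSI adapters on the ESXi host. The software initiator shows
# driver "iscsi_vmk"; Broadcom/QLogic 578xx offload HBAs appear under
# their own driver (e.g. bnx2i), one vmhba per NIC port.
esxcli iscsi adapter list
```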

I have not tried to do this in 6.x yet. My lab isn't set up for it.

Re: VSM. VSM can handle multiple EQL groups without a problem. You might need another NIC in the VSM to access the 10GbE IP subnet, unless you have routing between them. And you can't have more than one VSM connected to the same vCenter Server.

Regards,

Don

 

 

April 24th, 2018 10:00

Hello Don,

thanks for your reply, your support is very appreciated.

You are right; I unfortunately have to keep the hosts connecting to both the 1Gb and 10Gb arrays.

So if I understand correctly, I will be OK if I configure the 10GbE arrays in a separate group, using another IP subnet for the 10GbE network and different CHAP accounts, correct?

 

And let's say I use VMware with VSM 4.7.0: should I deploy a new instance to be able to manage two storage providers, or can one appliance handle more than one group?


April 24th, 2018 11:00

Hello,

Yes, creating a new vSwitch is a good choice.

 You are only going to need 2x 10GbE NICs.  More than that won't provide better performance and takes memory away from the arrays. 

QLogic bought Broadcom's NIC business, so you're fine there.

 Good luck!

 Regards,

Don

April 24th, 2018 11:00

Yes, I guess I should have specified that in the first place. The hosts and the SANs are for VMware only. I am using ESXi 6.5.

The hosts have 8 network ports each and we are using 3 vSwitches.

vSwitch0 has 3 physical ports per host connected to it (admin network) -> 1 VMkernel adapter

vSwitch1 has 3 physical ports per host connected to it (iSCSI) -> 4 VMkernel adapters

vSwitch2 has 2 physical ports per host connected to it (vMotion) -> 2 VMkernel adapters

 

I have already bought 10GbE NICs for my hosts: QLogic 57810T cards with two ports each.

Should I create vSwitch3 for 10GbE iSCSI and assign these ports to it?
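For what it's worth, that vSwitch3 layout can be sketched with esxcli. The uplink names (vmnic8/vmnic9), VMkernel names (vmk5/vmk6), port group names, and the 10.20.0.x addressing below are all placeholders, not values from this thread; substitute your own:

```shell
# Create vSwitch3 and attach the two 10GbE uplinks.
esxcli network vswitch standard add --vswitch-name=vSwitch3
esxcli network vswitch standard uplink add --uplink-name=vmnic8 --vswitch-name=vSwitch3
esxcli network vswitch standard uplink add --uplink-name=vmnic9 --vswitch-name=vSwitch3

# One port group per VMkernel port, each pinned to a single active uplink
# (required later for iSCSI port binding).
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI10G-1 --vswitch-name=vSwitch3
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI10G-2 --vswitch-name=vSwitch3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI10G-1 --active-uplinks=vmnic8
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI10G-2 --active-uplinks=vmnic9

# One VMkernel port per port group, on the 10GbE iSCSI subnet.
esxcli network ip interface add --interface-name=vmk5 --portgroup-name=iSCSI10G-1
esxcli network ip interface add --interface-name=vmk6 --portgroup-name=iSCSI10G-2
esxcli network ip interface ipv4 set --interface-name=vmk5 --type=static --ipv4=10.20.0.11 --netmask=255.255.255.0
esxcli network ip interface ipv4 set --interface-name=vmk6 --type=static --ipv4=10.20.0.12 --netmask=255.255.255.0
```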

 

 

August 17th, 2018 06:00

Ok, so here is where I'm at at the moment:

a two-port 10GbE NIC has been installed in each of my three existing hosts

a 10Gb switch has been installed and configured

the two new SANs are installed and configured

 

I am currently at the VMware configuration. I am unsure about what I should do about vSwitches and VMkernels.

 

I was about to create a new vSwitch, assign the hosts' two new 10GbE ports to it, and create a VMkernel for iSCSI. Would that be it?

I guess I need to set up a new vSwitch for vMotion, or will it go through my existing vMotion vSwitch that uses my 1Gb network?

 

Any pointers would be greatly appreciated; I'm so close to being done :)

 


August 17th, 2018 07:00

Hello,

You are going to want to set up a vSwitch with the two 10GbE NICs and two VMkernel ports for iSCSI.

They will have to be bound to the iSCSI software adapter.
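The binding step can also be done from the CLI (a sketch; vmhba64, vmk5 and vmk6 are placeholder names, so check `esxcli iscsi adapter list` and `esxcli network ip interface list` for yours):

```shell
# Bind the two new 10GbE VMkernel ports to the software iSCSI adapter.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk5
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk6

# Confirm both ports are now bound.
esxcli iscsi networkportal list --adapter=vmhba64
```

Note that each VMkernel port must be compliant for binding (exactly one active uplink on its port group), or the add will be rejected.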

Here is a PDF on setup best practices:

http://en.community.dell.com/techcenter/extras/m/white_papers/20434601/download

This is how to set up iSCSI with VMware and EQL

https://www.dell.com/community/EqualLogic/Tech-Report-TR1075-configuring-ESXi-with-PS-Series-storage-now/td-p/4695829

 

Regards,
Don

August 17th, 2018 07:00

Thanks for the fast reply. I'll be doing this right now and post results.

August 17th, 2018 15:00

Ok, so I followed the procedure. Since I already had an iSCSI software adapter, I edited it to add the host's two new ports in the network port binding section. In the Targets tab I added the IP of my new EqualLogic group.

I then tried to rescan everything. I'm not sure that it worked, though; kinda stuck there at the moment.
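In case it helps, the same discovery-plus-rescan steps can be run from the CLI, which tends to surface errors more clearly than the client UI. Here vmhba64 and 10.20.0.10 (the new group's discovery IP) are placeholders:

```shell
# Add the new group's discovery (send targets) address to the SW adapter.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.20.0.10:3260

# Rescan that adapter for new targets and devices.
esxcli storage core adapter rescan --adapter=vmhba64

# Check whether sessions to the new group actually came up.
esxcli iscsi session list --adapter=vmhba64
```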


August 17th, 2018 16:00

Hello,

Did you create a volume on the group and allow your servers access to it?

 In the setup PDF I believe there is a section that discusses how to do that. 

 Do you have a support contract for these arrays?
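A quick host-side check (a sketch, assuming the group IP was already added as a send target): if the ACLs are right you should see logged-in sessions and an EQLOGIC disk device.

```shell
# Active iSCSI sessions; with EQL and port binding you typically get one
# session per bound VMkernel port per connected volume.
esxcli iscsi session list

# Look for EqualLogic devices among the host's storage devices.
esxcli storage core device list | grep -i -A2 eqlogic
```

If sessions are missing entirely, the usual suspects are the volume ACL (initiator IP/IQN/CHAP mismatch) or the VMkernel ports not being on the group's subnet.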

 Regards,

Don

 
