Tqadi
Silver

I have two R520 servers connected to two PowerConnect 6224 switches, which are connected to PS4200 storage.

There is also another replica storage array connected to the main storage, as in the figure below.

Figure 1. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4000 Storage Array

I am using one 10G link from each server connected to the 10G ports on its switch, plus 10G links from switch 1 to module 0 and from switch 2 to module 1. The other links, from the servers to the switches and from the switches to the storage, are 1G. I ran into a problem with this scenario: server two cannot see the storage, because it is connected to module 1 (passive).

What I am going to do is connect the 10G links from both switches to module 0 (active) and the 1G links from the switches to module 1 (passive).

Correct me if I'm wrong, and I would welcome any suggestion for this connection, since I am limited to two 10G module ports on each switch: one for the server and the other for the storage controller.


RE: hi all


1. The PC6224 is not the best iSCSI switch on this planet. Yes, we have used them too.

2. The 10G modules for the PC6224 are made for uplinks to other switches, not for connecting servers/storage. I don't expect they come with deep buffers and so on. Yes, we have used them to connect ESX hosts, but that was only a DR setup and not a real production environment.

3. An EQL is an active/standby controller model, which means the standby CM (controller module) contributes cache/CPU but no network I/O as long as it acts as the standby. (We will leave the feature called vertical port failover aside at this point.)

If you use two physical switches, you need an ISL between them, or you use the stacking feature* to create one logical switch. A server is never connected directly to an EQL CM, but always to one or more switches. With the ISL in place, any server can reach ETH0 and ETH1 of the active CM.

* The PC6224 offers two slots on the back; in slot 1 you can insert a stacking module. The 2nd slot can then take your 10G module.
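If you go the stacking route later, pre-provisioning the second unit is roughly this, entered on the switch that will become the stack master. This is only a sketch: the switch type index (1 = PC6224 here) is an assumption taken from your own config further down, so check it against the 62xx CLI guide for your firmware.

configure
stack
! unit 1 is this switch, unit 2 is the second PC6224 (switch type index 1 - assumption)
member 2 1
exit
exit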


Regards,
Joerg

RE: hi all


Hello, 

Joerg is completely correct. This is not a supportable configuration.

Going from 10GbE servers to GbE is a 10:1 oversubscription. If you have a 4210, those 10GbE ports will negotiate down to GbE.

Performance and stability will suffer, especially under load. You must get a proper 10GbE switch, or drop the 10GbE servers to GbE instead. With only two ESXi servers, your performance should be fine.

 Regards, 

Don 

Social Media and Community Professional
#IWork4Dell
Get Support on Twitter - @dellcarespro

Tqadi
Silver

RE: hi all


Hi Joerg,

Thank you for your reply.

1. Yeah, I know this switch is not the best SAN switch, but that is what was available at the time.

3. Thank you for this. My question is: if I connect server 2 to switch 2, and switch 2 to CM 1 (standby), then server 2 will not reach the storage, right? That is actually my current setup, but I will move this link to the other port of CM 0, so both servers will be connected to CM 0 ports 0 and 1. Correct me if I'm wrong.

About the switches: I am doing a LAG between them, and I have the problem that server 2 cannot connect to the storage. So do you recommend doing an ISL between them, or connecting the two servers to the same switch to communicate with the storage until I get the stacking module? And could you please send me documentation about an ISL on the PC6224?


RE: hi all


ISL means Inter-Switch Link. It is a connection between switches. Whether you use a single cable or several cables combined into a LAG does not matter (for a supported configuration the ISL has to be 70% of all combined active EQL ports). The stacking option makes life easier when you start in a greenfield environment (downtime is needed); it offers 2x12 Gbit of bandwidth, and once you set it up it works for ages.

We used PC5448 switches with a 4x1GbE LAG as the ISL when we started with the PS5000 years ago.
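On the PC6224 such an ISL LAG looks roughly like this, applied identically on both switches. This is only a sketch: it assumes ports 1/g23 and 1/g24 are free for the ISL and port-channel 1 is unused; adjust the port numbers and the iSCSI VLAN to your environment.

configure
interface ethernet 1/g23
channel-group 1 mode auto
exit
interface ethernet 1/g24
channel-group 1 mode auto
exit
interface port-channel 1
switchport mode trunk
! assumption: replace 101 with your iSCSI VLAN ID
switchport trunk allowed vlan add 101
mtu 9216
exit
exit

The LAG is treated as a single logical link, so spanning tree will not block one of the ISL cables.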

I am not sure you have the cabling right. Within the box there is a poster with a cabling plan; you can download it from eqlsupport.dell.com/.../download_file.aspx as well.

1. Your servers need at a minimum 2 NICs for iSCSI. Each NIC port goes to a different switch.

2. Your active CM has two NIC ports, one named ETH0 and the 2nd named ETH1. Each goes to a different switch.

If everything is set up correctly, every iSCSI port on a server can "ping" every ETHx on the active CM.

- Don't connect all server ports to the same switch

- Don't connect ETH0 and ETH1 of a CM to the same switch

- Check that your LAG is working

CM0/ETH0 -> Switch0
CM0/ETH1 -> Switch1

CM1/ETH0 -> Switch1
CM1/ETH1 -> Switch0

With this setup the vertical port failover feature can kick in and will offer 100% bandwidth from the EQL side even if a switch goes down. The feature can be confusing when setting up an environment if the cabling isn't in place when powering on the units, or if you plug in cables while the units are already up and running.

Regards,
Joerg

Tqadi
Silver

RE: hi all


Hi Joerg,

Thank you so much for this. So getting them stacked is the best solution; I will think about it later.

But on the two switches I only have two 10G module ports each: one for the server and the other for the storage.

Can I make the other connections from the servers to the switches with 1G, and from the switches to the other EqualLogic controller ports with 1G, like below?

CM0/ETH0 -> Switch0  >>> 10G
CM0/ETH1 -> Switch1  >>> 1G

CM1/ETH0 -> Switch1  >>> 10G
CM1/ETH1 -> Switch0  >>> 1G

I have two 1G NICs available for the servers, so I can use them.

The switch config with the LAG is as follows; correct me if I missed anything.

Switch-01#show run
!Current Configuration:
!System Description "PowerConnect 6224, 3.3.13.1, VxWorks 6.5"
!System Software Version 3.3.13.1
!Cut-through mode is configured as disabled
!
configure
vlan database
vlan 101
vlan routing 101 1
exit
hostname "Switch-01"
stack
member 1 1
exit
ip address none
ip routing
interface vlan 101
name "iSCSI"
routing
ip address 192.168.2.1 255.255.255.0
exit


username "admin" password fd6e7ea7e78feab099aa72ccb6555922 level 15 encrypted
!
interface ethernet 1/g1
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g2
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g3
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g4
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g5
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g6
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g7
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g8
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g9
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g10
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g11
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g12
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g13
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g14
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g15
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g16
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g17
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g18
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g19
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g20
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/g21
channel-group 1 mode auto
exit
!
interface ethernet 1/g22
channel-group 1 mode auto
exit
!
interface ethernet 1/g23
channel-group 1 mode auto
exit
!
interface ethernet 1/g24
channel-group 1 mode auto
exit
!
interface ethernet 1/xg1
switchport access vlan 101
exit
!
interface ethernet 1/xg3
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface ethernet 1/xg4
spanning-tree portfast
mtu 9216
switchport access vlan 101
exit
!
interface port-channel 1
switchport mode trunk
switchport trunk allowed vlan add 101
mtu 9216
exit
exit


RE: hi all


Just to make sure we're all on the same page: do not connect the 10GbE switch ports to either the servers or the array controllers. All server and EQL array ports will need to be connected to the GbE ports only.

Also, there is newer firmware for that switch.

Don 

Social Media and Community Professional
#IWork4Dell
Get Support on Twitter - @dellcarespro

Tqadi
Silver

RE: hi all


This is my only solution until I get a 10G switch... can I do it or not?

And what is the effect if the 1G links are only the backup links in case of failure?

Right now I am connecting:

SW0 10G to module 0 port 0
SW1 10G to module 1 port 0

Can I put the 10G of SW1 to module 0 port 1, and the 1G from each switch to module 1 ports 0 and 1?

In that case, with vertical failover I would have 1G instead of 10G, which is better than nothing?

Doable or not?


RE: hi all


Question: is this a VMware vSphere-driven solution, or something else?

For an ESXi host you have to bind NICs to the software iSCSI adapter, and that cannot handle something like a mixed 1G + 10G port setup. Also, in an ESXi software iSCSI setup you don't have an active or a standby NIC.

I suggest connecting all EQL and server (iSCSI) ports to 1G ports on both switches.

- Use the 10G ports for the ISL (LAG); see the sketch after this list

- Or use the 10G ports for connecting the servers, but only for LAN, vMotion, and FT-like traffic, NOT for storage
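If you go with the first option, moving the 10G ports into an ISL LAG looks roughly like this on each switch. This is only a sketch: the port names 1/xg1 and 1/xg2 and the free port-channel number 2 are assumptions, and VLAN 101 is taken from the config you posted; adjust everything to your hardware.

configure
interface ethernet 1/xg1
channel-group 2 mode auto
exit
interface ethernet 1/xg2
channel-group 2 mode auto
exit
interface port-channel 2
switchport mode trunk
! assumption: 101 is your iSCSI VLAN, as in your posted config
switchport trunk allowed vlan add 101
mtu 9216
exit
exit

With the ISL on the 10G ports, all server and EQL iSCSI ports stay on 1G and can still reach ETH0/ETH1 on the active CM through either switch.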

Regards,
Joerg

Tqadi
Silver

RE: hi all


It's Hyper-V on Windows Server 2012 R2.

With the 1G and 10G ports on the servers, I can create a NIC team of both ports, with the 10G active and the 1G as standby.

I want to make the 10G I have useful.

I will keep the 1G-only connections as the last option. :(

Do I need any other config on the switches, please? Is that an ISL or not?
