129 Posts

April 10th, 2013 09:00

Thanks Don,

Separate subnets will not be an issue, as there will be dedicated 10GbE host adapters for the connection to the MD array (the EQL already has its own), although they will all have to be connected to the same 8024F switch stack. Presumably the stack does no routing and will simply switch iSCSI traffic to the correct ports on the relevant subnets?

4 Operator • 9.3K Posts

April 10th, 2013 09:00

I do want to point out that the MD SAN should be set up with different subnets (plural) from the EQL SAN.

The MD SAN will have these factory default IP addresses:

Controller 0 iSCSI port 0: 192.168.130.101

Controller 0 iSCSI port 1: 192.168.131.101

Controller 1 iSCSI port 0: 192.168.130.102

Controller 1 iSCSI port 1: 192.168.131.102

You can definitely change these, but don't put them all in one subnet, and don't use the same subnet as the EQL SAN.
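To make the subnet advice concrete, here is a minimal sketch (in Python, using the standard `ipaddress` module) that groups the factory-default MD port IPs by /24 subnet and checks that none of them collides with the EQL SAN's subnet. The EQL subnet shown is a hypothetical placeholder; substitute your own.

```python
import ipaddress

# Factory-default MD iSCSI port IPs (from the list above)
md_ports = {
    "C0 P0": "192.168.130.101",
    "C0 P1": "192.168.131.101",
    "C1 P0": "192.168.130.102",
    "C1 P1": "192.168.131.102",
}

# Hypothetical EQL SAN subnet -- replace with your actual EQL subnet
eql_subnet = ipaddress.ip_network("192.168.100.0/24")

# Group the MD ports by /24 subnet and confirm none overlaps the EQL subnet
md_subnets = set()
for name, ip in md_ports.items():
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    md_subnets.add(net)
    assert not net.overlaps(eql_subnet), f"{name} collides with the EQL SAN"

print(sorted(str(n) for n in md_subnets))
# Two distinct subnets, as recommended: 192.168.130.0/24 and 192.168.131.0/24
```

The defaults already satisfy the rule: two subnets for the MD (one per controller port pair), both separate from the EQL SAN.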

I'd also suggest using separate network ports/cards for the MD's iSCSI connectivity.

129 Posts

April 10th, 2013 12:00

The 8024F switches were implemented and configured by Dell when the EQL setup was purchased. If we buy the installation service for the MD unit, would configuring VLANs on the switches be part of the installation?

129 Posts

April 11th, 2013 00:00

Don,

Sorry, but I have just realised that there would not be separate iSCSI adapters in the host ESX servers. They may have to connect via the same physical adapters as the EQL connections. Is that an issue?

129 Posts

April 11th, 2013 11:00

Thanks, I thought it would be problematic. The problem is that there aren't enough switch ports for the additional MD array and the extra host adapters required.

I am hoping that they will buy a PS6110X with 10K 900GB disks to make up the storage deficit. Much simpler and more elegant!

129 Posts

April 11th, 2013 12:00

Don,

Over the next couple of years we will need to add several more PS6510s (or equivalent) to the setup. Using larger disks will obviously mean fewer enclosures, and consequently our switches may last a little longer. Do you know if/when 4TB disks will be available for the 6510 (or other EQL series)?

Also, when it comes to adding switch capacity, I am looking for advice on a strategy. Currently there are 40 usable ports across the two switches, with 4 ports per switch used for the LAG. But if we add a third switch in fully inter-LAGged/redundant mode, I imagine that would require 8 ports per switch for LAGs, giving a new total of only 48 usable ports across the three switches. Is this correct thinking? If so, it seems we would be better off just replacing them with two newer 8100-series 32-port switches. These look like they have dedicated uplink/stacking ports which don't take away from the "usable pool"?
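For what it's worth, the port arithmetic above can be sketched as a quick calculation, assuming 24-port 8024F switches, 4-port LAGs per inter-switch link, and a full mesh (each switch LAGged to every other):

```python
# Rough port budgeting for fully meshed LAGs between switches.
# Assumptions (from the post): 24 ports per 8024F, 4 ports per LAG link.
def usable_ports(switches, ports_per_switch=24, lag_ports_per_link=4):
    # In a full mesh, each switch carries one LAG to every other switch
    links_per_switch = switches - 1
    return switches * (ports_per_switch - links_per_switch * lag_ports_per_link)

print(usable_ports(2))  # 40 usable ports with the current two switches
print(usable_ports(3))  # 48 usable ports after adding a third switch
```

So adding a third 24-port switch buys only 8 extra usable ports, which is the point of the question.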

129 Posts

April 11th, 2013 13:00

Another question. :)

Can you mix disks of different sizes and speeds in the same PS6510 enclosure, e.g. could you have a 6510 with 24 10Krpm SAS disks (600GB/900GB) and 24 3TB 7,200rpm disks?

4 Operator • 2.4K Posts

April 11th, 2013 13:00

I am sure Dell will sell you two PC8164s if you ask ;)

The smaller 8132 supports one optional module, which can then be used for stacking the switches or for adding additional ports. If needed, you can order the 2x QSFP+ module, which gives you 2x 40GbE. With a special cable each QSFP+ port can be split into 4x 10GbE if more ports are needed; otherwise it gives you a nice stack.

The PC8164 can take 2 modules, so you get 48 10GbE + 2x 2x4 10GbE = 64 10GbE ports per switch.
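Spelled out, that per-switch total is just:

```python
# Per-switch 10GbE port count for the PC8164 as described above
base_ports = 48        # fixed 10GbE ports
modules = 2            # optional module bays
qsfp_per_module = 2    # QSFP+ ports per 2x QSFP+ module
breakout = 4           # each QSFP+ splits into 4x 10GbE with a breakout cable
total = base_ports + modules * qsfp_per_module * breakout
print(total)  # 64
```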

Regards,

Joerg

4 Operator • 2.4K Posts

April 11th, 2013 13:00

No. The hybrid arrays, which come with SSDs + HDs, are the only models that support different types of disks/media within a chassis. Since there is normally one RAID level per array, mixing different disk sizes doesn't work well except on the hybrids. Take a look at the PS6510ES.

If you have an older EQL, it is possible that Dell support sends you a replacement containing a disk with a larger capacity, but that's not typical.

Regards,

Joerg
