Unsolved
May 3rd, 2011 05:00

MD3000i iscsi sessions

Hi All,

I was wondering what the advantage of multiple iSCSI sessions is.

We have a two-node failover print cluster that uses an MD3000i as part of its shared storage. As it is a cluster, only one node has access to the two LUNs we created at any one time.

I have noticed that one node has 4 iSCSI sessions to the LUNs while the other has only 1.

We had a problem with the drives not failing over properly, and I was wondering if it was iSCSI session related. Would a different number of sessions on each node cause any problems?

Any help would be appreciated

Thanks

4 Operator • 9.3K Posts

May 3rd, 2011 10:00

At the very least, each server should have one iSCSI session to controller 0 and one to controller 1. Optimally, each server would have one iSCSI session to each of the iSCSI ports on the MD3000i (so 4 in total). You would also be using 2 subnets (that aren't used elsewhere on your LAN) for the iSCSI ports.
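If it helps, here's a sketch of doing that from an elevated command prompt with the Microsoft iSCSI initiator's iscsicli tool (the portal IPs are the MD3000i defaults; substitute your own):

    rem Register each of the four MD3000i iSCSI data ports as a target portal
    iscsicli QAddTargetPortal 192.168.130.101
    iscsicli QAddTargetPortal 192.168.131.101
    iscsicli QAddTargetPortal 192.168.130.102
    iscsicli QAddTargetPortal 192.168.131.102

    rem Confirm the array's target IQN was discovered
    iscsicli ListTargets

Then log in to the target once per portal (via the initiator GUI or iscsicli LoginTarget) with multi-path enabled, so each node ends up with one session per port.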

6 Posts

May 9th, 2011 08:00

Thanks for the reply,

We are using two subnets, 192.168.130.* and 192.168.131.*, the defaults for the MD3000i. I've had a closer look at the iSCSI initiator properties and it's definitely a bit strange.

The first node has 5 portal groups listed: 0, 1, 2, 3 and 4. Group 0 points to addresses 192.168.130.101 and 192.168.131.101. This is what I would expect.

However, groups 1 to 4 all point to addresses 192.168.130.102 and 192.168.131.102. Is this correct? It doesn't look right to me.

The second node has exactly the same portal configuration.

The first node has one session that points to 192.168.130.102, while the second node has 4 sessions that all point to the same portal, 192.168.130.102.

We are using multipath, so I'm not sure if this is having some effect.
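If it helps with diagnosis, the same session-to-portal bindings can also be dumped from a command prompt with the initiator's standard CLI:

    rem List active sessions and the portal each one is bound to
    iscsicli SessionList

    rem List the logins that will be restored after a reboot
    iscsicli ListPersistentTargets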

Any help appreciated

Thanks

847 Posts

May 19th, 2011 09:00

I think your config should have worked, but it's not optimal at all.

All your hosts should be seeing all the ports at the same time. I suspect the network switches are set up wrong. Your config looks like what you would do if you had no network switches and the hosts were directly connected to the unit. We ran the way I think you are running for a long time with two VMware hosts, and failovers worked for us.

6 Posts

May 20th, 2011 04:00

Decided to reconfigure our iSCSI connections. I manually created 4 sessions on each node, one session per IP address, so I now have 2 active connections and 2 standby connections on each node.

This looks much better, and I'm planning on some failover testing next week.
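For anyone checking the same thing, the active/standby split per disk can also be verified from the command line with the mpclaim tool that comes with the Windows MPIO feature (2008 and later; the disk number below is just an example):

    rem List all MPIO-managed disks
    mpclaim -s -d

    rem Show the path states (Active/Optimized vs. Standby) for disk 0
    mpclaim -s -d 0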

6 Posts

August 1st, 2011 16:00

Hi again,

Is there any way of having 4 active connections instead of 2 active and 2 passive connections from each cluster node, or is this the correct setup for a duplex MD3000i? I have 2 LUNs, 1 on each controller.

4 Operator • 9.3K Posts

August 2nd, 2011 09:00

This array is an active/passive design, which means that a virtual disk is only usable (readable and writable) by a single controller at a time. As each controller only has 2 iSCSI ports, you'll only have 2 active connections to any given virtual disk at a time. The connections to the other controller (for that virtual disk) are standby, used if the owning controller fails or you re-assign the virtual disk to the other controller.
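If you want to check (or rebalance) which controller owns each virtual disk, the Modular Disk Storage Manager also has a command line; a sketch, assuming SMcli is installed and using an example management IP:

    rem Dump the array profile; the virtual disk section lists the owning controller
    SMcli 192.168.128.101 -c "show storageArray profile;"

The same ownership change can also be made from the MDSM GUI by moving the virtual disk to the other controller.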

847 Posts

August 2nd, 2011 13:00

Dang, you think you're maxing out the 2 NICs performance-wise? That's heavy for sure. I'd bet you're saturating the controllers before that actually happens, assuming a decent number of SAS drive spindles.

6 Posts

August 4th, 2011 01:00

That's what I thought. It's just that I'm running a 2-node print cluster using the MD3000i for storing the spool files, and the printing performance is terrible; it takes forever to spool print jobs. I just wondered if there was any way I could eliminate the MD3000i as the problem. Is it OK to defrag a virtual disk via the Windows defrag tool? Windows says the disk is 28% fragmented.

Thanks

847 Posts

August 4th, 2011 08:00

So, if you simply copy a file to one of the cluster nodes, how fast is it?

Print clusters generally don't need very much throughput to perform decently.

6 Posts

August 8th, 2011 01:00

Copying files directly seems OK, but spooling performance is slow. I have another standalone print server which spools jobs so quickly you can barely see them hitting the print queue before they vanish, while the cluster jobs take a few seconds to spool and print. The only difference is where the queue is held: the cluster holds its queues on the MD3000i, while the standalone server holds its queue on a second internal drive. Are there any tools for testing SAN performance? Would it be OK to defrag the drive?

847 Posts

August 8th, 2011 08:00

As a test, can you P2V the standalone print server?

This seems application-specific here.

I'll bet it runs fine.

4 Operator • 9.3K Posts

August 8th, 2011 08:00

You can defrag the drive just fine.

To test performance, look for any disk IO testing software. There's MB/s, which people often (blindly) look at, but there's also the often more important IO/s (IOPS).
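As a sketch with one era-appropriate tool (Microsoft's sqlio; Iometer works too), assuming S: is the clustered spool disk and the file name is just an example; small random IO is closer to spool-file behavior than a big sequential copy:

    rem Create a 1 GB test file on the cluster disk (size is in bytes)
    fsutil file createnew S:\iotest.dat 1073741824

    rem 30 seconds of random 8 KB reads, 8 outstanding IOs, with latency stats
    sqlio -kR -s30 -frandom -o8 -b8 -LS S:\iotest.dat

The output reports both IOs/sec and MBs/sec for the run.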

847 Posts

August 8th, 2011 08:00

Oops... sorry, I always assume a virtualized environment. I realize that's probably not the case here.

6 Posts

August 9th, 2011 08:00

Is anyone else using the MD3000i with a print server cluster?
