SCSI cards won't work; you need a cluster-certified SCSI RAID controller. Not all RAID controllers are cluster certified (in the PERC3 series I think only the PERC3DC is certified).
As far as the quorum disk taking 2 x 36GB drives: I'd highly recommend some sort of redundant drive setup even for your quorum, as it's a pain to see your cluster go down because of a single drive failure, even if no actual data for your applications is on there. You could consider using that storage space for other tasks as well, though there are limitations: the physical disk (or virtual disk in RAID setups) with the quorum on it won't support VSS (on W2K3), I think, even if you put the quorum on its own partition, separate from the other partition.
I took a look at the EMC AX100i and it appears to be able to do what I want using iSCSI. However, I'm not sure whether I need the 1P or 2P unit. If I cluster both servers using iSCSI with this device, will I need the 2P version, or is that only necessary when using the Fibre Channel connections?
The dual storage processor version of the AX100(i) allows for more redundancy: a failure of one SP/power supply won't take down the system, because the other SP can keep the system (the data storage device) up and running.
One thing to keep in mind with a cluster and a Dell|EMC SAN (FC-series, CX-series and AX-series) is that all (both) cluster nodes need access to the same storage processor(s). In a cluster setup you cannot plug one server into SP A and the other into SP B and expect it to work properly, so you'll need a gigabit switch for sure. The switch is needed for a single-SP setup (only one iSCSI port, so you obviously can't connect two servers to it directly) as well as for a dual-SP setup, because a LUN (virtual drive) is only visible/accessible on one SP at a time and both servers need access to it at the same time (they may not both be writing to it, but they do need to fully see it). A gigabit switch is pretty inexpensive nowadays if you don't already have one; Dell offers an 8-port 2708 gigabit switch for $99 (price as of today in the small business department).
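If you want a quick sanity check that both nodes can actually see the array through the switch, here's a rough Python sketch you could run on each node; it just tests whether the standard iSCSI port (TCP 3260) on each SP is reachable. The IP addresses are placeholders for whatever addresses you assign to the SP(s), not anything the AX100i ships with:

    # Rough reachability check for the array's iSCSI ports (TCP 3260).
    # Run on each cluster node; both nodes must reach the same SP(s)
    # through the gigabit switch. The IPs below are placeholders.
    import socket

    SP_ADDRESSES = {
        "SPA": "192.168.1.10",   # placeholder iSCSI IP of storage processor A
        "SPB": "192.168.1.11",   # placeholder iSCSI IP of SP B (2P model only)
    }
    ISCSI_PORT = 3260            # standard iSCSI target port

    for name, ip in SP_ADDRESSES.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((ip, ISCSI_PORT))
            print("%s (%s): reachable" % (name, ip))
        except (socket.timeout, socket.error):
            print("%s (%s): NOT reachable - check cabling/switch" % (name, ip))
        finally:
            s.close()

If one node can reach the SP and the other can't, the cluster will misbehave in exactly the way described above.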
The AX100(i) would allow you to create a disk pool and 'partition' it (called a virtual drive in the AX100 management software) into separate pieces, so that you can present a 1 GB quorum drive to the hosts and use the rest of the disk pool for another 'partition' as shared storage space. The hosts would see these two 'partitions' as completely separate drives.
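Just to put rough numbers on that carve-up (an illustrative sketch only; the drive count, drive size and RAID level are assumptions, and the AX100 management software works the real figures out for you):

    # Back-of-the-envelope split of an assumed disk pool into a small
    # quorum virtual drive plus a larger shared-storage virtual drive.
    DRIVES = 4                                  # assumed number of drives in the pool
    DRIVE_GB = 250                              # assumed drive size
    RAID5_USABLE_GB = (DRIVES - 1) * DRIVE_GB   # RAID 5 loses one drive's worth to parity

    QUORUM_GB = 1                               # small virtual drive for the cluster quorum
    SHARED_GB = RAID5_USABLE_GB - QUORUM_GB     # the rest as shared storage

    print("Usable pool (RAID 5): %d GB" % RAID5_USABLE_GB)
    print("Quorum virtual drive: %d GB" % QUORUM_GB)
    print("Shared virtual drive: %d GB" % SHARED_GB)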
Another thing: you mentioned the AX100i, but then asked about fibre channel connections. The AX100i is not a fibre channel device; it's an iSCSI device, which means it uses a gigabit ethernet card in the server. You can buy a specialized iSCSI card that offloads some of the work to a processor on the card, or you can use a regular (integrated) gigabit card in combination with the Microsoft iSCSI initiator software. This latter option has some limitations to keep in mind (check the Microsoft KB articles about iSCSI and shares, and about iSCSI and dynamic disks).
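If you go the regular-NIC-plus-Microsoft-initiator route, it's worth confirming that the NIC you dedicate to iSCSI really is up and linked at gigabit speed. A small sketch using the third-party psutil library (the interface name is a placeholder, not something tied to the AX100i):

    # Check that the NIC carrying iSCSI traffic is up and linked at 1000 Mbps.
    # Requires the third-party psutil package; the interface name is a placeholder.
    import psutil

    ISCSI_NIC = "Local Area Connection 2"   # placeholder name of the iSCSI-facing NIC

    stats = psutil.net_if_stats().get(ISCSI_NIC)
    if stats is None:
        print("Interface %r not found" % ISCSI_NIC)
    elif not stats.isup:
        print("%s is down" % ISCSI_NIC)
    elif stats.speed < 1000:
        print("%s is only linked at %d Mbps - iSCSI will crawl" % (ISCSI_NIC, stats.speed))
    else:
        print("%s is up at %d Mbps" % (ISCSI_NIC, stats.speed))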
If you want to look into a fibre channel device, look at the AX100 (not the 'i' version).
Thanks. That's the best response I've gotten so far and I think it answers my basic question about using the AX100i as a quorum drive for two clustered servers.
My suggestion is to go for RAID 1, because the quorum is very important for Windows 2003 clustering; if it fails, your whole cluster configuration goes for a toss.