dave_jaffe's Posts

Dave_T- You are asking the right questions. I have the answers for physical Windows 2003 servers (see http://delltechcenter.com/page/MPIO+to+PowerVault+MD3000i+with+Microsoft+iSCSI+Initiator), but I'm not up on the latest with ESX, so I will ask one of our VMware folks to weigh in. Dave
Acrobat- I replied to this post in the MD3000i and VMware section, where I see you are on another thread on the same topic. Dave
Hi Acrobat- Thanks for the nice words about my paper. Too bad it didn't solve your problem! It sounds like you are doing things correctly. I'll probably end up handing you off to the VMware team here at Dell, but first let me answer one of your questions about how to hook up a VM to iSCSI storage.

There are basically two ways to do it: guest-attached, in which the guest (e.g., your Windows 2003 VM) attaches directly to iSCSI, and ESX-attached, in which ESX creates a datastore on the iSCSI storage and the VMs then use that storage for their local disks. Both methods can be used concurrently, and each has its pluses and minuses: ESX-attached is simpler from the point of view of the VM creator, who doesn't need to worry about iSCSI initiators and so on, whereas with guest-attached all the backup tools you use on a physical server can be employed directly in the VM.

It sounds like you created a Windows 2003 VM with the OS on an ESX datastore (not iSCSI) and now want to add a virtual disk on MD3000i iSCSI storage. You installed the MS iSCSI initiator in the VM and tried to connect to a LUN you had already created on the MD3000i, and that is when you got the failure during disk initialization. This should have worked exactly the way it did with your physical Windows server. The fact that Windows Disk Management saw the iSCSI LUN indicates your networking is correct, so I would compare the physical connection to the virtual connection for any differences.

It then appears that you attempted to attach the iSCSI LUN to ESX, and again it sounds like you made all the right moves. I would go back through my paper and double-check that all the permissions are correct. If you can't get either approach to work, please write back and I'll ask our VMware folks to weigh in. Dave
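A quick follow-up on comparing the physical and virtual connections: one simple check is whether the VM can actually reach the MD3000i iSCSI portals on TCP port 3260. Here is a minimal Python sketch of that check; the controller IP addresses are placeholders, so substitute your own. Run it on the physical server and inside the VM and compare the results.

```python
# Minimal reachability check for the MD3000i iSCSI portals (TCP 3260).
# The controller addresses below are placeholders -- substitute your own.
import socket

PORTALS = ["192.168.130.101", "192.168.131.101"]  # example MD3000i controller IPs
ISCSI_PORT = 3260  # standard iSCSI portal port

for ip in PORTALS:
    try:
        # Open, then immediately close, a TCP connection to the portal.
        with socket.create_connection((ip, ISCSI_PORT), timeout=5):
            print(f"{ip}:{ISCSI_PORT} reachable")
    except OSError as err:
        print(f"{ip}:{ISCSI_PORT} NOT reachable ({err})")
```

If both portals come back reachable from the VM, networking probably isn't the culprit, and I would focus on the host mappings and permissions on the MD3000i.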
Here's the word from our technical guys: we soon plan to support the MD3000i with RHEL 5.1. Red Hat has stated that RHEL 5 clustering will work with Xen; however, since Dell has not tested RHEL 5 clustering with Xen and the MD3000i, we can't confirm that the whole combination works together.
MarSan, VMware announced ESX 3.5 this week. It supports the MD3000i as well as our new PowerEdge 1950 III, 2950 III, and 2900 III servers.
I've asked some of our Xen experts to respond to your question. They are out this week but you ought to see a reply here in a few days. Thanks, Dave
In direct-attached mode, the MD3000 and MD3000i support only two clustered servers (four servers may be attached, but not clustered), so to have two 2-node clusters you will need the third configuration you mentioned, the MD3000i in IP mode. For best isolation, you will want to create several VLANs on your gigabit switches for the data, interconnect, and management subnets. Dave
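As a follow-up on the VLAN layout: here is one illustrative way to sketch the three subnets before touching the switches. The VLAN IDs and address ranges below are assumptions, so adjust them to whatever your environment uses.

```python
# Illustrative VLAN/subnet plan for the data, interconnect, and management
# networks. The VLAN IDs and address ranges are assumptions -- adjust them
# to your own environment.
from ipaddress import ip_network

vlan_plan = {
    "iscsi data":   {"vlan": 10, "subnet": ip_network("192.168.130.0/24")},
    "interconnect": {"vlan": 20, "subnet": ip_network("192.168.131.0/24")},
    "management":   {"vlan": 30, "subnet": ip_network("192.168.132.0/24")},
}

for name, cfg in vlan_plan.items():
    net = cfg["subnet"]
    print(f"VLAN {cfg['vlan']:>2} ({name}): {net}, {net.num_addresses - 2} usable hosts")
```

Keeping the iSCSI data traffic on its own VLAN is what provides the isolation mentioned above.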
Irish Dude- Glad I could help. The storage savings of RAID 5 depends on the number of disks in each RAID set, and hence on the ratio of data drives to parity drives. For example, with a 5-disk RAID 5 set you have 4 data drives and 1 parity drive, giving 80% data utilization, whereas with a 9-disk RAID 5 set you would have 89%. (The parity bits are actually spread across all the disks, but the data utilization calculation is accurate.) RAID 1/0 is 50% utilization.

So let's say you needed 40 data drives for your Exchange database. With RAID 1/0 this would require 80 disks (6 disk enclosures). With 5-disk RAID 5 you would need 50 drives total (4 enclosures), and with 9-disk RAID 5 you would need 45 disks (which would be 3 enclosures, but you wouldn't have any hot spares).

Two MD1000s can be daisy-chained off the MDxxxx that is connected to the server, whether it is an MD3000, an MD3000i, or another MD1000 connected directly to the server via a PERC5/E card. Thus 6 MD1000s can be hung off a PERC5/E (which has two ports), and you can put a second PERC5/E in the system to get 12 directly attached MD1000s. Hope this helps, Dave
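To make the arithmetic above concrete, here is a minimal Python sketch that reproduces it; it assumes the 40-data-drive example and 15-slot enclosures.

```python
# Reproduces the RAID sizing arithmetic above: total drives per layout,
# the number of 15-slot enclosures needed, and data utilization.
import math

DATA_DRIVES_NEEDED = 40      # drives' worth of usable capacity for the Exchange database
SLOTS_PER_ENCLOSURE = 15     # MD1000/MD3000/MD3000i enclosure size

def drives_for_raid5(data_drives, set_size):
    """Total drives when each RAID 5 set gives up one drive to parity."""
    sets = math.ceil(data_drives / (set_size - 1))
    return sets * set_size

layouts = {
    "RAID 1/0":        DATA_DRIVES_NEEDED * 2,                    # 50% utilization
    "RAID 5 (5-disk)": drives_for_raid5(DATA_DRIVES_NEEDED, 5),   # 80% utilization
    "RAID 5 (9-disk)": drives_for_raid5(DATA_DRIVES_NEEDED, 9),   # ~89% utilization
}

for name, total in layouts.items():
    enclosures = math.ceil(total / SLOTS_PER_ENCLOSURE)
    utilization = DATA_DRIVES_NEEDED / total
    print(f"{name}: {total} drives, {enclosures} enclosures, {utilization:.0%} utilization")
```

It prints 80 drives/6 enclosures for RAID 1/0, 50 drives/4 enclosures for 5-disk RAID 5, and 45 drives/3 enclosures for 9-disk RAID 5, matching the figures above.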
Hey MarSan, good question. I hate to give you an "official" answer since VMware is not our product, but what I can tell you is that the MD3000i will be supported by ESX 3.5 and ESX 3.5i (embedded). Please check with your VMware rep on the schedule for these releases. Hope that helps!
The MD1000, MD3000, and MD3000i all use the same 15-disk array chassis. The only difference is the controller card(s) in the rear of the chassis, which determine the character of the array and, therefore, how you connect to it.

The MD1000 is a pure JBOD (just a bunch of disks). To connect to it you need a PERC (PowerEdge RAID Controller) in the server, such as the PERC5/E. (You could connect with a SAS5/E card, but then you would not have RAID.) You can connect two MD1000s to each PERC5/E, then daisy-chain two more MD1000s off each of those. This is probably your best option for cost-effective direct-attached storage, such as you would use with Exchange CCR. And yes, you would need 6 of these for 85 drives (and still have a few slots left for hot spares).

The MD3000 contains a built-in RAID controller, so you connect it via SAS cables to a SAS5/E card in the server. Each MD3000 can be expanded with two MD1000s. The MD3000 is your best bet for attaching several servers or a cluster to shared storage. (But keep in mind that, while Exchange is clustered with CCR, the storage is not shared between the nodes.)

Finally, the MD3000i contains a built-in RAID controller/iSCSI target, so you connect to it with Ethernet cables. The MD3000i can also be expanded with two MD1000s. This is the most cost-effective SAN we offer. You could use it for your CCR nodes and for other applications at the same time. Hope this helps, Dave
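To put rough numbers on the direct-attached option: here is a small Python sketch that works out the enclosure and controller counts, assuming 15 drives per enclosure and chains of three enclosures per PERC5/E port as described above. Treat it as back-of-the-envelope math, not a configuration tool.

```python
# Rough capacity planner for direct-attached MD1000 storage, based on the
# limits described above: 15 drives per enclosure, chains of 3 enclosures
# per PERC5/E port, and two ports per card.
import math

DRIVES_NEEDED = 85
SLOTS_PER_ENCLOSURE = 15
ENCLOSURES_PER_PORT = 3       # one direct-attached MD1000 plus two daisy-chained
PORTS_PER_PERC = 2

enclosures = math.ceil(DRIVES_NEEDED / SLOTS_PER_ENCLOSURE)
controllers = math.ceil(enclosures / (ENCLOSURES_PER_PORT * PORTS_PER_PERC))
spare_slots = enclosures * SLOTS_PER_ENCLOSURE - DRIVES_NEEDED

print(f"{DRIVES_NEEDED} drives -> {enclosures} MD1000 enclosures, "
      f"{controllers} PERC5/E card(s), {spare_slots} slots free for hot spares")
```

For the 85-drive example it comes out to 6 MD1000s on a single PERC5/E, with 5 slots left over for hot spares.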
Hey Irish Guy- Great to hear from you, and thanks for the nice comments. If you do CCR, you will want to put the active and passive nodes on their own MD3000i arrays, or one on iSCSI and one on direct-attached storage. (We used a single MD3000i in the paper because that's all we had!) After you complete your research, please let us know what you are going to recommend to your management. Thanks, Dave