Hi Dave, I've read a lot of your articles recently and really enjoyed them. I am really interested in the MD3000i storage array for a possible Exchange 2007 setup in the future. I like the expansion that it offers (just like in your article: two Exchange 2007 servers in a CCR setup, database, web, etc.). My goal right now is to do the underlying research so I can present this in the near future. What is especially helpful is the price tag on these. That makes it an easier selling point, and it is a lot cheaper than, say, using our NetApp, while the performance is pretty comparable.
Great to hear from you and thanks for the nice comments. If you do a CCR you will want to put the active and passive nodes on their own MD3000i arrays. Or one on iSCSI and one direct-attached. (We used a single MD3000i in the paper because that's all we had!) After you complete your research please let us know what you are going to recommend to your management.
I will do that. I've been toying around with the Exchange 2007 advisor tool. Nice handy utility. I have entered quite a few different configs to try and get a good idea of what we would need. One thing the tool keeps advising to use for storage is the PowerVault MD1000. Suggested setup was like 85 hard drives and 6 disk array enclosures. Is that basically saying I would need 6 MD1000 devices?
Any suggestions on that? I thought the MD3000 could be a starting point, then I could add MD1000 for extra space.
The MD1000, MD3000, and MD3000i all use the same 15-disk enclosure chassis. The only difference is the controller card(s) in the rear of the chassis, which determine the character of the array and, therefore, how you connect to it.
The MD1000 is a pure JBOD (just a bunch of disks). To connect to it you need a PERC (PowerEdge RAID Controller) in the server, such as the PERC5/E. (You could connect with a SAS5/E card, but then you would not have RAID.) You can connect one MD1000 to each of the PERC5/E's two ports, then daisy-chain two more MD1000s off each of those, for a total of six per controller. This is probably your best option for cost-effective direct-attached storage, such as you would use with Exchange CCR. And yes, you would need 6 of these enclosures for 85 drives (6 x 15 = 90, which leaves a few drives over for hot spares).
The MD3000 controller card contains a built-in RAID controller so you would connect it via SAS cables to a SAS5/E card in the server. You can expand each MD3000 with two MD1000s. The MD3000 is your best bet for attaching several servers or a cluster to shared storage. (But keep in mind that, while Exchange is clustered with CCR, the storage is not shared between the nodes.)
Finally, the MD3000i contains a built-in RAID controller/iSCSI target so you connect to it with Ethernet cables. The MD3000i can also be expanded with two MD1000s. This is the most cost-effective SAN we offer. You could use this for your CCR nodes and for other applications at the same time.
Thanks Dave, that helps out a lot. So if I were to do CCR clustering for my future Exchange 2007 setup, I would need a total of 12 disk enclosures (doing some testing with the Dell Exchange 2007 advisor tool: 6 for active, 6 for standby) if I were doing RAID 1/0. If I were to go RAID 5, I would need a total of 8 disk enclosures: 4 for active storage, 4 for standby CCR storage. Is that correct?
Also, it looks like the MD1000 would be a great fit for direct-attached storage for Exchange 2007. They can all be 'stacked' together, so the scalability is nice. Is there a limit on the number of MD1000s that you can 'daisy chain' together?
Glad I could help. The storage savings of RAID 5 depend on the number of disks in each RAID set and hence the ratio of data drives to parity drives. For example, with a 5-disk RAID 5 set you have 4 data drives and 1 parity drive, so you have 80% data utilization, whereas with a 9-disk RAID 5 set you would have 89% (actually the parity bits are spread across all disks, but the data utilization calculation is accurate). RAID 1/0 is always 50% utilization. So let's say you needed 40 data drives for your Exchange database. With RAID 1/0 this would require 80 disks (6 disk enclosures). With 5-disk RAID 5 you would need 50 drives total (4 enclosures), and with 9-disk RAID 5 you would need 45 disks (which would be 3 enclosures, but you wouldn't have any hot spares).
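The arithmetic above generalizes. Here is a minimal sketch (my own, assuming 15 drives per MD1000-class enclosure and fully populated RAID sets) that reproduces those enclosure counts:

```python
import math

DRIVES_PER_ENCLOSURE = 15  # MD1000/MD3000/MD3000i chassis capacity

def drives_needed(data_drives, set_size, data_per_set):
    """Total physical drives to hold `data_drives` worth of data, using
    RAID sets of `set_size` drives with `data_per_set` data drives each."""
    sets = math.ceil(data_drives / data_per_set)
    return sets * set_size

def enclosures(total_drives):
    """Enclosures required for `total_drives`, rounding up to whole chassis."""
    return math.ceil(total_drives / DRIVES_PER_ENCLOSURE)

for label, set_size, data_per_set in [
    ("RAID 1/0", 2, 1),       # each mirrored pair: 1 data drive + 1 copy
    ("5-disk RAID 5", 5, 4),  # 4 data + 1 parity per set
    ("9-disk RAID 5", 9, 8),  # 8 data + 1 parity per set
]:
    total = drives_needed(40, set_size, data_per_set)
    print(f"{label}: {total} drives, {enclosures(total)} enclosures")
```

For 40 data drives this prints 80 drives / 6 enclosures for RAID 1/0, 50 / 4 for 5-disk RAID 5, and 45 / 3 for 9-disk RAID 5, matching the figures above.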
Two MD1000s can be daisy chained off the MDxxxx that is connected to the server, whether it be an MD3000, MD3000i, or another MD1000 directly connected to the server via a PERC5/E card. Thus 6 MD1000s can be hung off a PERC5/E (which has two ports) but you can put a second PERC5/E in the system to get 12 MD1000s directly attached.
Hi Dave, I just purchased an MD3000i loaded with 300 GB disks and an MD1000 loaded with 15 1 TB drives, along with a couple of brand-new Dell PowerEdge 2950 servers and VMware Enterprise licenses for them.

First we set up the MD3000i and an iSCSI network, and installed the iSCSI initiator on an existing Windows 2003 server. All went well: I was able to make myself a couple of LUNs on the MD3000i and mount them on one of my existing servers. Worked perfectly.

Then I installed all the VMware software on the new servers. No dramas there either. I downloaded the converter tool and took images of a couple of PCs to test things out, then installed a new virtual Windows 2003 server and an Ubuntu server. All OK.

I then created a LUN and tried to mount it on my virtual 2003 server. The server saw the disk and launched the Dynamic Disk Initialisation Wizard, but going through the options I got an error, something like 'failed due to an internal error'. I believe this is because I need to create a datastore on my ESX server and then assign disk from that datastore to my virtual servers. Is this correct?

Anyway, I downloaded your article "Dell iSCSI Storage for VMware" and read it cover to cover. Great article. I then created a LUN to assign to the ESX server and put the IP addresses of the iSCSI storage array into the Dynamic Discovery tab in the properties of my storage initiator. I followed your instructions step by step, applying the same approach to the MD3000i instead of the other storage arrays you mention in the article. But when I go to VirtualCenter - ESX Server - Config - Storage Adapters and hit rescan, I get nothing. The MD3000i doesn't automatically detect the host either. I've tried adding the iSCSI identifier manually, and it does add one, but it still doesn't let me see the disk. I've checked my network settings repeatedly, to no avail.

Any advice to help get this MD3000i kicking? Thanks.
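For readers hitting the same wall: on ESX 3.x, the usual service-console checks for a rescan that finds nothing look something like the following. This is a troubleshooting sketch, not the article's procedure; the adapter name and portal IP are placeholders (the software iSCSI HBA is often vmhba40 on ESX 3.x, so check yours in the VI client first).

```shell
# 1. Check the ESX firewall -- a closed outbound iSCSI port (3260) is a
#    common reason a rescan returns nothing on ESX 3.x.
esxcfg-firewall -q swISCSIClient   # query current state
esxcfg-firewall -e swISCSIClient   # enable if disabled

# 2. Make sure the software iSCSI initiator is enabled.
esxcfg-swiscsi -q
esxcfg-swiscsi -e

# 3. Add the array's portal as a Send Targets discovery address
#    (192.168.130.101 and vmhba40 are placeholders -- substitute your own).
vmkiscsi-tool -D -a 192.168.130.101:3260 vmhba40

# 4. Rescan the adapter.
esxcfg-rescan vmhba40
```

Two other things worth double-checking with an MD3000i: ESX needs a VMkernel port (not just the Service Console) on the iSCSI subnet, and on the array side the host's iSCSI initiator name (IQN) generally has to be placed in a host group with the LUN mapped to it before a rescan will show the disk.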