There are about 100 users who will be working directly on the file server, and there are about 25 Lotus Domino mailboxes. Please advise which RAID type is best for the above setup. If I'm not mistaken, RAID 5 offers fast reads but slow writes. Which RAID type is best for the VMDK files containing the operating systems, whether Windows, Linux, Solaris, ...? Regarding SANs: if the LAN operates at 1 Gbps, won't the storage network be a bottleneck? Should I not have a storage network operating at 10 Gbps before going for a SAN? For a start, I won't go for a SAN but will most likely use internal storage; that's why I think I'll go for the PE 2900. Thanks a lot for all input to this thread.
Dev Mgr
4 Operator
9.3K Posts
February 11th, 2009 12:00
Answering those I can in no particular order:
10. Yes; this isn't possible. No computer that uses a BIOS can boot from a (virtual) disk larger than 2TB (2048GB, or 2,097,152MB). VMware ESX/ESXi also cannot use any (virtual) disk over 2TB, and the VMFS filesystem limits individual files to 2TB as well. You can extend the filesystem over multiple (2TB) (virtual) disks to a maximum of 32 components (the original disk plus 31 extents) for a maximum of 64TB, but the filesystem still limits a single file to no more than 2TB. Using extents isn't the safest choice: if one of the components fails for whatever reason (e.g. a double drive failure), all the data is lost, not just the data on that component.
9. Not with ESXi, and while it may be possible with ESX (SSH into the management Linux, or use it from the console), I don't think it's supported. You may want to check VMware's site for documentation on backup practices. I don't know if this link will get you what I'm looking at, but check it. Check out vRanger by Vizioncore as one possible backup solution, or you can back up from within each VM itself.
8. ESXi is free to use, but other than VMware's online resources, there's no support for the OS unless you buy a support contract. ESX you have to pay for, but when you buy it, you're also buying a support contract for the product at the same time. As far as capabilities/limitations, both options are identical in that regard.
5. Check with your sales rep. I can't imagine that the 2900 can't handle 2 dual-port NICs; the website just may not offer it for some reason. You could always go with a quad-port NIC instead.
1, 2, 7. A VMDK 'file' (to the VM it's a hard drive) should only be given to a single virtual machine (unless you're creating a virtual cluster, e.g. a Windows 2003 Enterprise Edition cluster). RAID and disk type/speed requirements will depend on what kind of VMs, and how many, you intend to run there. When not virtualizing, you're only putting one task on the disks: running one OS and its apps. When virtualizing, the disks get hit a lot harder, since several VMs are running on them. I'd suggest looking into faster drives and multiple RAID sets so you can separate the I/O; otherwise a 4- or 5-disk RAID 5 of 7200rpm SATA drives will run into performance bottlenecks, because you're putting too much I/O on slow drives and can't split it between RAID sets.
3 + 6. The hardware could expand the RAID volume; however, ESX can't use virtual disks over 2TB, and when you expand the underlying disk, the VMFS-formatted partition doesn't expand with it, and there's no way to expand it that I know of. For future expansion, you'd buy enough drives to create a completely new RAID set (RAID 1, 5, 6 or 10), make sure it's under 2TB, and then let VMware format it (VMFS).
4. Yes, this is possible. With ESXi you could even boot from the internal USB port/USB memory stick and then use the RAID 1 and RAID 5 for VM storage.
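A quick sanity check on the numbers in answers 10 and 3 + 6. This is a minimal sketch assuming the ESX 3.x-era limits quoted above (2TB per extent/virtual disk/file, up to 32 extents per volume); the RAID capacity helper is a generic rule of thumb, not anything VMware-specific:

```python
# Illustrative arithmetic only; the constants mirror the limits quoted above.
TB = 1024  # work in GB for readability

MAX_EXTENT_GB = 2 * TB                       # 2TB cap per (virtual) disk/extent
MAX_EXTENTS = 32                             # original disk + 31 extents
MAX_VOLUME_GB = MAX_EXTENT_GB * MAX_EXTENTS  # 64TB total per VMFS volume
MAX_FILE_GB = 2 * TB                         # a single VMDK still tops out at 2TB

def raid_usable_gb(level: str, disks: int, disk_gb: int) -> int:
    """Usable capacity for common RAID levels (rule of thumb, no hot spare)."""
    if level == "1":
        return disk_gb                # mirrored pair: capacity of one disk
    if level == "5":
        return (disks - 1) * disk_gb  # one disk's worth of parity
    if level == "6":
        return (disks - 2) * disk_gb  # two disks' worth of parity
    if level == "10":
        return disks // 2 * disk_gb   # half the disks hold mirrors
    raise ValueError(f"unknown RAID level: {level}")

# e.g. a 5 x 500GB RAID 5 stays safely under the 2TB cap:
print(raid_usable_gb("5", 5, 500))  # 2000 GB usable
print(MAX_VOLUME_GB)                # 65536 GB (64TB) across 32 extents
```

So when buying drives for a new RAID set, size it so the usable capacity lands under 2TB, as the answer above suggests.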
blueice2
3 Posts
February 14th, 2009 10:00
Hi Dev,
Thanks a lot for your detailed reply. Being completely new to virtualisation, I have a few more queries:
1,2,7: Are you suggesting 1 RAID set per VM? The configuration I was thinking to use is a RAID 1 of 2 SAS disks containing the VMDK files and a RAID 5 of 4-5 SATA disks to be used to store files for the file server VM and Lotus Domino mailboxes for the mail server VM.
3,6: Will an NTFS-formatted volume be visible to a VM?
4: How does Windows, from inside the VM, see the RAID 1 and the RAID 5 volumes in Disk Management? Does Windows see the RAID controller? How will ESX/ESXi present the RAIDs to Windows? If I use an MD1000 instead of the internal RAID 5 volume, again, how will Windows, from inside the VM, see the RAID 1 and the RAID 5 volumes in Disk Management?
5: I want to use 2 dual-port network adapters instead of 1 quad-port in case of failure of 1 of the adapters.
8. The Dell website shows some difference in capabilities between ESX and ESXi. See this page. Is VMware moving towards ESXi?
9. Regarding backup, I am thinking of enabling VSS snapshots on the file server for the production files and on the mail server for the Lotus Domino mailboxes, and having a full + incremental backup system for everything (i.e., including the VMDK files) on FreeNAS. Is this feasible? We do not intend to use tapes, only disk backup. Do you think this is sufficient?
10. I asked about this because I thought that all the production files on the file server would be inside a VMDK file. Will it really be so? Or will they be, e.g., individual files on an NTFS volume accessible outside the VM?
11. I have been asked to design a low-cost and a high-cost solution for our future virtualisation system. I suppose the design I described in my post is a low-cost solution. How different can a high-cost solution be?
12. Is it worth going for an iSCSI solution with SAS disks operating at 3 Gbps using a 1 Gbps Ethernet storage network with clients on the LAN using a 100 Mbps network? Will the storage network not be a bottleneck?
Thanks again for your answers.
Bala Chandrasek
57 Posts
February 15th, 2009 18:00
You don't need 1 RAID set per VM unless a specific VM is I/O intensive. So you can have one RAID volume for all non-I/O-intensive VMs and separate RAID volumes for the I/O-intensive ones.
Specific to your configuration, it is probably better to go with SAS drives, instead of SATA.
No. The volumes will be formatted as VMFS volumes (unless you are doing RDM).
VMs do not see the RAID controller. ESX manages the RAID controller. VMs will see a generic virtual controller. ESX will attach virtual disks (which are files in VMFS volumes) to this virtual controller.
Same as above. Windows will not see the RAID volume. You will not be able to do Disk Management of the MD1000 from the VMs. You can install OpenManage on ESX and using OMSS you can manage the RAID. ESXi lets you monitor it. (more on this later)
Probably eventually, but you should not be worried: it will not happen in the short term, and it is really easy to migrate between ESX and ESXi. The main difference between ESX and ESXi is management (apart from ESXi being free and not needing local drives). Check this document (Appendix): http://support.dell.com/support/edocs/software/eslvmwre/Systems_Mngmnt/ESXi_MNGMNT/PDF/MSMPA03.pdf
More documents are here: http://support.dell.com/support/edocs/software/eslvmwre/
A little old, but will give you some idea: http://support.dell.com/support/edocs/software/eslvmwre/AdditionalDocs/backuprstre/46636A00MR.pdf
Yes, they will all be in a VMDK file. However, one workaround is to use iSCSI (MD3000i). You can mount iSCSI volumes from inside a Windows VM using the Microsoft iSCSI initiator; then you are not bound by VMFS, but by Microsoft's iSCSI support.
The biggest difference I see is having a SAN in the high-cost solution. There you will be able to do things like VMotion and RDM, and even mount NTFS volumes directly to VMs.
Yes (depending on your future needs). One of the biggest advantages of virtualization is flexibility. You can start with a low-cost solution (local drives) and very easily scale to iSCSI. It is just as simple as moving your VMs from local storage to iSCSI.
You will need a 1 Gbps connection from ESX/ESXi to the iSCSI storage.
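To put the link speeds from question 12 into perspective, here is the raw line-rate arithmetic (this ignores TCP/IP and iSCSI protocol overhead, which typically costs another 10-20% in practice):

```python
# Raw line-rate conversion: megabits per second to megabytes per second.
def mb_per_s(link_mbps: float) -> float:
    return link_mbps / 8  # 8 bits per byte

print(mb_per_s(1000))  # 1 Gbps iSCSI network -> 125.0 MB/s
print(mb_per_s(3000))  # 3 Gbps SAS link      -> 375.0 MB/s
print(mb_per_s(100))   # 100 Mbps client LAN  -> 12.5 MB/s
```

Even at line rate, the 100 Mbps clients can only ask for about a tenth of what the 1 Gbps storage network can deliver, so the storage link is unlikely to be the first bottleneck in that setup.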
-Bala
You can mark these answers as useful, so it can help others.
JOHNADCO
2 Intern
847 Posts
February 23rd, 2009 14:00
Consider beefing up your hosts as much as possible. If you run ESX 3.5i, it is very nice to be able to run all VMs on one host and use the other for full redundancy.
I think most people would be surprised at how well ESX 3.5i can run several heavy VMs along with many, many lighter VMs.
With ESX 3.5i, I'd suggest making sure you put as many spindles as you can into the disk group. This somewhat goes against published "best tuning practices", but in our testing, it was way better performance-wise to have lots of spindles in one disk group and all the production VMs running from that group. Performance always decreased drastically across the board as we reduced the number of spindles involved.
blueice2
3 Posts
February 23rd, 2009 17:00
JOHNADCO
2 Intern
847 Posts
February 24th, 2009 11:00
I would like to add.....
A SAN has made a big difference in my IT life; I'll never go back to direct-attached with products like the MD3000i out there at the price points you can get them for.
Aside from performance, snapshot and VD copy is too powerful a redundancy tool to pass up; I mean, I am literally sleeping better at night these days.
JOHNADCO
2 Intern
847 Posts
February 24th, 2009 11:00
The RAID type is a tough call...
We have around 100 users. We run Exchange 2007 64-bit, an SQL server with a 25GB DB along with some 20 million flat files as an imaging system (we are paperless), other SQL servers, other flat-file servers; about 12 VMs in all. We use RAID 5 with 14 disks in the disk group, and we do use an MD3000i SAN.
Performance went up for us compared to pre-virtualization with direct-attached storage. Some operations were as much as 10 times faster.
I think the real problem with RAID 5 is the speed impact when a drive fails and the array rebuilds onto the hot spare. (We tested it and it seemed OK, but it has not come up yet in actual production.)
I realize the way I do it is in contrast to the "best tuning practices", but as I stated earlier, any other way we set it up just wasn't as fast. At this point I say spindle count rules! I'm probably just moving the bottleneck to the 1Gb Ethernet iSCSI, but it's fast, very fast, so that must not be a bad place to have your bottleneck, is all I can surmise. :)
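The spindle-count argument can be put into rough numbers with the usual RAID write-penalty model. This is a back-of-the-envelope sketch; the per-disk IOPS figures and write penalties below are common rules of thumb, not measurements from any particular array:

```python
# Rule-of-thumb IOPS per spindle and back-end I/Os per front-end write.
PER_DISK_IOPS = {"7200rpm_sata": 80, "10k_sas": 130, "15k_sas": 180}
WRITE_PENALTY = {"1": 2, "5": 4, "6": 6, "10": 2}

def effective_iops(disks: int, disk_type: str, raid_level: str,
                   read_fraction: float) -> float:
    """Approximate front-end IOPS a RAID group can sustain."""
    raw = disks * PER_DISK_IOPS[disk_type]
    penalty = WRITE_PENALTY[raid_level]
    # Reads cost 1 back-end I/O each; writes cost `penalty` back-end I/Os.
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# A 14-disk 10k SAS RAID 5 group at a 70/30 read/write mix:
print(round(effective_iops(14, "10k_sas", "5", 0.7)))      # ~958 IOPS
# vs. a 4-disk 7200rpm SATA RAID 5 at the same mix:
print(round(effective_iops(4, "7200rpm_sata", "5", 0.7)))  # ~168 IOPS
```

It also illustrates why RAID 5 hurts on write-heavy workloads: each front-end write costs four back-end I/Os, so write-heavy VMs benefit most from more spindles (or RAID 10).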
Redbaran
2 Posts
February 25th, 2009 21:00
Just some feedback on what I have seen.
I am using a PE1850 with ESXi. We just did a speed comparison between having the VMDK on the internal RAID 1 vs. on iSCSI RAID 1. Even using the internal Intel NIC for the iSCSI connection at 100 Mbps, our W2003 server on iSCSI booted twice as fast as the one stored on internal storage. From power-on to login screen over iSCSI is 12 seconds. Still waiting to see what happens when we change this to fiber.
One other thing I can say is RAM, and lots of it, on your host. In all the testing I did, RAM was a very big part of performance.
Second is using 1 virtual processor instead of multiple. I have seen a performance decrease on any VM that has more than one vCPU.