Unsolved
25 Posts
0
2817
VM I/O Size to CX I/O Size
I’m running into an issue regarding the I/O size between a VM and a CLARiiON (Unified Storage, of course). I've tested with Windows 2003 R2 and RHEL 5.5 guest OSes.
The summary of the issue: I/Os leave the client application as 512KB, but the I/Os arriving at the LUN are smaller.
- On a Windows VM, sending 512KB I/Os, I see 64KB I/Os at the LUN.
- On a Linux VM, sending 512KB I/Os, I see 256KB I/Os at the LUN.
These I/O sizes apply to sequential and random reads/writes. I’m not using a file system for these tests, so I don’t think I need to worry about alignment. I'm using Iometer as the load generator, which aligns at sector boundaries by default.
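In case it helps to reproduce this without Iometer, here's a rough sketch of the equivalent test from the Linux guest: a single 512KB read issued directly against the raw device with O_DIRECT. The /dev/sdb path is just a placeholder, not my actual device.

```python
# A minimal stand-in for the Iometer workload on the Linux guest: one 512KB
# read straight to the raw device with O_DIRECT, so no file system or page
# cache sits between the test and the LUN. Run as root.
# /dev/sdb is a placeholder -- point it at your own test RDM LUN.
import mmap
import os

DEV = "/dev/sdb"          # hypothetical raw RDM device
IO_SIZE = 512 * 1024      # 512KB, same transfer size as the Iometer run

# O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, IO_SIZE)

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
try:
    nread = os.readv(fd, [buf])   # a single 512KB request from offset 0
    print(f"read {nread} bytes in one request")
finally:
    os.close(fd)
```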
Environment details
- ESXi 4.0.0, build 261974. I believe this is the latest patch bundle for ESXi 4.0.
- One Guest OS is Windows 2003 R2 SP2 32-bit, another is RHEL 5.5 64-bit
- Storage is an NS-480, FLARE 30
- Server is Dell R805
- FC SAN attached (Brocade DS_300B) using Emulex LPe12000 HBAs.
- Single RAID-5 FC LUN provisioned using RDM.
- Raw device testing, no file systems.
I feel like I'm overlooking something simple. The environment is pretty much all defaults.
The ESX variable Disk.DiskMaxIOSize is set to the default of 32767 KB, so my understanding is that I should be able to send any I/O size smaller than 32MB.
I checked another ESX environment and saw the same thing.
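One thing I haven't ruled out yet is the guest block layer splitting the request before ESX ever sees it. Here's a rough sketch of the check I have in mind for the Linux guest; the "sdb" device name is a placeholder.

```python
# From inside the Linux guest: if max_sectors_kb (the block layer's current
# per-request cap) is smaller than 512, the guest kernel will split a 512KB
# request before it ever reaches the vSCSI layer or ESX. "sdb" is a placeholder.
DEV = "sdb"

def queue_limit(name):
    with open(f"/sys/block/{DEV}/queue/{name}") as f:
        return int(f.read().strip())

current_kb = queue_limit("max_sectors_kb")      # active per-request limit
hardware_kb = queue_limit("max_hw_sectors_kb")  # ceiling set by the driver/HBA

print(f"current limit : {current_kb} KB")
print(f"hardware limit: {hardware_kb} KB")
if current_kb < 512:
    print("512KB requests will be split by the guest block layer")
```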
Any help is appreciated.
Thanks,
-Scott
clintonskitson
116 Posts
0
January 18th, 2011 21:00
Scott,
I am going to take a stab at it here, and if not we can forward this on to the CX propeller heads =) I think narrowing this down to dig into the writes would be the first thing I would look at. Can you check something simple like "full stripe writes"? I am assuming you are using Navisphere Analyzer to view the performance information at the LUN level? A "full stripe write" is something the array does naturally when it can cache enough sequential blocks, or when it receives an aligned 256KB write destined for the LUN; it minimizes the RAID penalty by making a one-time parity write instead of multiple reads and writes. So in the Linux example, with an aligned 512KB write it would make sense to me to see it show up at the LUN as 2x 256KB I/Os.

Another thing to look at would be to temporarily disable write caching on the LUN to get a truer picture of the native writes going to the backend, and report back with that information. And one last thing to try under Windows would be to create a partition, force the alignment to 64, and check the results that way.
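For what it's worth, here is the quick math behind that, assuming a 4+1 RAID-5 group and the usual 64KB stripe element (plug in your actual geometry):

```python
# Back-of-the-envelope for the full-stripe-write idea above. The 64KB element
# size is the usual CLARiiON default; the 4+1 disk count is an assumption --
# plug in the real geometry of the RAID group.
element_kb = 64        # stripe element size per data disk
data_disks = 4         # 4+1 RAID-5: four data positions plus one parity

full_stripe_kb = element_kb * data_disks
print(f"full stripe = {full_stripe_kb} KB")     # 256 KB with these numbers

host_io_kb = 512
print(f"{host_io_kb}KB write = {host_io_kb // full_stripe_kb} x {full_stripe_kb}KB full-stripe writes")
```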
halls1
25 Posts
0
January 19th, 2011 11:00
The behavior you are describing is related to SP cache coalescing I/Os, but in the opposite direction: combining I/Os, not breaking them apart.
Regarding the statement: "So in the Linux example, with an aligned 512KB write it would make sense to me to see it show up at the LUN as 2x 256KB I/Os."
SP cache does not do this. The I/O size seen at the LUN would still be 512KB unless something upstream (not the array) breaks the I/O apart.
Neither alignment nor LUN cache dictates the host's write request size.
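If you want to pin down where the split is happening, one rough sketch (Linux guest, device name is a placeholder) is to sample /proc/diskstats during a steady Iometer run and compute the average request size the guest itself is issuing, then compare that with what Navisphere Analyzer shows at the LUN:

```python
# Sample /proc/diskstats twice during a steady Iometer run and compute the
# average request size the guest block layer is actually issuing. If this
# already shows ~256KB, the split happens in the guest, not in ESX or the
# array. "sdb" is a placeholder device name.
import time

DEV = "sdb"
SECTOR = 512  # /proc/diskstats counts 512-byte sectors

def sample():
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == DEV:
                return (int(fields[3]), int(fields[5]),   # reads, read sectors
                        int(fields[7]), int(fields[9]))   # writes, write sectors
    raise SystemExit(f"device {DEV} not found in /proc/diskstats")

r0, rs0, w0, ws0 = sample()
time.sleep(10)
r1, rs1, w1, ws1 = sample()

reads, read_kb = r1 - r0, (rs1 - rs0) * SECTOR / 1024
writes, write_kb = w1 - w0, (ws1 - ws0) * SECTOR / 1024

if reads:
    print(f"avg read size : {read_kb / reads:.0f} KB")
if writes:
    print(f"avg write size: {write_kb / writes:.0f} KB")
```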