mquayle
5 Posts
0
November 17th, 2011 22:00
I just created a small "baby" file system on the SAS drives with checkpointing turned off, and the write speed is marginally better at 18 MB/s, but that is still pretty slow in my opinion. I don't know whether a file system's size has any bearing on performance.
dynamox
9 Legend
•
20.4K Posts
0
November 17th, 2011 22:00
Can you test with a file system that does not have any checkpoints?
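For readers following along: a hedged sketch of how you might check whether a Celerra file system has checkpoints before testing. The file system name `ufs1` is a placeholder, and the exact `fs_ckpt` flags may differ by DART release; consult the command reference for your version.

```shell
# List all file systems configured on the Celerra (name, type, servers)
nas_fs -list

# List any checkpoints (snapshots) associated with a given file system;
# "ufs1" here is a hypothetical file system name
fs_ckpt ufs1 -list -all
```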
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 18th, 2011 07:00
Just because the read speed is good doesn't mean the network is OK.
There are Ethernet/IP problems that only show up in one direction.
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 18th, 2011 07:00
What OS release are you running ?
There were significant performance improvements for NFS write latency in the latest code releases.
Also make sure you are using the uncached mount option for file systems hosting NFS datastores.
Rainer
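For reference, a hedged sketch of how the uncached option is typically applied on a Celerra Data Mover. The Data Mover name `server_2`, the file system name `nfs_fs01`, and the mount point are all assumed placeholders; the file system must be unmounted first, and exact option syntax may vary by DART release.

```shell
# Show current mounts and their options on the Data Mover
server_mount server_2

# Remount a hypothetical file system "nfs_fs01" with the uncached option,
# which is recommended for file systems backing VMware NFS datastores
server_umount server_2 /nfs_fs01
server_mount server_2 -option rw,uncached nfs_fs01 /nfs_fs01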
mquayle
5 Posts
0
November 20th, 2011 16:00
We are running Dart 6.0.40-5 and Flare 2.23.50.5.709,6.23.8 (0.13)
Yes, I am using uncached for the NFS mounts.
mquayle
5 Posts
0
November 20th, 2011 19:00
Fair point, and I will continue to investigate, but at this stage I wouldn't think the network is the most likely candidate, especially given we are only just maxing out 100 Mbps and the core fabric the Celerra sits in is 10 Gbps. I have checked all involved switch ports and the error count is 0; I have checked the TCP retransmission count on the Celerra and it is so small you may as well call it 0.
Open to any other suggestions for things to look at on the network.
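For anyone wanting to reproduce the retransmission check mentioned above, a hedged sketch using the Celerra control-station CLI. `server_2` is an assumed Data Mover name, and I am assuming `server_netstat` mirrors the usual netstat option syntax on this platform; verify against the command reference for your DART release.

```shell
# Show per-interface packet and error counters on the Data Mover
server_netstat server_2 -i

# Show TCP protocol statistics, including the retransmission counters
server_netstat server_2 -s -p tcp
```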
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 21st, 2011 02:00
You want at least 6.0.41 to get the fix below.
If you can't upgrade immediately: the next 6.0.xx maintenance release is planned in a few weeks, at the beginning of December.
Rainer
------
Impact Level
Severity 1
Symptom Desc.
User reported poor write response time with VMware ESX using Celerra through NFS with uncached file systems.
Fix
In this case, the Data Mover serialized the writes and sent the replies only after the last write was committed to storage. Code was changed to no longer serialize incoming NFS write requests.
Fixed in version
6.0.41.3
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 21st, 2011 02:00
You could run a ttcp test from the client to the Data Mover and back.
Instructions should be here in the forum.
Rainer
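For convenience, a hedged sketch of a basic ttcp throughput test between two hosts; the classic ttcp flags are shown, but the exact binary and options available on the client and on the Celerra may differ, so treat this as a starting point.

```shell
# On the receiving host, start ttcp in receive mode with its built-in sink
ttcp -r -s

# On the sending host, transmit a test pattern to the receiver
# (<receiver-ip> is a placeholder for the other host's address)
ttcp -t -s <receiver-ip>

# Then swap the roles and repeat, since Ethernet/IP problems
# can show up in only one direction
```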
Rainer_EMC
4 Operator
•
8.6K Posts
0
November 21st, 2011 02:00
BTW, 6.0.40.5 is over a year old now.
96Chevyz71
1 Rookie
•
20 Posts
0
November 23rd, 2011 06:00
Check out this post. Others have the same issue. I removed my checkpoints and it helped, but unfortunately I can still do better on a standard Windows file server.
https://community.emc.com/message/560910