
November 19th, 2007 05:00

Poor CIFS performance

Hi,

I have an NS83 Integrated. A Windows 2003 host accesses 100 file systems over CIFS. Write and read performance is poor (2 MB/s).
I have Solaris hosts connected to the file systems over NFS. Read/write performance there is good (35 MB/s).

The LAN interface is hard set to 1000 full duplex on both the NS83 and the Cisco switch.

NFS and CIFS are bound to the same LAN interface.

Thank you

8.6K Posts

November 19th, 2007 07:00

Sounds strange.

I would first verify that the network is indeed good.

One simple way to do that is with FTP - a simple protocol for which you can also compare results between Windows and Unix.

If you already have CIFS configured, you should be able to just FTP to the Data Mover using user.domain as the login name and the CIFS password.
If not, just create a local test user with /nas/sbin/server_user server_2 -add -passwd

If your FTP read performance differs vastly from the write performance, you do have a network problem.

Then you can see if there is perhaps a performance difference between the Windows and the Unix file systems.
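A rough sketch of that FTP check (the IP address, file name, and test user are placeholders, and exact server_user options can vary by DART release, so treat this as illustrative):

# On the Control Station: create a local test user on server_2 (as above)
/nas/sbin/server_user server_2 -add -passwd ftptest

# On each client (Windows cmd.exe or a Unix shell), push and pull the same
# large file and note the transfer rate the FTP client reports:
ftp <datamover_interface_ip>
  ftp> bin
  ftp> put bigfile.dat     (write test)
  ftp> get bigfile.dat     (read test)

If both directions run near wire speed from both the Windows and the Unix client, the network is probably fine and the gap is in CIFS itself.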

8.6K Posts

November 20th, 2007 05:00

I suggest opening a case with support, then.

7 Posts

November 20th, 2007 05:00

The network seems to be clean.

NFS performance is about 12x better than CIFS performance.

For CIFS I have used two different Windows hosts to compare read/write performance; the results are the same.

7 Posts

November 21st, 2007 00:00

A case has been opened.

But there has been no response yet.

5.7K Posts

November 21st, 2007 05:00

Response times are related to severity. You'll get a response, don't worry.

90 Posts

November 26th, 2007 15:00

The first thing I would check is that the speed of the IP port does indeed match the port speed of the switch/router. Far too often the auto/auto setting results in the host getting a mismatched configuration, such as 100 half duplex while the port thinks it is 1000 full duplex. This can (believe it or not) leave you able to ping and send small files, yet make transfers of larger files very slow. We've standardized on using fixed speeds wherever we can - the switches usually allow this - and only use auto-negotiation on the hosts when we can't fix the speed.
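For example, on a Linux client you can check what was actually negotiated and then pin it (the interface name is an assumption; the Celerra side and the switch port are set through their own configuration):

# Show the negotiated speed/duplex, then hard-set them if they look wrong
ethtool eth0
ethtool -s eth0 speed 1000 duplex full autoneg off
# Note: 1000BASE-T copper normally still requires auto-negotiation, so on
# gigabit links you may only be able to pin the switch side or leave auto on.
# Either way, configure BOTH ends the same - pinning only one side is itself
# a classic cause of duplex mismatches.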

Forget science, make it work!

5 Practitioner • 274.2K Posts

January 7th, 2008 12:00

Hi,

CIFS and NFS performance will never be identical - NFS generally gives better performance than CIFS because of the CIFS protocol overhead. But the performance you are getting can still be improved. You might be interested in looking at the following:

1. Cat 5e to Cat 6 cable (significant performance improvement)
2. A true GigE card on the host
3. Number of hops
4. Jumbo frames on the host GigE card (see the check sketched below)
5. Hard-setting speed/duplex end to end: Celerra - switch - host
6. MTU settings on the host GigE card

Hope this helps
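For items 4 and 6, a quick way to verify that jumbo frames really pass end to end (Linux example; the interface name and the 9000-byte MTU are assumptions, and every switch in the path has to allow jumbo frames too):

# Raise the host interface MTU
ip link set dev eth0 mtu 9000

# Send a full-size packet with the don't-fragment bit set
# (8972 = 9000 bytes - 20-byte IP header - 8-byte ICMP header)
ping -M do -s 8972 <celerra_interface_ip>

If the ping times out or reports "message too long", something in the path is not passing jumbo frames.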

15 Posts

January 7th, 2008 12:00

What kind of switches do you have?
Are you using IP Telephony?

5 Practitioner • 274.2K Posts

January 7th, 2008 13:00

If you are still facing the problem, I can give you more info. Let me know your email address.

7 Posts

February 4th, 2008 08:00

Yes, IP telephony (ToIP) is implemented on the network.

8.6K Posts

February 4th, 2008 12:00

I think there was a problem with some Cisco switches/modules and the QoS normally set up for telephony.

It should affect NFS as well, though - at the very least you should see retransmits.

8 Posts

June 3rd, 2010 23:00

Hi,

We were also hitting a performance bottleneck when reading data from the Celerra. After some investigation and consultation with EMC, we changed the fastRTO setting on the Data Movers from the default value of "0" to "2", and we now get almost full throughput. Before this change we were getting very low throughput (<10 Mbps) when reading from any Celerra machine.

In our scenario, the Celerra is connected to a 3Com 8810 core switch via an aggregated 2x1 Gbps link, and clients are connected through edge switches on 100 Mbps ports. Clients now get 90 to 100 Mbps.
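For reference, the change itself was along these lines - from memory, so treat it as a sketch and verify the facility/parameter names on your DART release first (I believe fastRTO sits under the tcp facility, and a Data Mover reboot is needed for it to take effect):

# On the Control Station: show the current value and description
server_param server_2 -facility tcp -info fastRTO

# Change it (takes effect after the Data Mover is rebooted)
server_param server_2 -facility tcp -modify fastRTO -value 2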

Can you share your fastRTO current settings?

Regards,

Saleem

14 Posts

August 23rd, 2012 08:00

Just thought I would add to this. We had been experiencing very poor CIFS performance for over 2 years, and we even had EMC onsite to look at it. They dismissed the issue as a bottleneck with the disks, which I disputed. I implemented the fastRTO setting today and saw a tenfold increase in write speed.

The only downside is the requirement to reboot the Data Mover - as we don't have standby Data Movers, everything has to go offline.

Thanks

Paul

8 Posts

August 24th, 2012 02:00

Dear Paul,

I am glad you have a solution for this very prolonged issue. EMC was unable to provide a solution back in 2010, and we had to investigate and find one ourselves.

Can you share more details?

Regards,

Muhammad Saleem, Network Administrator

Information Technology | Pakistan Petroleum Limited | T +92.21.111-568-568, Ext: 4575 | M +92.333.2123332

7 Posts

October 18th, 2012 06:00

I have a similar issue. Read performance is great - 100 MB/s, sometimes higher - but write performance is terribly slow, under 10 MB/s.

I have opened several requests over the last 2 years, but no fix.

I would like to test the fastRTO setting, but it looks like you can only choose the values 0 or 1 (disable/enable). What does the "2" value do?
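If it helps anyone, the parameter listing should show the default and allowed values for your DART release - I was going to check with something like this (again assuming the tcp facility; just a sketch):

# On the Control Station: show default, current and allowed values for fastRTO
server_param server_2 -facility tcp -info fastRTO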
