
April 20th, 2010 09:00

Celerra 32K nfs blocks

I apologize if this question has been asked and answered here before; I'm having trouble searching the forum for information.

Can someone tell me how to get information about the Celerra NFS block size? We need to know the current size and whether it can be increased. Thank you!

We're running DART 5.6.44-4.

366 Posts

April 20th, 2010 10:00

One more reference:

> White Paper: Optimizing EMC Celerra IP Storage on Oracle 11g Direct NFS Applied Technology

http://powerlink.emc.com/km/live1//en_US/Offering_Technical/White_Paper/h6180-optimizing-celerra-ip-oracle-11g-direct-nfs-wp.pdf

366 Posts

April 20th, 2010 09:00

Hi,

The Celerra block size is fixed at 8 KB and cannot be changed. This is the block size the Data Movers use to access the storage system.

For the NFS clients, however, you specify the I/O size with the mount options (rsize and wsize). This is the block size your client will use to access the Celerra Data Movers.
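For example, on a Linux client you could set the I/O size explicitly at mount time. This is just a sketch: the server name celerra_dm2 and the paths are placeholders for your environment, and the other options shown are common choices, not requirements.

```shell
# Mount a Celerra NFS export with explicit I/O sizes (values in bytes).
# "celerra_dm2", "/oraclefs", and "/u02/oradata" are placeholders.
mount -t nfs -o rw,hard,intr,rsize=32768,wsize=32768 \
    celerra_dm2:/oraclefs /u02/oradata
```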

I hope this answers your question.

Regards,

Gustavo Barreto.

62 Posts

April 20th, 2010 09:00

Good information. Thank you very much!

What we're after is a way to increase block size to see if we can improve our Oracle performance on Celerra nfs mounted filesystems. I understand that I can ask for larger block sizes for read/write on the host ... but I'm not sure if that's good enough if ... the NAS server is pushing 8K blocks. Has this been discussed on this forum before?

366 Posts

April 20th, 2010 10:00

Hi,

I don't think the block size used by the Data Movers to access the storage system will cause performance issues for the application.

We have some references on Powerlink regarding Oracle with the Celerra/Unified platform.

These are some examples:

> White Paper: EMC Solutions for Oracle Database 10g/11g for Midsize Enterprises EMC Celerra Unified Storage Platform - Applied Technology Guide :

http://powerlink.emc.com:80/km/live1//en_US/Offering_Technical/White_Paper/H4161-emc-sol-oracle-db-10g-11g-emc-celerra-ns-series-multi-prtcl-applied-tech-guide-wp.pdf

> White Paper: EMC Solutions for Oracle Database 10g/11g for Midsize Enterprises EMC Celerra Unified Storage Platform - Best Practices Planning

http://powerlink.emc.com/km/live1//en_US/Offering_Technical/White_Paper/H4160-emc-sol-oracle-db-10-11g-emc-celerra-ns-series-multi-protocol-wp.pdf

But I suggest you contact your EMC sales/pre-sales representative to engage someone from EMC with Oracle expertise, who can recommend best practices for your environment.

Regards,

Gustavo Barreto.

8.6K Posts

April 20th, 2010 11:00

Hi,

The Celerra will automatically negotiate to the NFS block size that the client requests - nothing to do on the Celerra side - just use the rsize/wsize mount options on the client.

If possible, use Oracle Direct NFS (see the white papers) - it will improve performance and lower CPU utilization on the Oracle database server.

You need to set some parameters on the Celerra for that; they are mentioned in the white paper.
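For illustration, a minimal oranfstab entry on the database server to enable Direct NFS might look like the sketch below. The server name, address, and paths are placeholders; consult the white paper for the exact Celerra-side parameters and the full oranfstab syntax for your Oracle release.

```text
# /etc/oranfstab - placeholder values, adjust for your environment
server: celerra_dm2
path: 192.0.2.10
export: /oraclefs mount: /u02/oradata
```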

Rainer

117 Posts

April 20th, 2010 15:00

For the most part in NFS (and CIFS), I/O sizes are driven by the client.  The client requests a block of data of a certain size, and the server responds with the data.

With that said, NFS servers typically have some maximum I/O size they will support.  Like many/most NFS servers, Celerra's is 32KB.  It's rare for an NFS configuration to ever use larger than 32KB I/Os.

It is also extremely rare for an NFS server to support larger than 64KB I/Os.  NFSv3 traditionally never uses I/Os this large.  NFSv3 can run atop UDP, which has a 64KB max packet size limit.  With RPC and NFS protocol overheads, the maximum theoretical I/O size for NFSv3/UDP is less than 64KB (and RPC headers can be variable size), and most NFS vendors simplify the situation by supporting 32KB max.

In any case - there's no reason why your Oracle hosts shouldn't be able to do larger than 8KB I/Os.  Generally setting rsize and wsize to 32KB is the recommendation on all hosts, and many OSes default to this nowadays.  (There are some very rare circumstances where it may be wise to leave these numbers lower, but generally something in the environment would need to be broken or substandard for this to be true.)
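To confirm what the client actually negotiated, you can check the active mount options after mounting. This assumes a Linux client; other OSes have equivalent commands.

```shell
# Show the rsize/wsize actually in effect for each NFS mount.
nfsstat -m

# Or read the negotiated options straight from the mount table.
grep nfs /proc/mounts
```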

62 Posts

April 21st, 2010 02:00

Thanks Rainer, Ian and Gustavo! That is extremely helpful! I will review the documents you mentioned. 
