
Best practice for Oracle FS with VMAX


We are going to migrate our existing configuration on an Oracle server to Oracle 11g with QFS.

The DB team requested these new sizes:

4 x 350 GB for data files, R1 (VP pool)

2 x 250 GB for index, R1 (VP pool)

1 x 200 GB for archive log, R1 (VP pool)

4 x 10 GB for redo, R1 (VP pool)

Each of these will be a separate file system.

What is the best practice for presenting these devices to the host?

Should we present one meta LUN per FS (e.g. a 350 GB striped meta), or divide the 350 GB into, for example, 3 x 114 GB metas to be concatenated under the OS?


2 Replies

Re: Best practice for Oracle FS with VMAX

It is difficult to give specific advice without knowing how your environment is set up and the workload generated by the database, so here are some general tips to help you decide:

- What level of incremental growth is required after the initial config is built?

- If a significant workload is expected, 8-way meta devices are preferable to 4-way, particularly if the workload is skewed between the DATA filesystems. This also holds if the devices are concatenated at the host.

- If SRDF/S is involved, write concurrency considerations are important. Write concurrency in SRDF/S is determined by the number of symdevs/paths.
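The write-concurrency point can be sketched numerically. This is a simplified model, not an exact SRDF/S formula: it assumes roughly one outstanding synchronous write per symdev, so spreading the same capacity across more symdevs raises the number of writes that can be in flight at once.

```python
# Simplified sketch of SRDF/S write concurrency (assumption: about one
# outstanding synchronous write per symdev at a time).
def concurrent_writes(symdevs: int, outstanding_per_dev: int = 1) -> int:
    """Approximate number of writes that can be in flight concurrently."""
    return symdevs * outstanding_per_dev

single_device = concurrent_writes(1)  # one large device
eight_members = concurrent_writes(8)  # capacity split across 8 symdevs
print(single_device, eight_members)   # the 8-symdev layout allows 8x more
```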

- Ensure there are enough FA processors involved to cater for peak workload.


Re: Best practice for Oracle FS with VMAX


The easiest part is the redo logs, which have a pure write workload. Use 8 single hypers of 1.25 GB each, and stripe them with the host LVM at 64 KB. It would also be good to dedicate FA CPUs to the redo logs, 2 or 4 depending on the workload.
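The arithmetic behind that redo layout (sizes taken from the request above, purely illustrative):

```python
# One 10 GB redo volume built from 8 single hypers,
# striped at the host by LVM with a 64 KB stripe element.
REDO_TOTAL_GB = 10
HYPER_COUNT = 8
LVM_STRIPE_KB = 64

hyper_gb = REDO_TOTAL_GB / HYPER_COUNT  # size of each single hyper
print(f"{HYPER_COUNT} hypers of {hyper_gb} GB each, {LVM_STRIPE_KB} KB stripe")
# 8 hypers of 1.25 GB each, 64 KB stripe
```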

For data, index and archive logs, use a meta structure created on the storage. As Jasonc said, use 8-way striped meta devices (VMAX now supports a single hyper up to 240 GB, so member size is not a problem). Use 8 FA CPUs (different CPUs from the ones used for the redo logs).
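Member sizing for those 8-way striped metas, using the capacities the DB team requested (the FS names are just labels for this sketch); each member stays well under the 240 GB hyper ceiling mentioned above:

```python
# Member size per filesystem for 8-way striped metas.
META_WIDTH = 8
HYPER_LIMIT_GB = 240  # single-hyper ceiling mentioned for VMAX

for name, total_gb in {"DATA": 350, "INDEX": 250, "ARCH": 200}.items():
    member_gb = total_gb / META_WIDTH
    assert member_gb <= HYPER_LIMIT_GB  # well under the hyper limit
    print(f"{name}: {META_WIDTH} x {member_gb:.2f} GB members = {total_gb} GB")
```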

If SRDF/S is involved, use at least 8 paths per device.

Again, the key thing is to know your workload.
