
February 22nd, 2010 15:00

Thin Provisioning for Oracle Databases (Question)

Hi there guys,

I have some doubts about implementing Thin Provisioning with Oracle databases. I've read the CLARiiON Virtual Provisioning White Paper and Leveraging EMC CLARiiON CX4 Virtual Provisioning for Database Deployment.


This is my scenario.

CLARiiON CX4-120
3x DAE 15 disks 450GB 15k SAS
1x DAE 15 disks 1TB 7.5k SATA2

I need to deploy two Oracle 11g databases on two separate machines with the following storage requirements.

Oracle1:
---------
- Linux RHEL 5.2 x64 (RAID1 Local disks)
- 400GB Oracle Databases
- 100GB ArchiveLogs
- 400GB Backup (RMAN procedures)

Oracle2:
---------
- Linux RHEL 5.2 x64 (RAID1 Local disks)
- 400GB Oracle Databases
- 100GB ArchiveLogs
- 400GB Backup (RMAN procedures)

To provision this, I'm thinking of creating a RAID 5 (3+1) thin pool on the 450GB SAS disks for the Oracle databases and archive logs of both machines, and a second thin pool on the SATA2 disks for the backups.

My doubt is this: when I created the RAID 5 (3+1) pool, a warning appeared saying that the best practice for RAID 5 is a minimum of five disks, but I only need four disks for my purposes; five disks is too much space.

If I use RAID 5 (3+1) instead of RAID 5 (4+1), will I lose performance? What is the real penalty on the array?

Can I expand the thin pool with only one disk, or must it be done in multiples of five (as the CLARiiON Virtual Provisioning White Paper suggests)?

Is disk alignment necessary on the array and on the server file-system side for this implementation? Is it the same as with traditional RAID groups?

Best regards.


February 25th, 2010 07:00

AFAIK it's an "IBM compatible" issue, so Intel/AMD machines using MBR; Linux, ESX, Windows, and DOS are all affected. However, Linux nowadays aligns disks by default, just as Windows 2k8 does.

But manually aligning any disk before putting any partition on it is always good, just to be on the safe side!


February 24th, 2010 07:00

I would recommend that you review the two additional White Papers below.

Thin LUNs will not provide the level of performance that a normal RAID group can. We also recommend creating RAID groups that match your application/file-system block size: a 4+1 RAID 5 gives you a RAID stripe of 256KB (64KB for each disk, not counting the parity disk). We also recommend aligning your file system before formatting the LUNs; this does provide better performance.

EMC CLARiiON Storage System Fundamentals for Performance and Availability

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1049_emc_clariion_fibre_channel_storage_fundamentals_ldv.pdf

EMC CLARiiON Performance and Availability Release 29.0 Firmware Update Applied Best Practices.pdf

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h5773-clariion-best-practices-performance-availability-wp.pdf

glen


February 24th, 2010 08:00

Hi Kelleg,

I have those documents and I've read them. For RAID 5 (4+1) with a 256KB stripe size, should the file-system alignment be 512 blocks or 128 blocks?
I've read some documents and I'm a bit confused.

The CLARiiON best practices say that on a Linux system, in the fdisk menu, you select:

x    # expert mode
b    # move beginning of data in a partition
1    # partition 1
128  # new starting block = 128 sectors
w    # write table to disk and exit

For the starting block, should I set 128 or 512 blocks for RAID 5 (4+1)?

I'm confused by this.

thanks in advance.


February 24th, 2010 11:00

On the CLARiiON the "stripe element size" is always 64KB; this is the amount of data written to each physical disk in a RAID group, for all RAID groups. The "stripe size" is the number of disks (excluding the parity disk for RAID 3 or RAID 5) times the element size. So for a RAID 5 using 4+1, that's 4 disks times 64KB = 256KB. This is the stripe size: 64KB written to each of the 4 data disks, then parity (1).
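The stripe arithmetic above can be checked with a quick shell sketch (the 64KB element size and the 4 data disks are taken from the explanation above; the variable names are just for illustration):

```shell
element_kb=64            # CLARiiON stripe element size, per data disk
data_disks=4             # 4+1 RAID 5: four data disks plus one parity disk
stripe_kb=$((element_kb * data_disks))
echo "stripe size: ${stripe_kb}KB"    # 4 x 64KB = 256KB
```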

In Windows you want to align the file system to overcome the BIOS using part of the first sector on the disk. You use diskpart to move the starting point up to a 64KB boundary on the disk; if you use multiples of 64KB you should be OK. If you use 1MB, the partition starts at sector 2048.

For Linux the situation is similar on Intel processors: you need to offset the starting sector of the partition. If you use the 128 listed in the example, this offsets the starting point to a 64KB boundary. This applies to RAID and non-RAID groups; it's the BIOS that offsets the starting position on the disk, not the array.
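To see why 128 is the right number in the fdisk example (assuming the standard 512-byte sectors; a sketch, not from the best-practices paper):

```shell
sector_bytes=512          # standard sector size on these disks
start_sector=128          # starting block entered in fdisk expert mode
offset_kb=$((start_sector * sector_bytes / 1024))
echo "partition offset: ${offset_kb}KB"   # 64KB = one stripe element
```

So 128 sectors lands the partition exactly on a 64KB stripe-element boundary; 512 would simply start it at 256KB, one full stripe in, which also aligns but wastes nothing either way.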

glen


February 24th, 2010 12:00

Hi kelleg,

Thanks for your explanation, but I forgot to say that I want to use LVM2 (Logical Volume Manager) for the ORADATA and ARCHIVELOGS volumes.

Is it still necessary to do disk alignment before creating an LVM2 physical volume, or not?


February 24th, 2010 14:00

Not sure about that; maybe someone here will chime in on this. Off the top of my head, it probably won't hurt, and since you can't do it after the fact, it might be best to set the offset anyway.
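For what it's worth, a sketch of how the alignment could be carried through to LVM2, assuming a recent LVM2 build that supports `--dataalignment` (the device name is hypothetical, and this is not from the white papers, just one possible approach):

```shell
# Partition aligned first in fdisk expert mode (start at sector 128), then
# tell LVM2 to place its data area on a 64KB boundary as well.
# /dev/emcpowera1 is a hypothetical PowerPath device name.
pvcreate --dataalignment 64k /dev/emcpowera1
vgcreate vg_oradata /dev/emcpowera1
lvcreate -L 400G -n lv_oradata vg_oradata
```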

glen


February 25th, 2010 06:00

Is that "BIOS effect" true for non-booted external disks too? We don't use any offset for LVM-controlled SAN devices.


February 25th, 2010 07:00

When you initialize a disk in Windows (and by default the BIOS), it gets this offset; it's on every MBR disk. Not sure about Linux; I believe this is more of an Intel/Windows issue.

glen
