January 12th, 2012 11:00

Oracle application with raid 1/0

Hello All,

I have a scenario where a customer wants the following configured for a single Linux host connected directly to a VNX over Fibre Channel, with no switch (fabric):

DATABASE LAYOUT

300 GB for data      RAID 10 (300 GB SAS drives for the Oracle database, faster disks)

450 GB for index     RAID 10 (300 GB SAS drives for the Oracle database, faster disks)

100 GB for system    RAID 10 (300 GB SAS drives for the Oracle database, faster disks)

20 GB for redo       RAID 10 (300 GB SAS drives for the Oracle database, faster disks)

100 GB for arch      RAID 10 (300 GB SAS drives for the Oracle database, faster disks)

Flashback: 3 TB      RAID 5 (600 GB SAS drives for image cache and DB backups, slower disks)

IMAGE CACHE LAYOUT

Image cache: 4 x 1 TB    RAID 5 (600 GB SAS drives for image cache and DB backups, slower disks)

1 TB LUN for EasyVis HA database configuration    RAID 5 (600 GB SAS drives for image cache and DB backups, slower disks)

After setting aside the 4 vault drives and hot spares, I have:

19 x 300 GB 10K SAS drives

24 x 600 GB 10K SAS drives

I am thinking of creating a single RAID 5 pool from the 600 GB drives; those are meant for the image cache and DB backups.

My configuration will be 4(4+1) & 1(3+1).
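As a quick sanity check on that split, here is a minimal sketch (drive sizes are nominal; real usable space will be lower after formatted capacity and pool overhead):

```python
# Sanity check for the proposed RAID 5 layout: four (4+1) groups plus one
# (3+1) group built from the 24 x 600 GB drives. Figures are nominal; real
# usable space will be lower after formatted capacity and pool overhead.

DRIVE_GB = 600

groups = [(4, 1)] * 4 + [(3, 1)]  # (data drives, parity drives) per RAID 5 group

total_drives = sum(d + p for d, p in groups)
usable_gb = sum(d * DRIVE_GB for d, _ in groups)

print(f"drives used : {total_drives}")  # 24
print(f"usable (GB) : {usable_gb}")     # 11400

# Demand from the layout above: 3 TB flashback + 4 TB image cache + 1 TB EasyVis
required_gb = (3 + 4 + 1) * 1000
print(f"headroom    : {usable_gb - required_gb} GB over the {required_gb} GB required")
```

Nominal usable space comes to about 11.4 TB, comfortably above the roughly 8 TB the layout calls for.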

And for RAID 1/0: can I create both a pool and a RAID group?

I want to create a traditional RAID group for the log files and a pool for the application LUNs. That way I will keep the logs separate from the application.

This is what I am going to configure. Please advise:

pool 1 (16 drives, RAID 1/0): data, index, system. Later on they can buy FAST Cache and enable it just for that pool.

raid group 1, RAID 1/0 (2 x 300 GB drives): redo logs

raid group 2, RAID 1/0 (2 x 600 GB drives): archive logs (a lot of capacity, yes, but DBAs like to keep archive logs for a while)

Everything else goes into one big RAID 5 pool.
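A quick drive-budget check of that layout against the spindle counts above (a sketch; the labels are mine):

```python
# Drive-budget check for the proposed layout, using the counts from the post:
# 19 x 300 GB and 24 x 600 GB drives left after vault drives and hot spares.

available = {"300GB": 19, "600GB": 24}

layout = [
    ("pool 1, RAID 1/0 (data/index/system)", "300GB", 16),
    ("raid group 1, RAID 1/0 (redo logs)",   "300GB", 2),
    ("raid group 2, RAID 1/0 (archive)",     "600GB", 2),
]

used = {size: 0 for size in available}
for name, size, count in layout:
    used[size] += count
    print(f"{name:40s} {count:2d} x {size}")

for size, total in available.items():
    print(f"{size}: {used[size]} used, {total - used[size]} free")
# 300GB: 18 used, 1 free (hot spare candidate)
# 600GB: 2 used, 22 free for the big RAID 5 pool
```

Note that after the 2 x 600 GB archive raid group, only 22 of the 600 GB drives remain, so the earlier 4(4+1) & 1(3+1) scheme (24 drives) would need trimming, e.g. to 2(4+1) & 3(3+1).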

Thanks for your time.




256 Posts

January 12th, 2012 15:00

I worked as part of a team that did a very similar configuration for a local company here in North Carolina. I have uploaded that configuration here. We ended up using RAID 10 for datafiles, though, and we segregated the online redo logs onto their own disks.

The choice of RAID 5 vs. RAID 10 for datafiles is non-trivial, and you really need to think about it. Frequently, RAID 10 is the better choice: it handles write-intensive workloads better than RAID 5, and it also does sequential I/O better. RAID 5 suits small, random I/O that consists largely of reads, while heavy, large sequential write I/O should be put on RAID 10. I think you get the idea.
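To put rough numbers on that trade-off, here is a back-of-envelope sketch using the usual small-write penalties (RAID 1/0 = 2 back-end I/Os per host write, RAID 5 = 4); the 150 IOPS per 10K SAS spindle and the workload mix are assumptions, not measurements:

```python
# Back-end IOPS implied by a host workload under the usual small-write
# penalties: RAID 1/0 = 2 back-end I/Os per host write, RAID 5 = 4 (read data,
# read parity, write data, write parity). 150 IOPS per 10K SAS spindle is a
# rule-of-thumb assumption.

SPINDLE_IOPS = 150

def backend_iops(host_iops: float, read_pct: float, write_penalty: int) -> float:
    """Back-end disk IOPS implied by a host workload with a given read mix."""
    reads = host_iops * read_pct
    writes = host_iops * (1 - read_pct)
    return reads + writes * write_penalty

for name, penalty in [("RAID 1/0", 2), ("RAID 5", 4)]:
    need = backend_iops(host_iops=2000, read_pct=0.6, write_penalty=penalty)
    print(f"{name}: {need:.0f} back-end IOPS -> ~{need / SPINDLE_IOPS:.0f} spindles")
```

For this 60/40 read/write mix, RAID 5 needs roughly half again as many spindles as RAID 1/0 to serve the same host load, which is why write-heavy LUNs such as redo usually land on RAID 1/0.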

Note the use of FAST Cache vs. FAST VP. You also need to think about that carefully. FAST VP is better for deterministic, predictable I/O patterns which do not vary significantly over time. FAST Cache is better for databases (such as this one) where the I/O patterns are variable and unpredictable.

Anyway, take a look at the configuration we put together and see if that sparks some ideas.

1 Rookie • 20.4K Posts

January 12th, 2012 16:00

Jeff,

Why did you decide not to use a RAID 10 pool and go with traditional RAID 10 raid groups instead? I can't tell from the document you posted whether MetaLUNs were in use or simply dedicated raid groups for a specific number of data files. I would think that with a pool solution you could later on utilize FAST VP and, hopefully with the next version of FLARE, add drives to a pool and "rebalance" the data. Looking at deeppat's design, and considering his relatively small resources, does anything stand out as a red flag?

256 Posts

January 13th, 2012 06:00

The customer specified traditional RAID groups. Not sure of the reason, or if there even was one. Obviously, a RAID 10 pool would have worked fine as well. Sometimes folks just don't want to come around to the new way of doing things!

225 Posts

January 15th, 2012 20:00

My thoughts for your consideration.

On selecting between FAST Cache and FAST VP: generally speaking, FAST Cache is friendlier to smaller and random I/O, while FAST VP provides more benefit for bigger and sequential I/O. FAST Cache would therefore be the better choice for OLTP, and FAST VP the better one for OLAP. This selection should be made carefully.

On RAID selection in a FAST Cache environment:

RAID 10 and RAID 5 will not show much performance difference after FAST Cache warm-up if you have a proper FAST Cache configuration. The differences between them are the FAST Cache warm-up time and the performance baseline for access to un-cached data. RAID 10 performs better on both, with cost (number of spindles) as the tradeoff.

I tend toward RAID 5 if the customer is able to understand FAST Cache warm-up.

Thanks,

Eddy

1 Rookie • 20.4K Posts

January 15th, 2012 20:00

At some point data gets destaged from FAST Cache, and it destages that much faster (less write penalty) with RAID 1/0 than with RAID 5.

Eddy Yang wrote:

I tend toward RAID 5 if the customer is able to understand FAST Cache warm-up.

What does it mean for the customer to understand FAST Cache warm-up?

63 Posts

January 16th, 2012 05:00

I think what Eddy is trying to say is:

  1. FAST Cache is created empty.
  2. There is an initial warm-up period, as more and more hot data is cached and the FAST Cache hit rate gradually increases.
  3. Host reads/writes of hot data are then serviced by the FAST Cache, reducing the demand for IOPS on the rotating disks behind it.

Therefore, if analysis shows the system is suitable for FAST Cache, a different disk or RAID configuration can be used (RAID 5 instead of RAID 10). This is discussed in the whitepaper Deploying Oracle Database on EMC VNX Unified Storage.
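A minimal sketch of that offload effect (the hit rates here are illustrative assumptions, not measured values):

```python
# Rough model of the offload described above: once FAST Cache is warm, a
# fraction of host I/O (the hit rate) is serviced from flash and never reaches
# the spinning disks behind the cache.

def disk_iops(host_iops: float, hit_rate: float) -> float:
    """Host I/O that still lands on the rotating disks behind FAST Cache."""
    return host_iops * (1 - hit_rate)

for hit in (0.0, 0.5, 0.8, 0.9):
    print(f"hit rate {hit:.0%}: {disk_iops(5000, hit):.0f} of 5000 IOPS reach disk")
```

At a high hit rate, the residual disk load can be small enough that RAID 5 behind the cache keeps up, which is the basis for Eddy's position.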

Allan

1 Rookie • 20.4K Posts

January 16th, 2012 06:00

If you have a brand new application, you don't know its locality of reference or its re-hit rate, so how do you make that recommendation (RAID 1/0 vs RAID 5)?

225 Posts

January 28th, 2012 19:00

Thanks Allan, I'd like to add a bit.

The performance chart of I/O with FAST Cache looks like a logarithmic curve, so the selection of the baseline is important under a given SLA.

Eddy

225 Posts

January 28th, 2012 19:00

Here are my thoughts for your consideration.

1. Application design matters, along with the specific data structures, so speak with the application developers to estimate whether the workload tends toward OLAP or OLTP.

2. Put the OLTP-oriented parts on RAID 10 and the OLAP-oriented parts on RAID 5.

3. Put EFD amounting to 5-10% of capacity on the system; note that this is not for the performance baseline, it is only to absorb peak I/O (see the sizing sketch after this list).

4. Enable performance monitoring on the system to collect NAR/trace or BTP data for future analysis.
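A sketch of what the 5-10% rule in point 3 works out to for the database capacity in the original layout (the rule of thumb and the capacity basis are taken from this thread, not a formal sizing method):

```python
# Rough EFD sizing from the 5-10% rule of thumb above. Capacity basis: the
# active database LUNs from the original layout
# (data 300 + index 450 + system 100 + redo 20 + arch 100 GB).

active_gb = 300 + 450 + 100 + 20 + 100  # 970 GB

low, high = 0.05 * active_gb, 0.10 * active_gb
print(f"suggested EFD capacity: {low:.0f} - {high:.0f} GB")  # roughly 50-100 GB
```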

Eddy

1 Rookie • 20.4K Posts

January 28th, 2012 19:00

Eddy Yang wrote:

Thanks Allan, I'd like to add a bit.

The performance chart of I/O with FAST Cache looks like a logarithmic curve,

What does that mean? Can you please explain?

1 Rookie • 20.4K Posts

January 28th, 2012 22:00

I am just repeating what you wrote above, trying to understand what you mean by

"The performance chart of I/O with FAST Cache looks like a logarithmic curve, so the selection of the baseline is important under a given SLA."

What do you mean, the FAST Cache chart looks like a logarithmic curve?

225 Posts

January 28th, 2012 22:00

Please see the attachment.

1 Attachment

1 Rookie • 20.4K Posts

January 28th, 2012 22:00

I am not following you. What does FAST Cache warm-up have to do with "the performance chart of I/O with FAST Cache looks like a logarithmic curve"?

225 Posts

January 28th, 2012 22:00

Could you elaborate on your points? What's your perspective on it?

Thanks,

Eddy

225 Posts

January 30th, 2012 02:00

Please open the graphic I attached to my earlier post.

The performance chart of a system with FAST Cache starts at the "FC Perf" point, since no hot data has yet been promoted to EFD (FAST Cache) when the system starts up. As time goes on, more and more hot data is promoted to EFD and the system can generate more IOPS, so the chart climbs toward the "EFD Perf" line. It never reaches that line, though, since some data is still being accessed on the FC devices; the size of the gap depends on the skew of the application data.

The process of hot data being promoted to FAST Cache, with the performance chart moving up, is called "warm-up".

Usually, the chart looks like the plot of a logarithmic function.
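A toy model of that curve (all constants are illustrative assumptions; the saturating-exponential hit-rate term is just one simple way to produce the logarithmic-looking shape described above):

```python
# Toy model of the warm-up curve: system IOPS starts at the FC (spinning disk)
# baseline and climbs toward, but never reaches, the EFD line as hot data is
# promoted. All constants are illustrative assumptions.

import math

FC_IOPS, EFD_IOPS = 3000.0, 20000.0  # baseline and ceiling (assumed)
MAX_HIT = 0.85    # skew-limited ceiling on the achievable hit rate
TAU_H = 4.0       # warm-up time constant in hours (assumed)

for t in range(0, 25, 4):
    hit = MAX_HIT * (1 - math.exp(-t / TAU_H))
    iops = FC_IOPS + (EFD_IOPS - FC_IOPS) * hit
    print(f"t={t:2d}h  hit={hit:4.0%}  ~{iops:6.0f} IOPS")
```

The output rises steeply at first and then flattens below the EFD line, which is the "warm-up" shape: where on that curve the SLA baseline sits is what makes the RAID 5 vs RAID 1/0 choice matter.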

Eddy
