
November 5th, 2013 19:00

TDAT Cylinder Creation Question

I am looking for a simple way to come up with the cylinder size per TDAT based on the disk requirements below.

I am planning on configuring pool1 with the following configuration.


a) place in Disk Group 3 (42 * 300GB FC disks)

  - TDAT hypers on disks

  - RAID 10

I am planning on configuring pool2 with the following configuration.

b) place in Disk Group 2 (40 * 200GB EFD disks)

  - TDAT hypers on disks

  - RAID 5 (3+1)

c) place in Disk Group 2 (68 * 300GB FC disks)

  - TDAT hypers on disks

  - RAID 10

d) place in Disk Group 2 (16 * 2000GB SAS disks)

  - TDAT hypers on disks

  - RAID 10

I am planning on configuring pool3 with the following configuration.

e) place in Disk Group 3 (10 * 600GB FC disks)

  - TDAT hypers on disks

  - RAID 10

Based on the requirements above, is there a formula or tool I can use to come up with the number of splits and the cylinder sizes per TDAT?

Any help on this topic is much appreciated, as I am trying to figure out the best approach for this design.

Thanks,

2.2K Posts

November 13th, 2013 11:00

If this is for an Epic Electronic Medical Record (EMR) deployment on a VMAX, then there are a few areas where the recommended design will deviate from the general EMC best practices. One of those EMC-specific recommendations for the Epic Caché database is indeed 4 splits per disk for the TDATs.

There are other specifics for Epic that need to be followed as well to ensure a successful deployment. EMC has a healthcare practice that focuses on EMR and I recommend you reach out to your EMC representative to engage that group.

November 10th, 2013 23:00

What hyper sizes are you planning to use?

Thanks,

Sreehari

7 Posts

November 11th, 2013 07:00

I was thinking of doing the following.

These disk groups will have standard splits.

b) place in Disk Group 2 (40 * 200GB EFD disks)

  - TDAT hypers on disks

  - RAID 5 (3+1)

c) place in Disk Group 2 (68 * 300GB FC disks)

  - TDAT hypers on disks

  - RAID 10

d) place in Disk Group 2 (16 * 2000GB SAS disks)

  - TDAT hypers on disks

  - RAID 10

These disk groups will have 4-way splits.

a) place in Disk Group 3 (42 * 300GB FC disks)

  - TDAT hypers on disks

  - RAID 10

e) place in Disk Group 3 (10 * 600GB FC disks)

  - TDAT hypers on disks

  - RAID 10

I am new to the design side, so I am trying to learn as well as follow the best practice guides.

1.3K Posts

November 11th, 2013 08:00

Why have different disk groups with different split counts?

Every pool should have 8 splits per disk.
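
For example (a sketch of the division only, using a hypothetical usable capacity; real hypers are allocated in whole cylinders):

usable_disk_mb = 280_000        # hypothetical usable capacity of one drive, in MB
splits = 8                      # hypers (splits) per disk
print(usable_disk_mb / splits)  # 35000.0 MB per hyper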

7 Posts

November 11th, 2013 12:00

Per the application requirement, we need to have four splits on pool 1, which is 42 * 300GB FC drives.

In this design I will have about four different pools, so in order to use different split counts I will need to create separate disk groups. Is that not the case? What would be the better design?

1.3K Posts

November 11th, 2013 12:00

Each thin pool should have 8 splits per drive, or the minimum number needed to use the full capacity of the disk. I don't see why an application should care about how the data devices are laid out in the pool. I can understand the requirement for segregation onto separate disks. However, be careful when segregating: make sure you have enough drives to support the workload, and make sure the drives you are segregating onto are spread evenly across engines.

7 Posts

November 11th, 2013 17:00

I agree. All the best practice guides I have seen recommend 8-way splits, but for some reason, for this app, I am being asked to design with 4-way splits on the FC disks. For the EFDs, though, we can use the standard 8-way split.

As far as spreading the drives across engines goes, is that something I can have the installation engineer take care of during BIN file creation, or is it something I need to add to my configuration?

7 Posts

November 12th, 2013 07:00

I have the following pool design that I am trying to implement, with the splits recommended per the application requirement.

48, 300GB 15k drives, RAID 1+0 (Pool1)

48, 300GB 15k drives, RAID 1+0 (Pool2)

 

24, 600GB 15k drives, RAID 5 (3+1) (Pool8)

40, 200GB SSD, RAID 3+1 (FAST Pool 5&6)

68, 300GB 15k drives, RAID 1+0 (FAST Pool 5&6)

16, 2TB 7.2k drives, RAID 6 (6+2) (FAST Pool 5&6)

2.2K Posts

November 12th, 2013 15:00

Rakesh,

From the pool names and split details you are referring to, I take it you are working on a project for an Epic deployment? If so, please contact the EMC Healthcare Delivery Champions to assist you. You will find them in your Global Address List.

7 Posts

November 13th, 2013 10:00

I am getting conflicting information on the split count, and that is one of the reasons I am requesting help from the community.


The documents I have from EMC suggest the split count should be a minimum of 8, but I am being asked to go with 4 splits on the 300GB drives. I wanted to verify whether that recommendation is based on best practices, since I can't find any doc that shows it.


Thanks,

7 Posts

November 13th, 2013 11:00

That makes sense.

So, in your opinion, is it best to use separate disk groups for the Epic pools that require 4 splits?

2.2K Posts

November 13th, 2013 12:00

Most of the pools that support the Epic hosts require spindle isolation through separate disk groups to meet Epic's response time requirements.

82 Posts

July 15th, 2014 15:00

Guys,

I have a similar question on creating TDATs, so I am just continuing the thread.

Here in my environment, we have a disk group with 16 * 600GB FC disks, and I have to create TDATs with RAID 5 (7+1).

There should be 8 hypers per disk in a pool, so how do I find or calculate the size of each TDAT?  Are there any standard sizes? Please guide me.

Below are the Specs:

Disk technology: FC

Free capacity per disk (MB): 558281

Number of disks in the disk group: 16

TDAT RAID: RAID 5 (7+1)

Regards

Dino

1.3K Posts

July 15th, 2014 16:00

I don't know the exact cylinder count for your drives, but the idea is to take the capacity of one disk, multiply by 7 (for 7+1), and then divide by 8 to get the usable size of each TDAT. So the capacity in MB for each would be 488495.875.

However, this is larger than the ~240GB maximum volume size allowed on the VMAX, so you should create the smallest number of equal volumes that uses the whole disk.
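
A minimal sketch of that arithmetic in Python (the ~240GB cap is approximated here as 240 * 1024 MB, and cylinder rounding is ignored, so real sizes will differ slightly):

import math

MAX_VOLUME_MB = 240 * 1024  # approximate VMAX volume size limit quoted above

# Usable capacity behind one disk's worth of a RAID 5 (7+1) group:
usable_per_disk_mb = 558281 * 7 / 8  # 488495.875 MB, the figure above

# Smallest number of equal volumes that stays under the cap:
count = math.ceil(usable_per_disk_mb / MAX_VOLUME_MB)  # -> 2
size_mb = usable_per_disk_mb / count                   # -> 244247.9375 MB each

print(count, size_mb)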

On another note, 3+1 is preferred over 7+1 on VP for the improved sequential write performance, and RAID 1 in a tiered environment is highly recommended because it ends up being less expensive than RAID 5 per IOPS in almost all cases. Also, with larger FC drives, RAID 5 will be less available than RAID 1 (without RDF).

82 Posts

July 16th, 2014 13:00

I was also concerned about the availability factor in having to move from RAID 1 to RAID 5 (7+1), but there is no other option here; in any case, I have explained to management the risk of two drives failing at the same time.

Regarding hypers on a disk: can we have any number of hypers on a disk, as long as there is an equal number of hypers on all disks in a pool?
