
Dell Community : Education : Student Discussions : E10-001 Exam Question Query

Willing (2 Bronze), 03-29-2015 08:11 AM

Hello,

I would greatly appreciate it if you could help me understand, with explanations, the answers to these four questions. Thank you.

An application generates 4200 small random I/Os at peak workloads with a read/write ratio of 2:1.

What is the disk load at peak activity for a RAID 5 configuration?

A. 2,800

B. 5,600

C. 8,400

D. 11,200

Answer: C

An application generates 4200 small random I/Os at peak workloads with a read/write ratio of 2:1.

What is the disk load at peak activity for a RAID 6 configuration?

A. 2,800

B. 5,600

C. 8,400

D. 11,200

Answer: D

An organization plans to deploy a new business application in their environment. The new

application requires 2 TB of storage space. During peak workloads, the application is expected to

generate 9,800 IOPS with a typical I/O size of 4 KB. Because the application is business-critical,

the response time must be within acceptable range.

The available disk drive option is a 15,000 rpm drive with 100 GB capacity. The number of IOPS in

which the disk drive can perform at 100 percent utilization is 140.

What is the number of disk drives needed to meet the application’s capacity and performance

requirements?

A. 20

B. 60

C. 70

D. 100

Answer: D (why isn't the answer C, 70?)

An organization plans to deploy a new business application in their environment. The new

application requires 4 TB of storage space. During peak workloads, the application is expected to

generate 4900 IOPS with a typical I/O size of 8 KB. Because the application is business-critical,

the response time must be within acceptable range.

The available disk drive option is a 10,000 rpm drive with 200 GB capacity. The number of IOPS in

which the disk drive can perform at 100 percent utilization is 140.

What is the number of disk drives needed to meet the application’s capacity and performance

requirements?

A. 20

B. 25

C. 35

D. 50

Answer: D (why isn't the answer C, 35?)

Solved! Go to Solution.

Accepted Solution

mikem_ (2 Bronze), 04-07-2015 09:24 AM

Hello, my name is Mike Mendola. I'm a Senior Technical Education Consultant and would like to answer your questions.

Your initial calculations to find the number of disk drives that satisfy the stated application capacity and performance requirements are correct. However, the final analysis must also consider key variables found in a real-world storage environment: disk native command queuing, the RAID level used (and the number of disks in the RAID set), and even I/O controller characteristics. It is typical to shorthand these into the calculation by adding about 25% to the final number of disks (rounding up, for capacity and performance headroom), even after factoring in that no more than 70% of each disk's rated performance should be used.
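A minimal sketch of the sizing rule described here: cap each drive at 70% of its rated IOPS, then take the larger of the capacity-driven and performance-driven drive counts. Under that assumption it reproduces the published answers to questions 3 and 4 (the function name is my own):

```python
import math

def drives_needed(capacity_tb, disk_gb, app_iops, disk_iops, max_util=0.7):
    """Drives needed to meet both capacity and performance, using only
    max_util (70% by default) of each disk's rated IOPS."""
    cap_drives = math.ceil(capacity_tb * 1000 / disk_gb)         # capacity requirement
    perf_drives = math.ceil(app_iops / (disk_iops * max_util))   # performance at 70% utilization
    return max(cap_drives, perf_drives)

# Question 3: 2 TB, 100 GB drives, 9,800 IOPS, 140 IOPS per drive
print(drives_needed(2, 100, 9800, 140))   # 100 -> answer D
# Question 4: 4 TB, 200 GB drives, 4,900 IOPS, 140 IOPS per drive
print(drives_needed(4, 200, 4900, 140))   # 50 -> answer D
```

The additional ~25% real-world headroom mentioned above would then be added on top of this figure when sizing an actual deployment.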

10 Replies

KEHutchinson (4 Tellurium), 03-30-2015 05:42 AM

It would be helpful if you could show the work that you did on your calculations for these questions. Please explain what you currently understand about each one.

Willing (2 Bronze), 03-30-2015 03:35 PM

Hi,

I now understand how the first two questions are answered once I converted the read/write ratio to fractions and then decimals. That lets me calculate reads + (writes × RAID write penalty) for the actual required back-end IOPS.

However, for the last two questions I divided the required storage size by the disk size, and the required IOPS by the IOPS of each drive, then took the larger of the two results. But that figure isn't shown as correct?
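The calculation described in this post can be sketched as follows; it reproduces the rejected answers (70 and 35) because it applies no utilization headroom:

```python
import math

# Naive sizing: the larger of the capacity-driven and performance-driven
# drive counts, with each disk assumed usable at 100% of its rated IOPS.
cap_q3  = math.ceil(2 * 1000 / 100)   # 2 TB / 100 GB drives = 20
perf_q3 = math.ceil(9800 / 140)       # 9,800 IOPS / 140 IOPS per drive = 70
print(max(cap_q3, perf_q3))           # 70 (answer C, not the published D)

cap_q4  = math.ceil(4 * 1000 / 200)   # 4 TB / 200 GB drives = 20
perf_q4 = math.ceil(4900 / 140)       # 4,900 IOPS / 140 IOPS per drive = 35
print(max(cap_q4, perf_q4))           # 35 (answer C, not the published D)
```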

KEHutchinson (4 Tellurium), 04-07-2015 06:19 AM

Can you show the calculations you used on these two questions? It will help others to see where there might be a problem.

pavlos1 (Not applicable), 06-02-2015 09:10 AM

Here are the equations for questions 1 and 2.

**reads + (write penalty * writes)**

4,200 I/Os with a 2:1 read/write ratio means that 2/3 (≈66.6%) of the 4,200 are reads and 1/3 (≈33.3%) are writes.

So 4200 * 2/3 = 2800 read I/Os and 4200 * 1/3 = 1400 write I/Os.

The write penalty for each RAID level is:

RAID 1/0 = 2

RAID 5 = 4

RAID 6 = 6

So for question 1

=2800 + (4 * 1400)

=2800 + 5600

=8400

For question 2

=2800 + (6 * 1400)

=2800 + 8400

=11200
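The steps above can be sketched in a short Python snippet (the function and dictionary names are my own):

```python
# Back-end disk load = reads + (write penalty * writes), using the
# write penalties listed above. Integer ratio parts keep the arithmetic exact.
WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def disk_load(total_iops, read_parts, write_parts, raid_level):
    parts = read_parts + write_parts
    reads = total_iops * read_parts // parts    # 4200 * 2/3 = 2800
    writes = total_iops * write_parts // parts  # 4200 * 1/3 = 1400
    return reads + WRITE_PENALTY[raid_level] * writes

# 4,200 I/Os at a 2:1 read/write ratio
print(disk_load(4200, 2, 1, "RAID 5"))  # 2800 + 4 * 1400 = 8400
print(disk_load(4200, 2, 1, "RAID 6"))  # 2800 + 6 * 1400 = 11200
```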


Willing (2 Bronze), 04-10-2015 07:50 PM

Hi Mike,

Thank you very much for the info so far.

Two questions:

- Are you saying that the answers I suggested in brackets are correct, or only that the formula used is correct?
- Regarding the additional 25% overhead factor: is this real-world rule of thumb geared more toward IOPS requirements than capacity?

mikem_ (2 Bronze), 04-13-2015 06:20 AM

Thanks for your reply.

The answers you selected are incorrect because they take into account only the formula and do not factor in real-world design criteria. That is, in fact, the point of the question set: to determine whether the student would instinctively account for the stated SLA requirements and make sure they are always met.

In reality, the entire point of designing any type of IT environment is, at a minimum, to completely satisfy all stated SLA requirements AND to take into account how the required performance can be sustained over time, most critically during peak workloads. To the extent it can reasonably be determined, systems designers should even take into account the type of business function(s) the IT platform being designed will support.

Here’s an example: A certain retailer with a strong online merchandising presence decides in October of the year to purchase a SAN system. The vendor’s sales and Systems Engineering team along with the customer IT team determine the current SAN design capacity and performance requirements based on modeling historical application and workload performance over the six-months preceding. They also factor in a rate-of-growth variable to fully support all SLA’s until the next budget cycle where there possibly will be an expansion.

They take this information and calculate that the new SAN should enter their environment providing at least 35% more performance and storage capacity than what they summarized from the last six months of data. Of course, they realize that a SAN can be expanded relatively easily, so they all agree that starting with 35% more capacity will be very good and still fit nicely within their current budget. It will also hold them over until the next budget cycle, two years out, when they predict they can purchase more capacity for the SAN.

The SAN is ordered, installed and configured and running by January of the following year. It runs just as planned and all users of the applications on the system are very satisfied with applications performance.

Here’s the kicker, which is the result of the design phase not taking into account the business aspects of the user community:

The SAN runs fine until the last quarter of that first year. Although storage capacity is still okay, they start to see a decrease in key application performance, and SLAs are not being met. Worst of all, this begins really hitting them during the peak earnings period of their business! Customers are frustrated because online ordering is now very slow, and they are starting to migrate to other retailers.

Much business was lost that Thanksgiving-to-Christmas season of the first year on their new SAN infrastructure.

I’ll leave it to you to take the scenario information into account and determine what happened and where the error was in SAN capacity design phase. Your answer will in fact answer your questions about the importance of designing IT systems to factor-in real-world variables to meet and sustain SLA performance over the life of the system.

Reply when you arrive at your answer if you wish.

I hope this helps!

Mike M.

Michael Mendola

Senior Technical Education Consultant

EMC² Corporation

55 Constitution Blvd.

Franklin, Massachusetts 02038

Cell: 603-767-0723

Office: 978-699-0016

Email: michael.mendola@emc.com

Willing (2 Bronze), 04-13-2015 10:07 AM

Hi Mike,

They didn't account for increased utilization (of the disks, disk controllers, etc.) at peak workloads/peak times, when faster response times are needed to cover the entire SLA, including its business aspects. The safety net of 0.7 utilization is key. I now see how the original answers are correct.

Thanks also for the info on the additional 25% factor, which I was not aware of. It's also good that we can leverage extended flash cache and flash storage, to name a couple of options, to help optimize disk requirements from a performance perspective. Thanks again.

mikem_ (2 Bronze), 04-13-2015 10:27 AM

You are correct in that some information was not specifically mentioned in the course. I offered my personal scenario as an example of solution design considerations being made in a more holistic way.

I wanted to convey how important it is to take as broad and holistic an approach as possible when designing storage solutions. This is the key to determining how best to meet and sustain key SLAs in a particular environment, and to maintain that effectiveness for the business accessing that data over time.

Specifically included in this aspect of SLA-specific solution design are types of storage and techniques which can be applied in various ways to not only increase performance, but “target” performance to any specific application.

You mentioned flash cache and flash-based arrays such as XtremIO. I'll add Isilon scale-out NAS file-level storage, too. These block- and file-level systems can all be applied, as appropriate, to deliver very-high-performance storage most effectively and economically to the applications that require it most.

Well done, and I am glad to have helped!

Mike M.
