Mean Time Between Failure

Please, how is "Mean Time Between Failure" (MTBF) calculated for 1 component and for 3 components?

3 Replies

Re: Mean Time Between Failure

Can you provide some more context for this question? Is it from the Practice Test?

How are you currently calculating mean time?

Re: Mean Time Between Failure

Hi Kate,

Yes, this is from the Practice Test, and from a question in the ISM v2 book. Per my understanding of the ISM v2 book, the calculation is Mean Time Between Failures = (total uptime) / (number of breakdowns).
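
To illustrate that formula with made-up numbers (not from the Practice Test), a quick Python sketch:

# MTBF per the ISM v2 definition: total uptime divided by the number of breakdowns.
# The figures below are hypothetical illustration values.
total_uptime_hours = 9_000        # component ran 9,000 hours in total (assumed)
number_of_breakdowns = 3          # it failed 3 times in that period (assumed)

mtbf_hours = total_uptime_hours / number_of_breakdowns
print(f"MTBF = {mtbf_hours:.0f} hours")   # MTBF = 3000 hours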

Re: Mean Time Between Failure

Thank you for your question.

Typically, the MTBF specifications that disk drive manufacturers provide are the average time between failures of a single disk unit.  This is simply a benchmark for systems and solutions designers. They use component MTBF specifications to determine the general disk failure rate that the system or solution must be able to handle at scale while still providing reliable data storage and meeting the required disk I/O performance.

Here is an example:

If the MTBF of a single drive is 750,000 hours, and there is a single drive in the system (as with your laptop and most desktop computer systems), then the MTBF of the data storage integrity of that system will be calculated at about 750,000 hours.

With the same MTBF expectation, but now with three drives in the system, the MTBF of the data storage integrity of that system will be calculated at 250,000 hours (750,000 / 3 drive units).  This is because with three drives in the system, the probability of a single device failure has tripled: there is a higher probability that at least one of them will fail.
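
As a rough sketch of that scaling (the common approximation system MTBF ≈ component MTBF / number of drives, assuming identical, independent drives):

# Approximate MTBF of "any drive in the system failing" for N identical,
# independent drives, using the simple division rule described above.
component_mtbf_hours = 750_000

def system_mtbf(component_mtbf, drive_count):
    return component_mtbf / drive_count

print(system_mtbf(component_mtbf_hours, 1))   # 750000.0 hours, single-drive system
print(system_mtbf(component_mtbf_hours, 3))   # 250000.0 hours, three-drive system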

The upside is that if the three drives are in a RAID configuration (excluding RAID 0), the data integrity expectancy is effectively decoupled from the component MTBF expectancy.  This is because the failure of one disk will not result in data unavailability, and a replacement disk can be installed to replace the failed unit. Additionally, the data will automatically be rebuilt onto the new drive.  At an MTBF of 750,000 hours, statistically, the replacement and rebuild would complete and the replacement drive would be joined back into the RAID group well before another drive in that RAID group failed.
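
To give a feel for why that is, here is a rough, hypothetical estimate of the chance that a second drive in the RAID group fails during the rebuild window. It assumes exponentially distributed failure times and a 24-hour replacement-and-rebuild window; neither assumption comes from the post above.

import math

component_mtbf_hours = 750_000
rebuild_window_hours = 24          # assumed replacement + rebuild time
remaining_drives = 2               # the other two drives in a 3-drive RAID group

failure_rate = 1 / component_mtbf_hours                      # per drive, per hour
p_drive_survives_rebuild = math.exp(-failure_rate * rebuild_window_hours)
p_second_failure = 1 - p_drive_survives_rebuild ** remaining_drives

print(f"{p_second_failure:.6%}")   # roughly 0.006%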
