Sigma Level Calculator

Use this sigma calculator to easily calculate process sigma level, defects per million opportunities (DPMO, PPM), yield, rolled throughput yield (RTY), percent defects, percent defect units, as well as defects per million units (DPM). Various entry combinations are possible, but for full output enter defects, units, and defect opportunities per unit. The calculator can also solve for the number of samples required to control process quality to a given standard.


Quick navigation:

  1. Using the Sigma Level Calculator
  2. What is Six Sigma in process control?
  3. Yield vs. RTY, DPMO vs DPM
  4. Formulas
  5. Sigma shift, long-term vs. short-term sigma?
  6. Sample size for process control

Using the Sigma Level Calculator

This sigma calculator can be used to estimate the sigma level of a process (of producing units or delivering a service) based on the ratio of defects it results in. Depending on the input, the output consists of:

  • sigma level of the process (shows how well it is controlled relative to acceptance standard)
  • yield in terms of opportunities which did not result in a defect (standard yield)
  • yield in terms of acceptable products or services delivered (rolled throughput yield, RTY)
  • percentage of defects from total opportunities of the process to produce a defect
  • defects per million opportunities (DPMO, a.k.a. PPM)
  • percentage of defect units from the total production
  • defects per million units (DPM)

The minimum required input is DPMO, in which case the six sigma calculator outputs the corresponding sigma level, standard yield, and percent defects. Entering the number of defects and the total opportunities for a defect to occur outputs the control level (sigma), yield, percent defects, and DPMO. Entering defects, the number of units, and the number of opportunities per unit (the number of specifications that need to be controlled for quality for each unit) produces the full output of the calculator, adding DPM, the percentage of defect units, and rolled throughput yield to the outputs listed so far.
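The logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the calculator's actual source code; it assumes no sigma shift and uses the two-sided normal tail, so 2700 DPMO corresponds to a sigma level of 3, matching the tables in this article.

```python
from statistics import NormalDist

def process_metrics(defects, units, opportunities_per_unit):
    """Sigma level, yields, and defect rates from the three basic inputs.

    Illustrative only: assumes no sigma shift and uses the two-sided
    normal tail, so 2700 DPMO corresponds to a sigma level of 3.
    """
    total_opportunities = units * opportunities_per_unit
    dpmo = defects * 1_000_000 / total_opportunities
    yield_pct = 100 * (1 - dpmo / 1_000_000)           # standard yield
    # Sigma level s solves P(|Z| > s) = DPMO / 1e6 for a standard normal Z
    sigma = NormalDist().inv_cdf(1 - dpmo / 2_000_000)
    # Rolled throughput yield: every opportunity on a unit must be defect-free
    rty = (1 - dpmo / 1_000_000) ** opportunities_per_unit
    dpm = (1 - rty) * 1_000_000                        # defect units per million
    return {"DPMO": dpmo, "sigma": sigma, "yield %": yield_pct,
            "RTY %": 100 * rty, "DPM": dpm}

print(process_metrics(defects=27, units=1000, opportunities_per_unit=10))
```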

What is Six Sigma in process control?

In industrial control of production quality and in project management in general where a process of any kind needs to be controlled for quality, the quality is assured by taking measurements on samples from the output of the process and comparing them to a specification. A process is usually controlled for several specifications. For example, a production line for steel sheets coated with Polyvinyl chloride may control the width, length, and thickness of the sheets, as well as the thickness, color, and uniformity of the PVC coating. A service desk may monitor performance of servicing customers by checking the length of interactions, the number of interactions required to resolve an issue, and customer feedback.

All processes exhibit variability over time and all measurements taken on samples of the process output are subject to additional variability simply due to the fact of sampling. A process controlled at a level of six sigma (6σ) is a process whose variability is controlled in such a manner that it produces an out-of-specification output (defect) twice in 1 billion opportunities[1]. A process which produces more defects per million opportunities will have a lower sigma level, signifying that it results either in more waste, if defects are captured before they reach the consumer, or in more poorly serviced customers, making it more expensive to produce a given number of outputs which are up to specification. This can be visualized by comparing the specifications to the process variability as shown on the six-sigma chart below.

[Chart: specification limits of a six sigma process compared to the variability of its output]

As you can see, the probability that a process controlled at the six sigma level will result in defects is minuscule, even at large production volumes. However, not all processes are designed with this level of quality assurance. A common minimum standard for industrial production is three sigma.

Six sigma is also the name of a set of techniques and tools for process improvement introduced by engineer Bill Smith while working at Motorola and later popularized by General Electric, which claimed billions of dollars in savings from applying Six Sigma under Jack Welch in the final decade of the 20th century. According to Smith[1], a process can achieve a particular sigma level either by reducing its variability or by changing the specifications so they allow larger variability in the output.

There is a direct relationship between the sigma level of a process and the number of defects it results in, which are usually expressed either as defects per million opportunities or as percent defects, as shown on the table below:

DPMO, yield, and percent defects by sigma level

Sigma level   DPMO        Yield          Percent defects
3             2700.0000   99.73%         0.27%
4             63.3700     99.9937%       0.0063%
4.645         3.4000      99.99966%      0.00034%
5             0.5742      99.99994258%   0.00005742%
6             0.0020      99.9999998%    0.0000002%

4.645 is given since it is the level that was equated to six sigma in Smith's original work.
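The correspondence in the table can be checked numerically. A minimal Python sketch, assuming the two-sided tail of a standard normal distribution and no sigma shift:

```python
from statistics import NormalDist

def dpmo_from_sigma(sigma_level):
    """Defects per million opportunities for a given sigma level
    (two-sided normal tail, no sigma shift)."""
    return 2 * (1 - NormalDist().cdf(sigma_level)) * 1_000_000

for s in (3, 4, 4.645, 5, 6):
    print(f"{s} sigma -> {dpmo_from_sigma(s):.4f} DPMO")
```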

Yield vs. RTY, DPMO vs DPM

The sigma level calculator outputs the standard yield: the percentage of opportunities which did not produce a defect out of the total opportunities present, its complementary value, percent defects, as well as defects per million opportunities. These values are important for understanding the overall rate of success of the process. However, when a unit of output has multiple opportunities to result in a defect, either because a product is assembled from multiple parts, because a service or process consists of several separate tasks or steps, and/or because it is measured by multiple measurements, measures like defects per million units (DPM) and rolled throughput yield (unit yield) become more important.

Let us examine how these are related and why both are important in process control. If there is just one parameter to control for each unit produced, then DPMO and DPM are the same, as are yield and RTY. However, as the number of parts in the product or steps in the process increases, the difference between them grows geometrically. In the examples provided by Smith[1] we can see how, in a manufacturing process in which each part is produced to a six sigma standard, the yield in terms of units without any defects is virtually 100% for units consisting of 1-10 parts, yet it can go down to 90.3% if the number of steps in the process is 30,000, each with 3.4 defects per million opportunities. If the control level is more relaxed, for example 3-sigma, producing a unit with 100 parts would already result in nearly 24% scrap, and a unit with a few hundred parts in roughly 50% scrap.

Rolled throughput yield (unit yield) by sigma level

Process complexity (parts/unit)   Yield with 3σ*   Yield with 4σ*   Yield with 4.645σ*   Yield with 6σ*
1                                 99.73%           99.99%           100.00%              100.00%
10                                97.33%           99.94%           100.00%              100.00%
100                               76.31%           99.37%           99.97%               100.00%
1,000                             6.70%            93.86%           99.66%               100.00%
10,000                            0.00%            53.06%           96.66%               100.00%
20,000                            0.00%            28.16%           93.43%               100.00%
50,000                            0.00%            4.21%            84.37%               99.99%
100,000                           0.00%            0.18%            71.18%               99.98%

* the specified sigma level is for the production process of each individual part. Note that 4.645σ corresponds to what is labeled as 6σ in Smith's original work, where he uses offsets for mean shift. 3σ = 2700.00 DPMO; 4σ = 63.37 DPMO; 4.645σ = 3.40 DPMO; 6σ = 0.002 DPMO.

The above table lists the relationship between the sigma level for the process of each individual part and the resulting rolled throughput yields for manufacturing of units consisting of a different number of parts. The same logic applies to multi-step processes of any kind. Calculations for any number of parts and any level of sigma can be performed using this sigma level calculator, as long as the sigma level is the same for each part. If different levels apply to different parts, use the RTY formula specified below to perform the calculations.
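The cells of the table can be reproduced with a short Python sketch (the function name is ours, chosen for illustration):

```python
def rolled_throughput_yield(dpmo, parts):
    """Probability that a unit of `parts` parts has no defect at all,
    when each part is produced at the given DPMO."""
    return (1 - dpmo / 1_000_000) ** parts

# Reproduce a few cells of the table above (3 sigma = 2700 DPMO, 4 sigma = 63.37 DPMO)
for parts in (1, 10, 100, 1_000, 10_000):
    print(f"{parts:>6} parts: "
          f"{100 * rolled_throughput_yield(2700, parts):6.2f}% at 3 sigma, "
          f"{100 * rolled_throughput_yield(63.37, parts):6.2f}% at 4 sigma")
```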

Formulas

Below we present some of the key formulas used in the calculator, with short explanations.

DPMO formula

The equation for calculating defects per million opportunities is fairly straightforward: we take the number of defects, multiply by 1 million, then divide by the total opportunities, which is itself the product of the number of units and the number of defect opportunities per unit. Note that DPMO is often also written as PPM (parts per million), as it was in the original Bill Smith paper.

DPMO = (defects × 1,000,000) / (units × defect opportunities per unit)

As discussed above, DPMO is more useful when looking at a single process in isolation. When it is part of a multi-step or multi-part process, the defects per million units measure and its complementary, the rolled throughput yield, become relevant.
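As a minimal sketch of the DPMO formula in Python (illustrative, not the calculator's source code):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities (a.k.a. PPM)."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

print(dpmo(27, 1000, 10))  # -> 2700.0
```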

RTY formula

The equation for rolled throughput yield is given below:

RTY = Y1 · Y2 · ... · Yn, where Yi is the yield of step i. For n steps of equal yield this reduces to RTY = (1 − DPMO / 1,000,000)^n.

Following the Law of propagation of error, noted in the process control literature at least as early as Shewhart's key work in 1930 "Economic Control Of Quality Of Manufactured Product"[2], the combined error of a series of processes, each with a particular yield, is the product of the individual yield rates. Consequently, the rate of defect units is 1 minus the RTY.
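For steps with unequal yields, the product can be computed directly. A small Python sketch (the step yields below are chosen purely for illustration):

```python
from math import prod

def rolled_throughput_yield(step_yields):
    """RTY of a multi-step process: the product of the individual step yields."""
    return prod(step_yields)

# Three steps with unequal yields (illustrative numbers)
steps = [0.9973, 0.99994, 0.98]
print(f"RTY = {rolled_throughput_yield(steps):.4f}, "
      f"defect unit rate = {1 - rolled_throughput_yield(steps):.4f}")
```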

Sigma shift, long-term vs. short-term sigma?

The so-called sigma shift was originally employed[1] to account for batch-to-batch variability of the true mean of the manufactured product characteristic (width, length, thickness, diameter, etc.). Smith reported that a shift in the mean by as much as 1.5σ was observed in Motorola's manufacturing processes. From there, he resorted to adjusting reported sigma levels by shifting them by exactly 1.5 sigma, effectively reporting a 4.645σ process as having a sigma level of six. However, it seems that Smith confused the observed changes in the mean (subject to natural variation) with the actual changes in the mean (unknown). He did not report any confidence intervals or other uncertainty measures which would help us ascertain the uncertainty of his estimated mean shifts, suggesting that this might indeed be the source of the confusion.

From this initial confusion seemingly stem the notions of short-term versus long-term sigma: one could have a short-term process exhibiting the characteristics of a 4.645σ process, but it would be classified as a long-term 6σ process, on the grounds that one is allowing for short-term misconfiguration of the machinery of exactly 1.5σ. This, however, has no basis in reality. Observed changes in the mean are not true changes in the mean, and there is also no reason to take 1.5 sigma as a standard shift, since any process will exhibit shifts in the mean from batch to batch and, equally importantly, from sample to sample. These effects cancel out after measuring a certain number of batches, and this is all accounted for in the calculation of the standard deviation of the process, and from there, its sigma level. Practically, this can be done by taking samples from more than one batch and weighting them equally, or by using a time-decay function if deterioration of manufacturing equipment is to be taken into account.

If in doing so one discovers that the measurements of the mean and standard deviation of the process during batch #1 estimate sigma at 4.645 when six sigma is required, then one cannot simply wave their hand and proclaim six sigma to be achieved, even though the available data suggests a much lower level (resulting in 1,700 times more defects!). One either has to find the reason for the observed mean shift, if any, or find a way to reduce variability until the target sigma is achieved. Similarly, if the first 10 batches of a product had an estimated sigma level of six and then batch #11 suddenly results in a sigma estimate of 4.645 (if taken alone), it may well be a sign that the machinery needs to be inspected and the production line fixed, not that everything is going as expected. If there is natural expected drift in the mean, one way or another, this has to be included in the sigma calculation. Ideally it will be detected before it has a significant adverse effect on quality, a fix will be applied, and the process will be brought back under control. Using a sigma shift instead simply misrepresents the actual imperfection of the process.

In short, using a 1.5 sigma shift is completely arbitrary, as the number has no basis in reality. Furthermore, applying any sigma shift to calculations regarding the yield and defect rate of a process will result in underreporting of the expected defect rate and overreporting of the expected yield. Therefore, the concept of a "Sigma Score" detached from the statistical sigma, the standard deviation, makes no sense at all.

Further discussion into the origins of the 1.5 sigma shift and the applicability of any shift whatsoever for process control or for estimation of long or short-term sigma can be found in this article on sigma shift as well as in reference 3.

Sample size for process control

Oftentimes in process control one needs to estimate the number of samples required to ensure that a process is performing up to specification. Standards are usually upheld by computing a confidence interval around the observed sample mean or, equivalently, through comparison with control charts. Since taking measurements or estimating compliance with a specification can be time-consuming, material-consuming, and even destructive, it is of utmost importance that quality control is assured with the minimum possible sample size.

In order to compute the sample size, one needs to have estimated the standard deviation σ of the characteristic of interest from past samples, needs to set a probability for the estimation procedure to contain the true value of the characteristic (customary values are 90%, 95%, 99%, but the exact value chosen depends on a trade-off between accuracy and cost of estimation), and needs to determine the maximum width of the interval which would satisfy the estimation task.

The latter is twice the margin of error E (the half-width of the interval) and is dubbed "maximum error" in the six sigma calculator interface.

n = (z · σ / E)², rounded up to the nearest whole number, where z is the standard normal critical value for the chosen confidence level, σ is the estimated standard deviation, and E is the margin of error.

The maximum error should certainly be less than the difference between the upper specification limit (USL) and the lower specification limit (LSL) to be of any practical use. For example, if the upper specification limit for the diameter of a rod is 10.2mm and the lower specification limit is 10.0mm, the maximum error of the estimation procedure cannot be more than 10.2 - 10.0 = 0.2. Usually it is set significantly lower in order to ensure adherence to production standards. If the standard deviation is estimated from previous measurements to be 0.05 (the specification width thus spanning 4σ), the maximum error can be set to 0.025 (a margin of error of 0.0125), meaning that 62 randomly selected samples will need to be measured to ensure compliance with the specification to within 0.025 with 95% confidence.

Of course, the above is just an example. You should follow the procedures for setting the maximum error (or margin of error = maximum error / 2) applicable to your particular case.
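The rod-diameter example above can be verified with a short Python sketch using the standard normal critical value (illustrative; your quality procedures may prescribe a different formula, e.g. a t-based one for small pilot samples):

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma, margin_of_error, confidence=0.95):
    """Smallest n for which the two-sided confidence interval around the
    sample mean has half-width no larger than margin_of_error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. ~1.96 for 95%
    return ceil((z * sigma / margin_of_error) ** 2)

# Rod-diameter example from the text: sigma = 0.05, margin of error = 0.0125
print(sample_size(0.05, 0.0125))  # -> 62
```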

References

[1] Smith, B. (1993) "Making War on Defects", IEEE Spectrum 30(9):43-50; DOI: 10.1109/6.275174

[2] Shewhart, W.A. (1930) "Economic Control Of Quality Of Manufactured Product", Bell Labs Technical Journal 9(2):364-389; DOI: 10.1002/j.1538-7305.1930.tb00373.x

[3] Akerman, T. (2018) "Where is the evidence for sigma shift?" [online] https://www.tamarindtreeconsulting.com/where-is-the-evidence-for-sigma-shift/ , accessed Jul 18, 2019

Cite this calculator & page

If you'd like to cite this online calculator resource and information as provided on the page, you can use the following citation:
Georgiev G.Z., "Six Sigma Calculator", [online] Available at: https://www.gigacalculator.com/calculators/six-sigma-dpmo-calculator.php [Accessed Date: 10 Dec, 2019].