Coefficient of Variation (CV)

Sources: Wikipedia; UsableStats

 

Wikipedia

 

In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is often expressed as a percentage, and is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$ (or its absolute value, $|\mu|$). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R. In addition, CV is utilized by economists and investors in economic models.

The coefficient of variation (CV) is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$:

$c_{v} = \dfrac{\sigma}{\mu}.$[1]

It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on a ratio scale, that is, scales that have a meaningful zero and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit) are interval scales with arbitrary zeros, so the computed coefficient of variation would differ depending on which scale is used. The Kelvin scale, on the other hand, has a meaningful zero, the complete absence of thermal energy, and is thus a ratio scale. In plain language, it is meaningful to say that 20 kelvin is twice as hot as 10 kelvin, but only on a scale with a true absolute zero. While a standard deviation (SD) can be measured in kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.
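As a quick illustration of the definition above, here is a minimal Python sketch (using NumPy; the replicate values are made up for illustration) that computes the CV as the ratio of the sample standard deviation to the mean:

```python
import numpy as np

def coefficient_of_variation(x):
    """Ratio of the standard deviation to the mean (sample SD, ddof=1)."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    if mean == 0:
        raise ValueError("CV is undefined when the mean is zero")
    return x.std(ddof=1) / mean

# Illustrative assay replicates measured on a ratio scale
replicates = [4.8, 5.1, 5.0, 4.9, 5.2]
print(coefficient_of_variation(replicates))        # dimensionless ratio
print(100 * coefficient_of_variation(replicates))  # expressed as a percentage
```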

Measurements that are log-normally distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements.

A more robust possibility is the quartile coefficient of dispersion: half the interquartile range, $(Q_{3}-Q_{1})/2$, divided by the average of the quartiles (the midhinge), $(Q_{1}+Q_{3})/2$.
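A sketch of the quartile coefficient of dispersion just described, assuming NumPy's default (linear-interpolation) quantile convention; other quartile conventions give slightly different values:

```python
import numpy as np

def quartile_coefficient_of_dispersion(x):
    """Half the interquartile range divided by the midhinge: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

print(quartile_coefficient_of_dispersion([1, 5, 6, 8, 10, 40, 65, 88]))
```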

In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a maximum-likelihood estimation approach.[3]

Examples

A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and average is 100, giving the coefficient of variation as

0 / 100 = 0

A data set of [90, 100, 110] has more variability. Its standard deviation is 10 and its average is 100, giving the coefficient of variation as

10 / 100 = 0.1

A data set of [1, 5, 6, 8, 10, 40, 65, 88] has still more variability. Its standard deviation is 32.9 and its average is 27.9, giving a coefficient of variation of

32.9 / 27.9 = 1.18
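The three worked examples above can be reproduced in a few lines of Python; the sample standard deviation (ddof=1) matches the figures quoted:

```python
import numpy as np

for data in ([100, 100, 100], [90, 100, 110], [1, 5, 6, 8, 10, 40, 65, 88]):
    x = np.asarray(data, dtype=float)
    cv = x.std(ddof=1) / x.mean()
    print(data, "->", round(cv, 2))   # 0.0, 0.1, 1.18
```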

Examples of misuse

Comparing coefficients of variation between variables measured on relative (interval) scales can produce differences that are not real. Consider the same set of temperatures expressed in Celsius and Fahrenheit (both interval scales, whose corresponding absolute scales are kelvin and Rankine):

Celsius: [0, 10, 20, 30, 40]

Fahrenheit: [32, 50, 68, 86, 104]

The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%.

If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that the division is by a value on a relative (interval) scale rather than an absolute (ratio) scale.

Comparing the same data set, now in absolute units:

Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]

Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]

The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%.
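The effect can be reproduced directly; the sketch below (NumPy, sample standard deviation) shows the CVs disagreeing on the interval scales and agreeing on the absolute scales:

```python
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

celsius    = [0, 10, 20, 30, 40]
fahrenheit = [32, 50, 68, 86, 104]
kelvin     = [c + 273.15 for c in celsius]      # same temperatures, absolute scale
rankine    = [f + 459.67 for f in fahrenheit]   # same temperatures, absolute scale

print(cv(celsius), cv(fahrenheit))  # ~0.79 vs ~0.42: same temperatures, different CVs
print(cv(kelvin), cv(rankine))      # both ~0.0539: the absolute scales agree
```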

Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable $X$, the coefficient of variation of $aX+b$ is equal to the coefficient of variation of $X$ only when $b=0$. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form $ax+b$ with $b\neq 0$, whereas kelvins can be converted to Rankines through a transformation of the form $ax$.
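To spell out the reasoning, a short derivation (using only how the mean and standard deviation behave under a linear transformation, and assuming $a>0$) shows why a nonzero offset $b$ breaks the invariance:

```latex
% For Y = aX + b with a > 0:
%   E[Y] = a\mu + b,   \sigma_Y = a\sigma,
% so the coefficient of variation of Y is
\[
  c_v(aX+b) \;=\; \frac{a\sigma}{a\mu + b},
\]
% which equals c_v(X) = \sigma/\mu exactly when b = 0.
```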

Estimation

When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation $s$ to the sample mean $\bar{x}$:

$\widehat{c_{v}} = \dfrac{s}{\bar{x}}$

But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator[4] for a sample of size n is:

$\widehat{c_{v}}^{\,*} = \left(1 + \dfrac{1}{4n}\right)\widehat{c_{v}}$
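A minimal sketch of both estimators, assuming (as stated above) normally distributed data:

```python
import numpy as np

def cv_estimates(x):
    """Return the plain sample CV and the bias-corrected version (1 + 1/(4n)) * cv."""
    x = np.asarray(x, dtype=float)
    n = x.size
    cv_hat = x.std(ddof=1) / x.mean()
    return cv_hat, (1 + 1 / (4 * n)) * cv_hat

print(cv_estimates([90, 100, 110]))   # the correction matters most for small n
```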

Log-normal data

In many applications, it can be assumed that data are log-normally distributed (evidenced by the presence of skewness in the sampled data).[5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution,[6][7][8] is defined as:

$\widehat{cv}_{\mathrm{raw}} = \sqrt{\mathrm{e}^{s_{\ln}^{2}} - 1}$

where $s_{\ln}$ is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, $b$, their standard deviation $s_{b}$ is converted to base e using $s_{\ln} = s_{b}\ln(b)$, and the formula for $\widehat{cv}_{\mathrm{raw}}$ remains the same.[9]) This estimate is sometimes referred to as the "geometric CV"[10][11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood[12] as:

$\mathrm{GCV_{K}} = \mathrm{e}^{s_{\ln}} - 1$

This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of $c_{v}$ itself.
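A sketch of the two quantities just described, the log-normal-based estimate $\widehat{cv}_{\mathrm{raw}}$ and Kirkwood's $\mathrm{GCV_{K}}$, both computed from the standard deviation of the log-transformed data (the sample values are illustrative):

```python
import numpy as np

def lognormal_cv_estimates(x):
    """CV estimates for positive, approximately log-normally distributed data."""
    s_ln = np.log(np.asarray(x, dtype=float)).std(ddof=1)  # SD after natural-log transform
    cv_raw = np.sqrt(np.exp(s_ln**2) - 1)                  # log-normal-based CV estimate
    gcv_k = np.exp(s_ln) - 1                               # Kirkwood's "geometric CV"
    return cv_raw, gcv_k

# Illustrative positive, right-skewed data
print(lognormal_cv_estimates([1.2, 2.5, 3.1, 4.8, 9.7, 15.2]))
```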

For many practical purposes (such as sample size determination and calculation of confidence intervals) it is $s_{\ln}$ which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of $c_{v}$ or GCV by inverting the corresponding formula.

Comparison to standard deviation

Advantages

The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.

Disadvantages

  • When the mean value is close to zero, the coefficient of variation will approach infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale.
  • Unlike the standard deviation, it cannot be used directly to construct confidence intervals for the mean.
  • CVs are not an ideal index of the certainty of measurement when the number of replicates varies across samples because CV is invariant to the number of replicates while the certainty of the mean improves with increasing replicates. In this case, standard error in percent is suggested to be superior.[13]

Applications

The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV.

In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the root mean square deviation (RMSD).

While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.
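The claim that an exponential distribution has CV equal to 1 is easy to check by simulation; a sketch using NumPy's random sampling (the rate parameter here is arbitrary), which also prints the squared coefficient of variation (SCV):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=3.0, size=100_000)   # mean = SD = 3 for an exponential

cv = x.std(ddof=1) / x.mean()
print(cv)      # close to 1
print(cv**2)   # squared coefficient of variation (SCV), also close to 1
```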

In actuarial science, the CV is known as unitized risk.[14]

In industrial solids processing, CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached.[15]

Laboratory measures of intra-assay and inter-assay CVs

CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.[16] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples; in this case standard error in percent is suggested to be superior.[13] If measurements do not have a natural zero point, then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended.[17]

As a measure of economic inequality

The coefficient of variation fulfills the requirements for a measure of economic inequality.[18][19][20] If $x$ (with entries $x_{i}$) is a list of the values of an economic indicator (e.g. wealth), with $x_{i}$ being the wealth of agent $i$, then the following requirements are met:

  • Anonymity – $c_v$ is independent of the ordering of the list $x$. This follows from the fact that the variance and mean are independent of the ordering of $x$.
  • Scale invariance – $c_v(x) = c_v(\alpha x)$ where $\alpha$ is a real number.[20]
  • Population independence – If $\{x, x\}$ is the list $x$ appended to itself, then $c_v(\{x, x\}) = c_v(x)$. This follows from the fact that the variance and mean both obey this principle.
  • Pigou–Dalton transfer principle – when wealth is transferred from a wealthier agent $i$ to a poorer agent $j$ (i.e. $x_i > x_j$) without altering their rank, then $c_v$ decreases, and vice versa.[20]

$c_v$ assumes its minimum value of zero for complete equality (all $x_i$ are equal).[20] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to lie within a fixed range (unlike, e.g., the Gini coefficient, which is constrained to be between 0 and 1).[20] It is, however, more mathematically tractable than the Gini coefficient.
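Two of the listed properties, scale invariance and population independence, can be checked numerically; a sketch with an illustrative wealth vector (the population standard deviation, ddof=0, is used so that population independence holds exactly):

```python
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()   # population SD (ddof=0)

wealth = np.array([1.0, 2.0, 4.0, 8.0, 20.0])

print(np.isclose(cv(wealth), cv(3.7 * wealth)))                       # scale invariance
print(np.isclose(cv(wealth), cv(np.concatenate([wealth, wealth]))))   # population independence
```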

Distribution

Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size $n$ has been shown by Hendricks and Robey to be[21]

$$\mathrm{d}F_{c_{v}} = \frac{2}{\pi^{1/2}\,\Gamma\!\left(\frac{n-1}{2}\right)}\, \exp\!\left[-\frac{n}{2\left(\frac{\sigma}{\mu}\right)^{2}}\,\frac{c_{v}^{2}}{1+c_{v}^{2}}\right] \frac{c_{v}^{\,n-2}}{(1+c_{v}^{2})^{n/2}} \;\sideset{}{'}\sum_{i=0}^{n-1} \frac{(n-1)!\,\Gamma\!\left(\frac{n-i}{2}\right)}{(n-1-i)!\,i!} \frac{n^{i/2}}{2^{i/2}\left(\frac{\sigma}{\mu}\right)^{i}} \frac{1}{(1+c_{v}^{2})^{i/2}} \,\mathrm{d}c_{v},$$

where the prime on the summation symbol indicates that the sum runs over only even values of $n-1-i$, i.e., if $n$ is odd, sum over even values of $i$, and if $n$ is even, sum only over odd values of $i$.

This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation.[22][23][24][25][26][27]

Alternative

According to Liu (2012),[28] Lehmann (1986)[29] "also derived the sample distribution of CV in order to give an exact method for the construction of a confidence interval for CV"; it is based on a non-central t-distribution.

Similar ratios

Standardized moments are similar ratios, $\mu_{k}/\sigma^{k}$, where $\mu_{k}$ is the $k$th moment about the mean; these are also dimensionless and scale invariant. The variance-to-mean ratio, $\sigma^{2}/\mu$, is another similar ratio, but it is not dimensionless and hence not scale invariant. See Normalization (statistics) for further ratios.
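A small sketch of the related ratios mentioned here, computing the third standardized moment (skewness) and the variance-to-mean ratio for an illustrative sample:

```python
import numpy as np

x = np.array([1.0, 5.0, 6.0, 8.0, 10.0, 40.0, 65.0, 88.0])

mu = x.mean()
sigma = x.std()                                # population SD
third_central_moment = ((x - mu) ** 3).mean()  # mu_3, third moment about the mean

skewness = third_central_moment / sigma**3     # standardized third moment, dimensionless
vmr = x.var() / mu                             # variance-to-mean ratio, carries units
print(skewness, vmr)
```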

In signal processing, particularly image processing, the reciprocal ratio $\mu/\sigma$ (or its square) is referred to as the signal-to-noise ratio in general and the signal-to-noise ratio (imaging) in particular.


Usable Statistics

 

Fundamentals of Statistics 1: Basic Concepts :: The Standard Deviation and Coefficient of Variation

Properties of the Standard Deviation

In terms of measuring the variability or spread of data, we've seen that the standard deviation is the preferred and most commonly used measure.

Some additional things to keep in mind about the standard deviation:

  1. The standard deviation is the typical or average distance a value is from the mean.
  2. If all values are the same, then the standard deviation is 0.
  3. The standard deviation is heavily influenced by outliers, just like the mean (it uses the mean in its calculation).
  4. The sample standard deviation is denoted with the letter s, and the population standard deviation is denoted with the lower-case Greek letter sigma, σ.

If your data are more spread out (have more variability), then you will have a higher standard deviation. It's often difficult to interpret a standard deviation in isolation, since it depends on the scale of the data: is a standard deviation of 12 high, or is one of 0.20 high?

Coefficient of Variation (CV)

If you know nothing about the data other than the mean, one way to interpret the relative magnitude of the standard deviation is to divide it by the mean. This is called the coefficient of variation. For example, if the mean is 80 and the standard deviation is 12, the CV = 12/80 = 0.15, or 15%.

If the standard deviation is 0.20 and the mean is 0.50, then the CV = 0.20/0.50 = 0.4, or 40%. So, knowing nothing else about the data, the CV helps us see that a lower standard deviation doesn't necessarily mean less variable data.
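The two quick examples above reduce to a one-line computation; a sketch:

```python
# Mean/SD pairs from the two examples above
for mean, sd in [(80, 12), (0.50, 0.20)]:
    print(f"mean={mean}, sd={sd}, cv={sd / mean:.0%}")   # 15% and 40%
```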

I’ve found the CV to be an underused metric considering it is so simple to compute and helps a lot with understanding relative variability.
