Discrimination Test Settings - Pd and d' Analysis

Purpose 

Establish the power of a discrimination test for a given sample size, or calculate the sample size required to achieve a desired power.

This can be done by specifying the expected difference either as a proportion of discriminators (Pd) or as a Thurstonian difference (d’).

Data Format

  1. Discrimination.xlsx
  2. Results of the discrimination test are binary (1 = correct answer, 0 = incorrect answer)
  3. At present this analysis requires an uploaded dataset, although the data are not used in the calculation.

Background 

The power of a test is defined as the probability that the test correctly rejects the null hypothesis when the alternative hypothesis is true. For a difference test, the power is the probability that a difference will be detected, given there is a difference. For a similarity test, the power is the probability that similarity will be concluded, given there is no difference.

Before a test is run, it should be checked that it is adequately powered. The Discrimination Test Settings modules allow the user to calculate the power of a test, or to find the sample size required for a desired power. The Pd option allows the user to specify test parameters in terms of the proportion of discriminators (Pd). The d’ (d-prime) option allows the user to specify test parameters in terms of the Thurstonian difference.

Discrimination tests can be analysed using one of two models; a sketch converting between the two scales follows the list below.
  1. The Guessing model is panellist oriented and estimates the proportion of panellists who can detect a difference between the products (i.e. the proportion of discriminators, Pd).
  2. The Thurstonian model is product oriented. It estimates the difference between the products through a signal-to-noise ratio (d’). The value of d’ is specific to each test protocol.
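
To make the link between the two scales concrete, the sketch below uses the rescale() function from the sensR package (the package the analysis is built on) to convert between Pd and d’ for a given protocol. The values Pd = 0.30 and d’ = 1.5 and the choice of the triangle protocol are purely illustrative, and the call is not necessarily the one made internally by the module.

    ## Illustrative conversion between the guessing-model scale (Pd) and the
    ## Thurstonian scale (d'); values are examples only.
    library(sensR)

    ## From a proportion of discriminators to d' (triangle protocol)
    rescale(pd = 0.30, method = "triangle")

    ## From a Thurstonian difference back to Pd
    rescale(d.prime = 1.5, method = "triangle")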

The power of the test depends on several parameters; a worked sketch follows the list below.

  1. Protocol: Type of discrimination test (2-AFC, 3-AFC, Duo-Trio, Triangle or Tetrad).
  2. Type of Test: Similarity or difference test.
  3. Pd or d’: The size of the difference between the products, measured either as the proportion of discriminators (Pd) or as the Thurstonian difference (d-prime, d’).
  4. Significance: Type I error risk (alpha) – the chance that the null is rejected if the null is true. For a difference test this is the risk of claiming a difference when there isn't one. For a similarity test this is the risk of claiming equivalence when the products are different.
  5. Sample Size: The total number of tests in the study.
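
The sketch below shows how a power calculation with these parameters can be written with sensR's discrimPwr() (Pd scale) and d.primePwr() (d’ scale). The parameter values (triangle protocol, Pd = 0.30 or d’ = 1.5, alpha = 0.05, 60 tests) are illustrative only, and the calls are not necessarily identical to those made by the module.

    ## Illustrative power calculations for a difference test
    library(sensR)

    ## Guessing-model specification: triangle protocol (pGuess = 1/3),
    ## Pd = 0.30, alpha = 0.05, 60 tests, exact binomial statistic.
    discrimPwr(pdA = 0.30, sample.size = 60, alpha = 0.05,
               pGuess = 1/3, test = "difference", statistic = "exact")

    ## Thurstonian specification: the same test expressed as d' = 1.5.
    d.primePwr(d.primeA = 1.5, sample.size = 60, alpha = 0.05,
               method = "triangle", test = "difference")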

Similarly, the sample size required for a chosen power depends on the same parameters, together with the power itself; a sample-size sketch follows the list below.

  1. Power: The probability (a value between 0 and 1) that the test rejects the null hypothesis when the alternative hypothesis is true. This is equivalent to 1 – Beta, where Beta is the Type II error risk – the chance of accepting the null hypothesis when the alternative is true. For example, if Beta = 0.2 then Power = 0.8.
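
A corresponding sample-size calculation can be sketched with sensR's discrimSS() and d.primeSS(). The target power of 0.80, the alpha of 0.05 and the effect sizes below are illustrative values, not defaults of the module.

    ## Illustrative sample-size calculations for a difference test
    library(sensR)

    ## Smallest N reaching 80% power: triangle protocol, Pd = 0.30, alpha = 0.05
    discrimSS(pdA = 0.30, target.power = 0.80, alpha = 0.05,
              pGuess = 1/3, test = "difference", statistic = "exact")

    ## Equivalent specification on the d' scale
    d.primeSS(d.primeA = 1.5, target.power = 0.80, alpha = 0.05,
              method = "triangle", test = "difference")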

Options

  1. Protocol: Type of discrimination test (2-AFC, 3-AFC, Duo-Trio, Triangle or Tetrad)
  2. Type of test: Similarity or difference test.
  3. Prop of Discriminators (Pd): Proportion of panellists who can detect a difference between the products.
    Or
  4. D-prime (d’): The estimated difference between products.
  5. Estimation: Calculate the required sample size for a given power (N), or calculate the power achieved for a specified sample size (Power).
  6. Power: (If Estimation = N) The required power as a proportion.
  7. Total Number: (If Estimation = Power) The specified sample size.
  8. Alpha: Type I error or false positive rate.
  9. Number of Decimals: Required number of decimals for values given in the results. 

Results and Interpretation for Sample Size Calculation 

  1. N Exact: The smallest sample size that achieves the specified power for the test.
  2. N Stable: Power is not a monotonic function of sample size (it is not strictly increasing), so some sample sizes slightly larger than N Exact may have less than the specified power. N Stable is therefore the sample size for which no larger sample size has a power less than that specified, as illustrated in the sketch below.
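
The distinction can be seen by scanning the exact power over a range of sample sizes. The sketch below does this with sensR's discrimPwr() for illustrative settings (triangle protocol, Pd = 0.30, alpha = 0.05, target power 0.80); it mirrors the idea behind N Exact and N Stable rather than reproducing the module's internal algorithm.

    ## Power is not monotonic in N, so N Exact and N Stable can differ.
    library(sensR)

    target <- 0.80
    n.seq  <- 20:120
    pwr    <- sapply(n.seq, function(n)
      discrimPwr(pdA = 0.30, sample.size = n, alpha = 0.05,
                 pGuess = 1/3, test = "difference", statistic = "exact"))

    n.exact  <- n.seq[min(which(pwr >= target))]     # first N reaching the target
    n.stable <- n.seq[max(which(pwr < target)) + 1]  # all larger N stay above it
    c(N.exact = n.exact, N.stable = n.stable)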

Results and Interpretation for Power Calculation 

  1. Prob Guess: The probability of a correct guess. This is a function of the test protocol selected and is not dependent on other parameters.
  2. Power: The power of the test for the specified sample size, protocol, type of test, alpha and proportion of discriminators (Pd) or d-prime (d’).
  3. Min Correct: If the test statistic is more extreme than the critical value of a hypothesis test, the null hypothesis is rejected.
For a difference test, the null hypothesis is rejected if the number of correct tests is greater than or equal to the minimum correct (for the given alpha).
For a similarity test, the null hypothesis is rejected if the number of correct tests is less than or equal to the minimum correct (for the given alpha).
The minimum correct is therefore the threshold number of correct responses at which the null is rejected – a minimum for a difference test and a maximum for a similarity test – as shown in the sketch below.
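
sensR's findcr() returns this critical number of correct responses for a one-tailed binomial test. The sketch below uses illustrative settings (60 triangle tests, alpha = 0.05, and a similarity limit of Pd = 0.30 for the similarity case); it is an example of the calculation, not necessarily the exact call made by the module.

    ## Illustrative critical values ("Min Correct") for 60 triangle tests
    library(sensR)

    ## Difference test: reject the null when the number correct is at or
    ## above the returned value.
    findcr(sample.size = 60, alpha = 0.05, p0 = 1/3, test = "difference")

    ## Similarity test (similarity limit pd0 = 0.30): reject the null when
    ## the number correct is at or below the returned value.
    findcr(sample.size = 60, alpha = 0.05, p0 = 1/3, pd0 = 0.30,
           test = "similarity")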

Technical Information 

  1. The R package sensR (Rune Christensen and Per B. Brockhoff) is used.
  2. The power calculation is done using the ‘exact’ binomial calculation, sketched below.
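
As an illustration of what the ‘exact’ calculation involves, the sketch below writes out the exact binomial power of a difference test in base R. The parameters (triangle protocol, 60 tests, Pd = 0.30, alpha = 0.05) are illustrative, and the code mirrors, rather than reproduces, the sensR implementation.

    ## Exact binomial power of a difference test, written out in base R
    n      <- 60      # number of tests
    alpha  <- 0.05    # type I error risk
    pGuess <- 1/3     # guessing probability (triangle)
    pd     <- 0.30    # assumed proportion of discriminators
    pAlt   <- pGuess + pd * (1 - pGuess)   # P(correct) under the alternative

    ## Minimum correct: smallest x with P(X >= x | guessing) <= alpha
    x.all <- 0:n
    min.correct <- min(x.all[1 - pbinom(x.all - 1, n, pGuess) <= alpha])

    ## Power: probability of at least min.correct correct under the alternative
    power <- 1 - pbinom(min.correct - 1, n, pAlt)
    c(min.correct = min.correct, power = power)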

References 

  1. Christensen, R.H.B. and Brockhoff, P.B. (2014). Package ‘sensR’. http://cran.r-project.org/web/packages/sensR/sensR.pdf

 

