To analyse the
results of a paired comparison test.
- Datatype for the attribute is pairedcomp.
A paired comparison test is a directional / specified test. It is used as a difference test with 2 products, to determine whether one product has more or less of an attribute than the other. It is also called a paired preference test if preference is the attribute of interest. It should be used in situations where there is no definitive ‘right’ answer.
The test is carried out as follows:
- Two blind-coded samples are presented to each assessor (typically a consumer).
- Assessors are asked to focus on a specified attribute or on preference (the latter for consumer testing).
- The presentation order is balanced across assessors.
- Under the null hypothesis, the chance of each product being selected is 0.5.
- A ‘no-preference’ option may be offered when running a consumer test. Assessors who pick the no-preference option can be removed, or split equally between the 2 products.
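The no-preference handling described above can be sketched as follows. This is a minimal illustration: the function name and the counts are hypothetical, not from the source.

```python
def handle_no_preference(n_a, n_b, n_none, action="split"):
    """Adjust selection counts for products A and B before analysis.

    action='delete' removes assessors who chose the no-preference option;
    action='split' divides them equally between the two products.
    """
    if action == "delete":
        return n_a, n_b
    if action == "split":
        return n_a + n_none / 2, n_b + n_none / 2
    raise ValueError("action must be 'delete' or 'split'")

# Illustrative counts: 48 chose A, 32 chose B, 10 had no preference.
print(handle_no_preference(48, 32, 10, "delete"))  # (48, 32)
print(handle_no_preference(48, 32, 10, "split"))   # (53.0, 37.0)
```

Note that splitting keeps the total number of tests unchanged, while deleting reduces it, which affects the binomial test that follows.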
The user should consider in advance, as part of the study design, whether their test is one-sided or two-sided.
Options
- No Choice: What action to take if assessors have selected a no-preference option. This can be ‘Delete’, which removes those assessors, or ‘Split’, which divides them equally between the products.
- Type of test: One-sided or two-sided. This depends on the context of the test and whether the comparison is directional (one-sided) or not (two-sided).
- Threshold: The significance level or Type I error for the comparison.
- Number of Decimals for Values: Required number
of decimals for values given in the results.
- Number of Decimals for P-Values: Required number
of decimals for any p-values given in the results.
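These options feed into an exact binomial test. A minimal self-contained sketch of that test, using only the standard library (the function name is illustrative, not the tool's API):

```python
from math import comb

def binomial_p_value(k, n, two_sided=True):
    """Exact binomial p-value for k selections of one product in n tests,
    under the null hypothesis that each product has probability 0.5."""
    # One-sided tail: P(X >= k) with p = 0.5.
    upper = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    # With p = 0.5 the distribution is symmetric, so the two-sided
    # p-value is double the one-sided tail (capped at 1).
    return min(1.0, 2 * upper) if two_sided else upper

# Example: 8 of 10 assessors picked the same product.
print(binomial_p_value(8, 10, two_sided=False))  # 0.0546875
print(binomial_p_value(8, 10))                   # 0.109375
```

The example shows why the one-sided/two-sided choice matters: the same data is borderline under a one-sided test but clearly non-significant at a 0.05 threshold under a two-sided one.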
Results and Interpretation
An n x n contingency table of results is returned. The cell totals are the number of times the product in the row was selected when compared with the product in the column. This can be considered as a combined set of 2 x 2 tables.
- One tab per Attribute. A binomial test is
carried out on the results of each comparison.
- The following totals are returned: the number of times each product was selected.
- N Total: total number of tests
- Minimum: The number of times a product would need to be selected over the other to detect a significant difference at the selected threshold.
- P-value: From the binomial test – the probability
of obtaining the result under the null hypothesis of no difference.
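The ‘Minimum’ value can be reproduced with a short search over the exact binomial tail. This is a self-contained sketch under the assumption of a two-sided test; the function name is illustrative.

```python
from math import comb

def min_selections(n, alpha=0.05, two_sided=True):
    """Smallest number of selections of one product, out of n tests,
    for the exact binomial test to be significant at level alpha."""
    for k in range(n // 2, n + 1):
        tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # P(X >= k)
        p = min(1.0, 2 * tail) if two_sided else tail
        if p < alpha:
            return k
    return None  # significance not attainable for very small n

# With only 10 assessors, 9 must pick the same product to reach
# significance at alpha = 0.05 (two-sided).
print(min_selections(10))  # 9
```

This also illustrates why small panels are insensitive: near-unanimity is required before a difference can be declared.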
If the p-value is less than the selected threshold (significance level) for the binomial test, conclude that the samples are significantly different. You may choose a higher level of confidence (a smaller threshold) to minimise your risk.
Note that if you do not conclude a difference, this does not imply that the samples are similar.
References
- ISO 5495:2005 Sensory Analysis – Methodology – Paired Comparison Test
- ASTM E2263-12 Standard Test Method for Paired Preference Test