# Paired Comparison Analysis

## Purpose

To analyse the results of a paired comparison test.

## Data Format

1. paired_comparison.xlsx
2. The datatype for the attribute is `pairedcomp`.

## Background

A paired comparison test is a directional / specified test. It is used as a difference test with 2 products, to determine if one product has more or less of an attribute than another product. It is also called a paired preference test if preference is the attribute of interest. It should be used in situations where there is no definitive ‘right’ answer.

The test is carried out as follows:

Two blind-coded samples are presented to each assessor (typically a consumer).

1. Assessors are asked to focus on a specified attribute, or on preference
(the latter for consumer testing)
2. The presentation order is balanced across assessors
3. The chance of each product being selected by guessing is 0.5
4. A ‘no-preference’ option may be offered when running a consumer test. Assessors who pick the no-preference option can be removed or split equally between the 2 products.
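The no-preference handling in step 4 can be sketched as follows (a minimal illustration in Python; the function and argument names are assumptions, not the tool's actual interface):

```python
def handle_no_preference(n_a, n_b, n_none, action="Delete"):
    """Adjust selection counts for two products according to the chosen
    handling of no-preference assessors (names are illustrative).

    action='Delete' drops those assessors entirely;
    action='Split' divides them equally between the two products.
    """
    if action == "Delete":
        return n_a, n_b
    if action == "Split":
        return n_a + n_none / 2, n_b + n_none / 2
    raise ValueError("action must be 'Delete' or 'Split'")
```

For example, with 30 selections for one product, 20 for the other and 10 no-preference responses, ‘Split’ tests 35 against 25, while ‘Delete’ tests 30 against 20 on a smaller total.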

The user should decide in advance, as part of the study design, whether the test is 1-sided or 2-sided.

## Options

1. No Choice: The action to take if assessors have selected a no-preference option: ‘Delete’ removes those assessors, while ‘Split’ divides them equally between the 2 products.
2. Type of test: One-sided or two-sided, depending on whether a direction of difference (or preference) was specified in advance.
3. Threshold: The significance level or type-I error for the comparison.
4. Number of Decimals for Values: Required number of decimals for values given in the results.
5. Number of Decimals for P-Values: Required number of decimals for any p-values given in the results.

## Results and Interpretation

### Contingency

An n x n contingency table of results. Each cell total is the number of times the product in the row was selected when compared with the product in the column.
This can be considered as a combined set of 2 x 2 tables.
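As a sketch, such a table can be built from the raw selections, where each record stores which product was chosen over which (illustrative Python, not the tool's implementation):

```python
def contingency_table(choices, products):
    """Build an n x n table where table[row][col] counts how often the
    row product was selected when compared against the column product.

    choices: iterable of (selected, other) pairs, one per comparison.
    """
    table = {r: {c: 0 for c in products} for r in products}
    for selected, other in choices:
        table[selected][other] += 1
    return table
```

With two products A and B, the off-diagonal cells are the two selection counts of a single 2 x 2 comparison; with more products, each product pair contributes its own 2 x 2 sub-table.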

### Binomial

1. One tab per attribute. A binomial test is carried out on the results of each comparison.
2. The following totals are returned:
    1. The number of times each product was selected.
    2. N Total: the total number of tests.
    3. Minimum: the number of times a product would need to be selected over the other to detect a significant difference at the given threshold.
    4. P-value: from the binomial test – the probability of obtaining a result at least this extreme under the null hypothesis of no difference.
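The Minimum and P-value figures can be reproduced with an exact binomial calculation. The sketch below (standard-library Python; the function names are illustrative) assumes a guessing probability of 0.5, as in the test procedure, and doubles the upper-tail probability for the two-sided case – the tool may apply a different two-sided correction:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def paired_comparison_test(x, n, alpha=0.05, two_sided=True):
    """x: selections for the more-chosen product, n: total comparisons.

    Returns (p_value, minimum), where minimum is the smallest winning
    count that would be significant at the threshold alpha (None if no
    count reaches significance for this n).
    """
    def p_for(k):
        tail = binom_tail(k, n)
        return min(1.0, 2 * tail) if two_sided else tail

    p_value = p_for(max(x, n - x))
    minimum = next((m for m in range(n // 2, n + 1) if p_for(m) < alpha), None)
    return p_value, minimum
```

For example, if 15 of 20 assessors choose one product, the two-sided p-value is about 0.041, so 15 is also the Minimum at a 5% threshold, while a 14 vs 6 split is not significant.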

### Interpretation

If the p-value from the binomial test is less than the selected threshold (significance level), conclude that the samples are significantly different. You may choose a stricter threshold (a higher level of confidence) to minimise the risk of a false positive.

Note that even if you cannot conclude a difference, this does not imply that the samples are similar.

## References

1. ISO 5495:2005 Sensory Analysis – Methodology – Paired Comparison Test
2. ASTM E2263-12 Standard Test Method for Paired Preference Test