Free Sorting

Purpose

To provide analysis of data collected with the Free Sorting (FS) procedure. In this procedure, assessors are presented with a set of samples (products) and instructed to group them into an unspecified number of groups, such that samples within a group are similar and samples in different groups are different. In other words, samples are grouped based on their perceived similarity, and each assessor is free to choose the number of groups based on their own criteria. After categorizing the samples into groups, assessors have the optional task of describing each group using whatever words (descriptors) they wish.

There are several ways to analyse FS data, with EyeOpenR offering Multidimensional Scaling (MDS), Multiple Correspondence Analysis (MCA) and cluster analysis.

Data Format

Note that for EyeOpenR to read your dataset, the first five columns must be: Assessor, Product, Session, Replica and Sequence. The sixth column (column F) should denote the group (cluster) that the respective assessor gave to the respective product in the FS procedure. For more information on the data format, see the following paragraphs and the demo data set.

The Assessor column denotes the assessor or consumer. The Product column refers to the product or item presented. The Session and Replica columns are often redundant for FS: enter the numeric value “1” in each cell unless your data contain multiple sessions or replicate assessments. The Sequence column is not applicable in Free Sorting: each cell should be ‘NA’.

Column F provides the respective assessor’s categorization of the respective sample. As assessors are free to choose different numbers of groups and different words to describe each group, this column can take many values across assessors. If an assessor describes a group using more than one descriptor, it is recommended to separate the words with commas and no spaces (e.g., “fruity,acidic”). Note that every sample belonging to the same group must carry exactly the same descriptor string, otherwise it will be treated as a different group. In Free Sorting, describing a cluster of samples is typically optional, so some assessors may not describe their groups at all; in that case the analyst should code the clusters manually, for example “A”, “B”, “C” and “D” if the assessor has categorized the samples into four groups.
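As a purely illustrative sketch (hypothetical products, group codes and descriptors; the snippet below is not part of EyeOpenR, and the demo data set shows the exact expected layout), a small FS data file in this column order could be assembled as follows:

```python
# Illustrative only: building a small FS data file with the required column order
# (Assessor, Product, Session, Replica, Sequence, Group).
import pandas as pd

rows = [
    # Assessor 1 made two groups and described both of them.
    {"Assessor": 1, "Product": "P1", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "fruity,acidic"},
    {"Assessor": 1, "Product": "P2", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "fruity,acidic"},
    {"Assessor": 1, "Product": "P3", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "bitter"},
    {"Assessor": 1, "Product": "P4", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "bitter"},
    # Assessor 2 gave no descriptions, so the analyst coded the groups manually.
    {"Assessor": 2, "Product": "P1", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "A"},
    {"Assessor": 2, "Product": "P2", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "B"},
    {"Assessor": 2, "Product": "P3", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "B"},
    {"Assessor": 2, "Product": "P4", "Session": 1, "Replica": 1, "Sequence": "NA", "Group": "A"},
]
pd.DataFrame(rows).to_csv("free_sorting_demo.csv", index=False)
```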

Please see the example dataset for a specific illustration of data format for FS data.

Background

The Free Sorting (FS) task has gained popularity within sensory and consumer science since its application in the late nineties (Coxon, 1999). It can be viewed as a rapid technique that groups a larger number of products into smaller, homogeneous clusters, thereby assessing the similarities and differences amongst a set of samples. In consumer and sensory science the samples are most often products or prototypes.

From an assessor’s perspective the FS task can be divided into two sub-tasks. The first sub-task is to categorize the presented samples into different groups. The number of clusters is up to each assessor but should satisfy the constraint that samples within a cluster are similar and those in different clusters are different. Thus, by categorizing samples it is assumed the assessor can at least discriminate between them in a non-verbal way. The second sub-task is often harder for assessors: after categorizing the samples into groups, they are asked to describe each of the groups. This descriptive step can depend on the skill and experience of the assessor; for example, trained assessors may provide more descriptors than untrained assessors (see Courcoux et al., 2015, for a thorough review of the free sorting task). Nevertheless, in the traditional FS method the analyst receives each assessor’s own groupings and descriptors.

There are many advantages to the FS method. As mentioned above, prior training of assessors may not be required, thus saving time, money and resources. The FS task is simple to understand and to perform, is said to reflect a typical everyday cognitive process (categorization) and takes relatively little time and energy to complete. Further, the categorization task requires no description (verbalization): this can be very useful when recruiting participants for whom language can be a barrier.

One of the main disadvantages of the FS task is the time required for pre-processing the assessors’ group descriptors. This includes correcting typos, discarding rarely used words, identifying words with the same meaning, etc. These steps should be performed by the analyst prior to analysis in EyeOpenR. As briefly mentioned above, a downside of having no trained assessors is that the quantity and validity of descriptors provided by untrained assessors may be questionable. Another limitation of FS is that the categorization task reduces to binary data in the analysis: that is, the data is reduced to the co-occurrence of sample pairs in the same group across assessors.
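As an illustration of this reduction (hypothetical assessors and products; not EyeOpenR’s internal code), the following sketch counts how often each pair of samples was placed in the same group:

```python
# Sketch: reducing free-sorting data to a sample-by-sample co-occurrence matrix.
import itertools
import pandas as pd

# Each (hypothetical) assessor's mapping of product -> group label.
sortings = {
    "A1": {"P1": "g1", "P2": "g1", "P3": "g2", "P4": "g2"},
    "A2": {"P1": "a",  "P2": "b",  "P3": "b",  "P4": "a"},
    "A3": {"P1": "x",  "P2": "x",  "P3": "x",  "P4": "y"},
}

products = sorted({p for groups in sortings.values() for p in groups})
cooc = pd.DataFrame(0, index=products, columns=products)

for groups in sortings.values():
    for p1, p2 in itertools.combinations(products, 2):
        if groups[p1] == groups[p2]:      # same group for this assessor
            cooc.loc[p1, p2] += 1
            cooc.loc[p2, p1] += 1

print(cooc)  # higher counts suggest two samples are perceived as more similar
```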

Options

  1. Method: select either MDS or MCA.
  2. Assessors considered unique: only relevant when MCA is selected as the method and the data contain replicates or more than one session. If so, select whether assessors should be considered unique within replicates or within sessions.
  3. Clustering based on: select either the mean or the median to calculate the Rand criterion across assessors (see below and the Results and Interpretation section).
  4. Rand criteria: the Rand criterion is used to assess the similarity of groupings between assessors. It can be either the Standard or the Adjusted Rand Index. The Adjusted option is recommended (and the default) as it takes the grouping of samples by chance into account (see Courcoux et al., 2015). See the interpretation section for more information; a small illustration of both indices follows this list.
  5. Define clusters: The analyst can use ‘automatically’ to define clusters or specify the number of clusters. It is recommended to use the ‘automatically’ option first and then review the dendrogram and related statistics in the output. If warranted, a specific number of clusters can then be fitted.
  6. Number of clusters: if manually defining clusters, enter the number here.
  7. Word filter: select the minimum number of word occurrences to be included in the analysis. The default value of “-1” sets this automatically.
  8. P-value words cluster characterization: enter the p-value used as the threshold probability for a word to characterize a cluster (default 0.05).
  9. Number of Decimals for Values: enter preferred number of decimal places (default = 2).
  10. Number of Decimals for P-Values: enter preferred number of decimal places for p-values (default = 3).
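The following sketch illustrates the difference between the two indices using scikit-learn (the group labels are hypothetical; EyeOpenR computes the indices internally):

```python
# Sketch: Standard vs. Adjusted Rand Index for one assessor against a consensus solution.
from sklearn.metrics import adjusted_rand_score, rand_score

assessor_labels  = ["g1", "g1", "g2", "g2", "g3", "g3"]  # groups given by one assessor
consensus_labels = [1, 1, 2, 2, 2, 3]                    # a candidate cluster solution

print(rand_score(assessor_labels, consensus_labels))           # Standard Rand Index, 0 to 1
print(adjusted_rand_score(assessor_labels, consensus_labels))  # Adjusted Rand Index, corrected for chance
```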

Results and Interpretation

From a statistical analysis perspective, we are interested in the associations between samples, assessors and the words (descriptors) that define the underlying patterns in the data.

There are several ways one can approach this. Most common in sensory and consumer science is to use either Multidimensional Scaling (MDS) or Multiple Correspondence Analysis (MCA), and both are available in EyeOpenR. Both are factor-analytic techniques that visualize the relationships between samples in a lower-dimensional space, with the dimensions representing the assessors’ categorization process.

A cluster analysis technique (Hierarchical Clustering of Principal Components, HCPC) is also integrated into EyeOpenR and is performed on the basis of the MDS/MCA result: this technique aims to find clusters of homogeneous samples across assessors. EyeOpenR thus combines the dimension reduction of MDS or MCA with a clustering algorithm to provide the analyst with insightful results.
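As a rough analogy for this two-step approach (hypothetical coordinates; EyeOpenR relies on the HCPC routine in FactoMineR rather than the code below), samples can be clustered hierarchically on their factor-map coordinates:

```python
# Sketch: hierarchical (Ward) clustering of samples on low-dimensional MDS/MCA coordinates.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

coords = np.array([[0.8, 0.1], [0.7, 0.2], [-0.5, 0.6], [-0.6, -0.7]])  # hypothetical coordinates

tree = linkage(coords, method="ward")                  # agglomerative tree (basis of the dendrogram)
clusters = fcluster(tree, t=2, criterion="maxclust")   # cut the tree into two clusters of samples
print(clusters)
```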

If MDS is selected:

A non-metric (ordinal) MDS is performed on the overall dissimilarity matrix. Dissimilarity reflects how often assessors did not place two samples in the same group, so increased distance between samples on the map can be read as increased dissimilarity.
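A minimal sketch of this step is given below using scikit-learn’s non-metric MDS (EyeOpenR uses the smacof package; the co-occurrence counts shown are hypothetical):

```python
# Sketch: non-metric (ordinal) MDS on a precomputed dissimilarity matrix, where
# dissimilarity is taken as (number of assessors) minus the co-occurrence count.
import numpy as np
from sklearn.manifold import MDS

n_assessors = 10
cooc = np.array([[10, 8, 2, 1],
                 [ 8, 10, 3, 2],
                 [ 2, 3, 10, 7],
                 [ 1, 2, 7, 10]])              # hypothetical co-occurrence counts
dissim = (n_assessors - cooc).astype(float)
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)             # sample coordinates; larger distance = more dissimilar
print(coords)
```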

  1. Eigenvalues: the number of MDS components (dimensions) and the percentage of variance explained by each are shown.
  2. Products: this tab provides information on the samples (products). It contains four sub-tabs that describe how similar/dissimilar the samples are:
    1. Coord: co-ordinates of each product across the dimensions, as plotted on the graph.
    2. Cos2: squared cosines, which provide information on how well each product is explained on each dimension.
    3. Contrib: contributions of each product to each dimension, i.e., how much does each product contribute to the construction of each MDS dimension.
    4. Graph: In MDS, the closer two samples are, the more similarly they are perceived by the panel of assessors (considered as a whole). The analyst has the option to export and print the graph directly (click the three horizontal bar icon in the upper-right of the graph).
  3. Contingency: a contingency table indicating the number of times two respective samples were categorized into the same group, aggregating over all assessments. The higher the number, the more times the two respective samples were grouped together (and therefore the stronger the evidence of the two samples being more similar).
  4. Cluster: in the calculations a cluster analysis is performed (Hierarchical Clustering of Principal Components, HCPC). The results are shown over three tabs (Cluster, Dendrogram and Cluster characterization). The Cluster tab contains three sub-tabs. 
    1. Prod Cluster: Information on what products belong to the same cluster.
    2. NbClust: Confirmation of the number of clusters and whether this was automatically chosen by the algorithm or specified by the analyst.
    3. Rand Judges-Consensus: Adjusted (ARI) or Standard Rand Index (SRI) scores are presented in a table with assessors as rows and cluster solutions that differ in the number of clusters as columns. Whether the ARI or SRI is used depends on the parameter set in the Options section; the Adjusted index is recommended as it corrects for chance groupings. In general, the Rand Index assesses the agreement between the categorizations of each assessor and each cluster solution. The SRI ranges from 0 to 1, whilst the ARI ranges from -1 to +1. If the same pairs of samples are grouped together by both, a higher Rand score is obtained, with 1 indicating perfect agreement. The first row of the table provides either the mean or median Rand Index over all assessors (see the Options section to set mean or median). The number of clusters chosen automatically will be equal to the number of clusters with the highest mean/median Rand Index.
  5. Dendrogram: the dendrogram visualizes the merging and segregation of the various samples into the number of clusters.
  6. Cluster characterization: information on which words significantly characterize each cluster is provided. Each cluster is indicated by the ‘Group’ column, with the samples that comprise the group in the ‘Products’ column. The ‘Freq in group’ column shows how many times a word (descriptor) was used to describe that cluster, while the ‘Freq overall’ column indicates the total number of times that word was used across all clusters. A p-value is then calculated based on the frequency in group relative to the total frequency.
  7. Words: a contingency table of samples in rows by each descriptor in columns. Numbers indicate the number of times the respective descriptor was used for each sample.
  8. WordCloud: a word cloud based on the total number of descriptors. The size of a word reflects the proportion of times it was used relative to all words used. The word cloud can be exported directly.
  9. Comments: the clustering algorithm (HCPC) performs hierarchical clustering (which produces the dendrogram) followed by a k-means consolidation. Therefore, it is possible that the dendrogram does not exactly reflect the final cluster solution.
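The consolidation step mentioned above can be pictured as follows (hypothetical coordinates and tree-cut labels; not the FactoMineR implementation): the centres of the clusters obtained by cutting the tree seed a k-means run, which may reassign some samples.

```python
# Sketch of the k-means consolidation following a dendrogram cut.
import numpy as np
from sklearn.cluster import KMeans

coords = np.array([[0.9, 0.1], [0.8, 0.2], [-0.4, 0.7], [-0.5, 0.6], [0.1, -0.9]])  # hypothetical
labels_tree = np.array([1, 1, 2, 2, 1])                  # hypothetical labels from cutting the tree

centres = np.vstack([coords[labels_tree == k].mean(axis=0) for k in np.unique(labels_tree)])
consolidated = KMeans(n_clusters=2, init=centres, n_init=1).fit_predict(coords)

# If 'consolidated' differs from 'labels_tree', the final clusters no longer match
# the dendrogram exactly.
print(labels_tree, consolidated)
```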

If MCA is selected:

Multiple Correspondence Analysis (MCA) can be seen as an extension of simple Correspondence Analysis and as the counterpart of Principal Components Analysis (PCA) for categorical data.

When MCA is selected, EyeOpenR will automatically convert the data into the required format: samples in rows, with each assessor’s groupings coded as an indicator (binary) matrix, and these per-assessor blocks joined to form one wide matrix. This differs from the dissimilarity matrix required for MDS.
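As a rough illustration of this conversion (hypothetical groupings; EyeOpenR performs this step automatically), each assessor’s grouping can be expanded into 0/1 dummy columns and the per-assessor blocks joined side by side:

```python
# Sketch: expanding free-sorting groupings into a products-by-(assessor group)
# indicator matrix, the kind of wide binary table used as input to MCA.
import pandas as pd

groupings = pd.DataFrame({
    "A1": ["g1", "g1", "g2", "g2"],   # assessor A1's group for products P1-P4 (hypothetical)
    "A2": ["a",  "b",  "b",  "a"],    # assessor A2's group for products P1-P4 (hypothetical)
}, index=["P1", "P2", "P3", "P4"])

indicator = pd.get_dummies(groupings, dtype=int)  # one block of 0/1 columns per assessor
print(indicator)
```

The results of the MCA can be interpreted as follows: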

  1. Eigenvalues: the number of MCA components (also called dimensions) and the percentage of variance explained by each are shown. MCA components typically explain less variance than MDS components, and a direct comparison between the two is not valid.
  2. Products: this tab provides information on the samples (products). It contains four sub-tabs that describe how similar/dissimilar the samples are:
    1. Coord: co-ordinates of each product across the dimensions, as plotted on the graph. 
    2. Cos2: squared cosines, which provide information on how well each product is explained on each dimension. The total per row equals 1. 
    3. Contrib: contributions of each product to each dimension, i.e., how much each product contributes to the construction of each MCA dimension. A higher contribution indicates that the sample contributes more to that dimension.
    4. Graph: The coordinates of the samples on dimensions 1 and 2 are plotted. Distance between samples can be interpreted as a measure of similarity: the closer two samples are, the more similar they were perceived over the dimensions plotted (i.e., the more they were placed in the same group). The analyst has the option to export and print the graph directly (click the three horizontal bar icon in the upper-right of the graph). 
  3. Judges: MCA also provides information on the judges (an advantage over MDS).
    1. Coord: co-ordinates of each assessor across the dimensions.
    2. Graph: graphical display of the co-ordinates on dimensions 1-2. Distance can be interpreted as a measure of similarity: judges who are closer together grouped the samples more similarly than judges who are far apart.
  4. Cluster: in the calculations a cluster analysis is performed (Hierarchical Clustering of Principal Components, HCPC). The results are shown over three tabs (Cluster, Dendrogram and Cluster characterization). The Cluster tab contains three sub-tabs.
    1. Prod Cluster: Information on what products belong to the same cluster.
    2. NbClust: Confirmation of the number of clusters and whether this was chosen automatically by the algorithm or specified by the analyst.
    3. Rand Judges-Consensus: Adjusted (ARI) or Standard Rand Index (SRI) scores are presented in a table with assessors as rows and cluster solutions that differ in the number of clusters as columns. Whether the ARI or SRI is used depends on the parameter set in the Options section; the Adjusted index is recommended as it corrects for chance groupings. In general, the Rand Index assesses the agreement between the categorizations of each assessor and each cluster solution. The SRI ranges from 0 to 1, whilst the ARI ranges from -1 to +1. If the same pairs of samples are grouped together by both, a higher Rand score is obtained, with 1 indicating perfect agreement. The first row of the table provides either the mean or median Rand Index over all assessors (see the Options section to set mean or median). The number of clusters chosen automatically will be equal to the number of clusters with the highest mean/median Rand Index.
  5. Dendrogram: the dendrogram visualizes the merging and segregation of the various samples into the number of clusters.
  6. Cluster characterization: information on which words significantly characterize each cluster is provided. Each cluster is indicated by the ‘Group’ column, with the samples that comprise the group in the ‘Products’ column. The ‘Freq in group’ column shows how many times a word (descriptor) was used to describe that cluster, while the ‘Freq overall’ column indicates the total number of times that word was used across all clusters. A p-value is then calculated based on the frequency in group relative to the total frequency; a sketch of this kind of test follows this list.
  7. Comments: the clustering algorithm (HCPC) performs hierarchical clustering (which produces the dendrogram) followed by a k-means consolidation, as sketched after the MDS results above. Therefore, it is possible that the dendrogram does not exactly reflect the final cluster solution.
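The exact test behind the cluster characterization table is not reproduced here; the sketch below shows one common way to assess whether a word is over-represented in a cluster, via a hypergeometric test on ‘Freq in group’ versus ‘Freq overall’ (all counts hypothetical):

```python
# Sketch: hypergeometric test of whether a descriptor is over-represented in one cluster.
# This is an illustration of the idea, not necessarily EyeOpenR's exact computation.
from scipy.stats import hypergeom

total_citations   = 40   # all descriptor citations across all clusters
word_overall      = 12   # times this word was used in total ('Freq overall')
cluster_citations = 10   # descriptor citations attributed to this cluster
word_in_cluster   = 8    # times this word was used for this cluster ('Freq in group')

# Probability of seeing word_in_cluster or more uses of the word in this cluster by chance.
p_value = hypergeom.sf(word_in_cluster - 1, total_citations, word_overall, cluster_citations)
print(p_value)
```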

Technical Information

  1. The analysis uses the R packages FactoMineR, SensoMineR and smacof.

References

  1. Courcoux, P., Qannari, E. M., & Faye, P. (2015). Chapter 7—Free sorting as a sensory profiling technique for product development. In J. Delarue, J. B. Lawlor, & M. Rogeaux (Eds.), Rapid Sensory Profiling Techniques (pp. 153–185). Woodhead Publishing. https://doi.org/10.1533/9781782422587.2.153
