Retrospectively adding supplementary points to a correspondence analysis can greatly assist in the interpretation of results. Supplementary row or column points are included after the core data has determined the map, so they add context without changing the positions of the core points.

Correspondence analysis is a technique for analyzing tables of data, often used to produce a 2-dimensional scatterplot (map) for brand positioning purposes. The map shows the relative position of brands and various attributes.

This post describes how to add supplementary points to a correspondence analysis and how to interpret them on a map, using a time-series example and a brand-subset example. There is a link to the worked example at the end of the post.

Create your own Correspondence Analysis

How supplementary points can improve interpretation of results

Supplementary points can aid the interpretation of a correspondence analysis by providing additional context to the main data. That context may be changes over time (e.g., tracking data) or a subset of the data treated as ancillary points. Supplementary points are additional rows or columns that do not determine the axes, but you can plot them on the same map.

Trends of brand perceptions

An earlier post describes how you can use correspondence analysis to analyze trends. I have repeated one of the scatterplots from this earlier post below. It shows the change in people's perceptions of technology brands from 2012 to 2017.

The plot has Principal coordinates normalization. This means that the distances between row points and the distances between column points are meaningful, but not necessarily the distance between row and column points. Click here for a further explanation about interpretation and normalization.
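To see why row-to-row distances are meaningful under principal coordinates, the sketch below computes row principal coordinates for a small made-up table and checks that the Euclidean distance between two row points equals the chi-square distance between the corresponding row profiles. This is a minimal numpy illustration of the standard theory, not the Displayr or ca implementation.

```python
import numpy as np

def ca_principal_coords(N):
    """Row principal coordinates of a contingency table N.
    Minimal numpy sketch of correspondence analysis."""
    P = N / N.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, _ = np.linalg.svd(S, full_matrices=False)
    k = min(N.shape) - 1                     # drop the trivial zero dimension
    return (U[:, :k] * sv[:k]) / np.sqrt(r)[:, None]

# Made-up counts, purely for illustration
N = np.array([[20., 10., 15.],
              [ 5., 25., 10.],
              [10.,  5., 30.],
              [ 8., 12.,  6.]])
F = ca_principal_coords(N)

# Chi-square distance between the first two row profiles...
profiles = N / N.sum(axis=1, keepdims=True)
c = N.sum(axis=0) / N.sum()
chi2_dist = np.sqrt((((profiles[0] - profiles[1]) ** 2) / c).sum())
# ...equals the Euclidean distance between their principal coordinates
eucl_dist = np.linalg.norm(F[0] - F[1])
```

The same equality holds for columns when they are also in principal coordinates, which is why row-row and column-column distances are interpretable, while row-column distances are not.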

Separating core and supplementary points

An alternative way to analyze the same data is to consider 2017 as the "ground truth" and plot 2012 as supplementary points. This means that the 2017 data alone determine the dimensions and axes of the map; the 2012 data are added afterwards as supplementary points. In technical correspondence analysis terminology, the 2012 rows have zero mass.
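The mechanics of projecting zero-mass rows can be sketched as follows: the axes come from a singular value decomposition of the core table only, and each supplementary row is then placed by multiplying its profile by the column standard coordinates. This is a hedged numpy sketch of the standard transition formula, not the Displayr or ca code; the variable names are my own.

```python
import numpy as np

def ca_with_supplementary_rows(core, sup):
    """Run correspondence analysis on `core` and project the rows of
    `sup` onto the same axes as zero-mass supplementary points."""
    P = core / core.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)                  # masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    k = min(core.shape) - 1                              # drop the trivial dimension
    core_coords = (U[:, :k] * sv[:k]) / np.sqrt(r)[:, None]  # row principal coords
    col_std = Vt[:k].T / np.sqrt(c)[:, None]                 # column standard coords
    # A supplementary row is its profile times the column standard
    # coordinates: it is plotted, but contributes nothing to the axes.
    sup_coords = (sup / sup.sum(axis=1, keepdims=True)) @ col_std
    return core_coords, sup_coords

# Illustrative made-up counts: think of `core` as the 2017 rows and
# `sup` as the 2012 rows of a brand-by-attribute table.
rng = np.random.default_rng(1)
core = rng.integers(1, 40, size=(6, 4)).astype(float)
sup = core[[0, 2]]  # duplicates of two core rows, as a sanity check
F, F_sup = ca_with_supplementary_rows(core, sup)
```

A useful sanity check, used above: a supplementary row with the same profile as a core row lands at exactly the same coordinates as that core row.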

We can see below that while the output shows the same themes as the first chart, it differs in the details. Easy to use and Innovative are now closer together. We can deduce that, on the basis of the 2017 data alone, Easy to use and Innovative have more similar meanings.

Yet another perspective is to consider 2012 the ground truth and to then plot the 2017 points as supplementary. This produces the results below where Easy to use and Innovative are further apart than in the original chart. Evidently, the association between innovation and ease of use is a more recent phenomenon.

All three charts are equally valid views of the data. They differ in their emphasis. For instance, the second chart would be most relevant for a study on the state of the technology market in 2017. In this case, the 2012 data is added for context but does not influence the positioning of the 2017 points.

Note that the first chart from the previous post is an "average" (in a strictly non-technical, hand-waving sense!) of the 2012 and 2017 charts.

Focusing on a subset of data

The second example below is the correspondence analysis resulting from a table of 14 car models. Let's say we wanted to study the 4 German brands. They form a line across the top from Volkswagen on the left, through Audi and Mercedes, to BMW. The chart has Row principal normalization. This means that it is valid to compare distances between row points. It is also valid to measure the association between rows and columns by their scalar products.
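The scalar-product interpretation can be made concrete: under row principal normalization, the dot product of a row's principal coordinates with a column's standard coordinates recovers how far that row's profile deviates, in relative terms, from the average column mass. The sketch below verifies this reconstruction on a small made-up table (not the car data).

```python
import numpy as np

# Made-up counts, purely for illustration
N = np.array([[25.,  5., 10.,  8.],
              [ 6., 20.,  7., 12.],
              [ 9., 11., 22.,  5.]])
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                  # masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
k = min(N.shape) - 1                                 # drop the trivial dimension
F = (U[:, :k] * sv[:k]) / np.sqrt(r)[:, None]        # row principal coordinates
G = Vt[:k].T / np.sqrt(c)[:, None]                   # column standard coordinates

# Scalar products over all dimensions reconstruct the relative
# deviation of each row profile from the average profile.
recon = F @ G.T
deviation = (P / r[:, None] - c) / c
```

A 2-dimensional map uses only the first two columns of F and G, so the scalar products there are an approximation whose quality is the explained variance; this is why the 53% figure below matters.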

We might be tempted to say that the Volkswagen is Popular, the Audi and Mercedes are Luxury, and the BMW X5 is Sporty. Before doing so, note that the total explained variance is only 53%. This means there is information hidden in the dimensions that are not plotted.

Let's repeat the analysis, this time treating all the non-German cars as supplementary. Now we see that the Audi A4 is very near the center of the plot. This means that it is not strongly associated with any of the characteristics. We can conclude that amongst all 14 cars the Audi is considered a luxury car, but amongst the German cars, it is not. Note also that the total explained variance below is now almost 97%. This means that we can be more confident about our conclusions.

There is also a close relationship between Family and Sporty. Evidently, the German cars discriminate relatively little between those characteristics.

Finally, we can check the result above by removing the supplementary points. This produces the chart below, which is the same except we can no longer see how the German cars relate to the non-Germans.


You can add data to a "core" correspondence analysis as supplementary points. The advantage of supplementary points over just analyzing all the data together is that supplementary points do not influence the placement of core data points. As the name implies, they are added after the core data has determined the map. Supplementary data points are an excellent way to provide additional context to an analysis that is driven entirely by another part of the data set.

All the analysis in this post was conducted in Displayr. Review the worked example from this post or run your own analysis by clicking through to this correspondence analysis example. The supplementary points are specified in the Inputs panel, seen on the right after clicking on any map. You can also try your own correspondence analysis for free in Displayr.


The analysis uses the flipDimensionReduction package (available on GitHub), which itself uses the ca package for correspondence analysis.

The car data is from a latent feature analysis performed in Meulders, M. (2013). An R Package for Probabilistic Latent Feature Analysis of Two-Way Two-Mode Frequencies. Journal of Statistical Software, 54(14), 1-29. This analysis uses data from Van Gysel, E. (2011). Perceptuele analyse van automodellen met probabilistische feature modellen [translation from Dutch: Perceptual analysis of car models with probabilistic feature models]. Master thesis. Hogeschool-Universiteit Brussel.