How to Show Sentiment in Word Clouds using Displayr

The Word Cloud above summarizes some data from tweets by President Trump. The green words are words that are significantly more likely to be used in tweets with a positive sentiment. The red represents words more likely to be used in negative tweets. This post describes the basic process for creating such a Word Cloud in Displayr. Please read How to Show Sentiment in Word Clouds for a more general discussion of the logic behind the code below.

Step 1: Importing the data

This post assumes that you have already imported a data file containing a variable with the phrases that you wish to use to create the Word Cloud. If you have the data in some other format, instead use Insert > R Output and use the code and instructions described in How to Show Sentiment in Word Clouds using R.

If you want to reproduce the Word Cloud shown above, you can do so by pressing Insert > Data Set (data), clicking on R, and then:

  • Set the Name to trumpTweets
  • Enter the code below.
  • Press OK.
# Remove URLs (everything from "http" onward) from each tweet
trump_tweets_df$text <- gsub("http.*", "", trump_tweets_df$text)
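As a quick illustration of what this line does, here is the same `gsub` call applied to a made-up tweet (the text is hypothetical, not from the data set). Because `.*` is greedy, everything from the first "http" to the end of the text is deleted:

```r
x <- "MAKE AMERICA GREAT AGAIN! http://t.co/abc123"
gsub("http.*", "", x)
# [1] "MAKE AMERICA GREAT AGAIN! "
```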

Step 2: Extracting the words

  • Insert > More (Analysis) > Setup Text Analysis
  • Select the Text Variable as text (this is the name of the variable containing the tweets)
  • Check the Automatic option at the top.

Step 3: Sentiment for the phrases (tweets)

  • Go to the Variables and Questions tab
  • Select the first variable (it is called text)
  • Insert > More (Analysis) > Techniques > Save Sentiment Scores

Step 4: Sentiment for each word

  • Insert > R Output
  • Paste in the code below
  • Press Calculate and you will have the Word Cloud!

As discussed in How to Show Sentiment in Word Clouds, your Word Cloud may look a bit different, and you should check that no long words are missing. Also, if you have tried these steps a few times in the same project, you will need to update the variable, R Output, and question names to make everything work.

# Sentiment analysis of the phrases 
phrase.sentiment = `Sentiment scores from text`
phrase.sentiment[phrase.sentiment >= 1] = 1
phrase.sentiment[phrase.sentiment <= -1] = -1

# Sentiment analysis of the words
td = as.matrix(AsTermMatrix(text.analysis.setup, min.frequency = 1.0, sparse = TRUE))
counts = text.analysis.setup$final.counts 
phrase.word.sentiment = sweep(td, 1, phrase.sentiment, "*")
phrase.word.sentiment[td == 0] = NA # Treat phrases that do not contain a word as missing
word.mean = apply(phrase.word.sentiment, 2, FUN = mean, na.rm = TRUE)
word.sd = apply(phrase.word.sentiment, 2, FUN = sd, na.rm = TRUE)
word.n = apply(!is.na(phrase.word.sentiment), 2, FUN = sum)
word.se = word.sd / sqrt(word.n)
word.z = word.mean / word.se
word.z[word.n <= 3 | is.na(word.z)] = 0
words = text.analysis.setup$final.tokens
x = data.frame(word = words, 
      freq = counts, 
      "Sentiment" = word.mean,
      "Z-Score" = word.z,
      Length = nchar(words))
wc.data = x[order(counts, decreasing = TRUE), ]

# Working out the colors
n = nrow(wc.data)
colors = rep("grey", n)
colors[wc.data$Z.Score < -1.96] = "Red" 
colors[wc.data$Z.Score > 1.96] = "Green"

# Creating the word cloud
library(wordcloud2)
wordcloud2(data = wc.data[, -3], color = colors, size = 0.4)
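To see what the `sweep` and `apply` steps are computing, consider a tiny made-up example with three phrases and two words (the words and sentiment scores here are hypothetical, not taken from the Trump data):

```r
td = matrix(c(1, 0, 1,    # "win" appears in phrases 1 and 3
              1, 1, 0),   # "fake" appears in phrases 1 and 2
            nrow = 3, dimnames = list(NULL, c("win", "fake")))
phrase.sentiment = c(1, -1, 1)                # phrase 2 has negative sentiment
phrase.word.sentiment = sweep(td, 1, phrase.sentiment, "*")
phrase.word.sentiment[td == 0] = NA           # ignore phrases that lack the word
apply(phrase.word.sentiment, 2, mean, na.rm = TRUE)
#  win fake 
#    1    0
```

"win" appears only in positive phrases, so its mean sentiment is 1, while "fake" appears in one positive and one negative phrase, so its scores cancel to 0. The z-score step then divides each mean by its standard error so that only words whose sentiment is reliably non-zero get colored.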

About Tim Bock

Tim Bock is the founder of Displayr. Tim is a data scientist, who has consulted, published academic papers, and won awards, for problems/techniques as diverse as neural networks, mixture models, data fusion, market segmentation, IPO pricing, small sample research, and data visualization. He has conducted data science projects for numerous companies, including Pfizer, Coca Cola, ACNielsen, KFC, Weight Watchers, Unilever, and Nestle. He is also the founder of Q, a data science product designed for survey research, which is used by all the world’s seven largest market research consultancies. He studied econometrics, maths, and marketing, and has a University Medal and PhD from the University of New South Wales (Australia’s leading research university), where he was an adjunct member of staff for 15 years.
