Legoplots in R (3D barplots in R)

UPDATE 2020: my R package to do this (“barplot3d”) is now available on CRAN.

I previously mentioned these, but I thought about it some more and went ahead and wrote an R implementation.  With very little work you could rejig this code to display all kinds of 3D bar plots; I’ve commented the code thoroughly so it should be easy to follow.  Immediately below is an example plot and below that is a gallery comparing an original plot by the Broad Institute with two versions produced by my code.  I wouldn’t want to slavishly copy, so I’ve added an optional transparency effect that is both useful and attractive.

My code with transparency

All 96 different mutation/context combinations, with added transparency


  • z : a named vector of length 96 representing each of the 6 possible mutations in 16 different sequence contexts.  Note that the names are hard-coded and can be found in the code below.  Values represent the height of each column
  • alpha : transparency of column sides.  Top is always opaque.  Default is 1 (opaque)
  • scalexy : value to scale the area of all columns.  Small values lead to skinny columns, larger values lead to fatter columns.  Default is 10
  • scalez : value to scale the height of all columns; see notes below.  Default is 1
  • gap : a value determining the space between each column.  Default is 0.2
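To make the usage concrete, here is a sketch of a call; the name format below is my own illustration, since the real hard-coded names are in the code further down:

```r
## Build a named vector of 96 values (6 mutations x 16 contexts).
## These names are illustrative only; the function expects the
## specific hard-coded names found in the code below.
mutations = c("C>A","C>G","C>T","T>A","T>C","T>G")
contexts = paste0(rep(c("A","C","G","T"), each=4), ".",
                  rep(c("A","C","G","T"), times=4))
counts = setNames(sample(0:20, 96, replace=TRUE),
                  paste(rep(mutations, each=16), contexts))

## Semi-transparent sides, slightly fatter columns than default
# context3d(counts, alpha=0.4, scalexy=15)
```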

Things to note

  • It was essential to include the lit=FALSE argument when drawing surfaces, as the default lighting makes everything look very shiny and pretty ugly.  I also had to change the field of view (FOV) to 0, as the default value gives a slightly wall-eyed view.
  • The scalexy argument scales both the “fatness” of the columns and the gaps between them.
  • Scaling z using the scalez argument is useful if your data is very flat and you want to emphasize it.  It will have knock-on effects on the scale that appears at the side, so be careful.  If you’re going to transform your data, it’s probably best to do so before putting it into this function.
  • The alpha transparency I added to the column sides is very useful for viewing columns that are obstructed by other columns and has the added benefit of looking pretty cool.
  • I haven’t included the wireframe axis background, I leave that as an exercise for the reader, or maybe I’ll add it later.

Code is available below, and example data is included if you check out my GitHub repository.

## This script creates a "legoplot" similar to those produced by the Broad Institute
## The plot shows the relative abundance of each of the 6 possible mutations in the
## 16 sequence contexts

## Load packages


## Functions modified from the "demo(hist3d)" examples in the rgl package:
# library(rgl)
# demo(hist3d)
## Note; would it have killed the original author to comment their code?

## Draws a single "column" or "stack".
## X and Y coordinates determine the area of the column
## The Z coordinate determines the height of the column
## We include "lit=FALSE" arguments to remove the nasty shiny surfaces caused by lighting

## These lines allow the active rgl device to be updated with multiple changes
## This is necessary to draw the sides and ends of the column separately
save = par3d(skipRedraw=TRUE)
on.exit(par3d(save))

## Determine the coordinates of each surface of the column and its edges
z2=c(rep(c(0,z),4),rep(0,8),rep(z,8) )
y2=c(rep(y[1],4),rep(y[2],4),rep(c(rep(y[1],3),rep(y[2],3),y[1],y[2]),2) )

## These lines create the sides of the column and its coloured top surface
## This line adds black edges to the column
# Example:
# stackplot.3d(c(0,1),c(0,1),3,alpha=0.6)
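For reference, a minimal version of the whole function, adapted from demo(hist3d) in the rgl package, looks something like this; the colours and default arguments are my own illustration:

```r
# library(rgl)  ## uncomment to actually draw

## Draw a single column; x and y are length-2 vectors giving the
## footprint and z is the height. Note rgl's y axis is vertical,
## so z is passed as the second coordinate.
stackplot.3d = function(x, y, z=1, alpha=1,
                        topcol="#078E53", sidecol="#aaaaaa"){
  save = par3d(skipRedraw=TRUE)
  on.exit(par3d(save))
  ## Coordinates of the four sides of the column
  x1 = c(rep(c(x[1],x[2],x[2],x[1]),3), rep(x[1],4), rep(x[2],4))
  z1 = c(rep(0,4), rep(c(0,0,z,z),4))
  y1 = c(y[1],y[1],y[2],y[2], rep(y[1],4), rep(y[2],4),
         rep(c(y[1],y[2],y[2],y[1]),2))
  ## Coordinates of the black edges
  x2 = c(rep(c(x[1],x[1],x[2],x[2]),2),
         rep(c(x[1],x[2],rep(x[1],3),rep(x[2],3)),2))
  z2 = c(rep(c(0,z),4), rep(0,8), rep(z,8))
  y2 = c(rep(y[1],4), rep(y[2],4),
         rep(c(rep(y[1],3),rep(y[2],3),y[1],y[2]),2))
  ## Sides (may be transparent), then the opaque top, then black
  ## edges; lit=FALSE avoids the shiny default lighting
  rgl.quads(x1, z1, y1, col=rep(sidecol,each=4), alpha=alpha, lit=FALSE)
  rgl.quads(c(x[1],x[2],x[2],x[1]), rep(z,4), c(y[1],y[1],y[2],y[2]),
            col=topcol, alpha=1, lit=FALSE)
  rgl.lines(x2, z2, y2, col="#000000", lit=FALSE)
}

## A single column of height 3 with semi-transparent sides
# stackplot.3d(c(0,1), c(0,1), 3, alpha=0.6)
```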

## Calls stackplot.3d repeatedly to create a barplot
## z is the heights of the columns and must be an appropriately named vector
## These lines allow the active rgl device to be updated with multiple changes
## This is necessary to add each column sequentially
save = par3d(skipRedraw=TRUE)
on.exit(par3d(save))

## Recreate Broad order
for(type in types){

## Reorder data into 6 regions

## Define dimensions of the plot

## Scale column area and the gap between columns

## Scale z coordinate

## Set up colour palette

## Plot each of the columns
for(i in 1:dimensions[1]){
for(j in 1:dimensions[2]){
it=(i-1)*dimensions[2]+j # Variable to work out which column to plot; counts from 1:96
## Set the viewpoint and add axes and labels
# Example:
# context3d(counts)


## Read in example data and cast to an appropriate vector

## Example plots

## Save your images to files if you wish

Stop the madness – no more pie charts

There has been a trend in the last few years to put interesting-looking but non-informative figures in papers; the pie chart is the worst recurrent offender.  I have no idea how they keep getting included, as they’re famously misleading and awful.  Like anyone, I’d love my work to look as much like the cockpit of a mecha or Iron Man’s HUD as possible, but I know that if it did, it wouldn’t be as clear or concise as it should be.


So cool

A recent example is this otherwise very interesting and good paper:

Subclonal diversification of primary breast cancer revealed by multiregion sequencing by Yates et al, 2015

Figure 2, reproduced below, is confusing at best.  These pie-chart-like plots were made famous by Florence Nightingale 150 years ago and have been called “coxcomb plots”; Wikipedia claims that that’s a mistake and that they are really called “polar area diagrams” or “courbes circulaires”.


I admit that they look pretty cool and are fine if you’re building some sort of hip infographic about consumer tastes or Facebook trends.  However, I think that they’re inappropriate and misleading because:

  1. They look unusual.  The reader expends a lot of energy working out what they’re looking at instead of processing the relationships in the data
  2. These data contain only one variable that changes, but the plots used can encode two variables (arc length and radius can vary, discussed below).  The plot is therefore unnecessarily complex
  3. In pie charts, the arc length is proportional to the value represented.  In these plots, the arc length is identical for each slice of pie… but you might not know this and you may infer that there is less of a certain segment.  This is appropriate for the type of data that Florence Nightingale was plotting (deaths in each month of the year), but not for arbitrary divisions
  4. In these plots, the radius of each segment (i.e. how far it extends from the centre) is informative.  You’re supposed to read off an almost-invisible grey scale of concentric rings, but it’s not easy.  The visual effect is also non-linear: because the area of a circle is πr², a small increase in radius produces a disproportionately large increase in area
  5. It’s really hard to compare between plots; visual comparison is difficult and reading the scale to get at the numbers is even harder
  6. Multiple data types are represented in a single plot.  I’m not sure mixing somatic mutation variant allele frequencies and copy number log ratios is very effective
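Point 4 is easy to demonstrate with a couple of lines of arithmetic: because the area of a circle (or wedge) is πr², doubling the radius quadruples the area, so a value twice as large looks four times bigger:

```r
## Relative wedge area grows with the square of the radius
r = c(1, 2, 4)
area = pi * r^2
area / area[1]
# radii of 1, 2 and 4 give relative areas of 1, 4 and 16
```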

Some potential solutions

Criticizing without offering a solution isn’t very helpful, so I offer two solutions:

Use barplots or boxplots.  Yes, they’re boring, but they’re also familiar and easy to read.  I suppose you could even put a scale at either edge and mix data types without too much complaint (e.g. somatic VAFs and copy number logRs on the same plot).

A type of 3D barplot.  I generally think 3D is a bad thing when displayed in a 2D plane, particularly if it’s non-interactive.  For example, a 3D shape on a 2D display is fine if you can rotate and zoom it freely.  “Lego plots” have been popularized by the Broad Institute in their sequencing papers, usually to show the sequence context of mutations (see below; taken from Exome and whole-genome sequencing of esophageal adenocarcinoma identifies recurrent driver events and mutational complexity by Dulak et al, 2013).  The irony of the pie charts isn’t lost on me.


They’re visually appealing and a relatively concise way of showing what would otherwise be a barplot of 96 values; it would get tricky if the data weren’t carefully arranged so as not to obscure anything, though.

If 3D barplots are now acceptable, why don’t we make a 3D barplot that encodes two variables?  The height would represent one variable and the depth another.

I’ve created a small data set and some R code to illustrate these points below:

Alternative plots

The data set represents 4 mutations (labeled A to D) in a set of 100 samples.  Each sample has a variant allele frequency (VAF) for each mutation, between 0 (not present) and 1 (every sequencing read contains it).

# A is frequent (80/100 samples) and usually subclonal (VAF ~0.1)
# B is infrequent (20/100 samples) and usually clonal (VAF ~0.8)
# C is frequent (90/100 samples) and usually clonal (VAF ~0.9)
# D is infrequent (15/100 samples) and usually subclonal (VAF ~0.15)
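Data like these can be simulated in a few lines; the normal jitter and clamping below are my assumption for illustration, not necessarily how the original data set was built:

```r
set.seed(42)

## How many of the 100 samples carry each mutation,
## and the typical VAF when it is present
nmut = c(A=80, B=20, C=90, D=15)
typicalvaf = c(A=0.1, B=0.8, C=0.9, D=0.15)

vafs = lapply(names(nmut), function(m){
  ## Jitter around the typical VAF, clamped to the legal [0,1] range
  v = rnorm(nmut[m], mean=typicalvaf[m], sd=0.05)
  pmin(pmax(v, 0), 1)
})
names(vafs) = names(nmut)

sapply(vafs, length)  # A=80, B=20, C=90, D=15
sapply(vafs, median)  # roughly 0.1, 0.8, 0.9, 0.15
```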

  • Coxcomb / polar area plot.  Arc length represents proportion of samples, radius (wedge length) represents median VAF.  Can be difficult to interpret.


  • Boxplot.  Good representation of the data, but hard to tell how many samples are represented.  For example, B is far less common than C but appears to be very similar.  Likewise for D and A.


  • Barplot.  No representation of the spread of the data.  The proportion of affected samples is encoded using colour (darker mutations affect more samples).  Perhaps the colours could be applied to the boxplot for the best result?


  • 3D barplot.  It’s very basic (missing axes and labels), but shows the relationships between proportion and VAF more clearly than other plots.  I added some transparency so no column totally obscures any others.  It’s more convincing when you can rotate the object yourself (download the code and have a go), but I think even a static image is better than a Coxcomb/polar coordinate plot.


Code is below and available on my GitHub repository.

## Load packages


## Functions modified from the "demo(hist3d)" examples in the rgl package:
# library(rgl)
# demo(hist3d)
## Would it have killed the author to comment their code?

## Draws a single "column" or "stack".
## X and Y coordinates determine the area of the stack
## The Z coordinate determines the height of the stack

## These lines allow the active rgl device to be updated with multiple changes

## Determine the coordinates of each surface of the stack and its edges
z2=c(rep(c(0,z),4),rep(0,8),rep(z,8) )
y2=c(rep(y[1],4),rep(y[2],4),rep(c(rep(y[1],3),rep(y[2],3),y[1],y[2]),2) )

## These lines create the sides of the stack and its top surface
## This line adds black edges to the stack
# Example:

## Calls stackplot.3d repeatedly to create a barplot
# x is a constant distance along x axis
# y is the depth of column
# z is the height of column
## These lines allow the active rgl device to be updated with multiple changes

## Plot each of the columns
breaks.x = seq(0,n-1)
for(i in 1:n){
## Set the viewpoint
# Example


## Example data:
# 4 mutations in 100 samples
# VAF range is from 0 to 1
# A is frequent and usually subclonal
# B is infrequent and usually clonal
# C is frequent and usually clonal
# D is infrequent and usually subclonal


## Summarize data in new object

## Code to produce coxcomb/polar coordinate plot adapted from:
pos = 0.5 * (cumsum(vafsum$proportion) + cumsum(c(0, vafsum$proportion[-length(vafsum$proportion)])))
p = ggplot(vafsum, aes(x=pos)) + geom_bar(aes(y=median), width=vafsum$proportion, color = "black", stat = "identity") + scale_x_continuous(labels = rownames(vafsum), breaks = pos) # Linear version is ok
p + coord_polar(theta = "x")
# (ignore warnings thrown)

## A traditional boxplot

## A barplot where height represents median VAF and the color of the bar represents
## how many samples contain each mutation

## Our new 3D barplot function
rgl.snapshot("3dbarplot.png", fmt = "png", top = TRUE )

R, GRanges objects and intersecting them

I discovered some unexpected behaviour when intersecting GRanges objects; this is how to avoid the problem I found.  Maybe my intuition wasn’t good, but this represents the worst type of bug: a silent one.  If I had not spotted it, I would have lost several nonsynonymous KRAS mutations in my dataset.

If you want to represent a long list of genomic coordinates in R, you probably want to use a GRanges object.  It’s a very space and time efficient way of storing the data, even if it can be a little difficult to get accustomed to.  It’s ideal for representing bed files, aligned sequence data, SNV and indel coordinates, annotations and so forth.

In my case, I was intersecting two GRanges objects; I wanted to see which of my SNVs were within the boundaries of an exome capture kit.  In the example below, we use three exome capture regions (one from chromosome 1, exon 2 of KRAS and one from chromosome Y) and five SNVs: three consecutive ones, i.e. SNVs that occur at immediately adjacent positions, plus two others on different chromosomes.  I could have just used exon 2 of KRAS, but I wanted to demonstrate that the code can handle multiple locations on different chromosomes at once.

The SNVs could be in the same sample or different samples, it doesn’t matter; what matters is that their coordinates are contiguous.  The contiguous SNVs are collapsed into a single window (which is undesirable), then the regions are expanded back to single base-pair sites.  You of course can’t do this with indels longer than 1bp.

We could have avoided this bug entirely if we’d worked in reverse and used setdiff() instead of intersect() to identify SNVs that don’t fall within the exome.
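A minimal sketch of the problem (with made-up coordinates rather than the real KRAS positions):

```r
library(GenomicRanges)

## One "exome" region and three consecutive single-base SNVs inside it
exome = GRanges("12", IRanges(start=100, end=200))
snvs  = GRanges("12", IRanges(start=150:152, width=1))

## intersect() silently merges the three adjacent hits
## into one contiguous 3bp region
hits = intersect(snvs, exome)
length(hits)  # 1, not 3
width(hits)   # 3

## Expand the region back to single-base sites, with the
## chromosome pasted on and separated by a colon
sites = paste0(as.character(seqnames(hits)), ":",
               seq(start(hits), end(hits)))
sites  # "12:150" "12:151" "12:152"

## subsetByOverlaps(snvs, exome) would instead return the
## three original SNVs untouched
```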

# Load packages

## Create GRanges object of three exons

## Create GRanges object of three consecutive SNVs and
## two others on different chromosomes

## Intersect the SNVs with the exome.
## You DO NOT get five hits back, but three.
## The 3 SNVs on chr12 are reported as a single, contiguous region

## We expand the region back to the original, single-base sites
## and paste the appropriate chromosome to it, separated by a colon.
## Some duplication occurs, so we remove it using unique()

## If you want a GRanges object again, just split the
## output back into its components and rebuild.
## Final line demonstrates that our results are identical to the input


Analysing Dracula

I recently read Bram Stoker’s Dracula, mostly because I wanted to rewatch the 1990s movie of the same name and see how badly they’d mutilated the story.  [Update 26/04/2015: I rewatched the 1992 movie and while it’s the most faithful version to the book, it has some weird additions and is utterly appalling.  People have been mean about Keanu Reeves’ performance, but he’s no worse than anyone else and the direction is the real issue.  The Annie Lennox song at the end is nice, though].  The book is wholly composed of diary entries, letters and newspaper clippings.  This means that it’s effectively a self-contained social network, with every “sample” having a date, an author and an intended recipient.  It occurred to me that it might be interesting to look at these data and nobody else seems to have done so.  Fortunately, someone had tabulated much of the data, even if they hadn’t done anything further with it.  Therefore, thanks to Harold J Hotchkiss for typing it up over a decade ago.

Did it really happen?

Obviously not.  The year it takes place isn’t important, but it was published in 1897 and ends with a note dated “7 years later”, so let’s assume it takes place in 1890.  If you calculate the days of the week that events occur on throughout the book, you get the plot below.  I have no empirical evidence to back this up, but it looks too evenly distributed to me to be real.  I’ve looked at real time series before and you will generally see some sort of trend, be it for weekdays, weekends or a particular day.  As it says in the book, “denn die Todten reiten schnell” (“for the dead travel fast”), and apparently they don’t take days off, either.
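Getting the day of the week is a one-liner in R once the dates are parsed, and (unlike Excel) R has no trouble with pre-1900 dates.  The dates below are illustrative, although the book does open on 3 May:

```r
## A handful of illustrative 1890 dates
dates = as.Date(c("1890-05-03", "1890-05-04", "1890-05-05",
                  "1890-06-17", "1890-09-29", "1890-11-06"))
table(weekdays(dates))
## 3 May 1890 was a Saturday
```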


Authorship and structure of the book

~70% of the book is diary entries, chiefly written by three authors: John Seward (~40% of diary entries), Jonathan Harker (~31%) and Wilhelmina “Mina” Murray-Harker (~24%).  Mina gets a double-barrelled surname because she marries Jonathan midway through the book and changes her surname, which was the style at the time.  I needed a single name to refer to both her ‘states’.  There is some truly incredible sexism in the book; it seems gender was a big deal in the late 19th century.  You can divide documents by type (e.g. letter, diary entry, etc.) and also by both who they are sent by and to.

[Figures: “Documents To” and “Documents From” barplots]

The above figures are global summaries of the entire book.  What is happening over the course of the book?  This is slightly complicated because the book isn’t in strict chronological order; chapters may feature letters and events that occurred previously.


The figure above gives a clear view of the structure of the book.  The plots below show who is writing what and to whom.  The first few chapters are Jonathan Harker’s diary of events in Castle Dracula, before a flurry of letters to introduce most of the rest of the characters.  Subsequently, the captain of the Demeter gives his account of bringing Dracula to Whitby (summary; not a lot of fun), and then Dr Seward gets on with the business of writing the majority of the book.  You can also clearly see how certain characters come and go; for example, Lucy Westenra’s death and Jonathan Harker’s heroic reappearance in the second half.

[Figures: “Document sender by chapter” and “Document receiver by chapter” barplots]

Barplots are great, but there are other ways to represent the data, and they can show structure that is hard to grasp in any other way.



The above plot is a timeline of events in the book; it takes place in a mere 7 months.  Events in odd-numbered chapters are above the timeline and those in even-numbered chapters are below.  An arbitrary Y-axis separates events by their document type.  Events are represented by coloured squares corresponding to the sender of the document, with the colours matching those used in other plots here.

Network analysis

We’ll start with the most interesting figure, the network itself.  Primary characters are green hexagons and secondary characters are blue triangles.  Nodes are labeled with character initials and scaled relative to how many documents they sent or received.  Edges between nodes are also scaled relative to how many documents are sent.


A few things of note:

  • Nobody ever writes or receives any correspondence from Count Dracula
  • Jonathan Harker and Mina Murray-Harker never speak.  Letters are mentioned, but we never get to read them
  • It’s fairly interesting how many characters never speak to one another
  • Arthur Holmwood receives a lot of post, but doesn’t bother writing back.  I guess he’s too important being a Lord
  • Quincey exists mostly as a bizarre stereotype of Americana and he only communicates with Arthur Holmwood once.  He spends the entire book being very worthy, his actions announcing “I will die heroically” to the reader
  • The cluster of secondary characters consists of some removal companies talking to each other about Dracula’s boxes of earth, some newspapers reporting on events at Whitby and the “bloofer lady” (i.e. undead Lucy), and of course the unfortunate captain of the Demeter, scribbling in his log about the worst cruise ever

Below is a barplot of how many characters each character sends documents to and receives documents from (this is a summary of the edges in the network above).  It includes self-edges, so people who write only to themselves still “receive” at a value of 1.  It further illustrates the fact that Arthur Holmwood is the most popular girl at school, whilst Jonathan Harker is a weird, angry loner who talks only to himself.  Van Helsing is nice enough to write to him… once.


I have also produced a correlation plot, although the data plotted are not correlation values.  It shows the proportion of documents each author sent to each recipient.  The scale is misleading, as all values are non-negative (i.e. between 0 and 1).  I find it sad that Van Helsing likes to write letters and never writes in his own self-absorbed diary, but people don’t write back.  The Y-axis is sender and the X-axis is recipient.


Finally in this section, an alternative to a network is an arcplot.  These can look very nice indeed, but I don’t think they work particularly well for these data:



By all means go nuts and do some sentiment analysis or something more complicated, but I was happy just to create the obligatory word cloud.  They’re not really empirical, but they can be nice to look at, I guess.  The most common word is “said”, because the book is a “he-said-she-said” bonanza, like all social networks.


Final thoughts

It holds up quite well.  I was surprised to see how “modern” the text was; for example,

“To use an Americanism, he had ‘taken no chances'”

In other ways, it’s comically out of date.  It’s very much a story about a group of wealthy people and anyone else is given short shrift and comical accents.  Servants are also plentiful and the word “peasant” is used semi-frequently, usually to refer to people from central and eastern Europe:

Some of them were just like the peasants at home or those I saw coming through France and Germany, with short jackets, and round hats, and home-made trousers; but others were very picturesque.  The women looked pretty, except when you got near them, but they were very clumsy about the waist.

The sexism is so overt it’s quite funny, with men being strong and noble whilst women are mostly feeble and emotional victims:

“Ah, that wonderful Madam Mina!  She has man’s brain, a brain that a man should have were he much gifted, and a woman’s heart.”

It can just about be forgiven for its time, as Mina is ultimately instrumental in defeating Dracula and has a central role in events overall, even if she eventually becomes a bit of a damsel in distress.  On the other hand, if you wrote a vampire novel today, it would be unforgivable to have a female protagonist willingly submitting to all kinds of demeaning emotional and physical abuse… wouldn’t it?

Perhaps the strangest thing about the story is the way they combat and defeat vampires.  On the one hand, the most potent weapons against them appear to be herbs and Christian mythology (particularly bits of magic wafer), all of which is treated with absolute po-faced seriousness.  On the other hand, vampires seem to be averse to these things but are not physically harmed by them.  They are susceptible to ultraviolence, though, as stabbing a vampire in the heart and sawing its head off is pretty effective.  Maybe the power of prayer is overstated.  The next time you suspect that you’re in the presence of a supernatural being intent on giving you immortality (although there are some downsides), try warding them off with a Kit Kat before you decide to brutally murder them, which will be frowned upon if they’re not actually one of the undead.


You can access the code and files used on my GitHub repository, but the R code is copied below if you’re interested in a quick look at how the plots were made.  You can download the book itself for free from Amazon, and also a nice text version from Project Gutenberg.

#### Load packages ####
library(openxlsx) # for reading Excel files
library(RColorBrewer) # nice color palettes
library(ggplot2) # fancy-looking plots
library(corrplot) # correlation plot
library(network) # for network plot
library(sna) # for network plot
library(tm) # Text mining package
library(wordcloud) # word cloud plotting

#### Read in data and setup ####

## Read from Excel. Note that we have word counts for chapters,
## but never bother using them
chron = read.xlsx("dracula.xlsx",sheet="chronology", startRow=1,
 colNames=TRUE, skipEmptyRows=TRUE, rowNames=FALSE)
wc = read.xlsx("dracula.xlsx",sheet="wordcounts",
 startRow=1, colNames=TRUE, skipEmptyRows=TRUE, rowNames=FALSE)

## Reorder type column for aesthetic reasons and
## cast chapters as a factor
"Telegram","Memo","Newspaper","Ship's log"))

## To reorder the characters, we manually define
## the order to ensure proper grouping
## A frequency-based approach looks more confused
primarycharacters=c("John Seward",
"Jonathan Harker",
"Wilhelmina Murray-Harker",
"Lucy Westenra",
"Abraham Van Helsing",
"Arthur Holmwood",
"Quincey P Morris")

secondarycharacters=c("Captain of the Demeter",
"The Dailygraph",
"The Westminster Gazette",
"The Pall Mall Gazette",
"Carter, Paterson & Co",
"Samuel F Billington & Son",
"Mitchell, Sons & Candy",
"Rufus Smith",
"Patrick Hennessey",
"Sister Agatha")

## Store the initials of each character for labelling plots

## Calculate days of the week - this cannot be done in
## Excel before the year 1900
chron$Date=as.Date(gsub(" ","-",chron$Date))

## Define color palettes


#### End of reading data and setup ###

#### Basic plots ####

## Which days documents were written on
ggtitle("Documents by day of the week")

## Plot document types with who wrote them and who to:
scale_fill_manual(values=acols)+ggtitle("Documents From")
scale_fill_manual(values=acols)+ggtitle("Documents To")

## Who writes what and when, split by chapters
ggtitle("Document type by chapter")
scale_fill_manual(values=acols)+ggtitle("Document sender by chapter")
scale_fill_manual(values=acols)+ggtitle("Document receiver by chapter")

## A heatmap of who sends what to who...

## corrplot() messes with the device setup, so reset it

#### End of basic plots ####

#### Timeline ####

## Crop off the last entry, as it's 7 years later

## Define heights for points
height[main$Type=="Diary"]= 1
height[main$Type=="Ship's log"]= 2
height[main$Type=="Memo"]= 3
height[main$Type=="Telegram"]= 4
height[main$Type=="Newspaper"]= 5
height[main$Type=="Letter"]= 6

## Define vectors of colours for the plot
for(i in 1:length(levels(chron$From))){

## Use chapter information to determine whether the
## point is above or below the line
## Odd numbers above, even below

## Perform plot
plot(x=main$Date,y=rep(1,nrow(main)),type="n",xaxt = "n",yaxt="n",
bty = "n",xlab = "Date", ylab=NA,ylim=c(-6,6),main="Dracula Timeline")
#points(main$Date,height,pch=15,col=tocols) #uncomment for "to" colours
u = par("usr")
arrows(u[1], 0, u[2], 0, xpd=TRUE,lwd=3)
ylabels=c("Diary","Ship's log","Memo","Telegram","Newspaper","Letter")
axis(1, at=as.Date(c("1890-05-01","1890-06-01","1890-07-01",

#### End of timeline ####

#### Network ####

## Create network of who communicates with who
comtablebinary=comtable/comtable # divide by itself so all non-zero values are 1

## Number of people who write to each character
## Number of people who each character writes to

## Barplot of results
barplot(t(sentreceived),main="Characters interacted with",

## Generate some scaled edge widths

## Perform network plot
interactive=TRUE, # Uncomment for interactive plotting
diag=TRUE, # self-loops
loop.steps=5, # angular self-loops
usecurve=TRUE, # curvy arrows
arrowhead.cex=0.5, # smaller arrowheads
label=initials, # initials not full names
label.pos=5, # labels inside shapes
as.character(chron$To)))[levels(chron$From)])+3)/2, # log2 scale

#### End of network ####

#### Arc diagram ####

## IMPORTANT NOTE: loading this package will break the network-plotting
## code, which is why it's all commented out here:
# library(devtools) # for installing arcdiagram
# install_github('arcdiagram', username='gastonstat')
# library(arcdiagram) # for plotting arc diagram

## Generate a suitable object, order it for easy access and
## remove self-communication, as this can't be represented
## on an arc diagram

## Define which arcs we will place above and which below

## Define a custom order of nodes
ordering=c("Abraham Van Helsing",
"John Seward",
"Jonathan Harker",
"Wilhelmina Murray-Harker",
"Arthur Holmwood",
"Quincey P Morris",
"Lucy Westenra",
"Carter, Paterson & Co",
"Samuel F Billington & Son",
"Mitchell, Sons & Candy",
"Sister Agatha",
"Rufus Smith",
"Patrick Hennessey")

## Perform the arc plot
## Col.nodes has a bug, so colours are defined manually
## Could be improved with weighted line arcs and/or nodes
legend("topleft",c("Primary character","Secondary character"),

#### End of arc diagram ####

#### Word cloud ####

## Plot a word cloud. It's not really quantitative, but it's pretty

## Create a corpus and do some basic cleaning up
readerControl=list(language = "en"))
corpus=tm_map(corpus, content_transformer(removePunctuation))
corpus=tm_map(corpus, content_transformer(tolower))
corpus=tm_map(corpus, function(x) removeWords(x, stopwords("english")))

## Create the necessary objects for plotting
tdm = TermDocumentMatrix(corpus)
tdmatrix = as.matrix(tdm)
tdcounts = sort(rowSums(tdmatrix),decreasing=TRUE)
tddf = data.frame(word = names(tdcounts),freq=tdcounts)

## Perform plot
random.order=FALSE, rot.per=0.15, colors=gothic,random.color=TRUE)

#### End of word cloud ####


It’s not me, it’s you – don’t use Perl for math(s)

After a good five years with Perl, it finally crossed the line and forced me to start moving over to Python.  This is a short cautionary tale with two messages:

  1. No language is suitable for every job
  2. Languages are only as good as their libraries/modules/packages/etc

I like Perl a great deal; it’s ideal for almost any task that can’t be achieved trivially on the command line with a few pipes and some awk.  Perl is fast and easy to write, and it’s perfect for text editing and tying other bits of code together.  You could even use it as a flexible wrapper for bash commands and scripts, as I have mentioned before.

However, recently Perl let me down and in a big way.  What I wanted to do was thousands of t-tests on thousands of arrays of numbers, reporting back some basic metrics like p-values and similar.  The hitch was that the input files were enormous text files (several GB).  My language options were either:

  1. R, with its simple, inbuilt t.test() function, should be great, but the input size was a problem.  Reading the whole input file into memory was a viable approach as I work on systems with a lot of memory, but it’s not particularly fast or smart to do so.  I could also catch up on some R reading and find a way to do the calculations dynamically, but on the other hand…
  2. Perl has excellent IO capabilities and can produce results as the files are read in, making the code almost as fast as files could be read and written.  How very efficient – all I needed was to find a nice t-test module

I figured I could solve the problem faster using Perl, so I went fishing in CPAN (the Comprehensive Perl Archive Network, the Perl module repository) for a suitable module.  The best match was very promising:


Written a decade ago (July 2003), surely the code was mature and this module had been used in anger again and again?  The script was quickly completed and set to work, only to fail mysteriously with this cryptic error:

Can't take log of -0.0439586 at /usr/local/share/perl/5.14.2/Statistics/ line 325

After checking for obvious causes like stray zero values or NAs, some debugging revealed that Perl was failing because I was attempting a t-test on a pair of arrays of length 4710.  I wasn’t passing in any illegal values, it was failing purely because the input was too big.  There’s no way I’m going to accept that when a desktop PC running R can perform t-tests on arrays orders of magnitude bigger.

I decided to solve the problem in a memory-hungry fashion in R for the time being.  To be fair to Perl, I haven’t attempted to write the same code in Python, but as Python famously has both NumPy and SciPy, which were written specifically to augment the inbuilt maths functions, I’m optimistic that it won’t fail me.
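Incidentally, the t-statistic only needs each group’s size, mean and variance, so a streaming implementation in any language just has to accumulate n, Σx and Σx² per line.  A quick sketch of Welch’s t from summary statistics in R, checked against the built-in t.test():

```r
set.seed(1)

## Welch's t-statistic from summary statistics alone
welch.t = function(n1, m1, v1, n2, m2, v2){
  (m1 - m2) / sqrt(v1/n1 + v2/n2)
}

## Two arrays of the size that broke the Perl module
x = rnorm(4710, mean=0)
y = rnorm(4710, mean=0.1)
manual  = welch.t(length(x), mean(x), var(x),
                  length(y), mean(y), var(y))
builtin = unname(t.test(x, y)$statistic)
all.equal(builtin, manual)  # TRUE
```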

Shame on you, Perl.

Perl + shell = Sherl

Backticks. Technically speaking, what I’m advocating is probably bad form and frowned upon by real computer scientists, but I love system calls from within Perl scripts. Why would I write several lines of Perl to query what’s in a directory or write a temporary file when I could just use bash shell scripting commands wrapped in backticks?

## Use sed from within Perl to edit a file
`sed -i 's/something/$somethingelse/g' somefile.txt`;
## Read a list of chromosomes directly into an array
my @chromos = `cat /home/cwardell/b37/human_g1k_v37.fasta.fai | awk '{print \$1}' `;

If we consider the ends to be justified by the means (let ends = “getting as much work done in the most time-efficient way possible” and means = “a bit of hackery”), I see no real problem with it.  This is why a great deal of my code ends up containing backticks.  In case you’re not familiar with backticks, you can find them in the top-left of your keyboard, nestled between the Esc, Tab and 1 keys.  Officially a backtick is termed the “grave accent” and is not to be confused with the apostrophe, which will do nothing for your code except throw errors, as your enclosed command will be interpreted as a string.

Bearing in mind that Perl is already an unforgivably unattractive language and you’re doing it no great aesthetic harm by throwing in some shell scripting, I therefore name this ugly mongrel mix of two ugly languages “Sherl” and I’ll keep using it because I sometimes value function over form, at least until I’m visualising data. I’m also trusting that your code is so comment-heavy that you’ll understand the what, how and why of your Sherl code.  You do obsessively comment your code, right?