Using apply, sapply, lapply in R
This is an introductory post about using apply, sapply and lapply, best suited for people relatively new to R or unfamiliar with these functions.
We have three columns and 30 rows; the first few rows look like this:
method1 method2 method3
[1,] 0.05517714 0.014054038 0.017260447
[2,] 0.08367678 0.003570883 0.004289079
[3,] 0.05274706 0.028629661 0.071323030
[4,] 0.06769936 0.048446559 0.057432519
[5,] 0.06875188 0.019782518 0.080564474
[6,] 0.04913779 0.100062929 0.102208706
We can simulate data like this using rnorm to create three sets of observations: the first with mean 0, the second with mean 2 and the third with mean 5, each with 30 rows.
m <- matrix(data=cbind(rnorm(30, 0), rnorm(30, 2), rnorm(30, 5)), nrow=30, ncol=3)
Apply
First I want to make sure I created that matrix correctly, with three columns whose means are roughly 0, 2 and 5 respectively. We can use apply and the base mean function to check this.
We tell apply to traverse the data row-wise or column-wise via the second argument. In this case we expect three numbers back at the end, the mean value for each column, so we tell apply to work along columns by passing 2 as the second argument.
But let’s do it wrong for the point of illustration:
apply(m, 1, mean)
# [1] 2.408150 2.709325 1.718529 0.822519 2.693614 2.259044 1.849530 2.544685 2.957950 2.219874
#[11] 2.582011 2.471938 2.015625 2.101832 2.189781 2.319142 2.504821 2.203066 2.280550 2.401297
#[21] 2.312254 1.833903 1.900122 2.427002 2.426869 1.890895 2.515842 2.363085 3.049760 2.027570
Passing 1 as the second argument, we get 30 values back, giving the mean of each row. That is not the three numbers we were expecting, so let's try again.
apply(m, 2, mean)
#[1] -0.02664418 1.95812458 4.86857792
Great. We can see the mean of each column is roughly 0, 2, and 5 as we expected.
Our own functions
Let’s see how many negative numbers each column has, using apply again:
apply(m, 2, function(x) length(x[x<0]))
#[1] 14 1 0
So 14 negative values in column one, 1 negative value in column two, and none in column three.
Here we have used a simple function we defined in the call to apply, rather than a built-in function. Note we did not specify a return value for our function: R will magically return the last evaluated value. The function uses subsetting to extract all the elements of x that are less than 0, and then counts how many are left using length.
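If the inline function feels cramped, the same logic can be written as a named helper with an explicit return; a small sketch (the name count_negative is purely illustrative):
count_negative <- function(x) {
  # count the elements of x that are below zero, returning the count explicitly
  return(length(x[x < 0]))
}
apply(m, 2, count_negative)
#[1] 14 1 0   (your counts will differ, since the data are random)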
The function takes one argument, which I have arbitrarily called x. In this case x will be a single column of the matrix. Is it a 1-column matrix or just a vector? Let's have a look:
apply(m, 2, function(x) is.matrix(x))
#[1] FALSE FALSE FALSE
Not a matrix. Here the function definition is not required; we could instead just pass the is.matrix function, as it only takes one argument and has already been wrapped up as a function for us. Let's check that they are vectors, as we might expect.
apply(m, 2, is.vector)
#[1] TRUE TRUE TRUE
Why then did we need to wrap up our length expression in a function? When we want to define our own handling function for apply, we must at a minimum give a name to the incoming data, so we can use it in our function.
apply(m, 2, length(x[x<0]))
#Error in match.fun(FUN) : object ‘x’ not found
We are referring to some value x in the function, but R does not know where that is and so gives us an error. There are other forces at play here, but for simplicity just remember to wrap any code up in a function. For example, let’s look at the mean value of only the positive values:
apply(m, 2, function(x) mean(x[x>0]))
#[1] 0.4466368 2.0415736 4.8685779
Using sapply and lapply
These two functions work in a similar way, traversing over a set of data like a list or vector, and calling the specified function for each item.
Sometimes we need to traverse our data in a less than linear way. Say we wanted to compare the current observation with the value 5 periods before it. You could probably use rollapply for this (from the zoo package, which quantmod loads), but a quick and dirty way is to run sapply or lapply passing a set of index values; we sketch this in the Dirty Deeds section below.
Here we will use sapply, which works on a list or vector of data.
sapply(1:3, function(x) x^2)
#[1] 1 4 9
lapply is very similar, however it will return a list rather than a vector:
lapply(1:3, function(x) x^2)
#[[1]]
#[1] 1
#
#[[2]]
#[1] 4
#
#[[3]]
#[1] 9
Passing simplify=FALSE to sapply will also give you a list:
sapply(1:3, function(x) x^2, simplify=FALSE)
#[[1]]
#[1] 1
#
#[[2]]
#[1] 4
#
#[[3]]
#[1] 9
And you can use unlist with lapply to get a vector.
unlist(lapply(1:3, function(x) x^2))
#[1] 1 4 9
However, the behaviour is not quite as clean when things have names, so it is best to use sapply or lapply as makes sense for your data and what you want to receive back. If you want a list returned, use lapply. If you want a vector, use sapply.
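One example of the naming wrinkle (just an illustration of the default behaviour): when given a character vector, sapply uses the input values as names for the result, while unlist(lapply(...)) returns an unnamed vector.
sapply(c("a", "bb", "ccc"), nchar)
#  a  bb ccc
#  1   2   3
unlist(lapply(c("a", "bb", "ccc"), nchar))
#[1] 1 2 3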
Dirty Deeds
Anyway, a cheap trick is to pass sapply a vector of indexes and write your function making some assumptions about the structure of the underlying data. Let’s look at our mean example again:
sapply(1:3, function(x) mean(m[,x]))
#[1] -0.02664418 1.95812458 4.86857792
We pass the column indexes (1,2,3) to our function, which assumes some variable m has our data. Fine for quickies but not very nice, and will likely turn into a maintainability bomb down the line.
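As another illustration of the same trick, the "compare with the value 5 periods before" idea from earlier might be sketched like this, assuming the observations of interest sit in the first column of m (purely illustrative):
# difference between each observation and the one 5 rows earlier,
# starting from row 6 since the first five rows have nothing 5 periods back
diffs <- sapply(6:nrow(m), function(i) m[i, 1] - m[i - 5, 1])
length(diffs)
#[1] 25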
We can neaten things up a bit by passing our data as an argument to our function, and using the ... special argument, which all of the apply functions have for passing extra arguments:
sapply(1:3, function(x, y) mean(y[,x]), y=m)
#[1] -0.02664418 1.95812458 4.86857792
This time, our function has 2 arguments, x and y. The x variable will be as it was before, whatever sapply is currently going through. The y variable we will pass using the optional arguments to sapply.
In this case we have passed in m, explicitly naming the y argument in the sapply call. That is not strictly necessary, but it makes the code easier to read and maintain. The y value will be the same for each call sapply makes to our function.
I don’t really recommend passing the index arguments like this, it is error prone and can be quite confusing to others reading your code.
I hope you found these examples helpful. Please check out part 2 where we create a density plot of the values in our matrix.
Density Plot with ggplot
This is a follow-on from the post Using apply, sapply, lapply in R.
The dataset we are using was created like so:
m <- matrix(data=cbind(rnorm(30, 0), rnorm(30, 2), rnorm(30, 5)), nrow=30, ncol=3)
Three columns of 30 observations, normally distributed with means of 0, 2 and 5. We want a density plot to compare the distributions of the three columns using ggplot.
First let's give our matrix some column names:
colnames(m) <- c('method1', 'method2', 'method3')
head(m)
# method1 method2 method3
#[1,] 0.06288358 2.7413567 4.420209
#[2,] -0.11240501 3.4126550 4.827725
#[3,] 0.02467713 1.0868087 4.044101
ggplot has a nice function that displays just what we are after: geom_density, along with its counterpart stat_density, which has more examples.
ggplot likes to work on data frames, and we have a matrix, so let's fix that first:
df <- as.data.frame(m)
df
# method1 method2 method3
#1 0.06288358 2.7413567 4.420209
#2 -0.11240501 3.4126550 4.827725
#3 0.02467713 1.0868087 4.044101
#4 -0.73854932 -0.4618973 3.668004
Enter stack
What we would really like is to have our data in 2 columns, where the first column contains the data values, and the second column contains the method name.
Enter the base function stack, which is a great little function giving just what we need:
dfs <- stack(df)
dfs
# values ind
#1 0.06288358 method1
#2 -0.11240501 method1
#…
#88 5.55704736 method3
#89 6.40128267 method3
#90 3.18269138 method3
We can see the values are in one column named values, and the method names (the previous column names) are in the second column named ind. We can confirm they have been turned into a factor as well:
is.factor(dfs[,2])
#[1] TRUE
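If you want to see which levels the factor has (one per original column), a quick check along these lines should show them:
levels(dfs$ind)
#[1] "method1" "method2" "method3"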
stack has a partner in crime, unstack, which does the opposite:
unstack(dfs)
# method1 method2 method3
#1 0.06288358 2.7413567 4.420209
#2 -0.11240501 3.4126550 4.827725
#3 0.02467713 1.0868087 4.044101
#4 -0.73854932 -0.4618973 3.668004
Back to ggplot
So, let's try plotting our densities with ggplot:
ggplot(dfs, aes(x=values)) + geom_density()
The first argument is our stacked data frame, and the second is a call to the aes function, which tells ggplot that the 'values' column should be used on the x-axis.
However, our plot is not quite looking how we wish:
Hmm.
We want to group the values by each method used. To do this we will use the 'ind' column, and we tell ggplot about this by using aes in the geom_density call:
ggplot(dfs, aes(x=values)) + geom_density(aes(group=ind))
This is getting closer, but it's not easy to tell each one apart. Let's try colouring the different methods, based on the ind column in our data frame.
ggplot(dfs, aes(x=values)) + geom_density(aes(group=ind, colour=ind))

Looking better. I'd like the density regions to stand out a bit more, so I will use fill with an alpha value of 0.3 to make them semi-transparent.
ggplot(dfs, aes(x=values)) + geom_density(aes(group=ind, colour=ind, fill=ind), alpha=0.3)
That is much more in line with what I wanted to see. Note that the alpha argument is passed to geom_density() rather than aes().