---
title: "dplyr functions for a single dataset"
output:
  html_document:
    toc: true
    toc_depth: 4
---
```{r setup, include = FALSE, cache = FALSE}
knitr::opts_chunk$set(error = TRUE)
```
### Where were we?
In the [introduction to dplyr](block009_dplyr-intro.html), we used two very important verbs and an operator:
* `filter()` for subsetting data with row logic
* `select()` for subsetting data variable- or column-wise
* the pipe operator `%>%`, which feeds the LHS as the first argument to the expression on the RHS (see the small sketch just after this list)
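To make the pipe concrete, here is a tiny sketch (an aside I've added, not evaluated here because we haven't loaded the packages yet): the piped form is equivalent to the ordinary function call.
```{r eval = FALSE}
## these two calls are equivalent: the pipe feeds the LHS in as the first argument
gapminder %>% filter(country == "Canada")
filter(gapminder, country == "Canada")
```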
We also discussed dplyr's role inside the tidyverse and tibbles:
* dplyr is a core package in the [tidyverse](https://github.com/hadley/tidyverse) meta-package. Since we often make incidental use of the other member packages, we will load dplyr and the rest via `library(tidyverse)`.
* The tidyverse embraces a special flavor of data frame, called a tibble. The `gapminder` dataset is stored as a tibble.
### Load dplyr and gapminder
I choose to load the tidyverse, which will load dplyr, among other packages we use incidentally below. Also load gapminder.
```{r}
library(gapminder)
library(tidyverse)
```
### Create a copy of gapminder
We're going to make changes to the `gapminder` tibble. To eliminate any fear that you're damaging the data that comes with the package, we create an explicit copy of `gapminder` for our experiments.
```{r}
(my_gap <- gapminder)
```
**Pay close attention** to when we evaluate statements but let the output just print to screen:
```{r eval = FALSE}
## let output print to screen, but do not store
my_gap %>% filter(country == "Canada")
```
... versus when we assign the output to an object, possibly overwriting an existing object.
```{r eval = FALSE}
## store the output as an R object
my_precious <- my_gap %>% filter(country == "Canada")
```
### Use `mutate()` to add new variables
Imagine we wanted to recover each country's GDP. After all, the Gapminder data has variables for population and GDP per capita. Let's multiply them together.
`mutate()` is a function that defines and inserts new variables into a tibble. You can refer to existing variables by name.
```{r}
my_gap %>%
  mutate(gdp = pop * gdpPercap)
```
Hmmmm ... those GDP numbers are almost uselessly large and abstract. Consider the [advice of Randall Munroe of xkcd](http://fivethirtyeight.com/datalab/xkcd-randall-munroe-qanda-what-if/):
> One thing that bothers me is large numbers presented without context ... 'If I added a zero to this number, would the sentence containing it mean something different to me?' If the answer is 'no,' maybe the number has no business being in the sentence in the first place.
Maybe it would be more meaningful to consumers of my tables and figures to stick with GDP per capita. But what if I reported GDP per capita *relative to some benchmark country*? Since Canada is my adopted home, I'll go with that.
I need to create a new variable that is `gdpPercap` divided by Canadian `gdpPercap`, taking care that I always divide two numbers that pertain to the same year.
Here is how I will achieve this:
* Filter down to the rows for Canada.
* Create a new temporary variable in `my_gap`:
- Extract the `gdpPercap` variable from the Canadian data.
- Replicate it once per country in the dataset, so it has the right length.
* Divide raw `gdpPercap` by this Canadian figure.
* Discard the temporary variable of replicated Canadian `gdpPercap`.
```{r}
ctib <- my_gap %>%
  filter(country == "Canada")
## this is a semi-dangerous way to add this variable
## I'd prefer to join on year, but we haven't covered joins yet
my_gap <- my_gap %>%
  mutate(tmp = rep(ctib$gdpPercap, nlevels(country)),
         gdpPercapRel = gdpPercap / tmp,
         tmp = NULL)
```
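For the curious, here is a sketch of the join-based alternative alluded to in the comment above (not run here, since joins come later in the course; the helper column name `gdpPercap_canada` is just something I made up for illustration):
```{r eval = FALSE}
## safer alternative: match Canadian gdpPercap to each row by year
my_gap %>%
  left_join(ctib %>% select(year, gdpPercap_canada = gdpPercap), by = "year") %>%
  mutate(gdpPercapRel = gdpPercap / gdpPercap_canada) %>%
  select(-gdpPercap_canada)
```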
Note that `mutate()` builds new variables sequentially, so you can reference earlier ones (like `tmp`) when defining later ones (like `gdpPercapRel`). Also, you can get rid of a variable by setting it to `NULL`.
How could we sanity check that this worked? The Canadian values for `gdpPercapRel` better all be 1!
```{r}
my_gap %>%
  filter(country == "Canada") %>%
  select(country, year, gdpPercapRel)
```
I perceive Canada to be a "high GDP" country, so I predict that the distribution of `gdpPercapRel` is located below 1, possibly even well below. Check your intuition!
```{r}
summary(my_gap$gdpPercapRel)
```
The relative GDP per capita numbers are, in general, well below 1. We see that most of the countries covered by this dataset have substantially lower GDP per capita, relative to Canada, across the entire time period.
Remember: Trust No One. Including (especially?) yourself. Always try to find a way to check that you've done what you meant to. Prepare to be horrified.
### Use `arrange()` to row-order data in a principled way
`arrange()` reorders the rows in a data frame. Imagine you wanted this data ordered by year then country, as opposed to by country then year.
```{r}
my_gap %>%
  arrange(year, country)
```
Or maybe you want just the data from 2007, sorted on life expectancy?
```{r}
my_gap %>%
  filter(year == 2007) %>%
  arrange(lifeExp)
```
Oh, you'd like to sort on life expectancy in **desc**ending order? Then use `desc()`.
```{r}
my_gap %>%
  filter(year == 2007) %>%
  arrange(desc(lifeExp))
```
I advise that your analyses NEVER rely on rows or variables being in a specific order. But it's still true that human beings write the code and the interactive development process can be much nicer if you reorder the rows of your data as you go along. Also, once you are preparing tables for human eyeballs, it is imperative that you step up and take control of row order.
### Use `rename()` to rename variables
When I first cleaned this Gapminder excerpt, I was a [`camelCase`](http://en.wikipedia.org/wiki/CamelCase) person, but now I'm all about [`snake_case`](http://en.wikipedia.org/wiki/Snake_case). So I am vexed by the variable names I chose when I cleaned this data years ago. Let's rename some variables!
```{r}
my_gap %>%
  rename(life_exp = lifeExp,
         gdp_percap = gdpPercap,
         gdp_percap_rel = gdpPercapRel)
```
I did NOT assign the post-rename object back to `my_gap` because that would make the chunks in this tutorial harder to copy/paste and run out of order. In real life, I would probably assign this back to `my_gap`, in a data preparation script, and proceed with the new variable names.
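If you do want to adopt the new names for the rest of an analysis, the assignment would look like this (a sketch only; it is not run here, so later chunks keep working with the original names):
```{r eval = FALSE}
## in a real data preparation script, I would store the renamed result
my_gap <- my_gap %>%
  rename(life_exp = lifeExp,
         gdp_percap = gdpPercap,
         gdp_percap_rel = gdpPercapRel)
```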
### `select()` can rename and reposition variables
You've seen simple use of `select()`. There are two tricks you might enjoy:
1. `select()` can rename the variables you request to keep.
1. `select()` can be used with `everything()` to hoist a variable up to the front of the tibble.
```{r}
my_gap %>%
  filter(country == "Burundi", year > 1996) %>%
  select(yr = year, lifeExp, gdpPercap) %>%
  select(gdpPercap, everything())
```
`everything()` is one of several helpers for variable selection. Read its help to see the rest.
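For instance, here is a quick sketch (an aside I've added) using `starts_with()`, another of those selection helpers:
```{r}
## keep country, year, and every variable whose name starts with "gdp"
my_gap %>%
  select(country, year, starts_with("gdp"))
```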
### `group_by()` is a mighty weapon
I have found ~~friends and family~~ collaborators love to ask seemingly innocuous questions like, "which country experienced the sharpest 5-year drop in life expectancy?". In fact, that is a totally natural question to ask. But if you are using a language that doesn't know about data, it's an incredibly annoying question to answer.
dplyr offers powerful tools to solve this class of problem.
* `group_by()` adds extra structure to your dataset -- grouping information -- which lays the groundwork for computations within the groups.
* `summarize()` takes a dataset with $n$ observations, computes requested summaries, and returns a dataset with 1 observation.
* Window functions take a dataset with $n$ observations and return a dataset with $n$ observations.
* `mutate()` and `summarize()` will honor groups.
* You can also do very general computations on your groups with `do()`, though elsewhere in this course, I advocate for other approaches that I find more intuitive, using the `purrr` package.
Combined with the verbs you already know, these new tools allow you to solve an extremely diverse set of problems with relative ease.
#### Counting things up
Let's start with simple counting. How many observations do we have per continent?
```{r}
my_gap %>%
  group_by(continent) %>%
  summarize(n = n())
```
Let us pause here to think about the tidyverse. You could get these same frequencies using `table()` from base R.
```{r}
table(gapminder$continent)
str(table(gapminder$continent))
```
But the object of class `table` that is returned makes downstream computation a bit fiddlier than you'd like. For example, it's too bad the continent levels come back only as *names* and not as a proper factor, with the original set of levels. This is an example of how the tidyverse smooths transitions where you want the output of step i to become the input of step i + 1.
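To see the contrast, here is a small check I've added: in the tibble produced by the grouped summary, `continent` survives as the original factor, with all of its levels intact.
```{r}
## continent stays a proper factor in the tibble version
my_gap %>%
  group_by(continent) %>%
  summarize(n = n()) %>%
  str()
```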
The `tally()` function is a convenience function that knows to count rows. It honors groups.
```{r}
my_gap %>%
  group_by(continent) %>%
  tally()
```
The `count()` function is an even more convenient function that does both grouping and counting.
```{r}
my_gap %>%
  count(continent)
```
What if we wanted to add the number of unique countries for each continent? You can compute multiple summaries inside `summarize()`. Use the `n_distinct()` function to count the number of distinct countries within each continent.
```{r}
my_gap %>%
  group_by(continent) %>%
  summarize(n = n(),
            n_countries = n_distinct(country))
```
#### General summarization
The functions you'll apply within `summarize()` include classical statistical summaries, like `mean()`, `median()`, `var()`, `sd()`, `mad()`, `IQR()`, `min()`, and `max()`. Remember they are functions that take $n$ inputs and distill them down into 1 output.
Although this may be statistically ill-advised, let's compute the average life expectancy by continent.
```{r}
my_gap %>%
  group_by(continent) %>%
  summarize(avg_lifeExp = mean(lifeExp))
```
`summarize_at()` applies the same summary function(s) to multiple variables. Let's compute average and median life expectancy and GDP per capita by continent by year ... but only for 1952 and 2007.
```{r}
my_gap %>%
  filter(year %in% c(1952, 2007)) %>%
  group_by(continent, year) %>%
  summarize_at(vars(lifeExp, gdpPercap), funs(mean, median))
```
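An aside I've added: in more recent versions of dplyr (1.0.0 and later), `funs()` has been retired and the same multi-variable summary is usually written with `across()`. A sketch, not evaluated here since the rest of this material predates that interface:
```{r eval = FALSE}
## modern equivalent using across() (dplyr >= 1.0.0)
my_gap %>%
  filter(year %in% c(1952, 2007)) %>%
  group_by(continent, year) %>%
  summarize(across(c(lifeExp, gdpPercap), list(mean = mean, median = median)))
```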
Let's focus just on Asia. What are the minimum and maximum life expectancies seen by year?
```{r}
my_gap %>%
  filter(continent == "Asia") %>%
  group_by(year) %>%
  summarize(min_lifeExp = min(lifeExp), max_lifeExp = max(lifeExp))
```
Of course it would be much more interesting to see *which* country contributed these extreme observations. Is the minimum (maximum) always coming from the same country? We tackle that with window functions shortly.
### Grouped mutate
Sometimes you don't want to collapse the $n$ rows for each group into one row. You want to keep your groups, but compute within them.
#### Computing with group-wise summaries
Let's make a new variable that is the years of life expectancy gained (lost) relative to 1952, for each individual country. We group by country and use `mutate()` to make a new variable. The `first()` function extracts the first value from a vector. Notice that `first()` is operating on the vector of life expectancies *within each country group*.
```{r}
my_gap %>%
  group_by(country) %>%
  select(country, year, lifeExp) %>%
  mutate(lifeExp_gain = lifeExp - first(lifeExp)) %>%
  filter(year < 1963)
```
Within country, we take the difference between life expectancy in year $i$ and life expectancy in 1952. Therefore we always see zeroes for 1952 and, for most countries, a sequence of positive and increasing numbers.
#### Window functions
Window functions take $n$ inputs and give back $n$ outputs. Furthermore, the output depends on all the values. So `rank()` is a window function but `log()` is not. Here we use window functions based on ranks and offsets.
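As a tiny illustration (a toy vector I've added, not from the Gapminder data): `min_rank()` needs to see the whole vector to produce each output, whereas `log()` works element by element.
```{r}
(x <- c(10, 20, 15))
min_rank(x)  ## each rank depends on all of x
log(x)       ## each result depends only on the corresponding element
```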
Let's revisit the worst and best life expectancies in Asia over time, but retaining info about *which* country contributes these extreme values.
```{r}
my_gap %>%
  filter(continent == "Asia") %>%
  select(year, country, lifeExp) %>%
  group_by(year) %>%
  filter(min_rank(desc(lifeExp)) < 2 | min_rank(lifeExp) < 2) %>%
  arrange(year) %>%
  print(n = Inf)
```
We see that (min = Afghanistan, max = Japan) is the most frequent result, but Cambodia and Israel pop up at least once each as the min or max, respectively. That table should make you impatient for our upcoming work on tidying and reshaping data! Wouldn't it be nice to have one row per year?
How did that actually work? First, I store and view a partial pipeline that leaves off the `filter()` statement. All of these operations should be familiar.
```{r}
asia <- my_gap %>%
  filter(continent == "Asia") %>%
  select(year, country, lifeExp) %>%
  group_by(year)
asia
```
Now we apply a window function -- `min_rank()`. Since `asia` is grouped by year, `min_rank()` operates within mini-datasets, each for a specific year. Applied to the variable `lifeExp`, `min_rank()` returns the rank of each country's observed life expectancy. FYI, the `min` part just specifies how ties are broken. Here is an explicit peek at these within-year life expectancy ranks, in both the (default) ascending and descending order.
For concreteness, I use `mutate()` to actually create these variables, even though I dropped this in the solution above. Let's look at a bit of that.
```{r}
asia %>%
  mutate(le_rank = min_rank(lifeExp),
         le_desc_rank = min_rank(desc(lifeExp))) %>%
  filter(country %in% c("Afghanistan", "Japan", "Thailand"), year > 1995)
```
Afghanistan tends to present 1's in the `le_rank` variable, Japan tends to present 1's in the `le_desc_rank` variable, and other countries, like Thailand, present less extreme ranks.
You can understand the original `filter()` statement now:
```{r eval = FALSE}
filter(min_rank(desc(lifeExp)) < 2 | min_rank(lifeExp) < 2)
```
These two sets of ranks are formed on the fly, within year group, and `filter()` retains rows with rank less than 2, which means ... the row with rank = 1. Since we do this for both the ascending and descending ranks, we get both the min and the max.
If we had wanted just the min OR the max, an alternative approach using `top_n()` would have worked.
```{r}
my_gap %>%
  filter(continent == "Asia") %>%
  select(year, country, lifeExp) %>%
  arrange(year) %>%
  group_by(year) %>%
  # top_n(1, wt = lifeExp)       ## gets the max
  top_n(1, wt = desc(lifeExp))   ## gets the min
```
### Grand Finale
So let's answer that "simple" question: which country experienced the sharpest 5-year drop in life expectancy? Recall that this excerpt of the Gapminder data only has data every five years, e.g. for 1952, 1957, etc. So this really means looking at life expectancy changes between adjacent timepoints.
At this point, that's just too easy, so let's do it by continent while we're at it.
```{r}
my_gap %>%
  select(country, year, continent, lifeExp) %>%
  group_by(continent, country) %>%
  ## within country, take (lifeExp in year i) - (lifeExp in year i - 1)
  ## positive means lifeExp went up, negative means it went down
  mutate(le_delta = lifeExp - lag(lifeExp)) %>%
  ## within country, retain the worst lifeExp change = smallest or most negative
  summarize(worst_le_delta = min(le_delta, na.rm = TRUE)) %>%
  ## within continent, retain the row with the lowest worst_le_delta
  top_n(-1, wt = worst_le_delta) %>%
  arrange(worst_le_delta)
```
Ponder that for a while. The subject matter and the code. Mostly you're seeing what genocide looks like in dry statistics on average life expectancy.
Break the code into pieces, starting at the top, and inspect the intermediate results. That's certainly how I was able to *write* such a thing. These commands do not [leap fully formed out of anyone's forehead](http://tinyurl.com/athenaforehead) -- they are built up gradually, with lots of errors and refinements along the way. I'm not even sure it's a great idea to do so much manipulation in one fell swoop. Is the statement above really hard for you to read? If yes, then by all means break it into pieces and make some intermediate objects. Your code should be easy to write and read when you're done.
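One way to do that dissection, sketched below (not run here; the object name `le_changes` is just something I made up for illustration): store the first few steps in an intermediate object and inspect it before adding the `summarize()` and `top_n()` steps.
```{r eval = FALSE}
## inspect the grouped, per-country deltas before summarizing
le_changes <- my_gap %>%
  select(country, year, continent, lifeExp) %>%
  group_by(continent, country) %>%
  mutate(le_delta = lifeExp - lag(lifeExp))
le_changes
```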
In later tutorials, we'll explore more of dplyr, such as operations based on two datasets.
### Resources
`dplyr` official stuff
* package home [on CRAN](http://cran.r-project.org/web/packages/dplyr/index.html)
- note there are several vignettes, with the [introduction](http://cran.r-project.org/web/packages/dplyr/vignettes/introduction.html) being the most relevant right now
- the [one on window functions](http://cran.rstudio.com/web/packages/dplyr/vignettes/window-functions.html) will also be interesting to you now
* development home [on GitHub](https://github.com/hadley/dplyr)
* [tutorial HW delivered](https://www.dropbox.com/sh/i8qnluwmuieicxc/AAAgt9tIKoIm7WZKIyK25lh6a) (note this links to a DropBox folder) at useR! 2014 conference
[RStudio Data Wrangling cheatsheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf), covering `dplyr` and `tidyr`. Remember you can get to these via *Help > Cheatsheets.*
[Data transformation](http://r4ds.had.co.nz/transform.html) chapter of [R for Data Science](http://r4ds.had.co.nz)
[Excellent slides](https://github.com/tjmahr/MadR_Pipelines) on pipelines and `dplyr` by TJ Mahr, talk given to the Madison R Users Group.
Blog post [Hands-on dplyr tutorial for faster data manipulation in R](http://www.dataschool.io/dplyr-tutorial-for-faster-data-manipulation-in-r/) by Data School, that includes a link to an R Markdown document and links to videos
[Cheatsheet](bit001_dplyr-cheatsheet.html) I made for `dplyr` join functions (not relevant yet but soon)