
Commit 9f1be42

committed
# Conflicts:
#	.DS_Store
#	01-rstudio_and_basics.Rmd
#	02-getting_data_in_R.Rmd
#	03-pooling_effect_sizes.Rmd
#	04-forest_plots.Rmd
#	06-Subgroup_Analyses.Rmd
#	07-metaregression.Rmd
#	08-publication_bias.Rmd
#	09-risk_of_bias_summary.Rmd
#	10-Effectsizeconverter.Rmd
#	10-network_metanalysis.Rmd
#	11-Effectsizeconverter.Rmd
#	11-power_analysis.Rmd
#	12-Effectsizeconverter.Rmd
#	12-power_analysis.Rmd
#	13-power_analysis.Rmd
#	14-references.Rmd
#	Doing_Meta_Analysis_in_R.log
#	Doing_Meta_Analysis_in_R.pdf
#	Doing_Meta_Analysis_in_R.tex
#	book.bib
#	index.Rmd
#	packages.bib
2 parents e8a921c + f3259ea commit 9f1be42

28 files changed: 9807 additions & 1 deletion

01-rstudio_and_basics.Rmd

Lines changed: 10 additions & 1 deletion
@@ -9,7 +9,16 @@ Before we start with our meta-analysis, we have to download and prepare a **comp
## Getting RStudio to run on your computer {#RStudio}

<<<<<<< HEAD

=======
```{r, echo=FALSE, fig.width=3,fig.height=2}
library(png)
library(grid)
img <- readPNG("rstudiologo.PNG")
grid.raster(img)
```
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

As a prerequisite for this guide, you need to have **RStudio** and a few essential **R packages** installed.

02-getting_data_in_R.Rmd

Lines changed: 155 additions & 0 deletions
Large diffs are not rendered by default.

03-pooling_effect_sizes.Rmd

Lines changed: 255 additions & 0 deletions
Large diffs are not rendered by default.

04-forest_plots.Rmd

Lines changed: 74 additions & 0 deletions
@@ -2,18 +2,33 @@
![](forest.jpg)

<<<<<<< HEAD

Now that we created the **output of our meta-analysis** using the `metagen`, `metacont` or `metabin` functions in `meta` (see [Chapter 4.1](#fixed), [Chapter 4.2](#random) and [Chapter 4.3](#binary)), it is time to present the data in a more digestible way. **Forest Plots** are an easy way to do this, and it is conventional to report forest plots in meta-analysis publications.

=======
```{block,type='rmdinfo'}
Now that we created the **output of our meta-analysis** using the `metagen`, `metacont` or `metabin` functions in `meta` (see [Chapter 4.1](#fixed), [Chapter 4.2](#random) and [Chapter 4.3](#binary)), it is time to present the data in a more digestible way.

**Forest Plots** are an easy way to do this, and it is conventional to report forest plots in meta-analysis publications.
```

<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

## Generating a Forest Plot

To produce a forest plot, we use the meta-analysis output we just created (e.g., `m`, `m.raw`) and the `meta::forest()` function. I'll use my `m.hksj.raw` output from [Chapter 4.2.3](#random.raw) to create the forest plot.

<<<<<<< HEAD
$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
```{r,echo=FALSE,warning=FALSE,message=FALSE}
load("metacont_data.RData")
metacont$Ne<-as.numeric(metacont$Ne)
@@ -40,18 +55,29 @@ m.hksj.raw<-metacont(Ne,
metacont$intervention.type<-c("PCI","PCI","Mindfulness","CBT","CBT","CBT")
```

<<<<<<< HEAD
```{r,fig.width=11,fig.height=3,fig.align='center'}
forest(m.hksj.raw)
```

$~$

Looks good so far. We see that the function plotted a forest plot with a **diamond** (i.e. the overall effect and its confidence interval) and a **prediction interval**. There are plenty of **other parameters** within the `meta::forest` function which we can use to modify the forest plot.
=======
```{r,fig.width=11,fig.height=4,fig.align='center'}
forest(m.hksj.raw)
```

Looks good so far. We see that the function plotted a forest plot with a **diamond** (i.e. the overall effect and its confidence interval) and a **prediction interval**.

There are plenty of **other parameters** within the `meta::forest` function which we can use to modify the forest plot.
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,echo=FALSE}
library(knitr)
library(grid)
load("foresttable.RData")
<<<<<<< HEAD
foresttable1<-foresttable[1:23,]
kable(foresttable1) %>%
  column_spec(3, width = "30em")
@@ -78,6 +104,14 @@ kable(foresttable3) %>%
This is again just an overview. For all settings, type `?meta::forest` in your **console** to see more. Let's play around with the function a little now:

$~$
=======
kable(foresttable)
```

This is again just an overview. For all settings, type `?meta::forest` in your **console** to see more.

Let's play around with the function a little now:
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,fig.width=9,fig.height=3.5,fig.align='center'}
forest(m.hksj.raw,
@@ -99,11 +133,19 @@ forest(m.hksj.raw,
```

<<<<<<< HEAD
$~$

Looks good so far! For special **layout types**, proceed to [Chapter 5.2](#layouttypes) now.

=======
Looks good so far! For special **layout types**, proceed to [Chapter 5.2](#layouttypes) now.

<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

## Layout types {#layouttypes}
@@ -115,17 +157,23 @@ The `meta::forest` function also has two **Layouts** preinstalled which we can u
The **RevMan** layout looks like this:

<<<<<<< HEAD
$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
```{r,fig.width=10,fig.height=4,fig.align='center'}
forest(m.hksj.raw,
       layout = "RevMan5",
       digits.sd = 2)

```
<<<<<<< HEAD

$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
The **JAMA** layout looks like this:

```{r,fig.width=7,fig.height=3,fig.align='center'}
@@ -136,6 +184,7 @@ forest(m.hksj.raw,
       colgap.forest.left = unit(15,"mm"))
```

<<<<<<< HEAD

## Saving the forest plots

@@ -144,6 +193,19 @@ Let's say i want to save the JAMA version of my Forest Plot now. To do this, i h
<br></br>

$~$
=======
<br><br>

---

## Saving the forest plots

Let's say I want to save the JAMA version of my forest plot now. To do this, I have to reuse the code with which I plotted my forest plot and put it between `pdf(file='name_of_the_pdf_i_want_to_create.pdf')` and `dev.off()`, each on a separate line. This saves the plot as a PDF in my working directory.

This way, I can export the plot in different formats (you can find more details on the saving options [here](#saving)).

<br></br>
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
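As a minimal sketch of the `pdf()`/`dev.off()` pattern described above (the file name, dimensions and `forest()` arguments are only illustrative; `png()` and `svg()` work the same way):

```{r, eval=FALSE}
# Open a PDF graphics device, draw the plot, then close the device.
# File name and plot settings below are illustrative placeholders.
pdf(file = 'forestplot.pdf', width = 8, height = 7)
forest(m.hksj.raw,
       layout = "JAMA")
dev.off()
```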
**PDF**

@@ -157,8 +219,11 @@ forest.jama<-forest(m.hksj.raw,
dev.off()
```

<<<<<<< HEAD
$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
**PNG**

```{r, eval=FALSE}
@@ -171,8 +236,11 @@ forest.jama<-forest(m.hksj.raw,
dev.off()
```

<<<<<<< HEAD
$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
**Scalable Vector Graphic**

```{r, eval=FALSE}
@@ -186,5 +254,11 @@ dev.off()
```

<<<<<<< HEAD
=======
<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876


06-Subgroup_Analyses.Rmd

Lines changed: 85 additions & 0 deletions
@@ -2,15 +2,30 @@
![](subgroup.jpg)

<<<<<<< HEAD
In [Chapter 6](#heterogeneity), we discussed in depth why **between-study heterogeneity** is such an important issue in interpreting the results of our meta-analysis, and how we can **explore sources of heterogeneity** using [outlier](#outliers) and [influence analyses](#influenceanalyses). Another source of between-study heterogeneity making our effect size estimate less precise could be that **there are slight differences in the study design or intervention components between the studies**. For example, in a meta-analysis on the effects of **cognitive behavioral therapy** (CBT) for **depression** in **university students**, it could be the case that some studies delivered the intervention in a **group setting**, while others delivered the therapy to each student **individually**. In the same example, it is also possible that studies used different **criteria** to determine if a student suffers from **depression** (e.g. they either used the *ICD-10* or the *DSM-5* diagnostic manual). Many other differences of this sort are possible, and it seems plausible that such study differences may also be associated with differences in the overall effect. In **subgroup analyses**, we therefore have a look at different **subgroups within the studies of our meta-analysis** and try to determine if the effects **differ between these subgroups**.

$~$

=======
In [Chapter 6](#heterogeneity), we discussed in depth why **between-study heterogeneity** is such an important issue in interpreting the results of our meta-analysis, and how we can **explore sources of heterogeneity** using [outlier](#outliers) and [influence analyses](#influenceanalyses).

Another source of between-study heterogeneity making our effect size estimate less precise could be that **there are slight differences in the study design or intervention components between the studies**. For example, in a meta-analysis on the effects of **cognitive behavioral therapy** (CBT) for **depression** in **university students**, it could be the case that some studies delivered the intervention in a **group setting**, while others delivered the therapy to each student **individually**. In the same example, it is also possible that studies used different **criteria** to determine if a student suffers from **depression** (e.g. they either used the *ICD-10* or the *DSM-5* diagnostic manual).

Many other differences of this sort are possible, and it seems plausible that such study differences may also be associated with differences in the overall effect.

In **subgroup analyses**, we therefore have a look at different **subgroups within the studies of our meta-analysis** and try to determine if the effects **differ between these subgroups**.

```{block,type='rmdinfo'}
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
**The idea behind subgroup analyses**

Basically, every subgroup analysis consists of **two parts**: (1) **pooling the effect of each subgroup**, and (2) **comparing the effects of the subgroups** [@borenstein2013meta].
<<<<<<< HEAD
$~$
=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

**1. Pooling the effect of each subgroup**
@@ -19,8 +34,11 @@ This point it rather straightforward, as the same criteria as the ones for a **s
* If you assume that **all studies in a subgroup** stem from the same population, and all have **one shared true effect**, you may use the **fixed-effect-model**. As we mention in [Chapter 4](#pool), many **doubt** that this assumption is ever **true in psychological** and **medical research**, even when we partition our studies into subgroups.
* The alternative, therefore, is to use a **random-effect-model** which assumes that the studies within a subgroup are drawn from a **universe** of populations following its own distribution, for which we want to estimate the **mean**.

<<<<<<< HEAD
$~$

=======
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
**2. Comparing the effects of the subgroups**

After we calculated the pooled effect for each subgroup, **we can compare the size of the effects of each subgroup**. However, to know if this difference is in fact significant and/or meaningful, we have to calculate the **Standard Error of the differences between subgroup effect sizes** $SE_{diff}$, to calculate **confidence intervals** and conduct **significance tests**.
@@ -40,8 +58,24 @@ The (simplified) formula for the estimation of $V_{Diff}$ using this model there
$$V_{Diff}=V_A + V_B + \frac{\hat T^2_G}{m} $$

<<<<<<< HEAD
Where $\hat T^2_G$ is the **estimated variance between the subgroups**, and $m$ is the **number of subgroups**. Be aware that subgroup analyses should **always be based on an informed, *a priori* decision** about which subgroup differences within the study might be **practically relevant**, and would lead to information gain on relevant **research questions** in your field of research. It is also **good practice** to specify your subgroup analyses **before you do the analysis**, and list them in **the registration of your analysis**. It is also important to keep in mind that **the capability of subgroup analyses to detect meaningful differences between studies is often limited**. Subgroup analyses also need **sufficient power**, so it makes no sense to compare two or more subgroups when your entire number of studies in the meta-analysis is smaller than $k=10$ [@higgins2004controlling].

=======
Where $\hat T^2_G$ is the **estimated variance between the subgroups**, and $m$ is the **number of subgroups**.
```

```{block,type='rmdachtung'}
Be aware that subgroup analyses should **always be based on an informed, *a priori* decision** about which subgroup differences within the study might be **practically relevant**, and would lead to information gain on relevant **research questions** in your field of research. It is also **good practice** to specify your subgroup analyses **before you do the analysis**, and list them in **the registration of your analysis**.

It is also important to keep in mind that **the capability of subgroup analyses to detect meaningful differences between studies is often limited**. Subgroup analyses also need **sufficient power**, so it makes no sense to compare two or more subgroups when your entire number of studies in the meta-analysis is smaller than $k=10$ [@higgins2004controlling].

```

<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

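As a quick numerical sketch of the formula above (the variance values below are made up purely for illustration), $SE_{diff}$ and a corresponding significance test can be computed directly in R:

```{r, eval=FALSE}
# Illustrative values only
V_A    <- 0.010   # variance of the pooled effect in subgroup A
V_B    <- 0.015   # variance of the pooled effect in subgroup B
tau2_G <- 0.020   # estimated between-subgroup variance (T^2_G)
m      <- 2       # number of subgroups

V_diff  <- V_A + V_B + tau2_G / m   # V_Diff as defined above
SE_diff <- sqrt(V_diff)

diff <- 0.78 - 0.41                 # e.g., difference between two subgroup effects
z    <- diff / SE_diff              # z-test for the difference
p    <- 2 * pnorm(abs(z), lower.tail = FALSE)
c(SE_diff = SE_diff, z = z, p = p)
```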
## Subgroup Analyses using the Mixed-Effects-Model {#mixed}
@@ -298,13 +332,22 @@ Description<-c("The output of you meta-analysis. In my case, this is 'm.hksj'",
m<-data.frame(Code,Description)
names<-c("Code","Description")
colnames(m)<-names
<<<<<<< HEAD
kable(m) %>%
  column_spec(2, width = "40em")
```

In my `madata` dataset, which I used previously to generate my meta-analysis output `m.hksj`, I stored the subgroup variable `Control`. This variable specifies **which control group type was employed in which study**. There are **three subgroups**: `WLC` (waitlist control), `no intervention` and `information only`. The function to do a subgroup analysis using the mixed-effects-model with these parameters looks like this.

$~$
=======
kable(m)
```

In my `madata` dataset, which I used previously to generate my meta-analysis output `m.hksj`, I stored the subgroup variable `Control`. This variable specifies **which control group type was employed in which study**. There are **three subgroups**: `WLC` (waitlist control), `no intervention` and `information only`.

The function to do a subgroup analysis using the mixed-effects-model with these parameters looks like this.
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,message=FALSE,warning=FALSE}
subgroup.analysis.mixed.effects(data=m.hksj,
@@ -315,11 +358,21 @@ subgroup.analysis.mixed.effects(data=m.hksj,
                                subgroup3 = "information only")
```

<<<<<<< HEAD
$~$

The results of the subgroup analysis are displayed under `Results for subgroups (fixed effect model)`. We see that, while the **pooled effects of the subgroups differ quite substantially** (*g* = 0.41-0.78), this difference is **not statistically significant**. This can be seen under `Test for subgroup differences` in the `Between groups` row. We can see that $Q=3.03$ and $p=0.2196$. This information can be reported in our meta-analysis paper. Please note that the values displayed under `k` in the `Results for subgroups (fixed effects model)` section are always 1, as the pooled effect of the subgroup is treated as a single study. To determine the actual $k$ of each subgroup, you can use the `count` function from `dplyr` in R.

=======
The results of the subgroup analysis are displayed under `Results for subgroups (fixed effect model)`. We see that, while the **pooled effects of the subgroups differ quite substantially** (*g* = 0.41-0.78), this difference is **not statistically significant**.

This can be seen under `Test for subgroup differences` in the `Between groups` row. We can see that $Q=3.03$ and $p=0.2196$. This information can be reported in our meta-analysis paper.

```{block,type='rmdachtung'}
Please note that the values displayed under `k` in the `Results for subgroups (fixed effects model)` section are always 1, as the pooled effect of the subgroup is treated as a single study. To determine the actual $k$ of each subgroup, you can use the `count` function from `dplyr` in R.
```
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,echo=FALSE}
load("Meta_Analysis_Data.RData")
@@ -332,6 +385,12 @@ library(dplyr)
dplyr::count(madata, vars=madata$Control)
```

<<<<<<< HEAD
=======
<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

## Subgroup Analyses using the Random-Effects-Model

@@ -340,9 +399,15 @@ region<-c("Netherlands","Netherlands","Netherlands","USA","USA","USA","USA","Arg
madata$region<-region
```

<<<<<<< HEAD
Now, let's assume I want to **know if intervention effects in my meta-analysis differ by region**. I use a **random-effects-model** and the selected countries Argentina, Australia, China, and the Netherlands. Again, I use the `m.hksj` meta-analysis output object. I can perform a random-effects-model for between-subgroup-differences using the `update.meta` function. For this function, we have to **set two parameters**.

$~$
=======
Now, let's assume I want to **know if intervention effects in my meta-analysis differ by region**. I use a **random-effects-model** and the selected countries Argentina, Australia, China, and the Netherlands.

Again, I use the `m.hksj` meta-analysis output object. I can perform a random-effects-model for between-subgroup-differences using the `update.meta` function. For this function, we have to **set two parameters**.
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,echo=FALSE}
library(knitr)
@@ -351,11 +416,17 @@ Description<-c("Here, we specify the variable in which the subgroup of each stud
m<-data.frame(Code,Description)
names<-c("Code","Description")
colnames(m)<-names
<<<<<<< HEAD
kable(m) %>%
  column_spec(2, width = "40em")
```

$~$
=======
kable(m)
```

>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876

```{r,echo=FALSE}
m.hksj<-metagen(TE, seTE, data=madata, method.tau = "SJ", hakn = TRUE, studlab = paste(Author), comb.random = TRUE)
@@ -369,6 +440,7 @@ region.subgroup<-update.meta(m.hksj,
region.subgroup
```

<<<<<<< HEAD
$~$

Here, we get the **pooled effect for each subgroup** (country). Under `Test for subgroup differences (random effects model)`, we can see the **test for subgroup differences using the random-effects-model**, which is **not significant** ($Q=4.52$, $p=0.3405$). This means that we did not find differences in the overall effect between different regions, represented by the country in which the study was conducted.
@@ -380,5 +452,18 @@ $~$
To use a fixed-effect-model for within-subgroup pooling in combination with a fixed-effects-model for the between-subgroup differences, we can also use the `update.meta` function again. The procedure is the same as the one we described before, but we have to set `comb.random` to `FALSE` and `comb.fixed` to `TRUE`.


=======
Here, we get the **pooled effect for each subgroup** (country). Under `Test for subgroup differences (random effects model)`, we can see the **test for subgroup differences using the random-effects-model**, which is **not significant** ($Q=4.52$, $p=0.3405$). This means that we did not find differences in the overall effect between different regions, represented by the country in which the study was conducted.

```{block,type='rmdachtung'}
**Using a fixed-effect-model for within-subgroup-pooling and a fixed-effects-model for between-subgroup-differences**

To use a fixed-effect-model for within-subgroup pooling in combination with a fixed-effects-model for the between-subgroup differences, we can also use the `update.meta` function again. The procedure is the same as the one we described before, but we have to set `comb.random` to `FALSE` and `comb.fixed` to `TRUE`.
```

<br><br>

---
>>>>>>> f3259eafbebf95ffc6044d4af3f61e06d59c7876
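A hedged sketch of what such a call could look like for the `region` subgroup variable used above; note that `byvar` is an assumed argument name here (see `?meta::update.meta` for the arguments available in your version of `meta`):

```{r, eval=FALSE}
# Sketch only: fixed-effect pooling within subgroups and a fixed-effects
# test for between-subgroup differences (byvar is an assumed argument name)
region.subgroup.fixed <- update.meta(m.hksj,
                                     byvar = region,
                                     comb.random = FALSE,
                                     comb.fixed = TRUE)
region.subgroup.fixed
```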

0 commit comments
