04-forest_plots.Rmd: 74 additions, 0 deletions
```{block,type='rmdinfo'}
Now that we created the **output of our meta-analysis** using the `metagen`, `metacont` or `metabin` functions in `meta` (see [Chapter 4.1](#fixed), [Chapter 4.2](#random) and [Chapter 4.3](#binary)), it is time to present the data in a more digestible way.
**Forest Plots** are an easy way to do this, and it is conventional to report forest plots in meta-analysis publications.
```
<br><br>
---
## Generating a Forest Plot
To produce a forest plot, we use the meta-analysis output we just created (e.g., `m`, `m.raw`) and the `meta::forest()` function. I'll use my `m.hksj.raw` output from [Chapter 4.2.3](#random.raw) to create the forest plot.
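As a minimal sketch of what that call might look like (assuming the `meta` package is installed and loaded, and that the `m.hksj.raw` object from the previous chapter exists; `prediction` is the argument that adds a prediction interval in recent `meta` versions):

```r
# Load the meta package (assumed installed)
library(meta)

# Draw a forest plot from the meta-analysis object 'm.hksj.raw';
# prediction = TRUE adds the prediction interval below the diamond
meta::forest(m.hksj.raw,
             prediction = TRUE)
```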
Looks good so far. We see that the function plotted a forest plot with a **diamond** (i.e. the overall effect and its confidence interval) and a **prediction interval**.
There are plenty of **other parameters** within the `meta::forest` function which we can use to modify the forest plot.
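For instance, a customized call might look like the sketch below (argument names such as `sortvar`, `label.left` and `label.right` are taken from the `meta::forest` help page of recent versions and may differ in yours):

```r
# Customize the forest plot: sort studies by effect size and
# label the two sides of the null line
meta::forest(m.hksj.raw,
             sortvar = TE,                        # sort studies by effect size
             prediction = TRUE,                   # show the prediction interval
             label.left = "Favors intervention",  # label left of zero
             label.right = "Favors control")      # label right of zero
```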
```{r,echo=FALSE}
library(knitr)
library(grid)
load("foresttable.RData")
kable(foresttable)
```
This is again just an overview. For all settings, type `?meta::forest` in your **console** to see more.
<br><br>
---
## Saving the forest plots
Let's say I want to save the JAMA version of my forest plot now. To do this, I have to reuse the code with which I plotted my forest plot, and put it between `pdf(file='name_of_the_pdf_i_want_to_create.pdf')` and `dev.off()`, each on a separate line. This saves the plot as a PDF in my working directory.
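A hedged sketch of the full save step (the file name is purely illustrative; `layout = "JAMA"` assumes a `meta` version that supports this layout argument):

```r
# Open a PDF graphics device; everything plotted until dev.off()
# is written into this file in the current working directory
pdf(file = 'forestplot_jama.pdf', width = 8, height = 6)

# Reuse the forest plot code (here: the JAMA layout version)
meta::forest(m.hksj.raw, layout = "JAMA")

# Close the device so the PDF file is finalized
dev.off()
```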
This way, I can export the plot in different formats (you can find more details on the saving options [here](#saving)).
06-Subgroup_Analyses.Rmd: 85 additions, 0 deletions
In [Chapter 6](#heterogeneity), we discussed in depth why **between-study heterogeneity** is such an important issue in interpreting the results of our meta-analysis, and how we can **explore sources of heterogeneity** using [outlier](#outliers) and [influence analyses](#influenceanalyses).
Another source of between-study heterogeneity making our effect size estimate less precise could be that **there are slight differences in the study design or intervention components between the studies**. For example, in a meta-analysis on the effects of **cognitive behavioral therapy** (CBT) for **depression** in **university students**, it could be the case that some studies delivered the intervention in a **group setting**, while others delivered the therapy to each student **individually**. In the same example, it is also possible that studies used different **criteria** to determine if a student suffers from **depression** (e.g. they either used the *ICD-10* or the *DSM-5* diagnostic manual).
Many other differences of this sort are possible, and it seems plausible that such study differences may also be associated with differences in the overall effect.
In **subgroup analyses**, we therefore have a look at different **subgroups within the studies of our meta-analysis** and try to determine if the **effects differ between these subgroups**.
```{block,type='rmdinfo'}
**The idea behind subgroup analyses**
Basically, every subgroup analysis consists of **two parts**: (1) **pooling the effect of each subgroup**, and (2) **comparing the effects of the subgroups** [@borenstein2013meta].
**1. Pooling the effect of each subgroup**
* If you assume that **all studies in a subgroup** stem from the same population, and all have **one shared true effect**, you may use the **fixed-effect-model**. As we mention in [Chapter 4](#pool), many **doubt** that this assumption is ever **true in psychological** and **medical research**, even when we partition our studies into subgroups.
* The alternative, therefore, is to use a **random-effect-model**, which assumes that the studies within a subgroup are drawn from a **universe** of populations following its own distribution, for which we want to estimate the **mean**.
**2. Comparing the effects of the subgroups**
After we have calculated the pooled effect for each subgroup, **we can compare the size of the effects of each subgroup**. However, to know if this difference is in fact significant and/or meaningful, we have to estimate the **Standard Error of the difference between subgroup effect sizes** $SE_{diff}$, which allows us to calculate **confidence intervals** and conduct **significance tests**.
The (simplified) formula for the estimation of $V_{Diff}$ using this model therefore is:
$$V_{Diff}=V_A + V_B + \frac{\hat T^2_G}{m} $$
42
60
61
+
<<<<<<< HEAD
43
62
Where $\hat T^2_G$ is the **estimated variance between the subgroups**, and $m$ is the **number of subgroups**. Be aware that subgroup analyses should **always be based on an informed, *a priori* decision** which subgroup differences within the study might be **practically relevant**, and would lead to information gain on relevant **research questions** in your field of research. It is also **good practice** to specify your subgroup analyses **before you do the analysis**, and list them in **the registration of your analysis**. It is also important to keep in mind that **the capabilites of subgroup analyses to detect meaningful differences between studies is often limited**. Subgroup analyses also need **sufficient power**, so it makes no sense to compare two or more subgroups when your entire number of studies in the meta-analysis is smaller than $k=10$ [@higgins2004controlling].
44
63
64
+
=======
65
+
Where $\hat T^2_G$ is the **estimated variance between the subgroups**, and $m$ is the **number of subgroups**.
```
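To make the formula concrete, here is a small worked example with purely made-up numbers for the two subgroup variances and $\hat T^2_G$:

```r
# Illustrative (made-up) values
V_A  <- 0.010   # variance of subgroup A's pooled effect
V_B  <- 0.015   # variance of subgroup B's pooled effect
T2_G <- 0.020   # estimated variance between the subgroups
m    <- 2       # number of subgroups

# Variance and standard error of the difference between subgroups
V_diff  <- V_A + V_B + T2_G / m   # 0.035
SE_diff <- sqrt(V_diff)           # roughly 0.187
```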
```{block,type='rmdachtung'}
Be aware that subgroup analyses should **always be based on an informed, *a priori* decision** about which subgroup differences within the studies might be **practically relevant**, and would lead to information gain on relevant **research questions** in your field of research. It is also **good practice** to specify your subgroup analyses **before you do the analysis**, and to list them in **the registration of your analysis**.
It is also important to keep in mind that **the capability of subgroup analyses to detect meaningful differences between studies is often limited**. Subgroup analyses also need **sufficient power**, so it makes no sense to compare two or more subgroups when the total number of studies in your meta-analysis is smaller than $k=10$ [@higgins2004controlling].
```
<br><br>
---
## Subgroup Analyses using the Mixed-Effects-Model {#mixed}
```{r,echo=FALSE}
m<-data.frame(Code,Description)
names<-c("Code","Description")
colnames(m)<-names
kable(m)
```
In my `madata` dataset, which I used previously to generate my meta-analysis output `m.hksj`, I stored the subgroup variable `Control`. This variable specifies **which control group type was employed in which study**. There are **three subgroups**: `WLC` (waitlist control), `no intervention` and `information only`.
The function to do a subgroup analysis using the mixed-effects-model with these parameters looks like this.
The results of the subgroup analysis are displayed under `Results for subgroups (fixed effect model)`. We see that, while the **pooled effects of the subgroups differ quite substantially** (*g* = 0.41-0.78), this difference is **not statistically significant**.
This can be seen under `Test for subgroup differences` in the `Between groups` row. We can see that $Q=3.03$ and $p=0.2196$. This information can be reported in our meta-analysis paper.
```{block,type='rmdachtung'}
Please note that the values displayed under `k` in the `Results for subgroups (fixed effect model)` section are always 1, as the pooled effect of the subgroup is treated as a single study. To determine the actual $k$ of each subgroup, you can use the `count` function from `dplyr` in R.
```
```{r,echo=FALSE}
load("Meta_Analysis_Data.RData")
library(dplyr)
dplyr::count(madata, vars=madata$Control)
```
334
387
388
+
<<<<<<< HEAD
389
+
=======
390
+
<br><br>
---
## Subgroup Analyses using the Random-Effects-Model
Now, let's assume I want to **know if intervention effects in my meta-analysis differ by region**. I use a **random-effects-model** and the selected countries Argentina, Australia, China, and the Netherlands.
Again, I use the `m.hksj` meta-analysis output object. I can perform a random-effects-model for between-subgroup-differences using the `update.meta` function. For this function, we have to **set two parameters**.
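A hedged sketch of such a call (assuming the subgroup variable, here called `Region`, was part of the data used to create `m.hksj`; `byvar` is the subgroup argument in older `meta` versions, later renamed `subgroup`):

```r
# Subgroup analysis by region: random-effects pooling within
# subgroups and a random-effects test for between-subgroup differences
region.subgroup <- update.meta(m.hksj,
                               byvar = Region,      # subgroup variable
                               comb.random = TRUE,  # random-effects model
                               comb.fixed = FALSE)
region.subgroup
```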
```{r,echo=FALSE}
library(knitr)
```
Here, we get the **pooled effect for each subgroup** (country). Under `Test for subgroup differences (random effects model)`, we can see the **test for subgroup differences using the random-effects-model**, which is **not significant** ($Q=4.52$,$p=0.3405$). This means that we did not find differences in the overall effect between different regions, represented by the country in which the study was conducted.
```{block,type='rmdachtung'}
**Using a fixed-effect-model for within-subgroup-pooling and a fixed-effects-model for between-subgroup-differences**
To use a fixed-effect-model for within-subgroup-pooling in combination with a fixed-effects-model for between-subgroup-differences, we can use the `update.meta` function again. The procedure is the same as the one we described before, but we have to set `comb.random` as `FALSE` and `comb.fixed` as `TRUE`.
```
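Following that description, a sketch of the call could look like this (the subgroup variable name `Region` is illustrative; `comb.random` and `comb.fixed` are the parameters named in the text):

```r
# Fixed-effect pooling within subgroups, fixed-effects model
# for the between-subgroup comparison
fixed.subgroup <- update.meta(m.hksj,
                              byvar = Region,
                              comb.random = FALSE,  # no random-effects pooling
                              comb.fixed = TRUE)    # fixed-effect model
```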