Commit 9f56f77

Author: LegrandNico
Commit message: Docs

1 parent 39dc5cd · commit 9f56f77

File tree

3 files changed: +31 -30 lines changed


docs/source/cite.md

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
+# How to cite?
+
+If you are using [metadpy](https://github.com/embodied-computation-group/metadpy) for your research, we ask you to cite the GitHub repository in the final publication.
+
+If you are using the Bayesian models, you might also refer to the original publication:
+
+Fleming, S. M. (2017). HMeta-d: hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. In Neuroscience of Consciousness (Vol. 2017, Issue 1). Oxford University Press (OUP). https://doi.org/10.1093/nc/nix007
+
+*In BibTeX format:*
+
+```text
+@article{10.1093/nc/nix007,
+author = {Fleming, Stephen M},
+title = "{HMeta-d: hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings}",
+journal = {Neuroscience of Consciousness},
+volume = {2017},
+number = {1},
+year = {2017},
+month = {04},
+abstract = "{Metacognition refers to the ability to reflect on and monitor one’s cognitive processes, such as perception, memory and decision-making. Metacognition is often assessed in the lab by whether an observer’s confidence ratings are predictive of objective success, but simple correlations between performance and confidence are susceptible to undesirable influences such as response biases. Recently, an alternative approach to measuring metacognition has been developed (Maniscalco and Lau 2012) that characterizes metacognitive sensitivity (meta-d') by assuming a generative model of confidence within the framework of signal detection theory. However, current estimation routines require an abundance of confidence rating data to recover robust parameters, and only provide point estimates of meta-d’. In contrast, hierarchical Bayesian estimation methods provide opportunities to enhance statistical power, incorporate uncertainty in group-level parameter estimates and avoid edge-correction confounds. Here I introduce such a method for estimating metacognitive efficiency (meta-d’/d’) from confidence ratings and demonstrate its application for assessing group differences. A tutorial is provided on both the meta-d’ model and the preparation of behavioural data for model fitting. Through numerical simulations I show that a hierarchical approach outperforms alternative fitting methods in situations where limited data are available, such as when quantifying metacognition in patient populations. In addition, the model may be flexibly expanded to estimate parameters encoding other influences on metacognitive efficiency. MATLAB software and documentation for implementing hierarchical meta-d’ estimation (HMeta-d) can be downloaded at https://github.com/smfleming/HMeta-d.}",
+issn = {2057-2107},
+doi = {10.1093/nc/nix007},
+url = {https://doi.org/10.1093/nc/nix007},
+note = {nix007},
+eprint = {https://academic.oup.com/nc/article-pdf/2017/1/nix007/25024086/nix007.pdf},
+}
+```
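One note on the request above: the new `cite.md` asks readers to cite the GitHub repository itself but only ships a BibTeX entry for the Fleming (2017) paper. A minimal, hypothetical sketch of a repository entry is shown below; the entry key, author, title, and year fields are placeholders, and only the URL is taken from the file:

```text
% Hypothetical sketch only: replace the placeholder fields with the
% authors and the year/version of metadpy actually used.
@misc{metadpy,
  author = {<repository authors>},
  title  = {metadpy},
  year   = {<year of the version used>},
  url    = {https://github.com/embodied-computation-group/metadpy},
  note   = {GitHub repository},
}
```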

docs/source/index.ipynb

Lines changed: 2 additions & 15 deletions
@@ -52,21 +52,7 @@
 "\n",
 "For an extensive introduction to metadpy, you can navigate the following notebooks that are Python adaptations of the introduction to the [hMeta-d toolbox](https://github.com/metacoglab/HMeta-d) written in Matlab by Olivia Faul for the [Zurich Computational Psychiatry course](https://github.com/metacoglab/HMeta-d/tree/master/CPC_metacog_tutorial).\n",
 "\n",
-"## Examples \n",
-"\n",
-"| Notebook | Colab | nbViewer |\n",
-"| --- | ---| --- |\n",
-"| Example 1 - Fitting MLE - Subject and group level | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%201%20-%20Fitting%20MLE%20-%20Subject%20and%20group%20level.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%201%20-%20Fitting%20MLE%20-%20Subject%20and%20group%20level.ipynb)\n",
-"| Example 2 - Fitting Bayesian - Subject level (pymc) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%202%20-%20Fitting%20Bayesian%20-%20Subject%20level%20(pymc).ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%202%20-%20Fitting%20Bayesian%20-%20Subject%20level%20(pymc).ipynb)\n",
-"\n",
-"\n",
-"## Tutorials\n",
-"\n",
-"| Notebook | Colab | nbViewer |\n",
-"| --- | ---| --- |\n",
-"| 1. What metacognition looks like? | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/1-What%20metacognition%20looks%20like.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/1-What%20metacognition%20looks%20like.ipynb)\n",
-"| 2. Fitting the model | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/2-Fitting%20the%20model-MLE.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/2-Fitting%20the%20model-MLE.ipynb)\n",
-"| 3. Comparison with the HMeta-d toolbox | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/3-Comparison%20with%20the%20hmeta-d%20toolbox.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/3-Comparison%20with%20the%20hmeta-d%20toolbox.ipynb)"
+"✏️ [Tutorials and examples](https://embodied-computation-group.github.io/metadpy/tutorials.html) "
 ]
 },
 {
@@ -869,6 +855,7 @@
 "hidden:\n",
 "---\n",
 "API <api.rst>\n",
+"Tutorials <tutorials.md>\n",
 "Cite <cite.md>\n",
 "References <references.md>\n",
 "```"

docs/source/index.md

Lines changed: 2 additions & 15 deletions
@@ -55,21 +55,7 @@ metadpy first aims to be the Python equivalent of the [hMeta-d toolbox](https://
 
 For an extensive introduction to metadpy, you can navigate the following notebooks that are Python adaptations of the introduction to the [hMeta-d toolbox](https://github.com/metacoglab/HMeta-d) written in Matlab by Olivia Faul for the [Zurich Computational Psychiatry course](https://github.com/metacoglab/HMeta-d/tree/master/CPC_metacog_tutorial).
 
-## Examples
-
-| Notebook | Colab | nbViewer |
-| --- | ---| --- |
-| Example 1 - Fitting MLE - Subject and group level | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%201%20-%20Fitting%20MLE%20-%20Subject%20and%20group%20level.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%201%20-%20Fitting%20MLE%20-%20Subject%20and%20group%20level.ipynb)
-| Example 2 - Fitting Bayesian - Subject level (pymc) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%202%20-%20Fitting%20Bayesian%20-%20Subject%20level%20(pymc).ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%202%20-%20Fitting%20Bayesian%20-%20Subject%20level%20(pymc).ipynb)
-
-
-## Tutorials
-
-| Notebook | Colab | nbViewer |
-| --- | ---| --- |
-| 1. What metacognition looks like? | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/1-What%20metacognition%20looks%20like.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/1-What%20metacognition%20looks%20like.ipynb)
-| 2. Fitting the model | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/2-Fitting%20the%20model-MLE.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/2-Fitting%20the%20model-MLE.ipynb)
-| 3. Comparison with the HMeta-d toolbox | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/3-Comparison%20with%20the%20hmeta-d%20toolbox.ipynb) | [![View the notebook](https://img.shields.io/badge/render-nbviewer-orange.svg)](https://nbviewer.jupyter.org/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/3-Comparison%20with%20the%20hmeta-d%20toolbox.ipynb)
+✏️ [Tutorials and examples](https://embodied-computation-group.github.io/metadpy/tutorials.html)
 
 +++ {"id": "w0EklNnNf6Ms"}
 
@@ -299,6 +285,7 @@ az.summary(trace, var_names=["meta_d", "cS2", "cS1"])
 hidden:
 ---
 API <api.rst>
+Tutorials <tutorials.md>
 Cite <cite.md>
 References <references.md>
 ```
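For reference, the last two hunks (in index.ipynb and index.md) add the same `Tutorials <tutorials.md>` entry to what appears to be a hidden MyST toctree at the bottom of the index page. Assuming the directive opens with a standard `{toctree}` fence and a `hidden:` option (those opening lines sit outside the diff context shown here), the resulting block would read roughly:

````text
```{toctree}
---
hidden:
---
API <api.rst>
Tutorials <tutorials.md>
Cite <cite.md>
References <references.md>
```
````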
