metadpy.bayesian.hmetad

metadpy.bayesian.hmetad(data: None, nR_S1: Union[List, ndarray], nR_S2: Union[List, ndarray], nRatings: Optional[int], subject: None, within: None, nbins: int, padding: bool = False, padAmount: Optional[float] = None, output: str = 'model') → Tuple[Union[Model, Callable], Optional[Union[InferenceData, MultiTrace]]]
metadpy.bayesian.hmetad(data: DataFrame, stimuli: str, accuracy: str, confidence: str, nRatings: Optional[int], subject: None, within: None, nbins: int, padding: bool = False, padAmount: Optional[float] = None, output: str = 'model') → Tuple[Union[Model, Callable], Optional[Union[InferenceData, MultiTrace]]]
metadpy.bayesian.hmetad(data: DataFrame, stimuli: str, accuracy: str, confidence: str, nRatings: Optional[int], subject: str, within: None, nbins: int, padding: bool = False, padAmount: Optional[float] = None, output: str = 'model') → Tuple[Union[Model, Callable], Optional[Union[InferenceData, MultiTrace]]]
metadpy.bayesian.hmetad(data: DataFrame, stimuli: str, accuracy: str, confidence: str, nRatings: Optional[int], subject: str, within: str, nbins: int, padding: bool = False, padAmount: Optional[float] = None, sample_model: bool = True, output: str = 'model') → Tuple[Union[Model, Callable], Optional[Union[InferenceData, MultiTrace]]]

Bayesian meta-d’ model with hyperparameters at the group level.

Parameters
data

Dataframe. Note that this function can also be used directly as a Pandas method, in which case this argument is no longer needed.

nR_S1

Confidence ratings (stimulus 1, correct and incorrect).

nR_S2

Confidence ratings (stimulus 2, correct and incorrect).

stimuli

Name of the column containing the stimuli.

accuracy

Name of the column containing the accuracy.

confidence

Name of the column containing the confidence ratings.

nRatings

Number of discrete ratings. If a continuous rating scale was used and the number of unique ratings does not match nRatings, the ratings are converted to discrete ratings using metadpy.utils.discreteRatings().

within

Name of column containing the within factor (condition comparison).

between

Name of column containing the between subject factor (group comparison).

subject

Name of column containing the subject identifier (only required if a within-subject or a between-subject factor is provided).

nbins

If a continuous rating scale was used, nbins defines the number of discrete ratings used when converting with metadpy.utils.discreteRatings(). The default value is 4.

padding

If True, each response count in the output has the value of padAmount added to it. Padding cells is desirable if trial counts of 0 interfere with model fitting. If False, trial counts are not manipulated and 0s may be present in the response count output. Default value for padding is False.

padAmount

The value to add to each response count if padding is set to True. The default value is 1/(2*nRatings).

sample_model

If False, only the model is returned without sampling.

output

The kind of output expected. If “model”, the model function and the traces are returned. If “dataframe”, a dataframe containing d (d-prime), c (criterion), meta_d (meta-d prime) and m_ratio (meta_d/d) is returned.

num_samples

The number of samples per chains to draw (defaults to 1000).

num_chains

The number of chains (defaults to 4).

**kwargs

All keyword arguments are passed to pymc.sampling.sample() (see the sketch below).
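
As an illustration, here is a minimal sketch combining these sampling options (the response counts are hypothetical, and tune is a standard pymc.sampling.sample() keyword argument forwarded through **kwargs):

    from metadpy.bayesian import hmetad

    # Hypothetical response counts for nRatings = 4 (2 * 4 = 8 cells per stimulus)
    nR_S1 = [52, 32, 35, 37, 26, 12, 4, 2]
    nR_S2 = [2, 5, 15, 22, 33, 38, 40, 45]

    model, traces = hmetad(
        nR_S1=nR_S1,
        nR_S2=nR_S2,
        nRatings=4,
        num_samples=1000,  # samples per chain
        num_chains=4,      # number of chains
        tune=2000,         # extra keyword argument forwarded to pymc.sampling.sample
    )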

Returns
If output=”model”:
model

The PyMC model as a pymc.Model.

traces

A MultiTrace or ArviZ InferenceData object containing the samples. Only returned if sample_model is set to True; otherwise None.

or
results

If output=”dataframe”, a pandas.DataFrame containing the values of the following variables:

  • d-prime (d)

  • criterion (c)

  • meta-d’ (meta_d)

  • m-ratio (m_ratio)
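
For instance, a short sketch of the “dataframe” output (the response counts are hypothetical, and it is assumed here that the summary DataFrame is returned directly in this mode):

    from metadpy.bayesian import hmetad

    # Hypothetical response counts for nRatings = 4
    nR_S1 = [52, 32, 35, 37, 26, 12, 4, 2]
    nR_S2 = [2, 5, 15, 22, 33, 38, 40, 45]

    results = hmetad(nR_S1=nR_S1, nR_S2=nR_S2, nRatings=4, output="dataframe")
    print(results[["d", "c", "meta_d", "m_ratio"]])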

Raises
ValueError

If the number of ratings is not provided, if data is None and nR_S1 or nR_S2 are not provided, or if the backend is not “numpyro” or “pymc”.

Notes

This function computes a hierarchical Bayesian estimate of metacognitive efficiency as described in [1]. The model can be fitted at the subject level or at the group level, and can account for repeated measures by providing the corresponding subject, between and within factors.
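
For example, a repeated-measures fit might look like the following sketch, where the trial-by-trial DataFrame and its column names are placeholders:

    import numpy as np
    import pandas as pd
    from metadpy.bayesian import hmetad

    # Hypothetical long-format data: 2 subjects x 2 conditions, one row per trial
    rng = np.random.default_rng(42)
    n = 800
    df = pd.DataFrame({
        "Stimuli": rng.integers(0, 2, n),
        "Accuracy": rng.integers(0, 2, n),
        "Confidence": rng.integers(1, 5, n),
        "Subject": np.repeat([0, 1], n // 2),
        "Condition": np.tile(np.repeat(["A", "B"], n // 4), 2),
    })

    # Repeated-measures fit: the subject and within factors are provided together
    model, traces = hmetad(
        data=df,
        stimuli="Stimuli",
        accuracy="Accuracy",
        confidence="Confidence",
        nRatings=4,
        subject="Subject",
        within="Condition",
    )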

If the confidence levels have more unique values than nRatings, the confidence column will be discretized using metadpy.utils.discreteRatings().
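
A minimal sketch of what this discretization amounts to (the continuous ratings are simulated, and the return values are assumed to be the discretized ratings together with the binning information):

    import numpy as np
    from metadpy.utils import discreteRatings

    # Hypothetical continuous confidence ratings on a 0-100 scale
    continuous = np.random.uniform(0, 100, size=500)

    # Convert to 4 discrete levels, as hmetad does internally when the number
    # of unique ratings exceeds nRatings (assumed return: ratings, bin info)
    discrete, bins = discreteRatings(continuous, nbins=4)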

References

1

Fleming, S. M. (2017). HMeta-d: hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 3(1), nix007. https://doi.org/10.1093/nc/nix007

Examples

  1. Subject-level
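
A possible sketch of a subject-level fit (the response counts are hypothetical, and summarizing the traces with ArviZ assumes that an InferenceData object is returned and that the posterior variable is named "meta_d"):

    import arviz as az
    from metadpy.bayesian import hmetad

    # Hypothetical response counts for one subject (nRatings = 4)
    nR_S1 = [52, 32, 35, 37, 26, 12, 4, 2]
    nR_S2 = [2, 5, 15, 22, 33, 38, 40, 45]

    model, traces = hmetad(nR_S1=nR_S1, nR_S2=nR_S2, nRatings=4)

    # Posterior summary for meta-d'
    az.summary(traces, var_names=["meta_d"])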