.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "examples/rsa/1_rsa_intro.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_examples_rsa_1_rsa_intro.py:

1. Computing RDMs
=================

As we did for classification, we will use an MEG dataset of humans performing a
visual categorization task. Briefly, participants saw a list of 92 images. These
images fall into several categories: faces versus non-faces, human versus
non-human, artificial versus natural, and so on. For more information, consult
`MNE's documentation `_ or the `original paper `_.

Our goal here is to build a neural RDM from the MEG data and a hypothesis RDM
from the corresponding stimulus categories, and then test whether the two are
similar.

First, we need to load the data; to do this, we use MNE's sample code. Be aware
that this will download roughly 6GB of data, which may take a while. As in the
classification example, we will again be using only the gradiometers (for
convenience).

.. GENERATED FROM PYTHON SOURCE LINES 10-65

.. code-block:: Python

    import matplotlib.pyplot as plt
    import numpy as np
    from pandas import read_csv

    import mne
    from mne.datasets import visual_92_categories
    from mne.io import concatenate_raws, read_raw_fif

    print(__doc__)

    data_path = visual_92_categories.data_path()

    # Define stimulus - trigger mapping
    fname = data_path / "visual_stimuli.csv"
    conds = read_csv(fname)
    print(conds.head(5))

    max_trigger = 92
    conds = conds[:max_trigger]  # take only the first 92 rows

    conditions = []
    for c in conds.values:
        cond_tags = list(c[:2])
        cond_tags += [
            ("not-" if i == 0 else "") + conds.columns[k]
            for k, i in enumerate(c[2:], 2)
        ]
        conditions.append("/".join(map(str, cond_tags)))
    print(conditions[:10])

    event_id = dict(zip(conditions, conds.trigger + 1))
    event_id["0/human bodypart/human/not-face/animal/natural"]

    n_runs = 4  # 4 for full data (use fewer to speed up computations)
    fnames = [data_path / f"sample_subject_{b}_tsss_mc.fif" for b in range(n_runs)]
    raws = [
        read_raw_fif(fname, verbose="error", on_split_missing="ignore")
        for fname in fnames
    ]  # ignore filename warnings
    raw = concatenate_raws(raws)

    events = mne.find_events(raw, min_duration=0.002)
    events = events[events[:, 2] <= max_trigger]

    epochs = mne.Epochs(
        raw,
        events=events,
        event_id=event_id,
        baseline=None,
        picks="grad",  # use only the gradiometers
        tmin=-0.1,
        tmax=0.500,
        preload=True,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Using default location ~/mne_data for visual_92_categories...
    Creating /home/runner/mne_data
    Downloading file 'MNE-visual_92_categories-data-part1.tar.gz' from 'https://osf.io/8ejrs/download?version=1' to '/home/runner/mne_data'.
    [download progress output truncated]

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: 1_rsa_intro.py <1_rsa_intro.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: 1_rsa_intro.zip <1_rsa_intro.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_
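
The RSA workflow this example builds toward can be sketched in a few lines. The
snippet below is a minimal, self-contained illustration using synthetic data in
place of the MEG epochs: the names ``neural_data``, ``categories``, and the
category count are illustrative assumptions, not part of the example above. It
shows the two steps the text describes: computing a neural RDM from per-stimulus
patterns, building a category-based hypothesis RDM, and comparing the two.

.. code-block:: Python

    # Minimal sketch of the RSA comparison, on synthetic data (assumption:
    # these arrays stand in for the real MEG epochs and stimulus labels).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_items, n_features = 92, 20  # 92 stimuli, toy feature dimension

    # Synthetic "neural" patterns: one feature vector per stimulus
    neural_data = rng.standard_normal((n_items, n_features))

    # Neural RDM: pairwise correlation distance between stimulus patterns
    # (pdist returns the condensed upper-triangle vector of the RDM)
    neural_rdm = pdist(neural_data, metric="correlation")

    # Hypothesis RDM: 0 if two stimuli share a category, 1 otherwise
    categories = rng.integers(0, 4, n_items)  # toy category labels
    model_rdm = pdist(categories[:, None], metric="hamming")

    # Compare the two condensed RDMs with a Spearman rank correlation
    rho, pval = spearmanr(neural_rdm, model_rdm)
    print(f"RDM similarity (Spearman rho): {rho:.3f}")

With real data, ``neural_data`` would be one averaged sensor pattern per
stimulus taken from ``epochs``, and ``categories`` would come from the
``conditions`` strings parsed above.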