The BOLD signal tracks the internal temporal structure of the 3-second events, meaning early and late parts of the signal correspond to early and late parts of the video.

Data and pre-trained models (like the TSM ResNet50 used in the study) are available on GitHub.

The dataset contains 1,102 three-second naturalistic videos sampled from the Moments in Time (MiT) and Memento10k datasets.

Video-evoked responses are reliably mapped across occipital, temporal, and parietal cortices.

The complete paper associated with this dataset and its video clips was published in Nature Communications (July 2024).

The study identifies specific regions in parietal and high-level visual cortex whose responses correlate with how memorable a video clip is.

The filename maddsmr_shortclip912.mp4 follows the naming schema used in the MAD (Movie Audio Descriptions) or related sub-collections (such as Memento10k/MiT) that feed into the BOLD Moments research.

The video file is a specific stimulus from the BOLD Moments Dataset (BMD), a large-scale fMRI dataset designed to study how the human brain processes short visual events.