Decoding Hand Kinematics in Brain-Computer Interfaces with State-Space Models

Overview

This project focused on decoding continuous hand movements from brain activity. While most BCI models classify discrete movement intentions, we tackled the harder task of predicting continuous hand trajectories, which is essential for fine motor control in real-world applications.


Dataset & Task

We used the Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology dataset, which contains spike recordings from two macaques (Indy and Loco) reaching for targets arranged in an 8×8 grid.

  • 96–192 channels from motor and somatosensory cortices
  • No pre-movement delay; free-paced reaching
  • Predict 2D hand position using only the last 50 timesteps of spiking activity

Note: The link above is for reference; the actual loading and preprocessing are done with the NeuroBench code harness.
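To make the task concrete, here is a minimal sketch of the tensor shapes a decoder sees under this setup. The batch size and the 96-channel count are placeholders (sessions range from 96 to 192 channels):

```python
import torch

batch, window, channels = 32, 50, 96           # placeholder batch; one 96-channel session
spikes = torch.zeros(batch, window, channels)  # last 50 timesteps of spiking activity
target = torch.zeros(batch, 2)                 # current 2D hand position (x, y) to predict
```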


Models Trained

We trained two state-space models on this dataset:

  1. Legendre Memory Units (LMUs)
  2. Structured State Space Sequence Models (S4)

Refer to my blog post on State-Space Models if you are new to the topic.


Model Architecture

  • The LMU was trained with hidden size 16 and memory size 32
  • The S4 was trained with hidden size 64 and memory size 64
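To make the LMU configuration concrete, below is a minimal PyTorch sketch of an LMU cell with the hidden size (16) and memory size (32) listed above. The Legendre state-space matrices follow Voelker et al. (2019); setting the memory window theta to the 50-timestep input length is an assumption, and the trained models may differ in these details:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import cont2discrete

class LMUCell(nn.Module):
    """Minimal Legendre Memory Unit cell (Voelker et al., 2019)."""

    def __init__(self, input_size, hidden_size=16, memory_size=32, theta=50.0):
        super().__init__()
        # theta=50 assumes the memory window matches the 50-timestep input
        # (an assumption, not confirmed by the write-up).
        q = np.arange(memory_size, dtype=np.float64)
        r = (2 * q + 1)[:, None] / theta
        i, j = np.meshgrid(q, q, indexing="ij")
        # Continuous-time Legendre delay system (A, B), scaled by theta.
        A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
        B = ((-1.0) ** q)[:, None] * r
        # Zero-order-hold discretization with unit step size.
        C, D = np.ones((1, memory_size)), np.zeros((1, 1))
        Ad, Bd, *_ = cont2discrete((A, B, C, D), dt=1.0, method="zoh")
        self.register_buffer("Ad", torch.tensor(Ad, dtype=torch.float32))
        self.register_buffer("Bd", torch.tensor(Bd.squeeze(-1), dtype=torch.float32))
        # Learned encoders (write signal) and kernels (hidden update).
        self.e_x = nn.Linear(input_size, 1, bias=False)
        self.e_h = nn.Linear(hidden_size, 1, bias=False)
        self.e_m = nn.Linear(memory_size, 1, bias=False)
        self.W_x = nn.Linear(input_size, hidden_size, bias=False)
        self.W_h = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_m = nn.Linear(memory_size, hidden_size, bias=False)

    def forward(self, x, h, m):
        # Scalar write signal, linear memory update, nonlinear hidden update.
        u = self.e_x(x) + self.e_h(h) + self.e_m(m)  # (batch, 1)
        m = m @ self.Ad.T + u * self.Bd              # (batch, memory_size)
        h = torch.tanh(self.W_x(x) + self.W_h(h) + self.W_m(m))
        return h, m
```

A full decoder would unroll this cell over the 50-step input window and apply a linear readout from the final hidden state to the (x, y) position.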

Note: The models were trained on spikes preprocessed with a sub-window binning method (a sketch follows below).
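For illustration, here is a minimal sketch of sub-window binning, assuming each 50-timestep window is split into equal sub-windows whose spike counts are summed. The sub-window count used here is a placeholder, not necessarily the value used in training:

```python
import numpy as np

def sub_window_bin(spikes: np.ndarray, n_sub: int = 5) -> np.ndarray:
    """Sum spikes over equal sub-windows of one input window.

    spikes: (window_len, channels) array of spike counts per timestep;
    window_len must be divisible by n_sub. Returns (n_sub, channels).
    """
    window_len, channels = spikes.shape
    assert window_len % n_sub == 0, "window must split evenly into sub-windows"
    return spikes.reshape(n_sub, window_len // n_sub, channels).sum(axis=1)

# Example: a 50-timestep, 96-channel window binned into 5 sub-windows of 10 steps.
binned = sub_window_bin(np.random.poisson(0.1, size=(50, 96)))
print(binned.shape)  # (5, 96)
```

This reduces the sequence length the recurrent models must process while preserving coarse firing-rate information within each window.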


Results

Model     Parameters   Test R² Score
ANN 2D    5K           0.62
ANN 3D    24K          0.65
LSTM      44K          0.58
EEGNet    11K          0.56
LMU       7K           0.70
S4        70K          0.75

*The results shown above are for the third recording session of Monkey 1.
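For reference, here is a minimal sketch of the R² metric for 2D trajectories, assuming the score is averaged over the x and y coordinates; the exact averaging convention in the NeuroBench harness may differ:

```python
import numpy as np

def r2_score_2d(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination, averaged over the x and y coordinates.

    y_true, y_pred: (timesteps, 2) arrays of hand positions.
    """
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))
```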