Background
- transcranial magnetic stimulation (TMS) noninvasively
activates or inhibits brain activity in targeted regions
- FDA-approved for treatment of major depressive disorder,
obsessive-compulsive disorder, migraine, and smoking cessation
- has also been used for other substance use disorders, anxiety, PTSD,
traumatic brain injury, ADHD, and Parkinson's Disease,
and in research on brain function
- a current pulse in the coil generates an “E-field”
by electromagnetic induction (Faraday's law, shown after this list);
this is not the same as electroconvulsive therapy (ECT),
in which an electrical current is applied directly to a patient's head
- the field values and how they map onto specific parts of the brain
depend on both coil position and the subject's specific anatomy
- visualizing the predicted field is needed for accurate and
customized coil placement
- existing methods for predicting the field are either very slow or
require high-performance GPUs not typically available to clinicians
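
As a note on the induction bullet above: the induced E-field is governed by
Faraday's law (standard electromagnetics, nothing specific to SlicerTMS).
The rapidly changing coil current produces a time-varying magnetic field B,
whose rate of change induces the electric field E in the head:

    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}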
What does TMS look/sound like?
SlicerTMS Components
- the server runs real-time E-field prediction with a neural network,
either locally or as a remote service; it communicates with the client,
the SlicerTMS user interface, via OpenIGTLinkIF (another Slicer module);
a minimal connection sketch follows this list
- SlicerTMS is integrated into 3D Slicer using Kitware's VTK framework
and can be used on standard desktop monitors with different operating systems
- optionally, VR or AR can be used for further visualization and user
interaction; a secure WebSocket built on the Tornado networking library
lets a browser connect via WebXR on a VR headset or on a mobile device
for AR (JavaScript accesses WebXR, with ThreeJS handling client-side
rendering); a minimal Tornado sketch also follows this list
- deep neural network: a multiscale 3D-ResUnet with a reduced field of
view (training details are described elsewhere; an illustrative
residual block follows this list)
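
To make the client-server bullet above concrete: a minimal sketch of how a
Slicer client typically opens an OpenIGTLinkIF connection from the Python
console (not the actual SlicerTMS code; host and port are placeholders):

    # run inside 3D Slicer's Python console, with OpenIGTLinkIF available
    import slicer

    # create a connector node and configure it as a client of the
    # E-field prediction server (placeholder host/port)
    connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
    connector.SetTypeClient("localhost", 18944)
    connector.Start()

    # incoming OpenIGTLink messages (e.g., a predicted E-field volume)
    # then appear automatically as MRML nodes in the scene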
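
For the VR/AR bullet above: a minimal sketch of a secure (TLS) Tornado
WebSocket endpoint that a WebXR/ThreeJS client could connect to (not the
actual SlicerTMS server; the /ws route, port, and certificate paths are
placeholders):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    class FieldHandler(tornado.websocket.WebSocketHandler):
        def check_origin(self, origin):
            # allow cross-origin browser clients in this sketch
            return True

        def open(self):
            print("WebXR client connected")

        def on_message(self, message):
            # placeholder: receive a coil pose from the headset/phone and
            # reply with updated E-field data (here just echoed back)
            self.write_message(message)

        def on_close(self):
            print("WebXR client disconnected")

    app = tornado.web.Application([(r"/ws", FieldHandler)])
    # wss:// requires TLS; certificate/key paths are placeholders
    server = tornado.httpserver.HTTPServer(
        app, ssl_options={"certfile": "cert.pem", "keyfile": "key.pem"})
    server.listen(8443)
    tornado.ioloop.IOLoop.current().start()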
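
For the network bullet above: architecture and training details are deferred
to the paper, but a generic 3D residual block of the kind a 3D-ResUnet is
assembled from looks roughly like this in PyTorch (illustrative only, not
the SlicerTMS model; channel counts and normalization are assumptions):

    import torch
    import torch.nn as nn

    class ResBlock3D(nn.Module):
        """Generic 3D residual block (illustrative, not the SlicerTMS net)."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            self.norm1 = nn.InstanceNorm3d(channels)
            self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            self.norm2 = nn.InstanceNorm3d(channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.act(self.norm1(self.conv1(x)))
            out = self.norm2(self.conv2(out))
            return self.act(out + x)  # skip connection

    # smoke test on a random patch shaped (batch, channels, D, H, W)
    block = ResBlock3D(8)
    print(block(torch.randn(1, 8, 16, 16, 16)).shape)  # (1, 8, 16, 16, 16)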
Neuronavigation
- in Slicer, the TMS coil object
(defined in an STL file; a figure-8-type coil in this example)
can be moved relative to the brain mesh
interactively via mouse, via a smartphone with a depth sensor, or by
coordinate entry, with continuous updating of the computed field
(a minimal scripting sketch follows the figure descriptions below);
patient-specific conductivity data, a skin (surface) mesh, a brain mesh,
and MRI data are read from files
- see also: short demo videos on the project's GitHub showing the
real-time response
- Left: E-field visualized on the gray matter surface
- Center: E-field visualized on the volumetric data of the MRI scan
- Right: E-field visualized on tractography data (from dMRI)
with the help of the SlicerDMRI module
- Far right: axial, coronal, sagittal slice views
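
A minimal sketch of the kind of Slicer Python scripting that puts a coil
model under a transform and reacts to every pose change (the file path and
callback body are placeholders; the SlicerTMS module handles this
internally):

    # run inside 3D Slicer's Python console
    import vtk
    import slicer

    # load the coil geometry (placeholder path) and give it a transform
    coil_model = slicer.util.loadModel("/path/to/figure8_coil.stl")
    coil_transform = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLLinearTransformNode")
    coil_model.SetAndObserveTransformNodeID(coil_transform.GetID())

    def on_coil_moved(caller, event):
        # placeholder: this is where a new E-field prediction would be
        # requested for the updated coil pose
        print("coil pose changed")

    # fires whenever the transform (i.e., the coil pose) is modified
    coil_transform.AddObserver(vtk.vtkCommand.ModifiedEvent, on_coil_moved)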
Performance Tests
- MRI, conductivity, skin, and brain mesh data from
10 randomly chosen subjects in the Human Connectome Project
- 50 runs for each combination (subject × hardware configuration);
a generic timing-loop sketch follows this list
- hardware: Apple M1 with 16 GB memory (would be ~3.5-15x faster
if/when PyTorch supports 3D convolutions with Apple Metal Performance
Shaders); an Intel Core i9-9980XE workstation (36 logical CPUs);
a local NVIDIA GeForce RTX 2080 GPU; and a remote (cloud) NVIDIA A100 GPU
- the local RTX 2080 and the remote A100 GPU are the fastest
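
A generic sketch of the timing loop such a benchmark implies (not the
authors' harness; the model and input below are dummy stand-ins):

    import time
    import torch

    model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # dummy model
    x = torch.randn(1, 1, 64, 64, 64)                        # dummy volume
    if torch.cuda.is_available():
        model, x = model.cuda(), x.cuda()

    timings = []
    with torch.no_grad():
        for _ in range(50):  # 50 runs per subject/hardware combination
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            start = time.perf_counter()
            model(x)
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            timings.append(time.perf_counter() - start)

    print(f"mean inference time: {1000 * sum(timings) / len(timings):.1f} ms")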
Results Like SimNIBS But Much Faster
- Left: SlicerTMS; Right: SimNIBS
"current state-of-the-art TMS visualization"
"does not rely on deep learning but on visualizing an E-field
based on manual coil placement" (?)
"Despite our comparison... SimNIBS is a valuable tool...
with additional functionalities not available in SlicerTMS"
Future Directions
- measuring the distance from coil to cortex (a nearest-vertex sketch
follows this list)
- showing the vector fields
- improving the current WebXR face filter that projects the brain onto
a subject in AR
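
For the first future direction, one straightforward way to estimate a
coil-to-cortex distance from meshes already in the scene is a
nearest-vertex query (a sketch with placeholder arrays, not anything
implemented in SlicerTMS):

    import numpy as np
    from scipy.spatial import cKDTree

    # placeholder data: gray-matter surface vertices (N x 3, in mm) and
    # the coil center position in the same coordinate system
    cortex_vertices = np.random.rand(10000, 3) * 100.0
    coil_center = np.array([50.0, 50.0, 120.0])

    # distance from the coil center to the nearest cortex vertex
    distance_mm, _ = cKDTree(cortex_vertices).query(coil_center)
    print(f"coil-to-cortex distance: {distance_mm:.1f} mm")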