Optimizing Acquisition Signal-to-Noise Ratios for System Identification

Human neuroimaging research has historically relied upon amplitude and correlational statistics. In obtaining neuroimaging data optimized specifically for characterization via differential equations, however, we had to grapple with the less-than-ideal signal-to-noise ratios inherent to fMRI. While techniques such as electroencephalography (EEG), magnetoencephalography (MEG), and—to a lesser extent—near-infrared spectroscopy (NIRS) have superior temporal resolution, all are inherently cortical measures; as such, they cannot be used to derive entire control circuits, because they would fail to capture key subcortical components. We therefore moved from 3T fMRI to ultra-high-field (7T) fMRI to increase signal strength, combined with simultaneous multi-slice pulse sequences to increase temporal resolution. Together, these advances make possible single-subject, non-trial-averaged, sub-second-sampled time courses, which we found to retain up to an order of magnitude more dynamic information than traditional fMRI methods.

By optimizing acquisition parameters on the 7T scanner at the Martinos Center for Biomedical Imaging, we increased our temporal resolution from TR = 2000 ms to TR = 800 ms. The expected bilateral synchronization of the nucleus accumbens [26] showed more than an order of magnitude difference in shared variance: coupling for the standard 3T/TR = 2100 ms acquisition was r = 0.17 (r² = 0.03), p = 0.04, while coupling for our enhanced 7T/TR = 802 ms acquisition was r = 0.66 (r² = 0.44), p < 0.0000001. As a final step in validating the integrity of our time-series dynamics, we designed and built a novel dynamic phantom (patent pending, Figure 3). The dynamic phantom allows not only my team but, for the first time, the fMRI field as a whole to quantify, and therefore to optimize for, dynamic fidelity, as well as to develop methods to clean data of artifacts that would distort time-series dynamics.
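The bilateral-coupling metric above is a Pearson correlation between left and right nucleus accumbens time courses. The following is a minimal illustrative sketch, not our analysis pipeline; the time series here are synthetic (a shared fluctuation plus independent noise), and the variable names are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for left/right nucleus accumbens time courses:
# a shared "signal" fluctuation plus independent noise per hemisphere.
rng = np.random.default_rng(0)
n_vols = 500                                  # ~6.7 min at TR = 0.802 s
shared = rng.standard_normal(n_vols)          # common bilateral fluctuation
left_nacc = shared + 0.7 * rng.standard_normal(n_vols)
right_nacc = shared + 0.7 * rng.standard_normal(n_vols)

# Bilateral coupling quantified as Pearson r (and shared variance r^2).
r, p = stats.pearsonr(left_nacc, right_nacc)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p:.2g}")
```

With noisier (lower-SNR) hemispheric series, the same calculation yields a smaller r, which is the sense in which the 3T versus 7T comparison above reflects dynamic information retained in the time series.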

Why Do Task-Free (“Resting-State”) Paradigms Require a Dynamic Phantom?

As neuroimaging transitions from activation maps to connections between nodes, the field not only undergoes a conceptual shift with respect to the role of functional neuroimaging, but also radically increases its dependence upon time-series dynamics. Intuitively, activation maps compare the amplitude of a signal against a background of undesired physiological, thermal, and scanner noise present in all fMRI studies. Thus, for task-based fMRI, subtracting noise from signal is straightforward, since a task reliably activates the brain more under one condition (signal) than another (noise). For task-free analyses, however, the ‘baseline’ fluctuations themselves also include the ‘signal.’ This means that one needs an independent ground truth for signal and noise in order to remove one from the other. In neuroimaging, an instrument that produces a known MR signal—and therefore can provide this ground truth—is called a phantom.
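The contrast between the two paradigms can be made concrete with a toy sketch (synthetic data, illustrative only): in task-based fMRI, the on-versus-off contrast separates signal from noise within the scan itself, whereas a task-free series offers no such internal contrast.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
noise = lambda n: rng.standard_normal(n)      # background scanner/physiological noise

# Task-based: activation (signal) rides above baseline (noise), so a
# simple between-condition contrast separates the two.
task_on = 1.0 + noise(100)                    # task blocks: activation + noise
task_off = 0.0 + noise(100)                   # rest blocks: noise only
t, p = stats.ttest_ind(task_on, task_off)
print(f"task contrast: t = {t:.1f}, p = {p:.2g}")

# Task-free: signal and noise are mixed in one time series with no
# within-scan contrast, so separating them requires an external ground
# truth -- i.e., a phantom producing a known signal.
resting = noise(100)
```

The toy t-test stands in for the "straightforward subtraction" described above; the resting series illustrates why that subtraction has no analogue in the task-free case.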

Many sources of noise influence fMRI images: patient motion, magnetic field inhomogeneities (particularly with gradient-echo images), signal drift, aliasing, scanner variability (particularly with functional imaging), signal dropout at air-tissue interfaces, signal distortion from fat, blood flow, peristaltic motion, cardiac motion, phase wrap-around, Gibbs artifact, zipper artifact, and others [3]. Although many of these artifacts arise from patient physiology, others are a direct result of equipment shortcomings. Tight calibration and quality-assurance procedures are therefore necessary to ensure optimal MRI function.

To date, the only commercially available phantoms are static phantoms (typically, a sphere or cylinder of electrolyte solution with embedded geometric features). These are designed primarily for structural MRI, but can also be used in functional MRI to assess and minimize spontaneous scanner fluctuations due to noise. However, task-free fMRI depends not only upon suppressing fluctuations due to noise, but equally upon promoting fluctuations due to signal, which can only be assessed by a phantom that produces a known and changing (dynamic) signal. The importance of a dynamic phantom is that it is the only calibration method that can quantifiably assess the most basic assumption underlying all task-free fMRI: fidelity between input (brain) dynamics and output (fMRI time-series) dynamics. Because a dynamic phantom is uniquely capable of dissociating signal fluctuations from noise fluctuations, it can be designed to increase detection sensitivity, accuracy, and reliability for the task-free paradigms that will increasingly dominate the clinical neuroimaging field.

We developed the dynamic phantom because the analytic methods used by LCNeuro, which involve fitting fMRI time-series with differential equations, require even greater dynamic fidelity than do standard statistical analyses. In addressing this problem for ourselves, we produced a device that may assist other labs as well; therefore, with generous support from the NIH and NSF, we have partnered with ALA Scientific Instruments, Inc. for further R&D, manufacturing, and commercialization of our prototype.
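Because the phantom's input waveform is known by construction, input-output fidelity becomes directly measurable. The sketch below shows one simple way such a fidelity score could be computed — the correlation between the programmed input and the measured output. This is an assumption for illustration, not the patented device's actual metric; the waveform, noise level, and function name are all hypothetical.

```python
import numpy as np

def dynamic_fidelity(input_ts, output_ts):
    """Pearson correlation between a known input waveform and the
    measured output time series (1.0 = perfect dynamic fidelity)."""
    x = input_ts - input_ts.mean()
    y = output_ts - output_ts.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(1)
t = np.arange(0, 300, 0.802)                  # sampled at TR = 0.802 s
known_input = np.sin(2 * np.pi * 0.05 * t)    # programmed phantom waveform (assumed)
measured = known_input + 0.5 * rng.standard_normal(t.size)  # output + scanner noise

fid = dynamic_fidelity(known_input, measured)
print(f"dynamic fidelity = {fid:.2f}")
```

A static phantom permits no such score: with no known time-varying input, there is nothing to correlate the output dynamics against, which is the gap the dynamic phantom fills.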