[Defense] Classification of 4D images using ML, Focusing on resource efficiency: memory, “work”, energy
Tuesday, November 23, 2021
11:15 am - 12:30 pm
In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Nazanin Beheshti will defend her dissertation

Classification of 4D images using ML, Focusing on resource efficiency: memory, “work”, energy
Abstract
Alzheimer’s disease (AD) is progressively degenerative, characterized by memory loss, mood and behavior changes, and deepening confusion about time and place. It is estimated that about 50 million people worldwide are affected by AD. The lifetime per-patient care cost of AD is estimated at about $250k, and the total cost of care for AD patients could exceed $1 trillion by 2050. In this research we use novel data reduction techniques in determining functional brain connectivity from Resting State fMRI data and show that small Machine Learning models can, with good accuracy, classify subjects with respect to Alzheimer’s disease (AD), Mild Cognitive Impairment (MCI), or being Cognitively Normal (CN). In fMRI, brain activity is captured from Blood Oxygen Level-Dependent (BOLD) magnetization detected by the MRI scanner. The functional connectivity is inferred from correlations of the observed BOLD signals from typically cubic voxels with sides in the 3 – 4 mm range. The BOLD signals are typically sampled every 2 – 3 seconds for a duration of five to six minutes, generating a data set of 5 – 10 million voxel BOLD signal values per subject. To reduce the computational effort, classification is typically carried out based on signal aggregates for anatomical regions defined in brain atlases. In this research we use the 90-region Automated Anatomical Labeling atlas, AAL-90, in establishing Regions of Interest (ROIs), which are subsets of voxels in the AAL-90 atlas. The functional connectivity is measured by correlation of BOLD signal aggregates for the ROIs. In the data reduction step we represent the 4D data set for a region with a vector that on average reduces the data set for a region from about 100,000 voxel signal values to 100 – 200 values in our Spatial representation, and to on the order of 15,000 – 30,000 values in our Spatial-Temporal representation.
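The commonly used aggregation and correlation pipeline described above can be sketched as follows. This is a minimal illustration using synthetic stand-in data and simple mean-signal aggregation (the standard technique the abstract contrasts with); the dissertation's own Spatial and Spatial-Temporal representations are not reproduced here, and all dimensions are assumptions chosen to match the figures quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 90 AAL ROIs, 150 BOLD time points (~5 minutes sampled
# every 2 seconds), and a variable number of voxels per ROI. The signal values
# are random placeholders for preprocessed Resting State fMRI data.
n_rois, n_timepoints = 90, 150
voxels_per_roi = rng.integers(500, 2000, size=n_rois)

# Common aggregation: the mean BOLD signal over all voxels in each ROI,
# reducing the many voxel signal values per region to one time series.
roi_series = np.empty((n_rois, n_timepoints))
for r, n_vox in enumerate(voxels_per_roi):
    voxel_signals = rng.standard_normal((n_vox, n_timepoints))  # placeholder BOLD data
    roi_series[r] = voxel_signals.mean(axis=0)

# Functional connectivity: Pearson correlation between ROI time series,
# yielding a symmetric 90 x 90 matrix with unit diagonal.
connectivity = np.corrcoef(roi_series)
print(connectivity.shape)  # (90, 90)
```

The resulting connectivity matrix (or the reduced region representations themselves) is what a downstream classifier consumes.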
We show that a small Convolutional Neural Network (CNN) with a model size of about 240 kiB and a Transformer model of only 37 kiB yield classification accuracies of 80 – 90% for AD, MCI and CN subject classification. We further show that our region data aggregation technique is more robust to BOLD signal artifacts than the commonly used aggregation technique. The training time for the CNN and Transformer on a data set of 551 subjects was 184 and 23.73 seconds, respectively. The experiments are conducted on the Opuntia Cluster using PyTorch 1.5.0, Python 3.7.7 and CUDA 10.1 on a 2.8 GHz Intel Xeon E5-2670v2 processor with 2 CPU sockets and 20 cores, and an Nvidia Tesla K40 GPU.
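To put the quoted model sizes in perspective, a back-of-envelope conversion to parameter counts can be done as below, assuming each weight is stored as a 32-bit (4-byte) float (an assumption; quantized or half-precision weights would change the figures).

```python
# Rough parameter counts implied by the quoted model sizes,
# assuming 32-bit (4-byte) floating-point weights.
BYTES_PER_PARAM = 4

cnn_size_bytes = 240 * 1024          # 240 kiB CNN
transformer_size_bytes = 37 * 1024   # 37 kiB Transformer

cnn_params = cnn_size_bytes // BYTES_PER_PARAM
transformer_params = transformer_size_bytes // BYTES_PER_PARAM

print(cnn_params)          # 61440 parameters
print(transformer_params)  # 9472 parameters
```

Both models are thus orders of magnitude smaller than typical image-classification networks, consistent with the dissertation's focus on resource efficiency.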
Tuesday, November 23, 2021
11:15 AM - 12:30 PM CT
Online via Zoom
Dr. Lennart Johnsson, dissertation advisor
Faculty, students and the general public are invited.