
Unlocking Secrets of the Mind: AI’s Potential in Early Alzheimer’s Detection

May 7, 2024



The Problem

Alzheimer’s disease, a debilitating brain condition, leads to a progressive decline in essential cognitive functions, impairing an individual’s ability to perform daily tasks. According to the World Health Organization, neurological disorders like Alzheimer’s rank among the top ten leading causes of death worldwide, accounting for 9% of global mortality.

Early detection and accurate diagnosis of Alzheimer’s present significant challenges due to the brain’s complexity. While the disease predominantly affects those over 65, a rare subset of patients in their 40s or 50s is diagnosed with early-onset Alzheimer’s.

The Background


Globally, Alzheimer’s affects approximately 24 million people, with prevalence increasing with age. Nearly one in ten individuals over 65 and a third of those over 85 suffer from the condition.

A study by Ron Brookmeyer et al., published on ScienceDirect, projects that the global prevalence of Alzheimer’s will quadruple by 2050, with 1 in 85 individuals worldwide living with the disease. An estimated 43% of those affected will require high-level care, equivalent to that provided in nursing homes.

The alarming trajectory of Alzheimer’s prevalence underscores the urgent need for advancements in prevention, diagnosis, and treatment to manage the growing global burden and support the millions of affected individuals and their caregivers.

Importance of Early Detection of Alzheimer’s Disease

The global burden of Alzheimer’s disease is staggering, with cases projected to quadruple by 2050, according to the study by Ron Brookmeyer, Elizabeth Johnson, Kathryn Ziegler-Graham, and H. Michael Arrighi cited above. This alarming trend underscores the critical importance of early detection.

Early diagnosis provides access to treatments that can slow disease progression and enables participation in clinical trials.

It encourages healthier lifestyles, reduces anxiety by explaining symptoms, and allows families to plan ahead and cherish their time together. The extra time gained through early detection is invaluable for making informed decisions about legal, financial, and end-of-life matters, ensuring the wishes of those diagnosed are respected.

Moreover, proactively diagnosing Alzheimer’s during the mild cognitive impairment stage can yield substantial cost savings in medical and long-term care expenses, potentially saving a collective $7 trillion for all Americans currently alive who will develop the disease.

The Goal


The goal of this project was to use machine learning and computer vision to revolutionize early detection and diagnosis of Alzheimer’s disease.

Omdena’s Toronto Chapter decided to create an AI model that could analyze brain scans to identify patterns that may indicate neurological disorders, ultimately making highly accurate predictions.

By harnessing AI technology, we aimed to develop a tool that complemented existing diagnostic methods, providing a more objective and potentially earlier indication of Alzheimer’s and related conditions. This innovative approach could improve patient outcomes by enabling timely intervention and management strategies, thereby advancing the battle against these devastating diseases.

Our Approach

Step 1: EDA, Pre-Processing and Augmentation

For the exploratory data analysis (EDA), preprocessing, and augmentation stages, we focused on a Kaggle dataset containing 6,400 grayscale MRI images of 208 × 176 pixels, categorized into four classes: MildDemented, ModerateDemented, NonDemented, and VeryMildDemented. Pixel intensities range from 0 to 255.

This dataset was chosen for its ample size compared to alternatives like ADNI. During EDA, we explored the distribution of images across classes, analyzed pixel intensities, and visualized samples. Preprocessing standardized image dimensions, normalized intensities, and addressed noise and artifacts. Augmentation techniques such as flipping, rotation, zooming, and brightness and contrast adjustment increased training-data diversity and improved model generalization.

However, the dataset is highly class-imbalanced, which can lead to biased results. Data augmentation or weighted loss functions were therefore needed to address this issue.

Leveraging this dataset laid a solid foundation for model development and evaluation, facilitating early Alzheimer’s detection through machine learning.
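As a starting point for the EDA, a minimal sketch like the one below can report the class distribution and pixel-intensity statistics of the Kaggle dataset (the folder layout and file names are assumptions, not the project’s exact structure):

```python
# Minimal EDA sketch for the Kaggle Alzheimer's MRI dataset (paths are assumptions).
import os
from collections import Counter

import numpy as np
from PIL import Image

DATA_DIR = "Alzheimer_Dataset/train"  # hypothetical layout: one subfolder per class

# Class distribution: count images per class folder
class_counts = Counter()
for class_name in sorted(os.listdir(DATA_DIR)):
    class_dir = os.path.join(DATA_DIR, class_name)
    if os.path.isdir(class_dir):
        class_counts[class_name] = len(os.listdir(class_dir))
print("Images per class:", dict(class_counts))

# Pixel-intensity summary on a small sample from one class
sample_class = next(iter(class_counts))
sample_dir = os.path.join(DATA_DIR, sample_class)
sample_files = sorted(os.listdir(sample_dir))[:50]
pixels = np.concatenate(
    [np.asarray(Image.open(os.path.join(sample_dir, f)).convert("L")).ravel()
     for f in sample_files]
)
print(f"{sample_class}: min={pixels.min()}, max={pixels.max()}, mean={pixels.mean():.1f}")
```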

ADNI Data Exploration

The ADNI dataset exploration focused on three classes:

  1. AD (Alzheimer’s Disease)
  2. MCI (Mild Cognitive Impairment)
  3. CN (Cognitively Normal)

The structural MRI (sMRI) data was three-dimensional, with slices along the axial, sagittal, and coronal planes and dimensions of 256 × 256 × 170.

Subject information and data were stored in a .csv file, with unique Image Data IDs and potentially repeated Subject fields based on scan categories. This dataset enabled comprehensive analysis of clinical information and volumetric MRI data, examining class distribution, relationships between variables, and variability across subjects and scans.

Preprocessing standardized the data, including normalization, registration, and handling missing or corrupted data. ITK-SNAP was used to visualize and analyze neuroimaging data, particularly NIFTI (.nii) files, providing an interface to view volume slices across the three-dimensional planes.

ITK-SNAP Toolbox
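For quick programmatic checks alongside ITK-SNAP, a volume can be loaded and sliced along the three planes with nibabel (an assumed dependency; the file name below is hypothetical):

```python
# Sketch: load an ADNI sMRI volume and extract the middle slice along each plane.
import nibabel as nib
import numpy as np

volume = nib.load("subject_001.nii").get_fdata()   # hypothetical file name
print("Volume shape:", volume.shape)               # e.g. (256, 256, 170)

sagittal = volume[volume.shape[0] // 2, :, :]
coronal  = volume[:, volume.shape[1] // 2, :]
axial    = volume[:, :, volume.shape[2] // 2]

for name, sl in [("sagittal", sagittal), ("coronal", coronal), ("axial", axial)]:
    print(f"{name}: shape={sl.shape}, intensity range=({sl.min():.1f}, {sl.max():.1f})")
```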

Kaggle Data Preprocessing

The Kaggle dataset preprocessing pipeline converted DICOM to PNG, resampled images for consistent voxel size, and applied filters to reduce noise. Intensity normalization standardized pixel intensity. Feature extraction and segmentation identified key structures, while augmentation increased dataset diversity. PSNR quantified image quality by comparing signal strength to noise, ensuring fidelity for analysis and modeling.
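The sketch below illustrates the kind of steps involved: converting a DICOM slice to PNG, applying a median filter as one example of noise reduction, and computing PSNR, using pydicom, Pillow, and scikit-image (assumed dependencies; the file name and filter choice are illustrative, not the project’s exact pipeline):

```python
# Sketch of DICOM-to-PNG conversion, denoising, and PSNR (file names are assumptions).
import numpy as np
import pydicom
from PIL import Image
from skimage.filters import median
from skimage.metrics import peak_signal_noise_ratio

# 1. Read a DICOM slice and rescale its pixel data to 8-bit
dcm = pydicom.dcmread("slice_001.dcm")            # hypothetical file
img = dcm.pixel_array.astype(np.float32)
img8 = ((img - img.min()) / (np.ptp(img) + 1e-8) * 255).astype(np.uint8)
Image.fromarray(img8).save("slice_001.png")

# 2. Denoise with a median filter and quantify fidelity with PSNR
denoised = median(img8)
psnr = peak_signal_noise_ratio(img8, denoised, data_range=255)
print(f"PSNR after median filtering: {psnr:.2f} dB")
```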

Comparison of different filtering techniques on a sample of .dcm images

Intensity Normalization

Intensity normalization is key for preparing medical images for analysis. Common techniques, sketched in code after this list, include:

  • Histogram Equalization and CLAHE: Enhance contrast by redistributing pixel intensities across the histogram; CLAHE applies equalization to local regions, preserving local features and limiting noise amplification.
  • Z-score normalization: Standardizes pixel intensities to zero mean and unit variance.
  • Zero-One normalization: Scales pixel values between 0 and 1 for consistency.
  • Percentile normalization: Scales pixel values based on percentiles, accommodating variations in pixel intensity distribution for robust analysis.
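A minimal sketch of these techniques, assuming NumPy and scikit-image and a 2-D grayscale slice `img`:

```python
# Intensity-normalization helpers; `img` is assumed to be a 2-D uint8 MRI slice.
import numpy as np
from skimage import exposure

def zero_one(img):
    """Scale pixel values to the [0, 1] range."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def z_score(img):
    """Standardize intensities to zero mean and unit variance."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def percentile_norm(img, low=1, high=99):
    """Clip to the given percentiles, then rescale to [0, 1]."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img.astype(np.float32) - lo) / (hi - lo + 1e-8), 0, 1)

def hist_eq(img):
    """Global histogram equalization (returns values in [0, 1])."""
    return exposure.equalize_hist(img)

def clahe(img):
    """Contrast Limited Adaptive Histogram Equalization."""
    return exposure.equalize_adapthist(img, clip_limit=0.03)
```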

Feature extraction


Feature extraction is crucial in analyzing medical images, enabling the identification of key structures and patterns. Edge detection plays a pivotal role in delineating object boundaries and highlighting structural details.

Roberts, Sobel, Scharr, and Prewitt edge detection algorithms detect edges by analyzing changes in pixel intensity. Canny edge detection, a multi-stage algorithm, produces high-quality edge maps with minimal noise.

Corner and keypoint detection are fundamental in computer vision for identifying distinctive features. Harris Corner Detection and Shi-Tomasi Corner Detection algorithms pinpoint corners by analyzing intensity variations.

For keypoint detection, the Scale-Invariant Feature Transform (SIFT) algorithm is robust at detecting keypoints such as corners and blobs across different scales. The Oriented FAST and Rotated BRIEF (ORB) algorithm combines the FAST detector and BRIEF descriptor for efficient keypoint detection and matching.
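The sketch below applies these detectors to a single slice with OpenCV (an assumed dependency; the file name and thresholds are illustrative):

```python
# Edge, corner, and keypoint detection on a grayscale slice (file name is hypothetical).
import cv2
import numpy as np

img = cv2.imread("slice_001.png", cv2.IMREAD_GRAYSCALE)

# Gradient-based edges (Sobel) and multi-stage Canny edges
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = np.hypot(sobel_x, sobel_y)
canny_edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Harris and Shi-Tomasi corner detection
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
shi_tomasi = cv2.goodFeaturesToTrack(img, maxCorners=100, qualityLevel=0.01, minDistance=10)

# SIFT and ORB keypoints
sift_kp, sift_desc = cv2.SIFT_create().detectAndCompute(img, None)
orb_kp, orb_desc = cv2.ORB_create().detectAndCompute(img, None)
print(f"SIFT keypoints: {len(sift_kp)}, ORB keypoints: {len(orb_kp)}")
```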

Edge detection for feature extraction

Image Segmentation

Image segmentation is crucial for identifying regions of interest in medical scans, enabling precise analysis and diagnosis. Two key methods are employed (sketched in code after the figure below):

  • Multi-Otsu Thresholding classifies pixels into distinct intensity levels based on multiple thresholds, typically represented by a red line in the histogram. This facilitates the identification of different structures or tissues.
  • Region-based Segmentation combines techniques to delineate regions of interest. A Sobel filter generates an elevation map, markers guide the segmentation process, and the watershed algorithm segments the image into distinct regions. This method also fills holes and labels connected components, resulting in comprehensive segmentation that accurately represents the underlying anatomy or abnormalities.

Region-based Segmentation: the elevation map, markers, and segmentation binary (top) and the final segmentation image (bottom).
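A minimal sketch of both approaches, assuming scikit-image and SciPy and an illustrative file name:

```python
# Multi-Otsu thresholding and region-based (watershed) segmentation of a grayscale slice.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, io, measure, segmentation

img = io.imread("slice_001.png", as_gray=True)   # hypothetical file, values in [0, 1]

# Multi-Otsu: split intensities into 3 classes (e.g. background, tissue, bright structures)
thresholds = filters.threshold_multiotsu(img, classes=3)
regions = np.digitize(img, bins=thresholds)

# Region-based segmentation: Sobel elevation map + markers + watershed
elevation_map = filters.sobel(img)
markers = np.zeros_like(img, dtype=np.int32)
markers[img < thresholds[0]] = 1          # confident background
markers[img > thresholds[1]] = 2          # confident foreground
ws = segmentation.watershed(elevation_map, markers)

# Fill holes and label connected components of the foreground mask
mask = ndi.binary_fill_holes(ws == 2)
labels, num = measure.label(mask, return_num=True)
print(f"Connected components found: {num}")
```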

Image Augmentation

Image augmentation enhances the robustness and generalization of deep learning models, especially when dealing with limited data or class imbalances. This project uses various augmentation techniques to increase dataset diversity and mitigate class imbalance issues.

During training, the Keras ImageDataGenerator applies rescaling and geometric transformations to the images on the fly, increasing dataset variability and improving model performance.

The Cut, Paste, and Learn Synthesis method generates synthetic images, ensuring the model learns from a balanced distribution of classes.

The imgaug package applies a comprehensive set of augmentations, including flipping, scaling, translation, brightness and contrast adjustments, Gaussian blurring, Gaussian noise, saturation adjustment, shear transformations, and CLAHE. These methods significantly enhance dataset variability, contributing to the robustness and effectiveness of the trained models in accurately identifying and diagnosing Alzheimer’s disease from brain scans.
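An imgaug pipeline covering these augmentations might look like the sketch below (parameter ranges are illustrative assumptions, not the project’s exact settings):

```python
# Illustrative imgaug augmentation pipeline for grayscale MRI slices.
import numpy as np
import imgaug.augmenters as iaa

augmenter = iaa.Sequential([
    iaa.Fliplr(0.5),                                        # horizontal flipping
    iaa.Affine(scale=(0.9, 1.1),                            # scaling
               translate_percent={"x": (-0.05, 0.05),       # translation
                                  "y": (-0.05, 0.05)},
               shear=(-8, 8)),                              # shear transformations
    iaa.Multiply((0.8, 1.2)),                               # brightness adjustment
    iaa.LinearContrast((0.8, 1.2)),                         # contrast adjustment
    iaa.GaussianBlur(sigma=(0.0, 1.0)),                     # Gaussian blurring
    iaa.AdditiveGaussianNoise(scale=(0, 0.03 * 255)),       # Gaussian noise
    iaa.CLAHE(clip_limit=(1, 4)),                           # CLAHE
], random_order=True)

# Stand-in batch of uint8 images with shape (N, H, W, 1); real data replaces this
images = np.random.randint(0, 256, (8, 208, 176, 1), dtype=np.uint8)
augmented = augmenter(images=images)
print(augmented.shape)
```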

Step 2: Model Development

Architecture

The custom CNN architecture used in this project consists of two convolutional layers, each followed by max-pooling, batch normalization, and dropout layers. These layers are crucial for feature extraction, allowing the model to capture complex patterns. ReLU activation introduces non-linearity, enhancing learning capacity.

Dense layers are employed for classification, with a softmax activation in the final layer producing probabilities over the dataset’s classes. Max pooling downsamples the spatial dimensions, retaining essential features while reducing computational complexity. Batch normalization normalizes activations, accelerating training convergence and stabilizing learning. Dropout mitigates overfitting by randomly dropping connections, promoting generalization and robustness.

This combination of layers enables effective feature extraction and classification, making it well-suited for diagnosing Alzheimer’s disease from brain scans.

Schematic diagram of the CNN architecture
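A minimal Keras sketch of this kind of architecture is shown below; filter counts, dropout rates, and dense-layer sizes are illustrative assumptions rather than the project’s exact values:

```python
# Sketch of a two-block CNN with max pooling, batch norm, dropout, and a softmax head.
from tensorflow.keras import layers, models

def build_model(input_shape=(208, 176, 1), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: convolution + pooling + batch normalization + dropout
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        # Block 2
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        # Classification head
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_model()
model.summary()
```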

Data Imbalance

The Synthetic Minority Over-sampling Technique (SMOTE) is a valuable method for tackling class imbalance in machine learning. By synthesizing new instances for the minority class through interpolation, SMOTE balances the class distribution and prevents model bias towards the majority class. This augmentation promotes equitable representation, improving the model’s ability to learn and generalize effectively across different class distributions. Overall, SMOTE is a powerful technique for addressing class imbalance and enhancing performance in classification tasks.
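A minimal sketch of applying SMOTE with imbalanced-learn (an assumed dependency); because SMOTE operates on 2-D feature matrices, the images are flattened before resampling and reshaped afterwards:

```python
# SMOTE oversampling sketch; X_train (N, 208, 176, 1) and y_train (integer labels)
# are assumed to exist from the preprocessing step.
import numpy as np
from imblearn.over_sampling import SMOTE

n, h, w, c = X_train.shape
X_flat = X_train.reshape(n, -1)

smote = SMOTE(random_state=42)
X_resampled, y_resampled = smote.fit_resample(X_flat, y_train)

X_resampled = X_resampled.reshape(-1, h, w, c)
print("Class counts after SMOTE:", np.bincount(y_resampled))
```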

Hyperparameter Optimization using KerasTuner

Keras Tuner is a powerful tool for finding the best hyperparameters for a convolutional neural network. It uses Bayesian optimization to search the hyperparameter space and maximize the model’s validation accuracy.

The tuner varies parameters like the number of convolutional layers (1-15), filters per layer (16-128), filter sizes (3×3 to 5×5), dropout rate, and dense layer neurons. It trains the model for 10 epochs and runs up to 20 iterations to find the optimal hyperparameter combination.

By thoroughly exploring the parameter space, Keras Tuner helped create highly optimized CNN architectures for detecting Alzheimer’s disease from brain scans.
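A sketch of such a search with KerasTuner is shown below; the search ranges follow the description above, while the remaining details (dense-unit range, input shape, variable names) are assumptions:

```python
# Bayesian-optimization search over the CNN hyperparameters with KerasTuner.
import keras_tuner as kt
from tensorflow.keras import layers, models

def build_model(hp):
    model = models.Sequential()
    model.add(layers.Input(shape=(208, 176, 1)))
    # 1-15 convolutional layers, 16-128 filters each, 3x3 or 5x5 kernels
    for i in range(hp.Int("conv_layers", 1, 15)):
        model.add(layers.Conv2D(
            filters=hp.Int(f"filters_{i}", 16, 128, step=16),
            kernel_size=hp.Choice(f"kernel_{i}", [3, 5]),
            activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2, padding="same"))
    model.add(layers.Flatten())
    model.add(layers.Dense(hp.Int("dense_units", 64, 512, step=64), activation="relu"))
    model.add(layers.Dropout(hp.Float("dropout", 0.1, 0.5, step=0.1)))
    model.add(layers.Dense(4, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# X_train, y_train, X_val, y_val are assumed to exist; labels are one-hot encoded.
tuner = kt.BayesianOptimization(build_model, objective="val_accuracy", max_trials=20)
tuner.search(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
best_model = tuner.get_best_models(num_models=1)[0]
```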

Model Training

The dataset was split into training (80%) and testing (20%) sets, with 20% of the training data used for validation. This helped assess generalization and prevent overfitting.

After addressing data imbalance with techniques like SMOTE, the model was trained for 100 epochs with a batch size of 16. KerasTuner was used for hyperparameter tuning to find the optimal CNN architecture.

The learning rate was set to 0.002, and the Adam optimizer was used for efficient convergence. The categorical cross-entropy loss function was chosen for the multi-class classification problem of diagnosing Alzheimer’s disease.

Accuracy metrics for both training and validation datasets were monitored to assess performance, generalization, and identify any overfitting or underfitting issues.
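Putting these settings together, the training step might look like the following sketch (data loading and variable names are assumptions; `y` is assumed one-hot encoded and `model` is the CNN built earlier):

```python
# Training sketch matching the settings described above.
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam

# 80/20 train/test split; stratify on the class index recovered from one-hot labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y.argmax(axis=1), random_state=42)

model.compile(optimizer=Adam(learning_rate=0.002),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    validation_split=0.2,   # 20% of the training data for validation
                    epochs=100,
                    batch_size=16)
```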

Training and Validation Accuracy

Experiments with other models

Several alternative approaches were explored to enhance the Alzheimer’s disease detection system:

  • Transfer learning with pre-trained models like VGG19 and EfficientNetV2 was applied to leverage knowledge from large-scale datasets and potentially improve classification accuracy (see the sketch after this list).
  • The fast.ai library was used to experiment with various pre-trained models, including ResNet18, ConvNext_tiny_in22k, VGG16, and RegNetX_080, which were fine-tuned and evaluated for their suitability.
  • Error Level Analysis (ELA) with ResNet50 was employed to identify regions of interest in images that may have been digitally manipulated, providing insights for Alzheimer’s disease diagnosis.
  • Alternative loss functions, such as weighted cross-entropy loss, were investigated to address class imbalance issues and improve model performance in multi-class classification tasks.
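As one example of the transfer-learning experiments above, a hedged sketch of fine-tuning a frozen VGG19 base with a small classification head (hyperparameters and input handling are assumptions, not the project’s exact configuration):

```python
# Transfer-learning sketch: frozen VGG19 base + small classification head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

# Grayscale slices are assumed to be replicated to 3 channels to match VGG19's input.
base = VGG19(weights="imagenet", include_top=False, input_shape=(208, 176, 3))
base.trainable = False   # freeze the pre-trained convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```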

These exploratory approaches highlight the iterative nature of model development, testing various techniques to identify the most effective strategies for Alzheimer’s disease detection from brain scan images, ultimately leading to the refinement and optimization of the final model architecture.

Step 3: Testing and Validation of the Model

After the model was trained for 100 epochs, it performed well on both the training and validation data, as can be inferred from the loss and accuracy values below.

Metric      Training Data    Validation Data
Loss        0.0175           0.0404
Accuracy    0.9946           0.9922

To check how well the model generalizes to unseen data, and whether it overfit the training data, we computed the same metrics on the test dataset.

Upon testing the model on our test data, we achieved an impressive accuracy of 0.9922, or 99.22%.

This demonstrates the model’s robustness and effectiveness in accurately predicting outcomes on unseen data, indicating that it has successfully learned the underlying patterns and can generalize well to new examples.

While a high test accuracy is a strong indicator of performance, it’s important to also consider other relevant metrics such as precision, recall, and F1 score, depending on the project’s specific goals. Additionally, ensuring that our test data is representative of real-world data is crucial to avoid biases or overfitting. Overall, the accuracy of 0.9922 on the test set is a promising sign of the model’s capabilities and potential for successful deployment in practical applications.
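These additional metrics can be computed with scikit-learn as in the sketch below (variable names are assumptions; `y_test` is assumed one-hot encoded):

```python
# Per-class precision, recall, F1, and the confusion matrix on the test set.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)

print(classification_report(y_true, y_pred, digits=4))   # precision, recall, F1 per class
print(confusion_matrix(y_true, y_pred))
```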

Confusion Matrix – The confusion matrix shows that the model predicts most of the labels correctly. We also note that the minority class has been well classified.

Step 4: Deployment

The Alzheimer’s disease detection system followed a modular deployment architecture:

Front-End: Developed using Streamlit, the front-end offered an interactive, visually appealing user interface with data visualization, input forms, and result displays.

Back-End: A Docker container encapsulated the machine learning model, serving it as a REST API. Docker ensured consistency and reproducibility across deployment environments, enhancing scalability.

Deployment: Hugging Face Spaces streamlined deployment, providing a platform for hosting and sharing machine learning applications as RESTful APIs.

This architecture separated front-end and back-end components, leveraging Streamlit, Docker, and Hugging Face Spaces for flexibility, scalability, and efficiency in deploying the user-friendly Alzheimer’s disease detection system.

Model Hosting as REST API
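The project’s exact serving code isn’t shown here; as one hedged illustration, a REST endpoint inside the Docker container could be written with FastAPI along these lines (framework choice, file names, and input shape are assumptions):

```python
# Illustrative FastAPI endpoint serving the trained model as a REST API.
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from tensorflow.keras.models import load_model

app = FastAPI()
model = load_model("alzheimer_cnn.h5")   # hypothetical model artifact
CLASSES = ["MildDemented", "ModerateDemented", "NonDemented", "VeryMildDemented"]

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read the uploaded scan, resize to the model's input size, and normalize to [0, 1]
    img = Image.open(io.BytesIO(await file.read())).convert("L").resize((176, 208))
    x = np.asarray(img, dtype=np.float32)[None, ..., None] / 255.0
    probs = model.predict(x)[0]
    return {"prediction": CLASSES[int(np.argmax(probs))],
            "probabilities": {c: float(p) for c, p in zip(CLASSES, probs)}}
```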

Key Achievements

This project has made significant strides in advancing early Alzheimer’s detection using AI:

Robust Preprocessing Pipeline

We developed a comprehensive pipeline that effectively handles brain scan image challenges, ensuring high-quality data for model training.

Optimized CNN Architecture

Through experimentation and tuning, we arrived at an optimized CNN architecture that accurately classifies brain scans into different Alzheimer’s stages.

Addressing Class Imbalance

We employed techniques like SMOTE to generate synthetic samples for minority classes, achieving a balanced dataset and improving model generalization.

High Accuracy and Generalization

Our model achieved an impressive 0.9922 accuracy on the test dataset, demonstrating its ability to accurately classify brain scans and generalize to unseen data.

Modular Deployment Architecture

We implemented a modular architecture using Streamlit, Docker, and Hugging Face Spaces, creating a scalable and user-friendly system for healthcare professionals.

These achievements showcase the project’s success in developing an AI-driven solution for early Alzheimer’s detection, combining state-of-the-art techniques with a robust pipeline and modular deployment architecture.

Potential Applications and Industries

The AI-driven methodology and technology developed in this project for early Alzheimer’s detection have far-reaching applications beyond neurodegenerative disorders. The preprocessing pipeline, CNN architecture, and modular deployment can be adapted to various industries, offering significant benefits and advancements.

Medical Imaging and Diagnostics

The AI model can be extended to detect other neurological disorders, improving diagnostic accuracy and efficiency for earlier interventions and better patient outcomes.

Drug Discovery and Development

By analyzing brain scans, researchers can gain insights into Alzheimer’s mechanisms, guiding drug discovery efforts and monitoring treatment efficacy in clinical trials.

Personalized Medicine

The modular architecture and user-friendly interface enable personalized treatment plans tailored to each patient’s needs, predicting Alzheimer’s likelihood and recommending preventive measures.

Research and Academia

The optimized CNN architecture and preprocessing techniques serve as a foundation for further medical image analysis studies, pushing the boundaries of AI in healthcare.

Insurance and Risk Assessment

Insurers can leverage the AI model to assess Alzheimer’s risk, make informed coverage decisions, and develop strategies to mitigate risks while supporting affected individuals and families.
