Artificial Intelligence As a Diagnostic Tool for Clinical Imaging of Cancer
This project builds on an existing AI image-classification system that screens for diabetic retinopathy using non-professional photographs, paving the way for streamlined treatment of a leading cause of blindness worldwide (Link to source code repository). Using the same convolutional neural network methods and a similar image-processing pipeline, I attempted to extend this machine learning application to the identification of cancer in tissue scans. Although it would still require medical-grade imaging equipment, this program, if implemented correctly, could cut diagnosis time enough to be relevant in a clinical setting.
Source Code Architecture
The code linked in the description uses a convolutional neural network along with several pre-processing steps (cropping, resizing, rotating, and converting into NumPy arrays) to feed images of eyes into a Keras model with a TensorFlow backend. It uses three convolutional layers (each with depth 32), after which "A Max Pooling layer is applied after all three convolutional layers with size (2,2). After pooling, the data is fed through a single dense layer of size 128, and finally to the output layer, consisting of 2 softmax nodes." With its CNN image-classification backbone and high reported accuracy, EyeNet was an ideal starting point for quickly investigating how cancer diagnosis might be outsourced to machine-learning tools.
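For reference, here is a minimal Keras sketch of that architecture as described; the input size (assumed 256-by-256 RGB), kernel size, activations, and compile settings are illustrative assumptions rather than details taken from the EyeNet source.

```python
# Minimal sketch of the EyeNet-style CNN described above.
# Input shape, kernel size, and compile settings are assumptions.
from tensorflow.keras import layers, models

def build_model(input_shape=(256, 256, 3)):
    model = models.Sequential([
        # Three convolutional layers, each with depth (filters) 32
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Conv2D(32, (3, 3), activation="relu"),
        # A single max pooling layer of size (2, 2) after the conv stack
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        # Single dense layer of size 128
        layers.Dense(128, activation="relu"),
        # Output layer: 2 softmax nodes (condition present / absent)
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```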

EyeNet Code Metrics:

- Accuracy (training): 82%
- Accuracy (testing): 80%
- Precision: 88%
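For context, metrics like these are straightforward to reproduce from a model's predictions; the sketch below shows the standard scikit-learn calls on hypothetical labels (accuracy is the fraction of correct predictions; precision is TP / (TP + FP)).

```python
# Illustrative only: computing accuracy and precision of the kind
# reported above from predictions, using scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

# Hypothetical binary labels (1 = condition present)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

print("Accuracy: ", accuracy_score(y_true, y_pred))   # fraction correct
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
```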

Source Data / Roadblocks


While EyeNet downsampled its eye images to 256-by-256 squares, the data I attempted to use consisted of medical scan files (in the .svs format) that were tens of megabytes even in compressed form and took roughly a minute per slide to download. Evidence of diabetic retinopathy in eye photographs survives such downsampling mostly intact, but I was concerned that the details of cancer scan files would need to be preserved close to their entirety. Combined with the difficulty of downloading identical tissue scans and cancer types in large numbers, it seemed that almost all training data would have to be hand-selected.
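For comparison, the EyeNet-style downsampling step looks roughly like the sketch below (the file name is a placeholder). A 256-by-256 resize costs a fundus photograph relatively little, but a whole-slide scan can run to gigapixels, so the same reduction discards vastly more detail.

```python
# EyeNet-style downsampling sketch: resize an image to a 256x256 square
# and convert it to a NumPy array for the network. Path is a placeholder.
import numpy as np
from PIL import Image

img = Image.open("fundus_photo.jpg").convert("RGB")
small = img.resize((256, 256), Image.LANCZOS)       # heavy downsampling
arr = np.asarray(small, dtype=np.float32) / 255.0   # normalize to [0, 1]

# A 256x256 image keeps ~65k pixels; a whole-slide scan can exceed
# a gigapixel, so the same resize throws away far more information.
print(arr.shape)  # (256, 256, 3)
```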


Because the file format was not readily accessible, I had to use the Python API of OpenSlide (link) and write code to extract the files along with their annotations, so that training could begin correlating the two for image classification. The goal from there would be to take scans of non-cancerous and cancerous cells from the same tissue and the same cancer, leaving the CNN, with its 128-node dense layer, to classify simply cancerous vs. not. However, even in extracting the files for programmatic manipulation, I ran into overflow issues.
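The extraction step looked roughly like the sketch below; the file name, coordinates, and tile size are illustrative placeholders rather than the project's actual values. Reading an entire slide at full resolution is exactly the kind of operation that exhausts memory, so the usual workaround is to read small regions or a downsampled pyramid level.

```python
# Sketch of reading an .svs whole-slide image with OpenSlide's Python API.
# Path, coordinates, and tile size are illustrative placeholders.
import openslide

slide = openslide.OpenSlide("lung_carcinoma.svs")
print(slide.dimensions)   # full-resolution (width, height), often gigapixel
print(slide.level_count)  # number of downsampled pyramid levels

# Reading the whole level-0 image at once can overflow memory; instead,
# read a small region (location is given in level-0 coordinates)...
tile = slide.read_region((0, 0), 0, (512, 512))

# ...or grab a heavily downsampled overview of the entire slide.
thumb = slide.get_thumbnail((256, 256))

tile.convert("RGB").save("tile.png")
thumb.save("thumbnail.png")
slide.close()
```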

Images (left to right): breast cancer cell; lung carcinoma cell (1); lung carcinoma cell (2); extraction code lines; error in image extraction.




Conclusion
While the EyeNet foundation was certainly helpful in determining how best to classify medical images for clinical use, accessing large swaths of cancerous cell slides, along with their annotations, programmatically proved harder than I had expected. The beginnings of that work are in a GitHub repository (link). Next steps would be to test feasibility on much smaller training data sets, possibly using a cropped section of the breast or lung tissue slides, and in general to scale the project's scope down before tackling larger issues such as the logistics of acquiring and organizing large batches of training data with high specificity toward cancer diagnosis.