PhD Thesis Defense
Since their inception, microscopes have evolved significantly, becoming essential tools across fields ranging from pathology diagnosis to biological research. Morphological information that cannot otherwise be observed has long been regarded as the primary data a microscope delivers, yet microscopy data embodies further valuable information worth exploring. This thesis demonstrates the extraction of three types of information beyond morphology by modifying microscope systems, incorporating physical models, and applying image processing: 1) depth information, 2) object size information, and 3) object developmental information.
The first part of the thesis describes an all-in-focus technique based on Fourier Ptychographic Microscopy (FPM) for extracting depth information. It synthesizes an all-in-focus image and a depth map from an FPM-reconstructed multi-focal image stack. This technique benefits the examination of thyroid fine-needle aspiration samples: it relieves pathologists of the need to constantly adjust focal planes, enables convenient data transfer, and could aid machine learning tasks on cytology specimens.
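The core compositing idea can be illustrated independently of FPM: given a refocused image stack, pick the sharpest focal plane per pixel to form both the all-in-focus image and the depth map. The sketch below is a minimal illustration of that principle, not the thesis's exact algorithm; the Laplacian focus measure and the function names are assumptions for this example.

```python
import numpy as np

def sharpness(img):
    # Per-pixel focus measure: magnitude of a discrete Laplacian
    # (an assumed, simple choice; many focus metrics exist).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def all_in_focus(stack, z_positions):
    """Composite an all-in-focus image and depth map from a focal stack.

    stack: (n_planes, H, W) array of images refocused to different depths
    z_positions: length-n_planes sequence of focal depths
    """
    scores = np.stack([sharpness(img) for img in stack])    # (n, H, W)
    best = np.argmax(scores, axis=0)                        # sharpest plane per pixel
    aif = np.take_along_axis(stack, best[None], axis=0)[0]  # all-in-focus image
    depth = np.asarray(z_positions)[best]                   # per-pixel depth map
    return aif, depth
```

In practice the per-pixel plane selection would be regularized (e.g., smoothed) to avoid noisy depth estimates, but the selection-by-sharpness step is the essence of the technique.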
The second part of the thesis focuses on a non-destructive analyzer that estimates the size and concentration of subvisible particles (SbVPs) in drug products while keeping the sample intact. Built around a light-sheet microscope with custom housings that compensate for container-induced astigmatism, it uses side-scattered light intensity as a size indicator based on Mie scattering theory. Its functionality is demonstrated on polystyrene beads and biological drug products. Additionally, a new metric, termed strip density, is derived from the same microscope images; it could serve as a more precise and robust size indicator than scattered-light intensity. This new indicator is used to train a particle detection neural network, whose strong performance verifies its effectiveness.
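One way such an intensity-to-size mapping can be established is by calibrating against beads of known diameter. The sketch below assumes a local power-law relation between intensity and diameter, which is a simplification for illustration: full Mie theory is oscillatory in the subvisible size range, and the thesis's actual calibration is not reproduced here. The function names are likewise hypothetical.

```python
import numpy as np

def fit_intensity_size(diameters_um, intensities):
    # Fit log I = a + b * log d from calibration beads of known diameter.
    # np.polyfit returns [slope, intercept] for degree 1.
    b, a = np.polyfit(np.log(diameters_um), np.log(intensities), 1)
    return a, b

def size_from_intensity(intensity, a, b):
    # Invert the calibrated power law to estimate particle diameter (um).
    return np.exp((np.log(intensity) - a) / b)
```

A strip-density-style metric would replace `intensity` with a quantity measured from the particle's image strip, but the calibrate-then-invert workflow stays the same.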
The final part of the thesis addresses an embryo sex classification project, which aims to extract subtle developmental differences between male and female embryos from early-development videos recorded by the Embryoscope. A combined convolutional and recurrent neural network is employed. Although the prediction accuracy of 61% is modest, the deep learning model outperforms both human and random predictions, demonstrating that it extracts some embryo developmental information from the Embryoscope videos.
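The convolutional-recurrent idea can be sketched abstractly: a CNN maps each video frame to a feature vector, and a recurrent network aggregates the sequence into a single classification. The toy forward pass below uses a vanilla RNN cell and random per-frame features purely for illustration; the thesis's actual architecture, layer sizes, and weight values are not specified here, and all names are assumptions.

```python
import numpy as np

def simple_rnn_classify(frame_features, Wx, Wh, b, w_out, b_out):
    """Aggregate per-frame CNN features with a vanilla RNN, then classify.

    frame_features: (T, D) array, one CNN feature vector per video frame
    Wx, Wh, b: recurrent cell weights; w_out, b_out: classifier head
    """
    h = np.zeros(Wh.shape[0])
    for x in frame_features:              # step through the frames in time order
        h = np.tanh(Wx @ x + Wh @ h + b)  # update hidden state
    logit = w_out @ h + b_out             # final hidden state -> class logit
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid probability of one class
```

In a real model the `frame_features` would come from a trained CNN backbone and the recurrent cell would typically be an LSTM or GRU, but the frame-wise encode / temporal aggregate / classify structure is the same.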