The American Cancer Society (ACS) estimates that nearly 20,000 cases of ovarian cancer will be diagnosed in 2022, and that close to 13,000 women will die of the disease this year. The 5-year survival rate for ovarian cancer is more than 93 percent when the disease is diagnosed and treated in its earliest stages, according to the ACS. However, only 20 percent of all cases are found early, meaning in Stage I or Stage II, the organization stated, noting that the survival rate for ovarian cancers diagnosed in Stage III or higher can be as low as 30 percent.
As a study recently published in Photoacoustics pointed out, photoacoustic imaging (PAI) is an emerging imaging modality for non-invasive, non-ionizing, real-time measurement of the optical properties of biological tissue (2022; https://doi.org/10.1016/j.pacs.2022.100420).
"Compared with other optical imaging modalities, PAI can image deeper because acoustic scattering in tissue is an order of magnitude smaller than that of optical scattering. Additionally, PAI inherently possesses ultrasound resolution because received photoacoustic waves are used to form images," wrote the authors, led by Quing Zhu, PhD, Professor of Biomedical Engineering at Washington University in St. Louis' McKelvey School of Engineering. "Photoacoustic tomography (PAT) uses a broad laser beam for illumination and an array of ultrasound transducers to measure the photoacoustic waves generated by the targeted biological tissue."
As Zhu and her colleagues write, quantitative photoacoustic tomography (QPAT) "is a valuable tool in characterizing ovarian lesions for accurate diagnosis." However, they added, accurately reconstructing a lesion's optical absorption distributions from photoacoustic signals measured at multiple wavelengths is challenging, in part because the reconstruction must disentangle the absorption distribution from other factors that shape the measured signals.
Zhu and members of her lab have used a variety of imaging methods in an effort to more accurately diagnose ovarian cancer. Most recently, they developed a machine learning fusion model that is trained on existing ultrasound features of ovarian lesions to recognize whether a lesion is benign or cancerous, drawing on images reconstructed with photoacoustic tomography.
Study Details
For this study, Zhu and her colleagues used the new machine learning fusion model to study 35 patients with more than 600 regions of interest. In the study, ultrasound images of ovarian lesions were used to fine-tune the ResNet-18 machine learning model.
"Since we only had 1,200 ultrasound images after augmentation, we chose to use a pre-trained ResNet-18 to avoid training from scratch with random-initial weights," the researchers wrote. "Three cross-validations were used to evaluate the performance of the model."
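The paper does not include code, but the cross-validation scheme the authors describe can be sketched in plain Python. In this illustrative version, each image lands in the validation set exactly once across the three folds; the fold count matches the paper, while the data, shuffling, and seed are hypothetical.

```python
import random

def three_fold_splits(items, seed=0):
    """Partition items into 3 folds and yield (train, validation) pairs.

    A minimal sketch of three-fold cross-validation; the shuffle seed and
    toy data below are illustrative, not taken from the paper.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    folds = [items[i::3] for i in range(3)]
    for k in range(3):
        validation = folds[k]
        train = [x for j, fold in enumerate(folds) if j != k for x in fold]
        yield train, validation

# Stand-ins for image IDs; each appears in validation exactly once.
images = list(range(12))
splits = list(three_fold_splits(images))
```

Evaluating the model once per fold and averaging the three validation scores gives a less optimistic performance estimate than a single train/test split, which matters with a dataset this small.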
To reconstruct PAT images, the team designed an ultrasound-enhanced Unet model. It was trained on simulation and phantom data to learn the PAT reconstruction process, using a mean-squared-error loss function. The ResNet-18 model was trained on clinical ultrasound images for classification, using a binary cross-entropy loss.
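The two loss functions named above are standard and can be written out directly. This numpy sketch shows both; the toy arrays standing in for a reconstructed PAT image and for classifier outputs are hypothetical, and the actual training code is not given in the paper.

```python
import numpy as np

def mse_loss(pred, target):
    """Mean-squared error, the loss used to train the PAT reconstruction model."""
    return float(np.mean((pred - target) ** 2))

def bce_loss(prob, label, eps=1e-7):
    """Binary cross entropy, the loss used for benign-vs-malignant classification.

    Probabilities are clipped away from 0 and 1 to keep the logs finite.
    """
    p = np.clip(prob, eps, 1 - eps)
    return float(np.mean(-(label * np.log(p) + (1 - label) * np.log(1 - p))))

# Illustrative values only: a tiny "reconstruction" vs. ground truth,
# and predicted class probabilities vs. binary labels.
recon, truth = np.array([0.2, 0.8]), np.array([0.0, 1.0])
probs, labels = np.array([0.9, 0.1]), np.array([1.0, 0.0])
```

MSE penalizes pixel-wise reconstruction error, while BCE penalizes confident misclassifications heavily, which is why each is matched to its respective task.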
After training for 100 epochs, the ResNet-18 model reliably extracted morphology features from the ultrasound images and classified lesions accurately. The features extracted by the ResNet-18 model were then integrated into the Unet model to complete PAT reconstruction, so the US-enhanced Unet could reconstruct PAT images informed by ovarian morphology features. Overall, the model's accuracy was 90 percent, according to the authors.
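The paper describes integrating classifier features into the Unet but not the exact mechanism. One common way to inject a feature vector into a convolutional decoder is to tile it over the spatial grid and concatenate it along the channel axis; the sketch below shows that pattern with hypothetical shapes, and the authors' actual fusion step may differ.

```python
import numpy as np

def fuse_features(pat_map, us_features):
    """Tile a US morphology feature vector over the spatial grid and
    concatenate it to a PAT feature map along the channel axis.

    Illustrative fusion only; shapes and values are assumptions,
    not taken from the paper.
    """
    c, h, w = pat_map.shape
    tiled = np.broadcast_to(us_features[:, None, None],
                            (us_features.size, h, w))
    return np.concatenate([pat_map, tiled], axis=0)

pat_map = np.zeros((8, 4, 4))        # hypothetical Unet feature map (C, H, W)
us_feat = np.arange(3, dtype=float)  # hypothetical ResNet-18 feature vector
fused = fuse_features(pat_map, us_feat)
```

After fusion, every spatial location in the decoder sees the same lesion-level morphology features alongside its local PAT features, letting the reconstruction be conditioned on lesion shape and size.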
There are two primary challenges in ovarian cancer detection and diagnosis, Zhu told Oncology Times.
"The first is an earlier cancer detection to save lives, and the second is an accurate diagnosis to reduce unnecessary surgeries on patients with low malignancy risk to reduce surgery complications and health care costs," Zhu noted. "We are tackling both problems. This study is focused on accurate diagnosis of ovarian cancer."
With this study, Zhu and her co-authors introduced a fusion approach that integrates two machine learning models: one operating on ultrasound images to extract ovarian lesion morphology features such as shape and size, and one operating on photoacoustic images to extract the lesion's functional features. She added that the fusion model improved ovarian lesion diagnostic accuracy to 0.9, compared with 0.8 using ultrasound alone.
Going forward, Zhu and her team intend to validate the model with more patient data and to provide an improved diagnosis to radiology and obstetrics-gynecology teams "for accurate diagnosis of ovarian lesion malignancy," she noted. "We also plan to explore and improve the model for the detection and diagnosis of earlier ovarian cancers."
Mark McGraw is a contributing writer.