By Tami Freeman, PhD, Physics World editor (medical and biophysics)

December 17, 2018 -- The introduction of dedicated breast CT systems can help improve breast cancer detection by overcoming the tissue superposition problems associated with two-dimensional mammography. To develop this technology further, computer simulations of the image acquisition process provide an invaluable tool. And, ideally, such simulations should employ digital phantoms that reflect the 3D structure of human breast tissue.

Voxelized digital phantoms based on clinical breast CT images have the advantage of incorporating the realism and variability of actual patient data. Their accuracy is, however, limited by the spatial resolution of the imaging device. If the clinical images do not fully capture the long edges and fine strands of glandular tissue, glandularity is lost in the digital phantoms, limiting the accuracy of all subsequent analysis.

To accurately simulate breast imaging, phantoms with higher spatial resolution than that of the simulated projections are needed. Now, a team from Radboud University Medical Center (RUMC) in Nijmegen, the Netherlands, and the University of Trieste in Italy has used machine-learning algorithms to generate such "superresolution" digital breast phantoms, which have a higher resolution than that of the system used to acquire the underlying patient images.

"We developed these phantoms to be used for computer simulations of breast imaging," explained lead author Ioannis Sechopoulos, PhD, from RUMC. "We can simulate the acquisition of a dedicated breast CT or breast tomosynthesis image using these phantoms as representative of the patient's breast. In this way, we can optimize the design and settings of new imaging systems before they are actually built and verify the accuracy of new image analysis methods."

Upsampled images

Sechopoulos and colleagues acquired images using a clinical breast CT system with a nominal pixel size of 194 µm and reconstructed them with a voxel size of 273 µm. Aiming to increase the phantom resolution by a factor of four over the patient image, their first step was to reduce the voxel size in the clinical image from 273 µm to 68 µm.
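As a rough illustration of this first step, reducing the voxel size amounts to upsampling the reconstructed volume. The sketch below uses nearest-neighbour voxel replication in NumPy on a made-up toy volume; it is not the authors' code, and it also shows why this step alone cannot add detail:

```python
import numpy as np

# Illustrative segmented low-resolution volume: 1 = glandular, 0 = adipose.
# (Shape and values are made up for demonstration.)
low_res = np.array([[[0, 1],
                     [1, 1]]], dtype=np.uint8)  # 1 x 2 x 2 volume at 273 um

factor = 4  # 273 um -> ~68 um voxels

# Nearest-neighbour upsampling: replicate each voxel along every axis.
high_res = (low_res.repeat(factor, axis=0)
                   .repeat(factor, axis=1)
                   .repeat(factor, axis=2))

print(high_res.shape)  # (4, 8, 8)

# Pure replication leaves the fraction of glandular voxels (the
# glandularity) unchanged -- no fine glandular detail is recovered.
print(low_res.mean() == high_res.mean())  # True
```

This is exactly the limitation the next step addresses: the extra voxels exist, but the tissue detail they should contain still has to be restored.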

Example of an original, segmented breast CT slice from a patient case (a) and the corresponding slice of the generated superresolution phantom (b). Panels c and d show a magnification of the same region of both slices.

Simply reducing voxel size, however, does not recover fine glandular details. Instead, first author Marco Caballo, a doctoral student at RUMC, used a machine-learning-based regression algorithm to calculate the glandularity at the original resolution, the expected glandularity at 68 µm, and the estimated glandular tissue loss. The team then employed a second algorithm -- a convolutional neural network -- to iteratively recover glandular details by mapping between low- and high-resolution images until reaching the predicted glandularity. (Physics in Medicine & Biology, 13 November 2018, Vol. 63:22)

Both algorithms were trained on high-resolution images of human breast tissue, acquired by a synchrotron system with a nominal pixel size of 60 µm. "The availability of high-resolution breast tissue CT images made this work more realistic than using simulated data," Sechopoulos said.

Superresolution phantom

Superresolution phantoms generated using this approach demonstrated that glandular detail loss tended to increase with the number of glandular tissue voxels. For example, the total absolute loss from 60 µm to 480 µm was 1.52% for volumes of interest (VOIs) starting with a glandularity of 8.03%, and it was 7.7% for VOIs starting with a glandularity of 48.04%.
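The metric at the centre of these comparisons, glandularity, is simply the fraction of voxels in a VOI that are segmented as glandular tissue. A minimal sketch, assuming a binary segmentation where glandular voxels are labelled 1 (the labelling and the synthetic VOI are illustrative choices, not taken from the paper):

```python
import numpy as np

def glandularity(volume):
    """Fraction of voxels labelled as glandular tissue.

    Assumes a segmented binary volume: glandular voxels are 1,
    adipose voxels are 0, so the mean is the glandular fraction.
    """
    return volume.mean()

# Synthetic VOI with roughly 8% glandular voxels (illustrative only,
# loosely matching the lower-glandularity VOIs mentioned above).
rng = np.random.default_rng(0)
voi = (rng.random((64, 64, 64)) < 0.08).astype(np.uint8)

print(f"glandularity: {glandularity(voi) * 100:.2f}%")
```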

To evaluate their proposed method, the researchers used 10 clinical breast CT images (reconstructed at 273 µm) to generate superresolution phantoms with 68 µm voxels. They then downsampled these phantoms back to 273 µm. Comparing calculated glandularity values between the downsampled superresolution phantoms and original breast CT images resulted in an average error of 0.27%.
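The logic of this round-trip check can be sketched as follows. The block-averaging downsampler and the synthetic volumes below are illustrative stand-ins for the paper's re-binning and patient data; with a mock phantom built by pure replication, the round trip is lossless, whereas the study's learned phantoms reproduce the original glandularity to within 0.27% on average:

```python
import numpy as np

def downsample(volume, factor):
    """Block-average a binary volume by an integer factor per axis,
    then threshold at 0.5 to return a binary segmentation.
    (A simple stand-in for re-binning back to 273 um voxels.)"""
    z, y, x = (s // factor for s in volume.shape)
    v = volume[:z * factor, :y * factor, :x * factor]
    blocks = v.reshape(z, factor, y, factor, x, factor)
    return (blocks.mean(axis=(1, 3, 5)) >= 0.5).astype(np.uint8)

rng = np.random.default_rng(1)
# Mock "clinical" volume at the coarse resolution (illustrative only).
original = (rng.random((16, 16, 16)) < 0.3).astype(np.uint8)
# Mock fine-voxel phantom: here just voxel replication by a factor of 4.
phantom = (original.repeat(4, axis=0)
                   .repeat(4, axis=1)
                   .repeat(4, axis=2))

round_trip = downsample(phantom, 4)
error = abs(round_trip.mean() - original.mean()) * 100
print(f"glandularity error: {error:.2f}%")  # 0.00% for this lossless mock
```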

Lead author Ioannis Sechopoulos (back, far left), first author Marco Caballo (front, far right) and other members of the Advanced X-ray Tomographic Imaging (AXTI) group at Radboud University Medical Center.

The team also reconstructed 10 clinical images at both 273 µm and 194 µm. They used the algorithm to upsample the 273-µm images to 194 µm and then compared the glandularity in the two sets of 194-µm images. This, too, resulted in a small average error: 0.15%.

Finally, to evaluate the realism of the generated phantoms, the researchers showed pairs of images to an experienced breast radiologist, who was asked to distinguish true images from the corresponding phantoms. The radiologist's responses were essentially random (47% accuracy), confirming the realism of the generated superresolution breast phantoms.

The researchers are now working to develop dynamic contrast-enhanced dedicated breast CT, which they believe will prove useful for many stages of breast cancer care. "So we developed these phantoms, together with previous work in which we added the dynamic aspects of contrast enhancement to the phantom, to optimize the image acquisition and analysis stages of this new modality," Sechopoulos told Physics World.

© IOP Publishing Limited. Republished with permission from Physics World, a website that helps scientists working in academic and industrial research stay up to date with the latest breakthroughs in physics and interdisciplinary science.


Copyright © 2018 AuntMinnieEurope.com
 
