In a paper published online recently in the European Journal of Radiology, a team led by Dr. Fabian Rengier of the University of Heidelberg reported on how their use of 3D postprocessing software in a training course for medical students led to significant improvements in radiological knowledge, diagnostic skills, and spatial ability. Interestingly, diagnostic skills for imaging modalities not covered in the course also climbed 14%.
"The integration of interactive three-dimensional image postprocessing software into undergraduate radiology education effectively improves radiological reasoning, diagnostic skills, and visual-spatial ability, and thereby even diagnostic skills for imaging modalities not included in the course," the authors wrote.
Since it has been suggested that interacting with 3D computer visualizations and 3D CT and MRI datasets may improve spatial ability, integrating interactive 3D postprocessing software into undergraduate radiology teaching could be a promising approach to improving both spatial ability and radiological skills. As a result, students' deficiencies in image interpretation could be remedied, according to the authors (EJR, 13 February 2013).
The research team sought to test their hypothesis that a hands-on radiology course for medical students that utilized interactive 3D postprocessing software could produce these improvements.
The course, which consisted of seven seminars held on a weekly basis, was given by experienced radiologists and collegiate tutors to 25 fourth- and fifth-year medical students. During the course, students learned to systematically analyze cross-sectional imaging data and correlate the 2D images with 3D reconstructions.
Each workstation in the classroom had one computer equipped with AnatomyMap Basic Edition 3D image postprocessing software (VitalRecon, Frankfurt, Germany) and one additional screen connected to a demonstration computer. Standard DICOM images could be transferred to all computers from the institution's PACS after patient data had been anonymized.
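As an illustration of the anonymization step described above, the logic might look something like the following minimal sketch, which blanks common identifying tags in a DICOM header represented as a plain dictionary. The tag list and the `anonymize` helper are illustrative assumptions, not details from the study; real pipelines would use a dedicated DICOM library.

```python
# Minimal sketch of DICOM header anonymization before PACS export.
# The tag list and dict-based layout are illustrative assumptions,
# not the study's actual pipeline.

IDENTIFYING_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName",
}

def anonymize(header: dict) -> dict:
    """Return a copy of the header with identifying tags blanked."""
    return {
        tag: ("ANONYMIZED" if tag in IDENTIFYING_TAGS else value)
        for tag, value in header.items()
    }

sample = {
    "PatientName": "Doe^Jane",
    "PatientID": "12345",
    "Modality": "CT",           # clinically relevant tags are preserved
    "SliceThickness": "1.0",
}
clean = anonymize(sample)
```

The point of the sketch is simply that identifying fields are overwritten while clinically relevant tags (modality, slice thickness, and so on) pass through untouched.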
The software offered features such as interactive viewing of source data; multiplanar reformation with axial, sagittal, and coronal views, as well as arbitrarily adjusted double-oblique views; 3D volume rendering with visualization of, for example, bones or contrast-enhanced vessels; and advanced 3D reconstructions such as curved multiplanar reformations and maximum intensity projections.
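Of the reconstructions listed above, maximum intensity projection is the simplest to sketch: for each pixel position, it keeps the brightest voxel along the projection axis, which is why contrast-enhanced vessels stand out. A toy pure-Python version (the 2×2×2 volume here is an invented example, not study data):

```python
# Toy maximum intensity projection (MIP) along the z-axis.
# volume[z][y][x] holds voxel intensities; the output keeps, for each
# (y, x) position, the maximum intensity across all slices z.

def mip_axial(volume):
    depth = len(volume)
    rows = len(volume[0])
    cols = len(volume[0][0])
    return [
        [max(volume[z][y][x] for z in range(depth)) for x in range(cols)]
        for y in range(rows)
    ]

# Invented two-slice volume: bright "vessel" voxels dominate the projection.
volume = [
    [[10, 20], [30, 40]],   # slice z = 0
    [[50, 5],  [15, 60]],   # slice z = 1
]
projection = mip_axial(volume)  # -> [[50, 20], [30, 60]]
```

Clinical implementations work on full CT or MR volumes and allow arbitrary projection axes, but the per-pixel maximum is the whole idea.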
At the beginning of the course, students took a 64-question multiple-choice test to assess their radiological knowledge, diagnostic skills, and visual-spatial ability. Upon conclusion of the course, the students took the same test, and the pre- and post-course results were compared to assess improvement. Students were also surveyed to record their anonymous evaluation of the course.
Mean number of correct test answers (± standard deviation) before and after the course:

| Test section | Before course | After course |
|---|---|---|
| Overall test (64 questions) | 36.9 ± 4.8 | 49.5 ± 5.4 |
| Radiological knowledge (12 questions) | 5.0 ± 2.0 | 9.3 ± 1.7 |
| Total skills, including CT, MRI, other imaging, and anatomy (36 questions) | 21.8 ± 3.2 | 28.3 ± 3.0 |
| CT, MRI (12 questions) | 5.8 ± 1.5 | 10.5 ± 1.1 |
| Other imaging (12 questions) | 7.2 ± 1.5 | 8.9 ± 1.0 |
| Anatomy (12 questions) | 8.7 ± 1.5 | 8.9 ± 1.9 |
| Spatial ability (16 questions) | 10.1 ± 2.6 | 11.9 ± 2.4 |
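The 14% gain quoted earlier for modalities not covered in the course can be reproduced from these figures: the "other imaging" score rose from 7.2 to 8.9 correct answers out of 12 questions. A quick check of that arithmetic:

```python
# Reproduce the ~14% improvement for "other imaging" from the table:
# the gain in mean correct answers divided by the number of questions.

pre, post, questions = 7.2, 8.9, 12

improvement = (post - pre) / questions   # fraction of the 12 questions gained
percent = round(improvement * 100)
# (8.9 - 7.2) / 12 = 0.1417 -> about 14%
```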
The researchers noted that test performance also correlated with the students' own impressions of their radiological knowledge and skills.
"Furthermore, students felt better prepared for everyday clinical practice," the authors wrote.
The study findings support wider use of hands-on radiology courses for medical students to overcome deficiencies in image interpretation, the authors concluded.
"Further research is needed to fully understand the relationship between visual-spatial abilities and radiological interpretation skills regarding both cross-sectional and radiography images," they wrote.