Researchers from the Fraunhofer Institute for Digital Medicine MEVIS will present the latest technologies aimed at optimizing clinical workflows and improving patient care at the annual meeting of the Radiological Society of North America (RSNA 2024), held from December 1 to 5 in Chicago.
With innovative software solutions for radiology, researchers at Fraunhofer MEVIS support physicians, medical institutions, and companies in digital medicine. At RSNA 2024, they are presenting four solutions that optimize the radiological workflow and lead to better outcomes.
Fraunhofer MEVIS experts will present the SAFIR (Software Assistance for Interventional Radiology) platform at RSNA 2024 in Chicago. This integrated system supports interventional radiology from preoperative planning through real-time guidance during the procedure to efficient postoperative assessment. SAFIR provides segmentation algorithms for detecting structures in radiological images and registration algorithms for overlaying two different images of the same site. The platform has a flexible design: the user interface can be used either as a stand-alone solution or in conjunction with external algorithms, and individual algorithms can also be extracted for independent use.
At RSNA 2024, researchers will demonstrate three applications of SAFIR. In local minimally invasive tumor therapy, the system gives physicians on-screen confirmation of whether thermal ablation has completely destroyed a tumor. In endovascular interventions, the algorithms generate 3D information from 2D X-ray images; overlaying the X-ray projections with this 3D information enables real-time navigation through blood vessels. The segmentation and registration algorithms will also be shown planning automated access routes for spinal interventions, since they can accurately identify and fuse vertebral structures in radiological images. The reliability of SAFIR’s algorithms has been validated in clinical trials.
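To give a concrete picture of what overlaying two images of the same site involves, the following minimal sketch rigidly registers two volumes with the open-source SimpleITK library. It illustrates the general principle only and is not SAFIR's own code; the file names are placeholders.

```python
import SimpleITK as sitk

# Load two images of the same anatomical site (file names are placeholders).
fixed = sitk.ReadImage("preoperative_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("intraoperative_ct.nii.gz", sitk.sitkFloat32)

# Initialize a rigid transform by aligning the geometric centers of the images.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Configure a standard intensity-based registration.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image's space so the two can be overlaid.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```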
This year, Fraunhofer MEVIS will demonstrate the hardware-independent MRI framework gammaSTAR with a setup of multiple 3D-printed MRI scanners that can be controlled interactively.
The gammaSTAR framework enables the generation of universal MRI pulse sequences for controlling MRI scanners. Driver software translates these universal sequences into manufacturer-specific commands to ensure compatibility across different MRI systems. The images generated this way are more directly comparable than those produced with device-specific pulse sequences, which simplifies the planning of multi-center studies because they are no longer restricted to specific devices or manufacturers. Using a universal MRI pulse sequence also eliminates the time-consuming porting of sequence libraries after a software update by the MRI manufacturer. An intuitive user interface with a state-of-the-art pulse sequence library enables even users without training in MRI physics to design their own sequences; the library provides sequence modules that can be combined as needed. gammaSTAR is a comprehensive solution supporting the entire process, from prototyping MRI pulse sequences to converting them into production-ready versions for use in clinical studies.
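As a rough illustration of this modular idea, the sketch below shows how a vendor-neutral sequence could be assembled from reusable building blocks and handed to a driver that translates it for a specific scanner. All module and function names are hypothetical and do not reflect gammaSTAR's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical, vendor-neutral building blocks; gammaSTAR's real module
# library and interfaces will differ.
@dataclass
class RfPulse:
    flip_angle_deg: float
    duration_ms: float

@dataclass
class Gradient:
    axis: str                # "x", "y", or "z"
    amplitude_mT_per_m: float
    duration_ms: float

@dataclass
class Readout:
    samples: int
    bandwidth_hz: float

@dataclass
class Delay:
    duration_ms: float

def gradient_echo(te_ms: float, tr_ms: float) -> list:
    """One repetition of a simple gradient-echo sequence, assembled from modules."""
    return [
        RfPulse(flip_angle_deg=15, duration_ms=1.0),
        Gradient(axis="y", amplitude_mT_per_m=10.0, duration_ms=1.5),  # phase encoding
        Delay(duration_ms=te_ms),                                      # wait until echo time
        Readout(samples=256, bandwidth_hz=50_000),
        Delay(duration_ms=tr_ms - te_ms),                              # fill repetition time
    ]

def run_on_scanner(sequence: list, vendor: str) -> None:
    """Stand-in for a driver that maps the universal description onto one
    manufacturer's command set; here it only prints placeholders."""
    for module in sequence:
        print(f"[{vendor}] execute {module}")

run_on_scanner(gradient_echo(te_ms=5.0, tr_ms=20.0), vendor="ExampleVendor")
```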
The innovative AVIS visualization algorithm uses volume rendering to create realistic 3D representations of medical imaging data without image noise and at high frame rates, making it particularly suitable for virtual reality or augmented reality applications. In this process, virtual light rays are passed through a dataset, such as a CT scan of the abdomen. Local lighting is estimated at specific points along the light rays. From this data, the algorithm calculates realistic lighting and shadow effects for the entire dataset. This realistic representation enables clinicians to better estimate the distances between structures, such as how close a tumor is to a vital blood vessel.
Lighting calculations typically require significant computational resources. Because AVIS performs these calculations adaptively, that is, only where necessary, it is highly efficient and compatible with a wide range of systems. At RSNA 2024, researchers will demonstrate AVIS in combination with automatic segmentations from deep learning models, using examples of the liver, liver vessels, and pancreas in the context of visceral surgery.
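The following toy example, written in plain NumPy, illustrates the underlying principle of direct volume rendering: rays are marched through the volume, local lighting is estimated from image gradients, and rays stop early once they are saturated. It is a didactic sketch, not the AVIS algorithm, and the opacity transfer function is an arbitrary placeholder.

```python
import numpy as np

def render(volume, light_dir=(0.3, 0.3, 1.0)):
    """Toy front-to-back ray marching along the z-axis of a scalar volume."""
    vol = volume.astype(np.float32)
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-6)  # normalize to [0, 1]

    light = np.asarray(light_dir, np.float32)
    light /= np.linalg.norm(light)

    # Local gradients act as surface normals for a simple diffuse lighting term.
    g0, g1, g2 = np.gradient(vol)
    magnitude = np.sqrt(g0**2 + g1**2 + g2**2) + 1e-6
    diffuse = np.clip((g0 * light[0] + g1 * light[1] + g2 * light[2]) / magnitude, 0.0, 1.0)

    height, width, depth = vol.shape
    color = np.zeros((height, width), np.float32)
    alpha = np.zeros((height, width), np.float32)

    # One ray per pixel, stepping front to back through the z-slices.
    for z in range(depth):
        opacity = np.clip(vol[:, :, z] * 0.1, 0.0, 1.0)  # placeholder transfer function
        shading = 0.2 + 0.8 * diffuse[:, :, z]           # ambient plus diffuse term
        color += (1.0 - alpha) * opacity * shading
        alpha += (1.0 - alpha) * opacity
        if np.all(alpha > 0.99):                         # early termination: skip work
            break                                        # once all rays are opaque

    return color

# Example: render a synthetic 64x64x64 volume containing a bright sphere.
zz, yy, xx = np.mgrid[:64, :64, :64]
sphere = ((xx - 32)**2 + (yy - 32)**2 + (zz - 32)**2 < 20**2).astype(np.float32)
image = render(sphere)
```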
Before an algorithm can be trained, data must be prepared for the learning process. Fraunhofer MEVIS will introduce a software tool for efficiently curating data for training segmentation algorithms. A time-consuming aspect of curation is the manual annotation of images, in which experts mark the structures the algorithm should extract.
The new tool reduces this annotation effort by helping users focus only on the images that will actually advance the segmentation algorithm. To this end, the AI identifies the cases it is most uncertain about and generates a list sorted by this uncertainty, so that annotators can concentrate on the most relevant cases. The algorithm then learns from the regions previously flagged as uncertain and corrected by the experts, and with each cycle of this iterative process the results are reviewed and corrected again. Fraunhofer researchers originally developed this training loop for radiologists in clinical research. The gradually refined AI model is then intended to extract these structures automatically and reliably in practical applications.
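One common way to implement such uncertainty-driven sorting is to score each case by the average entropy of the model's per-voxel class probabilities and put the highest-scoring cases at the top of the annotation worklist. The sketch below shows this idea in plain NumPy; it illustrates the general approach and is not the actual criterion used by the Fraunhofer tool.

```python
import numpy as np

def ranked_by_uncertainty(cases):
    """Rank cases so that those the model is least sure about come first.

    `cases` maps a case ID to per-voxel class probabilities of shape
    (num_classes, ...) produced by the current segmentation model.
    """
    scores = {}
    for case_id, probs in cases.items():
        p = np.clip(probs, 1e-8, 1.0)
        entropy = -(p * np.log(p)).sum(axis=0)   # per-voxel predictive entropy
        scores[case_id] = float(entropy.mean())  # one uncertainty score per case
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: two classes, 8x8x8 volumes, random "predictions".
rng = np.random.default_rng(0)
fake = {f"case_{i}": rng.dirichlet([1, 1], size=(8, 8, 8)).transpose(3, 0, 1, 2)
        for i in range(3)}
worklist = ranked_by_uncertainty(fake)  # annotate these cases first
```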
You can find the Fraunhofer Institute for Digital Medicine MEVIS at RSNA 2024 in the South Hall, Level 3, Booth 2609.