Research News

Technology/Materials

High-Precision Superimposition of X-ray Fluoroscopic Images and 3D CT Data

X-ray fluoroscopic images are commonly employed in orthopedic surgery. However, surgeons often struggle to accurately perceive the three-dimensional (3D) shape of the target area when working with two-dimensional (2D) X-ray images. Addressing this challenge, a team of researchers from the University of Tsukuba has developed a technology capable of automatically and precisely overlaying 3D computed tomography (CT) data onto intraoperative X-ray fluoroscopic images.

Tsukuba, Japan—The X-ray fluoroscopy machine is a medical device frequently used in orthopedic surgery. Despite its imaging capabilities, physicians must rely heavily on experience and knowledge to mentally align the 3D shape of the target area with the 2D X-ray image. If X-ray images captured during surgery could be superimposed onto a pre-surgical 3D model (CT model) obtained from a CT scan, the cognitive load of visualizing the 3D shape from the 2D image would be reduced, enabling surgeons to focus more on the surgical procedure. To achieve this goal, the superimposition of the X-ray image and CT model data should "work seamlessly, even when focusing on a specific body part (local image)," and the process should be "fully automated." In this study, the researchers met these criteria by employing a convolutional neural network that regresses the spatial (scene) coordinates of the X-ray image, that is, the 3D position in the CT frame corresponding to each image point. By combining this correspondence formulation with deep learning techniques, they achieved highly accurate superimposition even for local X-ray images.
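The overall pipeline can be pictured as follows: a network predicts, for each pixel of the intraoperative X-ray image, the corresponding 3D coordinate in the CT frame, and the resulting 2D-3D correspondences are passed to a perspective-n-point (PnP) solver to recover the rigid pose that overlays the CT model on the X-ray image. The Python sketch below only illustrates this idea; the network architecture, layer sizes, camera intrinsics, and the use of OpenCV's solvePnPRansac are assumptions made for illustration, not the authors' implementation.

# Hedged sketch of scene coordinate regression for X-ray/CT registration.
# Network, layer sizes, intrinsics, and solver choice are illustrative
# assumptions, not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import cv2


class SceneCoordNet(nn.Module):
    """Toy fully convolutional net: one-channel X-ray in, per-pixel 3D CT coordinates out."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1),  # 3 output channels = (x, y, z) in the CT frame
        )

    def forward(self, xray):
        return self.net(xray)


def register(xray, net, K):
    """Regress scene coordinates, then solve a PnP problem for the rigid pose."""
    with torch.no_grad():
        coords = net(xray)[0].permute(1, 2, 0).cpu().numpy()  # H x W x 3 CT coordinates
    h, w, _ = coords.shape
    # Pairing the pixel grid with the regressed 3D points gives 2D-3D correspondences.
    u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    pts2d = np.stack([u.ravel(), v.ravel()], axis=1)
    pts3d = coords.reshape(-1, 3).astype(np.float32)
    # RANSAC-based PnP rejects outlier correspondences before estimating rotation/translation.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return ok, rvec, tvec


if __name__ == "__main__":
    net = SceneCoordNet().eval()
    xray = torch.rand(1, 1, 64, 64)  # placeholder fluoroscopic image
    K = np.array([[1000.0, 0, 32], [0, 1000.0, 32], [0, 0, 1]])  # assumed C-arm intrinsics
    print(register(xray, net, K)[0])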


The efficacy of this technique was verified using a dataset that included CT data and X-ray images of the pelvis. The superimposition of X-ray images and CT data yielded an error of 3.79 mm (standard deviation 1.67 mm) for simulated X-ray images and 9.65 mm (standard deviation 4.07 mm) for actual X-ray images.
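Reporting a registration error in millimetres with its standard deviation typically means measuring, per test image, how far points transformed by the estimated pose land from the same points transformed by the ground-truth pose, then averaging. The short sketch below shows one plausible way to compute such a landmark-based target registration error; the metric, landmark set, and pose perturbation are assumptions for illustration, not necessarily the paper's exact evaluation protocol.

# Hedged sketch: computing a mean/standard-deviation registration error in mm
# from estimated vs. ground-truth rigid transforms. The landmark-based target
# registration error is an assumed metric, not necessarily the paper's protocol.
import numpy as np


def rigid_transform(points, R, t):
    """Apply a rotation R (3x3) and translation t (3,) to Nx3 points in mm."""
    return points @ R.T + t


def target_registration_error(points, R_est, t_est, R_gt, t_gt):
    """Mean distance (mm) between points mapped by estimated and ground-truth poses."""
    diff = rigid_transform(points, R_est, t_est) - rigid_transform(points, R_gt, t_gt)
    return np.linalg.norm(diff, axis=1).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    landmarks = rng.uniform(-100, 100, size=(50, 3))  # synthetic pelvic landmarks, mm
    errors = []
    for _ in range(20):  # one error value per test image
        R_gt, t_gt = np.eye(3), np.zeros(3)
        # Perturbed estimate standing in for the network + PnP output.
        R_est, t_est = np.eye(3), rng.normal(0, 2, size=3)
        errors.append(target_registration_error(landmarks, R_est, t_est, R_gt, t_gt))
    print(f"mean error {np.mean(errors):.2f} mm (SD {np.std(errors):.2f} mm)")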


###
This work was partially supported by JSPS KAKENHI Grant Number JP23K08618.



Original Paper

Title of original paper:
X-Ray to CT Rigid Registration Using Scene Coordinate Regression
Journal:
The 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)
DOI:
10.1007/978-3-031-43999-5_74

Correspondence

Professor KITAHARA Itaru
Center for Computational Sciences (CCS), University of Tsukuba

Pragyan SHRESTHA
Doctoral Program in Empowerment Informatics, School of Integrative and Global Majors, University of Tsukuba

Professor YOSHII Yuichi
Tokyo Medical University Ibaraki Medical Center


Related Link

Center for Computational Sciences (CCS), University of Tsukuba



