NeuralBoneReg: A Novel Self-Supervised Method for Robust and Accurate Multi-Modal Bone Surface Registration

Luohong Wu1, Matthias Seibold1, Nicola A. Cavalcanti1, Yunke Ao1,2, Roman Flepp1, Aidana Massalimova1, Lilian Calvet1, Philipp Fürnstahl1,2
1 Balgrist University Hospital · 2 ETH Zurich

Abstract

Background: In computer- and robot-assisted orthopedic surgery (CAOS), patient-specific surgical plans are generated from preoperative medical imaging data to define target locations and implant trajectories. During surgery, these plans must be precisely transferred to the intraoperative setting to guide accurate execution. The accuracy and success of this transfer rely on cross-registration between preoperative and intraoperative data. However, the substantial heterogeneity across imaging modalities and devices makes this registration challenging and error-prone. Consequently, more robust and accurate methods for automatic, modality-agnostic multimodal registration of bone surfaces would have a substantial clinical impact.

Methods: We propose NeuralBoneReg, a self-supervised, surface-based framework for bone surface registration using 3D point clouds as a modality-agnostic representation. NeuralBoneReg comprises two key components: an implicit neural unsigned distance field (UDF) module and a multilayer perceptron (MLP)-based registration module. The UDF module learns a neural representation of the preoperative bone model. The registration module performs both global initialization and local refinement by generating a set of transformation hypotheses to register the intraoperative point cloud with the preoperative neural UDF. Unlike state-of-the-art (SOTA) supervised registration methods, NeuralBoneReg operates in a self-supervised manner, without requiring inter-subject training data with ground-truth transformations. We evaluated NeuralBoneReg against baseline methods on two publicly available multi-modal datasets: a CT–ultrasound dataset of the fibula and tibia (UltraBones100k) and a CT–RGB-D dataset of spinal vertebrae (SpineDepth). The evaluation also includes a newly introduced CT–ultrasound dataset of cadaveric subjects containing femur and pelvis (UltraBones-Hip), which will be made publicly available.
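To make the hypothesis-scoring idea concrete, the following is a minimal, illustrative Python/PyTorch sketch rather than the released implementation: it assumes a plain MLP as the unsigned distance field and ranks candidate rigid transforms by the mean UDF value of the transformed intraoperative points. The hypothesis generator, training losses, and local refinement are omitted, and all module and function names are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' implementation): an MLP that
# predicts an unsigned distance to the preoperative bone surface, and a scoring
# step that ranks candidate rigid transforms by how close the transformed
# intraoperative points lie to the UDF zero level set.
import torch
import torch.nn as nn


class UnsignedDistanceField(nn.Module):
    """MLP mapping a 3D point to its unsigned distance from the bone surface."""

    def __init__(self, hidden: int = 256, layers: int = 6):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU(inplace=True))
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) -> (N,) non-negative distances
        return torch.abs(self.net(xyz)).squeeze(-1)


def apply_rigid(points: torch.Tensor, R: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Apply a rigid transform (R, t) to an (N, 3) point cloud."""
    return points @ R.T + t


def score_hypotheses(udf: UnsignedDistanceField,
                     intraop_points: torch.Tensor,
                     rotations: torch.Tensor,
                     translations: torch.Tensor) -> int:
    """Return the index of the hypothesis with the lowest mean UDF value."""
    scores = []
    with torch.no_grad():
        for R, t in zip(rotations, translations):
            warped = apply_rigid(intraop_points, R, t)
            scores.append(udf(warped).mean())
    return int(torch.stack(scores).argmin())


if __name__ == "__main__":
    udf = UnsignedDistanceField()
    intraop = torch.randn(2048, 3)        # stand-in intraoperative point cloud
    Rs = torch.eye(3).repeat(8, 1, 1)     # 8 dummy rotation hypotheses
    ts = torch.zeros(8, 3)                # 8 dummy translation hypotheses
    print("best hypothesis:", score_hypotheses(udf, intraop, Rs, ts))
```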

Results: Quantitative and qualitative results demonstrate that NeuralBoneReg matches or surpasses existing methods across all datasets. It achieves a mean relative rotation error (RRE) of 1.68° and a mean relative translation error (RTE) of 1.86 mm on the UltraBones100k dataset; a mean RRE of 1.88° and a mean RTE of 1.89 mm on the UltraBones-Hip dataset; and a mean RRE of 3.79° and a mean RTE of 2.45 mm on the SpineDepth dataset. These results highlight the method's strong generalizability across different anatomies and imaging modalities.
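For reference, the reported metrics follow the common definitions of relative rotation and translation error between an estimated and a ground-truth rigid transform; the sketch below illustrates these standard formulas (the paper's exact conventions may differ slightly).

```python
# Illustrative sketch of the standard RRE/RTE definitions (assumed, not taken
# from the paper): RRE is the geodesic angle between estimated and ground-truth
# rotations; RTE is the Euclidean distance between the translation vectors.
import numpy as np


def relative_rotation_error_deg(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic distance between two rotation matrices, in degrees."""
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)  # guard against numerical drift
    return float(np.degrees(np.arccos(cos_angle)))


def relative_translation_error(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance between translation vectors (same unit as inputs, e.g. mm)."""
    return float(np.linalg.norm(t_est - t_gt))


if __name__ == "__main__":
    R_gt, t_gt = np.eye(3), np.zeros(3)
    theta = np.radians(2.0)  # small rotation standing in for an estimated pose
    R_est = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
    t_est = np.array([1.0, 1.0, 0.5])
    print("RRE (deg):", relative_rotation_error_deg(R_est, R_gt))
    print("RTE:", relative_translation_error(t_est, t_gt))
```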

Conclusions: NeuralBoneReg achieves robust, accurate, and modality-agnostic registration of bone surfaces, offering a promising solution for reliable cross-modal alignment in computer- and robot-assisted orthopedic surgery.

Code and dataset

The repository containing the code and dataset will be made public upon acceptance. For early access, please contact Luohong.wu@balgrist.ch.

Visualization of Results

Dataset 1: SpineDepth

Registered source point cloud: blue; target point cloud: yellow.

Figure: qualitative registration results for vertebrae L1, L3, and L5, comparing Ground Truth, RANSAC250M + ICP, FAST + ICP, PCA + ICP, Predator, GeoTransformer, and Ours (NeuralBoneReg).

Dataset 2: UltraBones-Hip

Registered source point cloud: blue; target point cloud: yellow.

Figure: qualitative registration results for the left femur, right femur, and pelvis, comparing Ground Truth, RANSAC250M + ICP, FAST + ICP, PCA + ICP, Predator, GeoTransformer, and Ours (NeuralBoneReg).

Dataset 3: UltraBones100k

Registered source point cloud: blue; target point cloud: yellow.

Figure: qualitative registration results for the tibia and fibula, comparing Ground Truth, RANSAC250M + ICP, FAST + ICP, PCA + ICP, Predator, GeoTransformer, and Ours (NeuralBoneReg).