Advances in 3D printing of biocompatible materials make patient-specific implants increasingly popular. The design of these implants is, however, still a tedious and largely manual process. Existing approaches to automate implant generation are mainly based on 3D U-Net architectures on downsampled or patch-wise data, which can result in a loss of detail or contextual information. Following the recent success of Diffusion Probabilistic Models, we propose a novel approach for implant generation based on a combination of 3D point cloud diffusion models and voxelization networks. Due to the stochastic sampling process in our diffusion model, we can propose an ensemble of different implants per defect, from which the physicians can choose the most suitable one. We evaluate our method on the SkullBreak and SkullFix datasets, generating high-quality implants and achieving competitive evaluation scores.
Since performing the anatomy reconstruction task in a high-resolution voxel space would be memory-inefficient and computationally expensive, we propose a method that builds on a sparse surface point cloud representation of the input anatomy. This point cloud, which can be extracted from the defective segmentation mask, serves as input to a Denoising Diffusion Probabilistic Model (DDPM) that, conditioned on this input, reconstructs the complete anatomy by generating the missing points. This conditional generation process is shown below.
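The sparse surface point cloud can be obtained from the binary segmentation mask by keeping only voxels that have at least one background neighbor. A minimal sketch of this idea, assuming a NumPy boolean mask and a 6-connected neighborhood (the helper name and toy volume are illustrative, not from the paper):

```python
import numpy as np

def surface_point_cloud(mask: np.ndarray) -> np.ndarray:
    """Return (N, 3) coordinates of surface voxels of a binary mask.

    A voxel is interior if all six axis-aligned neighbors are foreground;
    surface voxels are foreground voxels that are not interior.
    Assumes the structure does not touch the volume border (np.roll wraps).
    """
    interior = mask.copy()
    for axis in range(mask.ndim):
        interior &= np.roll(mask, 1, axis=axis) & np.roll(mask, -1, axis=axis)
    surface = mask & ~interior
    return np.argwhere(surface).astype(np.float32)

# Toy example: a 5x5x5 solid cube inside a 7x7x7 volume.
mask = np.zeros((7, 7, 7), dtype=bool)
mask[1:6, 1:6, 1:6] = True
points = surface_point_cloud(mask)  # all cube faces, edges, and corners
```

In practice, the extracted coordinates would additionally be normalized and subsampled to a fixed point count before being fed to the diffusion model.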
The forward and reverse diffusion processes are modeled as Markov chains, with the transitions \(q(\boldsymbol{\tilde{x}}_t|\boldsymbol{\tilde{x}}_{t-1})\) and \(p_{\theta}(\boldsymbol{\tilde{x}}_{t-1}|\boldsymbol{\tilde{x}}_t, \boldsymbol{c}_0)\) being parameterized Gaussians with a pre-defined variance schedule \(\beta_1, ..., \beta_T\). The network \(\epsilon_\theta(\boldsymbol{\tilde{x}}_t, \boldsymbol{c}_0, t)\) learns the noise to be removed from a noisy point cloud \(\boldsymbol{\tilde{x}}_t\). This denoising is conditioned on the points \(\boldsymbol{c}_0\) belonging to the defective anatomical structure. During inference, we randomly sample points \(\boldsymbol{\tilde{x}}_T \sim \mathcal{N}(0, \boldsymbol{I})\) and run the whole reverse diffusion process for \(t = T, ..., 1\): $$ \boldsymbol{\tilde{x}}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\biggl(\boldsymbol{\tilde{x}}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon_\theta(\boldsymbol{\tilde{x}}_t, \boldsymbol{c}_0, t)\biggr) + \sqrt{\beta_t}\boldsymbol{z} $$ where \(\alpha_t = 1 - \beta_t\), \(\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s\), and \(\boldsymbol{z} \sim \mathcal{N}(0, \boldsymbol{I})\) (with \(\boldsymbol{z} = 0\) for \(t = 1\)). We thereby produce a completed version of the defective anatomical structure. The conditioning points \(\boldsymbol{c}_0\) remain unchanged during the whole process.
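The update rule above can be sketched as a single NumPy function. This is a hedged illustration of the sampling equation only: `eps_pred` stands in for the output of the trained network \(\epsilon_\theta\), and the toy schedule with \(T = 2\) is purely illustrative.

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, alphas, alpha_bars, betas, rng):
    """One reverse DDPM step x_t -> x_{t-1}.

    alphas[t] = 1 - betas[t]; alpha_bars[t] = prod of alphas up to t.
    No noise is added at the final step (t == 1). Conditioning points
    c_0 are re-inserted unchanged outside this function.
    """
    z = rng.standard_normal(x_t.shape) if t > 1 else np.zeros_like(x_t)
    mean = (x_t - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps_pred) \
           / np.sqrt(alphas[t])
    return mean + np.sqrt(betas[t]) * z

# Toy schedule with T = 2; index 0 is unused so indices match t = 1..T.
betas = np.array([0.0, 0.01, 0.02])
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
rng = np.random.default_rng(0)

x_T = rng.standard_normal((1024, 3))  # start from pure Gaussian noise
x_1 = ddpm_reverse_step(x_T, 2, np.zeros_like(x_T), alphas, alpha_bars, betas, rng)
x_0 = ddpm_reverse_step(x_1, 1, np.zeros_like(x_1), alphas, alpha_bars, betas, rng)
```

In the actual method, `eps_pred` would be computed at every step by the conditional network from the current point cloud, the conditioning points, and the timestep.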
Since we determine the implant design by a Boolean subtraction between the completed and the defective skull in voxel space, the point cloud representing the completed shape must be transferred back to a voxel representation. We therefore train an additional neural network combined with the differentiable Poisson solver (DPSR) proposed in "Shape As Points: A Differentiable Poisson Solver". The network learns to upsample the input point cloud and to estimate a normal vector for every point. This upsampled, oriented point cloud is then passed through the differentiable Poisson solver to obtain a dense indicator grid, i.e., a voxel representation of the input. Training this upsampling and normal estimation network, as well as the encoder, is straightforward thanks to the differentiability of the Poisson solver.
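Once both shapes are available as voxel grids, the Boolean subtraction itself is a simple element-wise operation. A minimal sketch with NumPy boolean arrays (the function name and the 1D toy slice are illustrative, not from the paper):

```python
import numpy as np

def implant_from_masks(completed: np.ndarray, defective: np.ndarray) -> np.ndarray:
    """Implant = completed skull AND NOT defective skull (voxel-wise)."""
    return completed & ~defective

# Toy 1D slice through a skull wall with a hole at indices 2-3.
defective = np.array([1, 1, 0, 0, 1, 1], dtype=bool)
completed = np.array([1, 1, 1, 1, 1, 1], dtype=bool)
implant = implant_from_masks(completed, defective)  # True only inside the hole
```

By construction, the implant occupies exactly the region that the completed shape fills but the defective skull does not, so it cannot overlap the patient's existing bone.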
We trained and tested our method on the SkullBreak and SkullFix datasets. It reliably reconstructs defects of various sizes as well as complicated geometric structures. Examples of different defects from the SkullBreak dataset are shown below. Further results from both datasets can be found underneath.
@InProceedings{10.1007/978-3-031-43996-4_11,
author="Friedrich, Paul and Wolleb, Julia and Bieder, Florentin and Thieringer, Florian M. and Cattin, Philippe C.",
title="Point Cloud Diffusion Models for Automatic Implant Generation",
booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023",
year="2023",
pages="112--122",
}
This work was financially supported by the Werner Siemens Foundation through the MIRACLE II project.