Research in medical imaging primarily focuses on discrete data representations that scale poorly with grid resolution and fail to capture the often continuous nature of the underlying signal. Neural Fields (NFs) offer a powerful alternative by modeling data as continuous functions. While single-instance NFs have been successfully applied in medical contexts, extending them to large-scale medical datasets remains an open challenge. We therefore introduce MedFuncta, a unified framework for large-scale NF training on diverse medical signals. Building on Functa, our approach encodes data into a unified representation, namely a 1D latent vector, that modulates a shared, meta-learned NF, enabling generalization across a dataset. We revisit common design choices, introducing a non-constant frequency parameter \(\omega\) in the widely used SIREN activations, and establish a connection between this \(\omega\)-schedule and layer-wise learning rates, relating our findings to recent work on theoretical learning dynamics. We additionally introduce a scalable meta-learning strategy for learning the shared network that employs sparse supervision during training, reducing memory consumption and computational overhead while maintaining competitive performance. Finally, we evaluate MedFuncta across a diverse range of medical datasets and show how to solve relevant downstream tasks on our neural data representation. To promote further research in this direction, we release our code, model weights, and MedNF, the first large-scale dataset of more than 500k latents for multi-instance medical NFs.
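To illustrate what a non-constant \(\omega\)-schedule in SIREN activations looks like in practice, below is a minimal PyTorch sketch. The layer class, the decaying schedule values, and the initialization scaling are illustrative assumptions for this page, not the released MedFuncta implementation.

```python
# Minimal sketch of a SIREN with a layer-wise (non-constant) omega schedule.
# The schedule values and init scheme below are illustrative assumptions.
import torch
import torch.nn as nn


class SirenLayer(nn.Module):
    def __init__(self, in_dim, out_dim, omega, is_first=False):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_dim, out_dim)
        # Standard SIREN-style initialization, scaled by this layer's omega.
        with torch.no_grad():
            if is_first:
                self.linear.weight.uniform_(-1.0 / in_dim, 1.0 / in_dim)
            else:
                bound = (6.0 / in_dim) ** 0.5 / omega
                self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        # sin(omega * (Wx + b)); omega effectively rescales this layer's gradients,
        # acting like a layer-wise learning rate.
        return torch.sin(self.omega * self.linear(x))


def build_siren(in_dim=2, hidden=256, out_dim=1, omegas=(30.0, 20.0, 10.0, 5.0)):
    """Stack SIREN layers with a decaying omega schedule (assumed values)."""
    layers, dim = [], in_dim
    for i, w in enumerate(omegas):
        layers.append(SirenLayer(dim, hidden, omega=w, is_first=(i == 0)))
        dim = hidden
    layers.append(nn.Linear(dim, out_dim))  # linear output head
    return nn.Sequential(*layers)
```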
This work introduces MedFuncta, a framework that generalizes medical Neural Fields from isolated, single-instance models to dataset-level neural representations. The central idea is to meta-learn a shared neural representation across the dataset, in which each signal is represented by a unique, signal-specific parameter vector that conditions a shared network. This structure enables the model to capture and reuse redundancies across different signals, drastically improving computational efficiency and scalability. Unlike prior methods that rely on patch-based representations, our proposed framework represents each signal, from 1D time series to 3D volumetric data, with a single 1D latent vector. This abstraction enables consistent downstream processing across diverse data types, and is especially advantageous in medical applications, where the ability to unify multiple data modalities under a common representation is desirable, and where the inherent capability of Neural Fields to handle irregularly sampled, heterogeneous data provides further benefits.
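To make the conditioning concrete, the sketch below shows how a single 1D latent vector could modulate a shared SIREN in the spirit of Functa. The use of shift modulation, the latent size, and all layer names are assumptions for illustration rather than the exact MedFuncta architecture.

```python
# Sketch: a per-signal 1D latent shift-modulates a shared SIREN (Functa-style).
# Shift modulation and the dimensions below are illustrative assumptions.
import torch
import torch.nn as nn


class ModulatedSiren(nn.Module):
    def __init__(self, coord_dim=2, hidden=256, out_dim=1, depth=4,
                 latent_dim=512, omega=30.0):
        super().__init__()
        self.omega = omega
        self.layers = nn.ModuleList(
            [nn.Linear(coord_dim if i == 0 else hidden, hidden) for i in range(depth)]
        )
        # Maps the signal-specific latent to one shift vector per hidden layer.
        self.to_shifts = nn.Linear(latent_dim, depth * hidden)
        self.head = nn.Linear(hidden, out_dim)
        self.depth, self.hidden = depth, hidden

    def forward(self, coords, latent):
        # coords: (N, coord_dim) query coordinates; latent: (latent_dim,) per signal.
        shifts = self.to_shifts(latent).view(self.depth, self.hidden)
        x = coords
        for layer, shift in zip(self.layers, shifts):
            # Shared weights, per-signal additive shift before the sine activation.
            x = torch.sin(self.omega * (layer(x) + shift))
        return self.head(x)


# Usage: the shared weights are meta-learned once across the dataset;
# only `latent` differs between signals.
model = ModulatedSiren()
coords = torch.rand(1024, 2)      # sampled pixel coordinates in [0, 1]^2
latent = torch.zeros(512)         # one 1D latent vector per signal
pred = model(coords, latent)      # (1024, 1) reconstructed intensities
```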
@article{friedrich2025medfuncta,
  title={MedFuncta: A Unified Framework for Learning Efficient Medical Neural Fields},
  author={Friedrich, Paul and Bieder, Florentin and McGinnis, Julian and Wolleb, Julia and Rueckert, Daniel and Cattin, Philippe C},
  journal={arXiv preprint arXiv:2502.14401},
  year={2025}
}
This work was financially supported by the Werner Siemens Foundation through the MIRACLE II project. JM is supported by the Bavarian State Ministry for Science and Art (Collaborative Bilateral Research Program Bavaria – Québec: AI in medicine, grant F.4-V0134.K5.1/86/34).