Project Page | Paper | Data
INCODE is a new method that improves Implicit Neural Representations (INRs) by dynamically adjusting activation functions using deep prior knowledge. Specifically, INCODE comprises a harmonizer network and a composer network, where the harmonizer network dynamically adjusts key parameters of the composer's activation function. It excels in signal representation, handles various tasks such as audio, image, and 3D reconstructions, and tackles complex challenges like neural radiance fields (NeRFs) and inverse problems (denoising, super-resolution, inpainting, CT reconstruction).
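Conceptually, the harmonizer maps a prior embedding to four parameters (a, b, c, d) that scale, dilate, shift, and bias the composer's sinusoidal activation. A minimal NumPy sketch of that conditioning (the layer sizes, the ReLU hidden layer, and ω₀ = 30 are illustrative assumptions, not the repo's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonizer(prior_feat, W1, W2):
    """Tiny MLP mapping a prior embedding to the four activation
    parameters (a, b, c, d). Shapes and depth are illustrative,
    not the paper's exact architecture."""
    h = np.maximum(W1 @ prior_feat, 0.0)   # ReLU hidden layer
    a, b, c, d = W2 @ h
    return a, b, c, d

def composer_layer(x, W, a, b, c, d, omega0=30.0):
    """One composer layer: a linear map followed by the conditioned
    sinusoid a * sin(b * omega0 * z + c) + d."""
    z = x @ W.T
    return a * np.sin(b * omega0 * z + c) + d

# toy example: 2-D coordinates -> 16 features, conditioned on an 8-D prior
prior = rng.normal(size=8)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16)) * 0.1
a, b, c, d = harmonizer(prior, W1, W2)

coords = rng.uniform(-1, 1, size=(5, 2))   # 5 sample points
W = rng.normal(size=(16, 2))
out = composer_layer(coords, W, a, b, c, d)
print(out.shape)  # (5, 16)
```

In a full model the prior embedding would come from a task-specific network (e.g., pretrained image features), and several composer layers would be stacked to form the INR.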
Amirhossein Kazerouni, Reza Azad, Alireza Hosseini, Dorit Merhof, Ulas Bagci
30.10.2023 | Code is released!
24.10.2023 | Accepted in WACV 2024! 🥳
You can download the data utilized in the paper from this link.
Install the requirements with:
pip install -r requirements.txt
The image experiment can be reproduced by running the train_image.ipynb notebook.
The audio experiment can be reproduced by running the train_audio.ipynb notebook.
The shape experiment can be reproduced by running the train_sdf.ipynb notebook. For your convenience, we have included the occupancy volume of Lucy, regularly sampled on a 512x512x512 grid, in the data file.
To test the model with custom input data, you can run the preprocess_sdf.ipynb notebook, which will generate a pre-processed .npy file for your desired input.
The output is a .dae file that can be visualized using software such as MeshLab (a cross-platform visualizer and editor for 3D models).
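The regular sampling above amounts to querying occupancy on an evenly spaced coordinate grid. A minimal sketch (the [-1, 1] normalization and axis ordering are assumptions; the repo's notebooks may differ), demonstrated with a small n since n = 512 would yield roughly 134 million points:

```python
import numpy as np

def make_coordinate_grid(n=512):
    """Regularly sample the cube [-1, 1]^3 at n points per axis,
    returning an (n**3, 3) array of query coordinates. Normalization
    and axis order are illustrative assumptions."""
    axis = np.linspace(-1.0, 1.0, n)
    xx, yy, zz = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)

# small grid for demonstration; the Lucy volume uses n = 512
coords = make_coordinate_grid(n=8)
print(coords.shape)  # (512, 3)
```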
The denoising experiment can be reproduced by running the train_denoising.ipynb notebook.
The super-resolution experiment can be reproduced by running the train_sr.ipynb notebook.
The CT reconstruction experiment can be reproduced by running the train_ct_reconstruction.ipynb notebook.
The inpainting experiment can be reproduced by running the train_inpainting.ipynb notebook.
If you would like to replace INCODE with other methods, including SIREN, Gauss, ReLU, WIRE, WIRE2D, and FFN, please refer to the README in the documentation folder.
We thank the authors of WIRE, MINER_pl, torch-ngp, and SIREN for inpainting for their code repositories.
@inproceedings{kazerouni2024incode,
title={INCODE: Implicit Neural Conditioning with Prior Knowledge Embeddings},
author={Kazerouni, Amirhossein and Azad, Reza and Hosseini, Alireza and Merhof, Dorit and Bagci, Ulas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={1298--1307},
year={2024}
}