Can we turn whole-head MRIs into photorealistic avatars? Maybe...
This repository provides a toy pipeline which, given a NIfTI file and a text prompt, generates a 3D mesh with a texture mapped onto it. The current pipeline is built on nii2mesh (for converting NIfTI to mesh) and Text2Tex (for generating a texture from a mesh and a text prompt). Below are some example results generated from a whole-head T1-weighted MRI image `data/example.nii.gz` with the scripts in `bash/*.sh`, visualized as rotating heads. Other body parts have not been tested.
| untextured | Batman | bear | Bumblebee |
|---|---|---|---|
| panda | porcelain | raccoon | T-800 |
| tiger | Witcher | Witcher_long | wolf |
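Conceptually, the pipeline chains the two tools: nii2mesh extracts a surface mesh from the NIfTI volume, then Text2Tex paints a texture onto that mesh from the prompt. The sketch below only builds the two command lines to show this structure; the exact flags and the Text2Tex entry-point path are illustrative assumptions, not the repository's actual calls (those live in `mri2avatar.py` and `bash/*.sh`):

```python
import shlex

def nii2mesh_cmd(nifti_path: str, mesh_path: str) -> list[str]:
    # Stage 1: surface extraction (NIfTI -> mesh).
    # Real invocations may pass extra nii2mesh options (thresholds, smoothing).
    return ["nii2mesh", nifti_path, mesh_path]

def text2tex_cmd(mesh_path: str, prompt: str) -> list[str]:
    # Stage 2: texture synthesis (mesh + prompt -> textured mesh).
    # Script name and flags are hypothetical stand-ins for the Text2Tex entry point.
    return ["python", "scripts/generate_texture.py",
            "--input_mesh", mesh_path, "--prompt", prompt]

if __name__ == "__main__":
    for cmd in (nii2mesh_cmd("data/example.nii.gz", "head.obj"),
                text2tex_cmd("head.obj", "a panda head")):
        print(shlex.join(cmd))
```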
1. Install nii2mesh
```shell
# go to the location to store the software
cd /home-local/software
# download and compile
git clone https://github.com/neurolabusc/nii2mesh
cd nii2mesh/src
make
# add the executable to PATH
export PATH="/home-local/software/nii2mesh/src:$PATH"
```

2. Clone Text2Tex and create conda environment
Tested on Ubuntu 22.04.5 LTS
```shell
cd /home-local/software
git clone https://github.com/daveredrum/Text2Tex.git
# download ControlNet Depth2img model weights
wget -O ./Text2Tex/models/ControlNet/models/control_sd15_depth.pth "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_depth.pth?download=true"
# create conda environment
conda create -n text2tex python=3.9
conda activate text2tex
# install pytorch bundles
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
# install PyTorch3D
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d -c pytorch3d -c pytorch -c nvidia
# install the remaining packages
pip install trimesh scikit-learn opencv-python matplotlib imageio diffusers einops transformers==4.56.2 open-clip-torch gradio pytorch-lightning==1.9.1 omegaconf triton accelerate objaverse iopath==0.1.10 xatlas
pip install mathutils
```

If an error occurs when installing mathutils, this solution by mfp0610 might solve the issue:
```shell
git clone https://gitlab.com/ideasman42/blender-mathutils.git
cd blender-mathutils/
# change line 79 of src/mathutils/mathutils.c to:
#   y = _Py_HashDouble((double)(array[i++]));
# and then
pip install .
```

3. Clone this repository

```shell
cd /home-local/software
git clone https://github.com/MASILab/mri2avatar.git
cd mri2avatar
```

4. Run the pipeline
```shell
# basic usage:
python mri2avatar.py --path_input <path_to_nifti> --outdir <output_directory> --prompt "<text_prompt>"
# to see all options
python mri2avatar.py --help
```

Notes:
- `--intermediate` preserves intermediate preprocessing outputs, which can be useful for quality checks.
- `--generate_gif` generates two rotating-head GIFs (one with texture, one without).
- `--use_binary_mask` extracts the mesh from a binary mask of the main object (e.g., the head) in the NIfTI. The mask is computed with Otsu's thresholding followed by morphological operations that smooth out noise; this is recommended when the boundary between air/skull/brain is not clear enough for straightforward meshing.
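To illustrate the idea behind `--use_binary_mask`, here is a minimal sketch of Otsu thresholding plus morphological cleanup on a 3D volume, using NumPy and SciPy. This is an illustration of the technique, not the script's actual implementation; function names, bin count, and iteration counts are assumptions:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(volume: np.ndarray, bins: int = 256) -> float:
    # Otsu's method: pick the intensity that maximizes between-class variance.
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # voxels at or below each cut
    w1 = w0[-1] - w0                          # voxels above each cut
    m0 = np.cumsum(hist * centers)
    mean0 = m0 / np.maximum(w0, 1)            # mean of the "dark" class
    mean1 = (m0[-1] - m0) / np.maximum(w1, 1) # mean of the "bright" class
    between = w0 * w1 * (mean0 - mean1) ** 2
    return float(centers[np.argmax(between)])

def head_mask(volume: np.ndarray, iterations: int = 2) -> np.ndarray:
    # Threshold, then smooth the mask: closing fills small gaps,
    # opening removes isolated specks, fill_holes closes interior cavities.
    mask = volume > otsu_threshold(volume)
    mask = ndimage.binary_closing(mask, iterations=iterations)
    mask = ndimage.binary_opening(mask, iterations=iterations)
    return ndimage.binary_fill_holes(mask)
```

The cleaned mask can then be fed to the meshing stage in place of the raw intensities, which avoids spurious surfaces where the air/skull/brain contrast is weak.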