# mri2avatar

Can we turn whole-head MRIs into photorealistic avatars? Maybe...

This repository provides a toy pipeline that, given a NIfTI file and a text prompt, generates a 3D mesh with a texture mapped onto it. The current pipeline is built on nii2mesh (for converting NIfTI to mesh) and Text2Tex (for generating a texture from a mesh and a text prompt). Below are example results generated from a whole-head T1-weighted MRI (`data/example.nii.gz`) with the scripts in `bash/`, visualized as rotating heads. Other body parts have not been tested.

(Rotating-head renderings: the untextured mesh, plus textured versions for Batman, bear, Bumblebee, panda, porcelain, raccoon, T-800, tiger, Witcher, Witcher (longer prompt), and wolf.)

## Set up the environment

### 1. Install nii2mesh

```shell
# go to the location to store the software
cd /home-local/software

# download and compile
git clone https://github.com/neurolabusc/nii2mesh
cd nii2mesh/src
make

# add the executable to PATH
export PATH="/home-local/software/nii2mesh/src:$PATH"
```

### 2. Clone Text2Tex and create a conda environment

Tested on Ubuntu 22.04.5 LTS

```shell
cd /home-local/software
git clone https://github.com/daveredrum/Text2Tex.git

# download ControlNet Depth2img model weights
wget -O ./Text2Tex/models/ControlNet/models/control_sd15_depth.pth "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_depth.pth?download=true"

# create conda environment
conda create -n text2tex python=3.9
conda activate text2tex

# install pytorch bundles
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# install PyTorch3D
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d -c pytorch3d -c pytorch -c nvidia

# install the remaining packages
pip install trimesh scikit-learn opencv-python matplotlib imageio diffusers einops transformers==4.56.2 open-clip-torch gradio pytorch-lightning==1.9.1 omegaconf triton accelerate objaverse iopath==0.1.10 xatlas
pip install mathutils
```

If an error occurs when installing mathutils, this solution by mfp0610 might fix it:

```shell
git clone https://gitlab.com/ideasman42/blender-mathutils.git
cd blender-mathutils/

# change line 79 of src/mathutils/mathutils.c to:
y = _Py_HashDouble((double)(array[i++]));

# and then
pip install .
```
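If you prefer not to edit the file by hand, the same one-line change can be applied with a short script. This is an illustrative helper (not part of the repository), and `patch_hashdouble` is a hypothetical name; it assumes line 79 of `src/mathutils/mathutils.c` is still the line that needs the change, so verify before running.

```python
# Apply the one-line mathutils patch programmatically.
# Assumption: the target line number (79, 1-based) matches current sources.
from pathlib import Path

def patch_hashdouble(path, lineno=79):
    lines = Path(path).read_text().splitlines(keepends=True)
    # replace the 1-based line `lineno` with the fixed call
    lines[lineno - 1] = "\ty = _Py_HashDouble((double)(array[i++]));\n"
    Path(path).write_text("".join(lines))
```

After patching, run `pip install .` from the `blender-mathutils/` directory as above.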

### 3. Clone this repository

```shell
cd /home-local/software
git clone https://github.com/MASILab/mri2avatar.git
cd mri2avatar
```

## How to run

```shell
# basic usage:
python mri2avatar.py --path_input <path_to_nifti> --outdir <output_directory> --prompt "<text_prompt>"

# to see all options
python mri2avatar.py --help
```
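To generate several textures from the same scan (as in the gallery above), the basic usage can be scripted. A minimal sketch, using the flags shown above and the bundled example scan; the prompts and output layout are illustrative:

```python
# Batch several prompts through the pipeline. The flag names come from
# the basic-usage line above; prompts and output directories are examples.
import subprocess  # used by the commented-out run() call below

def make_cmd(nifti, outdir, prompt):
    return ["python", "mri2avatar.py",
            "--path_input", nifti,
            "--outdir", outdir,
            "--prompt", prompt]

for name, prompt in [("panda", "a photo of a panda"),
                     ("tiger", "a photo of a tiger")]:
    cmd = make_cmd("data/example.nii.gz", f"output/{name}", prompt)
    # uncomment to actually run each job:
    # subprocess.run(cmd, check=True)
```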

Notes:

  1. `--intermediate` preserves intermediate preprocessing outputs, which can be useful for quality checks.
  2. `--generate_gif` generates two rotating-head GIFs (one with texture, one without).
  3. `--use_binary_mask` extracts the mesh from a binary mask of the main object (e.g., the head) in the NIfTI. The mask is computed with Otsu's thresholding followed by morphological operations to smooth out noise; this is recommended when the air/skull/brain boundaries are too unclear for straightforward meshing.
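The masking idea behind `--use_binary_mask` can be sketched roughly as follows. This is an illustrative re-implementation with NumPy/SciPy, not the repository's actual code, and `binary_head_mask` is a hypothetical name:

```python
# Sketch of Otsu thresholding + morphology to isolate the head in a volume.
import numpy as np
from scipy import ndimage

def binary_head_mask(vol):
    # Otsu: pick the threshold that maximizes between-class variance
    hist, edges = np.histogram(vol.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)             # background voxel counts
    w1 = w0[-1] - w0                 # foreground voxel counts
    m0 = np.cumsum(hist * centers)   # background intensity sums
    m1 = m0[-1] - m0                 # foreground intensity sums
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = w0 * w1 * (m0 / w0 - m1 / w1) ** 2
    t = centers[np.nanargmax(var_between)]

    mask = vol > t
    # morphological closing smooths noise at the boundary
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
    # keep only the largest connected component (the head)
    labels, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    return mask
```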
