Flemme: A FLExible and Modular Learning Platform for MEdical Images

Overview

Flemme is a flexible and modular learning platform for medical images. Flemme decouples encoders from model architectures, enabling fast model construction via different combinations for medical image segmentation, reconstruction, and generation. In addition, a general hierarchical architecture with a pyramid loss is proposed for vertical feature refinement and integration. Please see the documentation of Flemme for more information.

We are also extending Flemme to support point cloud modeling. An illustration of models built with PointMamba2Encoder is presented in the following figure:

Supported architectures

Hierarchical architecture with a pyramid loss (H-SeM + UNet)

Get started with Flemme

Requirement list

Basic:

torch torchvision simpleitk nibabel matplotlib scikit-image scikit-learn tensorboard tqdm

For vision transformer:

einops

For vision mamba:

mamba-ssm (CUDA version >= 11.6)

If you have trouble installing mamba-ssm, you can download the corresponding .whl file from https://github.com/state-spaces/mamba/releases based on your CUDA and torch versions. We recommend the build with ABI=FALSE. Then run the following command:

pip install mamba-ssmxxxxx.whl -i https://pypi.tuna.tsinghua.edu.cn/simple

For point cloud:

POT plyfile KNN-CUDA fpsample geomloss trimesh binvox_rw

KNN-CUDA is from https://github.com/unlimblue/KNN_CUDA. binvox_rw is from https://github.com/wangqiang9/binvox_rw.

For graph:

torch_geometric torch-cluster

You can modify flemme/config.py to disable some components of Flemme so that you don't need to install the corresponding required packages.
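
The exact switch names in flemme/config.py are version-dependent; the hypothetical sketch below only illustrates the idea of gating optional components so their dependencies are never imported (the flag names here are illustrative assumptions, not Flemme's real settings):

```python
# Hypothetical sketch of optional-component gating, similar in spirit to
# what flemme/config.py allows. Flag names are illustrative, not Flemme's own.
SUPPORTED_COMPONENTS = {
    "transformer": True,   # requires einops
    "mamba": False,        # requires mamba-ssm; disable if it is not installed
    "point_cloud": False,  # requires POT, plyfile, KNN-CUDA, ...
    "graph": False,        # requires torch_geometric, torch-cluster
}

def component_enabled(name: str) -> bool:
    """Return True if the optional component is switched on."""
    return SUPPORTED_COMPONENTS.get(name, False)

print(component_enabled("transformer"))  # True
print(component_enabled("mamba"))        # False
```

With such a gate, code paths that would import a disabled component's packages are simply skipped, which is why the corresponding requirements become optional.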

Setup

Clone the repository and run the following commands in a terminal to set up Flemme in your environment:

git clone git@github.com:wlsdzyzl/flemme.git
cd flemme
python setup.py install

Usage

Creating a deep learning model with Flemme is quite straightforward; you don't need to write any code. Everything can be done through a YAML config file. An example that constructs a segmentation model with a UNet encoder using ResConvBlock looks like:

model:
  ### architecture
  name: SeM
  ### encoder
  encoder:
    name: UNet
    image_size: [320, 256]
    in_channel: 3
    out_channel: 1
    patch_channel: 32
    patch_size: 2
    down_channels: [64, 128, 256]
    middle_channels: [512, 512]
    building_block: res_conv
    normalization: batch
  ### loss function
  segmentation_losses: 
    - name: Dice
    - name: BCEL
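
Configs like the one above can be sanity-checked before launching training; a minimal sketch using PyYAML (a widely used parser, independent of Flemme itself):

```python
# Parse a Flemme-style config with PyYAML and check a few fields.
# This is a generic illustration, not part of Flemme's codebase.
import yaml

config_text = """
model:
  name: SeM
  encoder:
    name: UNet
    image_size: [320, 256]
    in_channel: 3
    out_channel: 1
    building_block: res_conv
  segmentation_losses:
    - name: Dice
    - name: BCEL
"""

config = yaml.safe_load(config_text)
assert config["model"]["name"] == "SeM"
assert config["model"]["encoder"]["image_size"] == [320, 256]
print([loss["name"] for loss in config["model"]["segmentation_losses"]])
# → ['Dice', 'BCEL']
```

Catching a mistyped key or a malformed list this way is cheaper than discovering it mid-training.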

You may also need to specify the data loader, optimizer, checkpoint path, and other hyper-parameters. For a full configuration, refer to resources/img/biomed_2d/cvccdb/train_unet_sem.yaml. To train the model, run:

train_flemme --config path/to/train_config.yaml

For visualization of the training process:

tensorboard --logdir path/to/ckp/

For testing:

test_flemme --config path/to/test_config.yaml

Supported encoders:

  • [CNN, UNet, ViT, ViTU, Swin, SwinU, VMamba, VMambaU] for 2D/3D image
  • [PointNet, PointTrans, PointMamba, PointNet2, PointTrans2, PointMamba2] for point cloud.

An encoder named XXU is a U-shaped encoder.

UNet is an alias of CNNU.

Supported Architectures:

  • [ClM] for classification,
  • [SeM, HSeM] for segmentation,
  • [AE, HAE, SDM] for reconstruction,
  • [VAE, DDPM, DDIM, EDM, LDM] for generation.

DDIM refers to the denoising diffusion implicit model, which provides a fast sampling strategy.

EDM refers to the elucidated diffusion model from the paper "Elucidating the Design Space of Diffusion-Based Generative Models".

SDM refers to the supervised diffusion model (it uses the input as a condition of a DDPM).

LDM refers to the latent diffusion model, constructed from an auto-encoder/VAE and a DDPM.

Detailed instructions on the supported encoders, context embeddings, model architectures, and training process can be found in the documentation of Flemme.
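
For intuition about the diffusion-based architectures above, the textbook DDPM forward (noising) process can be sketched in a few lines of NumPy. This is the standard formulation x_t = sqrt(abar_t)·x_0 + sqrt(1−abar_t)·eps, not Flemme's own implementation:

```python
import numpy as np

# Standard DDPM forward process with a linear noise schedule.
# abar_t is the cumulative product of (1 - beta_t).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) for 0-indexed timestep t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))
x_late = q_sample(x0, T - 1, rng)  # at the last step, almost pure noise
print(alpha_bars[-1])              # tiny: the signal is nearly destroyed
```

A DDPM is trained to predict eps from x_t; DDIM reuses the same trained network but samples with far fewer steps, which is why it is listed as a fast sampling strategy.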

Results

2D/3D Image

For segmentation, we evaluate our methods on six public datasets: CVC-ClinicDB, Echonet, ISIC, TN3K, BraTS21 (3D), ImageCAS (3D).

For reconstruction, we evaluate our methods on FastMRI.

Configuration files are in resources/img/biomed_2d and resources/img/biomed_3d.

Segmentation results

Reconstruction & Generation results

Point Cloud

Completion & Segmentation results

To train and evaluate the model proposed in [2], run the following commands (you may need to check out an older commit of Flemme, c978a59):

## classification
train_flemme --config /path/to/project/flemme/resources/pcd/medpoints/cls/train_pointmamba2knn_clm.yaml
test_flemme --config /path/to/project/flemme/resources/pcd/medpoints/cls/test_pointmamba2knn_clm.yaml
## completion
train_flemme --config /path/to/project/flemme/resources/pcd/medpoints/cpl/train_pointmamba2knn_cpl.yaml
test_flemme --config /path/to/project/flemme/resources/pcd/medpoints/cpl/test_pointmamba2knn_cpl.yaml
## segmentation
train_flemme --config /path/to/project/flemme/resources/pcd/medpoints/seg/train_pointmamba2knn_sem.yaml
test_flemme --config /path/to/project/flemme/resources/pcd/medpoints/seg/test_pointmamba2knn_sem.yaml

MedPointS Dataset

MedPointS is a large-scale medical point cloud dataset based on MedShapeNet for anatomy classification, completion, and segmentation.

An overview of MedPointS is presented in the following figure:

You can download MedPointS from this link.

Alternatively, you can load the dataset from Hugging Face: MedPoints-cls, MedPoints-cpl, and MedPoints-seg for classification, completion, and segmentation tasks.

ATTENTION!!!

If you want to load the completion dataset of MedPointS through the current version of Flemme, please make sure the dataset samples are stored as /dataset_path/subfold/class_name/data_sample1.ply. You can use script/reorganize_cpl_cls_datasets.py to reorganize the directory structure. The following command transfers /dataset_path/class_name/subfold/data_sample1.ply to /dataset_path/subfold/class_name/data_sample1.ply.

python reorganize_cls_datasets.py --dataset_path /data/guoqingzhang/datasets/MedPointS/completion/fold5

Play with Flemme

Toy Example for Diffusion model

Configuration file: resources/toy_ddpm.yaml

train_flemme --config resources/toy_ddpm.yaml

MNIST

Configuration files are in resources/img/mnist.

AutoEncoder & Variational AutoEncoder

Denoising Diffusion Probabilistic Model

CIFAR10

Configuration files are in resources/img/cifar10.

AutoEncoder & Variational AutoEncoder

Denoising Diffusion Probabilistic Model (conditional)

BibTeX

If you find our project helpful, please consider citing the following works:

[1] Flemme: A Flexible and Modular Learning Platform for Medical Images; BIBM 2024.

@misc{zhang2024flemmeflexiblemodularlearning,
      title={Flemme: A Flexible and Modular Learning Platform for Medical Images}, 
      author={Guoqing Zhang and Jingyun Yang and Yang Li},
      year={2024},
      eprint={2408.09369},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2408.09369}, 
}

[2] Hierarchical Feature Learning for Medical Point Clouds via State Space Model; MICCAI 2025.

@misc{zhang2025hierarchicalfeaturelearningmedical,
      title={Hierarchical Feature Learning for Medical Point Clouds via State Space Model}, 
      author={Guoqing Zhang and Jingyun Yang and Yang Li},
      year={2025},
      eprint={2504.13015},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.13015}, 
}

Acknowledgement

Thanks to mamba, swin-transformer, diffusion model, and pointnet2 for their wonderful work.
