atommic 1.0.1

Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC)

👋 Introduction
The Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC) is a toolbox for applying AI methods to accelerated MRI reconstruction (REC), MRI segmentation (SEG), quantitative MR imaging (qMRI), and multitask learning (MTL), i.e., performing multiple tasks simultaneously, such as reconstruction and segmentation. Each task is implemented in a separate collection consisting of data loaders, transformations, models, metrics, and losses. ATOMMIC is designed to be modular and extensible to new tasks, models, and datasets, and uses PyTorch Lightning to enable high-performance multi-GPU/multi-node mixed-precision training.

The schematic overview of ATOMMIC showcases the main components of the toolbox. First, we need an MRI Dataset (e.g., CC359). Next, we need to define the high-level parameters, such as the task and the model, the undersampling, the transforms, the optimizer, the scheduler, the loss, the trainer parameters, and the experiment manager. All these parameters are defined in a .yaml file using Hydra and OmegaConf.
The trained model is an .atommic module, exported with ONNX and TorchScript support, which can be used for inference. The .atommic module can also be uploaded on HuggingFace. Pretrained models are available on our HF account and can be downloaded and used for inference.
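Pretrained checkpoints can also be fetched programmatically. The snippet below is a minimal, hypothetical sketch using the huggingface_hub client; the repository id and filename are placeholders rather than actual ATOMMIC model names, so browse the HF account for the real ones.

# Hypothetical sketch: downloading a pretrained .atommic checkpoint from HuggingFace.
# The repo_id and filename are placeholders; check the project's HF account for the
# actual model repositories and file names.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="wdika/REC_VarNet_CC359",   # placeholder repository id
    filename="varnet.atommic",          # placeholder checkpoint file name
)
print(f"Checkpoint saved to {checkpoint_path}")
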
🛠️ Installation
ATOMMIC is best installed in a Conda environment.
🐍 Conda
conda create -n atommic python=3.10
conda activate atommic

📦 Pip
Use this installation mode if you want the latest released version.
pip install atommic
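
A quick way to check that the installation succeeded is to import the package; this assumes atommic exposes a __version__ attribute, as most PyPI packages do.

# Post-install sanity check (assumes atommic exposes __version__, as most PyPI packages do).
import atommic
print(atommic.__version__)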

From source
Use this installation mode if you are contributing to atommic.
git clone https://github.com/wdika/atommic
cd atommic
bash ./reinstall.sh

๐Ÿณ Docker containers
An atommic container is available at dockerhub, you can pull it with:
docker pull wdika/atommic

You can also build an atommic container with:
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t atommic:latest .

You can run the container with:
docker run --gpus all -it --rm -v /home/user/configs:/config atommic:latest atommic run -c /config/config.yaml

where /config/config.yaml is the path to your local configuration file.
Or you can run it interactively with:
docker run --gpus all -it --rm -p 8888:8888 atommic:latest /bin/bash -c "./start-jupyter.sh"

🚀 Quick Start Guide
The best way to get started with ATOMMIC is with one of the tutorials:

ATOMMIC Primer - demonstrates how to use ATOMMIC.
ATOMMIC MRI transforms - demonstrates how to use ATOMMIC to apply transforms to MRI data.
ATOMMIC MRI undersampling - demonstrates how to use ATOMMIC to undersample MRI data.
ATOMMIC Upload Model on HuggingFace - demonstrates how to upload a model on HuggingFace.

You can also check the projects page to see how to use ATOMMIC for specific tasks and public datasets.
Pre-trained models are available on HuggingFace 🤗.
The ATOMMIC paper is fully reproducible; please check here for more information.
🤖 Training & Testing
Training and testing models in ATOMMIC is intuitive and easy. You just need to properly configure a .yaml file and run the following command:
atommic run -c path-to-config-file

⚙️ Configuration

1. Choose the task and the model, according to the collections.
2. Choose the dataset and the dataset parameters, according to the datasets or your own dataset.
3. Choose the undersampling.
4. Choose the transforms.
5. Choose the losses.
6. Choose the optimizer.
7. Choose the scheduler.
8. Choose the trainer parameters.
9. Choose the experiment manager.

You can also check the projects page to see how to configure the .yaml file for specific tasks.
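To make the structure concrete, here is a minimal, purely illustrative sketch of how these pieces might fit together; the key names are assumptions that mirror the steps above, not ATOMMIC's actual configuration schema, so use the .yaml files on the projects page as the authoritative reference.

# Illustrative sketch only: key names below are assumptions mirroring the steps above,
# not ATOMMIC's verified schema. Real configurations live in .yaml files; OmegaConf
# can render this dictionary as the equivalent YAML.
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {
        "model": {"model_name": "VarNet", "task": "REC"},               # task and model
        "dataset": {"name": "CC359", "data_path": "/data/cc359"},       # dataset parameters
        "undersampling": {"accelerations": [4, 8]},                      # undersampling
        "transforms": {"normalize": True},                               # transforms
        "loss": {"name": "l1"},                                          # losses
        "optim": {"name": "adam", "lr": 1e-4},                           # optimizer
        "sched": {"name": "cosine_annealing"},                           # scheduler
        "trainer": {"devices": 1, "max_epochs": 20, "precision": 16},    # trainer parameters
        "exp_manager": {"exp_dir": "./runs", "name": "varnet_cc359"},    # experiment manager
    }
)

print(OmegaConf.to_yaml(cfg))  # prints the YAML you would place in a config file
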
🗂️ Collections
ATOMMIC is organized into collections, each of which implements a specific task. The following collections are currently available, implementing various models as listed:
MultiTask Learning (MTL)

1. End-to-End Recurrent Attention Network (SERANet), 2. Image domain Deep Structured Low-Rank Network (IDSLR), 3. Image domain Deep Structured Low-Rank UNet (IDSLRUNet), 4. Multi-Task Learning for MRI Reconstruction and Segmentation (MTLRS), 5. Reconstruction Segmentation method using UNet (RecSegUNet), 6. Segmentation Network MRI (SegNet).

Quantitative MR Imaging (qMRI)

1. Quantitative Recurrent Inference Machines (qRIMBlock), 2. Quantitative End-to-End Variational Network (qVarNet), 3. Quantitative Cascades of Independently Recurrent Inference Machines (qCIRIM).

MRI Reconstruction (REC)

1. Cascades of Independently Recurrent Inference Machines (CIRIM), 2. Convolutional Recurrent Neural Networks (CRNNet), 3. Deep Cascade of Convolutional Neural Networks (CascadeNet), 4. Down-Up Net (DUNet), 5. End-to-End Variational Network (VarNet), 6. Independently Recurrent Inference Machines (RIMBlock), 7. Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet), 8. KIKINet, 9. Learned Primal-Dual Net (LPDNet), 10. Model-based Deep Learning Reconstruction (MoDL), 11. MultiDomainNet, 12. ProximalGradient, 13. Recurrent Inference Machines (RIMBlock), 14. Recurrent Variational Network (RecurrentVarNet), 15. UNet, 16. Variable Splitting Network (VSNet), 17. XPDNet, 18. Zero-Filled reconstruction (ZF).

MRI Segmentation (SEG)

1. SegmentationAttentionUNet, 2. SegmentationDYNUNet, 3. SegmentationLambdaUNet, 4. SegmentationUNet, 5. Segmentation3DUNet, 6. SegmentationUNetR, 7. SegmentationVNet.

MRI Datasets
ATOMMIC supports public datasets, as well as private datasets. The following public datasets are supported natively:

AHEAD: Supports the (qMRI) and (REC) tasks.
BraTS 2023 Adult Glioma: Supports the (SEG) task.
CC359: Supports the (REC) task.
fastMRI Brains Multicoil: Supports the (REC) task.
fastMRI Knees Multicoil: Supports the (REC) task.
fastMRI Knees Singlecoil: Supports the (REC) task.
ISLES 2022 Sub Acute Stroke: Supports the (SEG) task.
SKM-TEA: Supports the (REC), (SEG), and (MTL) tasks.
Stanford Knees: Supports the (REC) task.

📚 API Documentation

Access the API Documentation here
📄 License
ATOMMIC is licensed under the Apache 2.0 License.
📖 Citation
If you use ATOMMIC in your research, please cite as follows:
@article{Karkalousos_2024,
  title={Atommic: An Advanced Toolbox for Multitask Medical Imaging Consistency to Facilitate Artificial Intelligence Applications from Acquisition to Analysis in Magnetic Resonance Imaging},
  url={http://dx.doi.org/10.2139/ssrn.4801289},
  DOI={10.2139/ssrn.4801289},
  publisher={Elsevier BV},
  author={Karkalousos, Dimitrios and Išgum, Ivana and Marquering, Henk and Caan, Matthan W.A.},
  year={2024}
}

🔗 References
ATOMMIC has been used or is referenced in the following papers:


Karkalousos, Dimitrios and Išgum, Ivana and Marquering, Henk and Caan, Matthan W.A., Atommic: An Advanced Toolbox for Multitask Medical Imaging Consistency to Facilitate Artificial Intelligence Applications from Acquisition to Analysis in Magnetic Resonance Imaging. Available at SSRN: https://ssrn.com/abstract=4801289 or http://dx.doi.org/10.2139/ssrn.4801289


Karkalousos, D., Išgum, I., Marquering, H. A., & Caan, M. W. A. (2024). ATOMMIC: An Advanced Toolbox for Multitask Medical Imaging Consistency to facilitate Artificial Intelligence applications from acquisition to analysis in Magnetic Resonance Imaging. https://doi.org/10.2139/ssrn.4801289


Karkalousos, D., Isgum, I., Marquering, H., & Caan, M. W. A. (2024, April 27). The Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC): A Deep Learning framework to facilitate Magnetic Resonance Imaging. Medical Imaging with Deep Learning. https://openreview.net/forum?id=HxTZr9yA0N


Karkalousos, D., Isgum, I., Marquering, H. & Caan, M.W.A.. (2024). MultiTask Learning for accelerated-MRI Reconstruction and Segmentation of Brain Lesions in Multiple Sclerosis. Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 227:991-1005 Available from https://proceedings.mlr.press/v227/karkalousos24a.html.


Zhang, C., Karkalousos, D., Bazin, P. L., Coolen, B. F., Vrenken, H., Sonke, J. J., Forstmann, B. U., Poot, D. H. J., & Caan, M. W. A. (2022). A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine. NeuroImage, 264. DOI


Karkalousos, D., Noteboom, S., Hulst, H. E., Vos, F. M., & Caan, M. W. A. (2022). Assessment of data consistency through cascades of independently recurrent inference machines for fast and robust accelerated MRI reconstruction. Physics in Medicine & Biology. DOI


📧 Contact
For any questions, please contact Dimitris Karkalousos @ [email protected].
⚠️🙏 Disclaimer & Acknowledgements

Note: ATOMMIC is built on top of NeMo. NeMo is under the Apache 2.0 license, so we are allowed to use it. We also assume that we can use the NeMo documentation as a basis, as long as we cite it and always refer to the baselines everywhere in the code and docs. ATOMMIC also includes implementations of reconstruction methods from fastMRI and DIRECT, and segmentation methods from MONAI, as well as other codebases, which are always cited in the corresponding files. All methods in ATOMMIC are reimplemented rather than called from the original libraries, allowing for full reproducibility, support, and easy extension. ATOMMIC is an open-source project under the Apache 2.0 license.
