intensity-normalization 2.2.4
This package contains various methods to normalize the intensity of several modalities of magnetic resonance (MR)
images, e.g., T1-weighted (T1-w), T2-weighted (T2-w), FLuid-Attenuated Inversion Recovery (FLAIR), and Proton
Density-weighted (PD-w).
The basic functionality of this package can be summarized by the following image, where the left-hand side shows the
histograms of the intensities for a set of unnormalized images (from the same scanner with the same protocol!) and the
right-hand side shows the histograms after (FCM) normalization.
We used this package to explore the impact of intensity normalization on a synthesis task (pre-print
available here).
Note that while this release was carefully inspected, there may be bugs. Please submit an issue if you encounter a
problem.
Methods
We implement the following normalization methods (the names of the corresponding command-line interfaces are to the
right in parentheses):
Individual time-point normalization methods
Z-score normalization (zscore-normalize)
Fuzzy C-means (FCM)-based tissue mean normalization (fcm-normalize)
Kernel Density Estimate (KDE) WM mode normalization (kde-normalize)
WhiteStripe [1] (ws-normalize)
Sample-based normalization methods
Least squares (LSQ) tissue mean normalization (lsq-normalize)
Piecewise Linear Histogram Matching (Nyúl & Udupa) [2] [3] (nyul-normalize)
RAVEL [4] (ravel-normalize)
Individual time-point methods normalize images based on one time-point of one subject.
Sample-based methods normalize images based on a set of images of (usually) multiple subjects of the same
modality.
Recommendation on where to start: If you are unsure which method to choose for your application, try FCM-based WM
mean normalization (assuming you have access to a T1-w image for all the time-points). If you are getting odd results
in non-WM tissues, try least squares (LSQ) tissue mean normalization (which minimizes the least squares distance
between the CSF, GM, and WM tissue means within a set).
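To make the idea concrete, tissue mean normalization reduces to scaling an image so that the mean intensity of a
chosen tissue (e.g., WM) lands at a fixed value. Here is a minimal numpy sketch (illustrative only, not the package's
code; the actual FCM method estimates the tissue mask itself via fuzzy C-means rather than taking it as input):

```python
import numpy as np

def tissue_mean_normalize(
    image: np.ndarray, tissue_mask: np.ndarray, norm_value: float = 1.0
) -> np.ndarray:
    """Scale `image` so the mean intensity inside `tissue_mask` is `norm_value`.

    Illustrative sketch of tissue mean normalization; the package's FCM
    method estimates the WM mask itself with fuzzy C-means clustering.
    """
    tissue_mean = image[tissue_mask > 0].mean()
    return image * (norm_value / tissue_mean)

# Toy example: the "WM" voxels have mean 200, so they scale to mean 1.0.
img = np.array([[100.0, 200.0], [200.0, 50.0]])
wm = np.array([[0, 1], [1, 0]])
normalized = tissue_mean_normalize(img, wm)
print(normalized[wm > 0].mean())  # → 1.0
```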
Read about the methods and how they work.
If you have a non-standard modality, e.g., a contrast-enhanced image, read about how the methods work and
determine which method would work for your use case. Make sure you plot the foreground intensities (with
the -p option in the CLI or the HistogramPlotter in the Python API) to validate the normalization results.
All algorithms except Z-score (zscore-normalize) and the Piecewise Linear Histogram Matching
(nyul-normalize) are specific to images of the brain.
Motivation
Intensity normalization is an important pre-processing step in many MR image processing applications since MR images
have an inconsistent intensity scale across (and within) sites and scanners due to, e.g.:
the use of different equipment,
different pulse sequences and scan parameters,
and a different environment in which the machine is located.
Importantly, the inconsistency in intensities isn’t a feature of the data (unless you want to classify the
scanner/site from which an image came)—it’s an artifact of the acquisition process. The inconsistency causes a problem
with machine learning-based image processing methods, which usually assume the data was gathered i.i.d. from some
distribution.
Install
The easiest way to install the package is through the following command:
pip install intensity-normalization
To install from the source directory, clone the repo and run:
python setup.py install
Note that the package antspy is required for the RAVEL normalization routine, the preprocessing tool, and the
co-registration tool, but all other normalization and processing tools work without it. To install antspy along with
the RAVEL, preprocessing, and co-registration CLIs, install with:
pip install "intensity-normalization[ants]"
Basic Usage
See the 5 minute overview
for a more detailed tutorial.
In addition to the above small tutorial, here is consolidated
documentation.
Call any executable script with the -h flag to see more detailed instructions about the proper call.
Note that brain masks (or already skull-stripped images) are required for most of the normalization methods. The
brain masks do not need to be perfect, but each mask needs to remove most of the tissue outside the brain. Assuming you
have T1-w images for each subject, an easy and robust method for skull-stripping
is ROBEX [5].
If the images are already skull-stripped, you don’t need to provide a brain mask. The foreground will be
automatically estimated and used.
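The exact foreground-estimation heuristic isn't spelled out here, but a common sketch is a simple intensity threshold
over a skull-stripped image. The function name and threshold choice below are illustrative, not the package's:

```python
import numpy as np

def estimate_foreground(image: np.ndarray) -> np.ndarray:
    """Crude foreground mask: voxels brighter than the mean intensity.

    Illustrative only; a real implementation might smooth the image first
    or use, e.g., Otsu thresholding instead of the mean.
    """
    return image > image.mean()

# Toy skull-stripped "volume": zero background plus a bright patch.
vol = np.zeros((4, 4))
vol[1:3, 1:3] = 100.0
fg = estimate_foreground(vol)
print(int(fg.sum()))  # → 4
```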
You can install ROBEX (and get Python bindings for it at the same time) with the
package pyrobex (installable via pip install pyrobex).
Individual time-point normalization methods
Example call to an individual time-point normalization CLI:
fcm-normalize t1w_image.nii -m brain_mask.nii
Sample-based normalization methods
Example call to a sample-based normalization CLI:
nyul-normalize images/ -m masks/ -o nyul_normalized/ -v
where images/ is a directory containing N MR images, masks/ is a directory containing the N corresponding brain masks,
nyul_normalized/ is the output directory for the normalized images, and -v controls the verbosity of the output.
The command line interface is standard across all sample-based normalization routines (i.e., you should be able to run
all sample-based normalization routines with the same call as in the above example); however, each has unique
method-specific options.
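The piecewise linear histogram matching behind nyul-normalize can be sketched in a few lines: fit a "standard scale"
by averaging intensity landmarks (percentiles) over the training images, then piecewise-linearly map each new image's
own landmarks onto that scale. The percentile choices and function names below are illustrative and do not match the
package's defaults:

```python
import numpy as np

LANDMARK_PERCENTILES = [1, 25, 50, 75, 99]  # illustrative landmark choice

def fit_standard_scale(images: list) -> np.ndarray:
    """Average the landmark intensities over a sample of images."""
    landmarks = [np.percentile(img, LANDMARK_PERCENTILES) for img in images]
    return np.mean(landmarks, axis=0)

def nyul_like_normalize(image: np.ndarray, standard_scale: np.ndarray) -> np.ndarray:
    """Piecewise-linearly map this image's landmarks onto the standard scale."""
    own_landmarks = np.percentile(image, LANDMARK_PERCENTILES)
    return np.interp(image, own_landmarks, standard_scale)

# Two toy "images" with very different intensity scales become comparable:
# each image's median is mapped onto the shared median landmark.
rng = np.random.default_rng(0)
imgs = [rng.normal(100.0, 10.0, 1000), rng.normal(300.0, 30.0, 1000)]
scale = fit_standard_scale(imgs)
out = nyul_like_normalize(imgs[1], scale)
```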
Potential Pitfalls
This package was developed to process adult human MR images; neonatal, pediatric, and animal MR images should
also work but—if the data has different proportions of tissues or differences in relative intensity among tissue
types compared with adults—the normalization may fail. The nyul-normalize method, in particular, will fail hard if
you train it on adult data and test it on non-adult data (or vice versa). Please open an issue if you encounter a
problem with the package when normalizing non-adult human data.
When we refer to any specific modality, we are referring to the non-contrast version unless otherwise stated. Using
a contrast-enhanced image as input to a method that assumes a non-contrast image will produce suboptimal results.
Two potential ways to normalize contrast-enhanced images with this package are to 1) find a tissue that is not
affected by the contrast agent (e.g., grey matter) and normalize based on some summary statistic of it (where the
tissue mask was found on a non-contrast image); or 2) use a simplistic (but non-robust) method like Z-score
normalization.
Read about the methods and how they work
to decide which method would work best for your contrast-enhanced images.
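Z-score normalization (option 2 above) is simple enough to sketch directly: subtract the mean and divide by the
standard deviation of the intensities inside the brain mask. This is an illustrative sketch, not the package's
implementation:

```python
import numpy as np

def zscore_normalize(image: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Standardize intensities using the mean and standard deviation
    computed over the brain mask. Illustrative sketch only."""
    brain = image[brain_mask > 0]
    return (image - brain.mean()) / brain.std()

# After normalization, the in-mask intensities have mean ~0 and std ~1.
rng = np.random.default_rng(0)
img = rng.normal(120.0, 15.0, size=(8, 8, 8))
mask = np.ones_like(img)
out = zscore_normalize(img, mask)
```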
Contributing
Help wanted! See CONTRIBUTING.rst
for details and/or reach out to me if you’d like to contribute. Credit will be given!
If you want to add a method, I’ll be happy to add your reference to the citation
section below.
Test Package
Unit tests can be run from the main directory as follows:
pytest tests
Citation
If you use the intensity-normalization package in an academic paper, please cite the
corresponding paper:
@inproceedings{reinhold2019evaluating,
title={Evaluating the impact of intensity normalization on {MR} image synthesis},
author={Reinhold, Jacob C and Dewey, Blake E and Carass, Aaron and Prince, Jerry L},
booktitle={Medical Imaging 2019: Image Processing},
volume={10949},
pages={109493H},
year={2019},
organization={International Society for Optics and Photonics}}
References
[1]
R. T. Shinohara, E. M. Sweeney, J. Goldsmith, N. Shiee, F. J. Mateen, P. A. Calabresi, S. Jarso, D. L. Pham, D. S.
Reich, and C. M. Crainiceanu, “Statistical normalization techniques for magnetic resonance imaging,” NeuroImage Clin.,
vol. 6, pp. 9–19, 2014.
[2]
L. G. Nyúl and J. K. Udupa, “On Standardizing the MR Image Intensity Scale,” Magn. Reson. Med., vol. 42, pp.
1072–1081, 1999.
[3]
M. Shah, Y. Xiao, N. Subbanna, S. Francis, D. L. Arnold, D. L. Collins, and T. Arbel, “Evaluating intensity
normalization on MRIs of human brain with multiple sclerosis,” Med. Image Anal., vol. 15, no. 2, pp. 267–282, 2011.
[4]
J. P. Fortin, E. M. Sweeney, J. Muschelli, C. M. Crainiceanu, and R. T. Shinohara, “Removing inter-subject technical
variability in magnetic resonance imaging studies,” NeuroImage, vol. 132, pp. 198–212, 2016.
[5]
J. E. Iglesias, C.-Y. Liu, P. M. Thompson, and Z. Tu, “Robust brain extraction across datasets and comparison with
publicly available methods,” IEEE Trans. Med. Imaging, vol. 30, no. 9, pp. 1617–1634, 2011.
History
2.2.4 (2023-05-31)
Update to allow Python >=3.9
2.2.3 (2022-03-15)
Revert error on different image shapes from RavelNormalize; it is required!
2.2.2 (2022-03-15)
Remove plural from Modality and TissueType enumerations.
Update tutorials to use the Modality and TissueType enumerations.
Remove error on different image shapes from RavelNormalize when registration enabled.
2.2.1 (2022-03-14)
Update documentation to support modifications to Python API
Update dependencies
Remove incorrect warning from WhiteStripe normalization
2.2.0 (2022-02-25)
Change backend to pymedio to support more medical image formats
2.1.4 (2022-01-17)
Fix testing bugs in 2.1.3 and cleanup some interfaces
2.1.3 (2022-01-17)
Cleanup Makefile and dependencies
Add py.typed file
2.1.2 (2022-01-03)
Updates for security
2.1.1 (2021-10-20)
Fix warning about negative values when not passing in a mask
Remove redundant word from Nyul normalize keyword arguments
2.1.0 (2021-10-13)
Restructure CLIs for faster startup and improve handling of missing antspy
add option to CLIs to print version
2.0.3 (2021-10-11)
Improve Nyul docs and argument/keyword names
2.0.2 (2021-09-27)
Fix and improve documentation
Add an escape-hatch “other” modality
Add peak types as modalities in KDE & WS
2.0.1 (2021-08-31)
Save and load fit artifacts for LSQ and Nyul for both the CLIs and Python API
2.0.0 (2021-08-22)
Major refactor to reduce redundancy, make code more usable outside of the CLIs, and generally improve code quality.
1.4.0 (2021-03-16)
First release on PyPI.