Universal Lesion Segmentation [ULS23 Baseline]



About

Creator:

MJJdG

Contact email:
Image Version:
db127463-072c-4b88-b831-953821d4e320
Last updated:
March 4, 2024, 1:08 p.m.
Inputs:
  • Stacked 3D Volumetric Spacings  (JSON describing the voxel spacing of each 3D volume, used to accurately reduce a 4D stack to multiple 3D volumes. Example format: [[1.0, 1.0, 1.0], [3.4, 3.4, 6.5]] describes the spacing of two 3D volumes at t=0 and t=1 (i.e. in the 4th dimension).)
  • Stacked 3D CT volumes of lesions  (3D CT volumes with universal lesions, stacked in the t-dimension. Includes a padding of intensity -1.)
Outputs:
  • CT Universal Lesion Binary Segmentation 

Challenge Performance

Date           Challenge  Phase                                              Rank
March 4, 2024  ULS23      Development Phase (Limited Inference Time Models)  11
April 8, 2024  ULS23      Open Test Phase (Limited Inference Time Models)    4

Model Facts

Summary

This is the baseline algorithm for the ULS23 Challenge. It can be used to segment, in 3D, the various lesion types present in the thorax-abdomen area of CT scans. The model was pre-trained on pseudo-masks generated from partially annotated data using the GrabCut algorithm and subsequently fine-tuned on fully annotated lesion data. On Grand Challenge the model takes approximately 2.5 seconds to segment a VOI. Please see the challenge paper for more details.

Mechanism

This model was trained using the nnU-Net v2 framework. It consists of a 3D full-resolution residual-encoder U-Net with 6 stages, feature sizes ranging from 32 to 384, and a batch size of 3 VOIs. The full VOI is used as input to the model without patching or resampling.

Inputs:

  • The algorithm takes as input either a single CT volume-of-interest (VOI) of (128z, 256x, 256y) voxels or a stack of n VOIs concatenated in the z-dimension, i.e. (128*n z, 256x, 256y) voxels. The center of each VOI is expected to contain a lesion to be segmented by the algorithm.
  • Additionally, a .json file containing the spacing of each of the n VOIs must be provided in the following format: [[x-spacing (e.g. 0.74), y-spacing (e.g. 0.74), z-spacing (e.g. 3)], [..., ..., ...]]
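As a minimal sketch of preparing these two inputs, the snippet below stacks two hypothetical VOIs along the z-axis and writes the matching spacings JSON. The array contents, spacing values, and the file name `spacings.json` are illustrative assumptions, not part of the challenge interface.

```python
import json
import numpy as np

# Two hypothetical VOIs of (128, 256, 256) voxels each (z, x, y);
# real VOIs would be cropped from CT scans around a selected lesion.
voi_a = np.zeros((128, 256, 256), dtype=np.int16)
voi_b = np.zeros((128, 256, 256), dtype=np.int16)

# Stack along the z-axis: the shape becomes (128 * n, 256, 256).
stacked = np.concatenate([voi_a, voi_b], axis=0)

# One [x, y, z] spacing entry per VOI, in the same order as the stack.
spacings = [[0.74, 0.74, 3.0], [1.0, 1.0, 1.0]]

with open("spacings.json", "w") as f:
    json.dump(spacings, f)

print(stacked.shape)  # (256, 256, 256)
```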

Output:

  • The algorithm outputs a binary segmentation mask (0 = background, 1 = lesion). If multiple stacked volumes are provided, the output consists of masks stacked in the same format as the input.
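Since the output mirrors the stacked input format, recovering per-VOI masks is a matter of slicing the z-axis back into blocks of 128. The helper below is a hypothetical sketch, assuming the (128, 256, 256) VOI shape described above.

```python
import numpy as np

def split_stacked_mask(mask, n_vois, voi_depth=128):
    """Split a stacked binary mask of shape (voi_depth * n, 256, 256)
    back into a list of per-VOI masks (hypothetical helper)."""
    assert mask.shape[0] == voi_depth * n_vois
    return [mask[i * voi_depth:(i + 1) * voi_depth] for i in range(n_vois)]

# Example: a stacked output for 3 VOIs (all-background here for brevity).
stacked_mask = np.zeros((384, 256, 256), dtype=np.uint8)
per_voi = split_stacked_mask(stacked_mask, n_vois=3)
print(len(per_voi), per_voi[0].shape)  # 3 (128, 256, 256)
```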

Source Code is available in the challenge repository on GitHub.

Validation and Performance

This model was evaluated using 10% of each fully annotated dataset in the ULS23 training dataset, split at the patient level. The scores are aggregated per lesion type. For the results on the ULS23 test set, please refer to our publication.

Lesion Type   Dice            N
Kidney        0.724 ± 0.257   33
Lung          0.754 ± 0.149   267
Lymph Node    0.698 ± 0.182   77
Bone          0.648 ± 0.267   86
Liver         0.650 ± 0.184   54
Pancreas      0.616 ± 0.196   38
Colon         0.518 ± 0.208   12

Uses and Directions

This algorithm was developed for research purposes only.
  • The intended use of this model is to automatically segment, in 3D, a lesion selected by either a human or a detection model. As such, the algorithm always expects the VOI to contain a lesion to be segmented at the center of the volume.
  • This model was trained on data from multiple institutes, covering a variety of pixel spacings, scanners and scanning protocols.
  • For optimal performance, when padding the VOI to the required size, use the volume's minimum intensity minus 1.
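The padding direction above can be sketched as follows. This is a minimal, hypothetical helper that center-pads a cropped VOI up to the expected input size using the volume's minimum intensity minus 1; the function name and centering choice are assumptions.

```python
import numpy as np

def pad_voi(voi, target=(128, 256, 256)):
    """Center-pad a VOI up to the expected input size, filling with
    the volume's minimum intensity minus 1 (hypothetical helper)."""
    pad_value = int(voi.min()) - 1
    pads = []
    for dim, tgt in zip(voi.shape, target):
        total = max(tgt - dim, 0)
        pads.append((total // 2, total - total // 2))
    return np.pad(voi, pads, mode="constant", constant_values=pad_value)

# Example: a cropped VOI smaller than the required (128, 256, 256).
voi = np.random.randint(-1000, 1000, size=(100, 200, 200), dtype=np.int16)
padded = pad_voi(voi)
print(padded.shape)  # (128, 256, 256)
```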

Warnings

  • Grand Challenge (as of Jan. 2024) does not handle uncompressed .mha files larger than 4 GB. To prevent your jobs from timing out during image import, do not batch more than 100 VOIs per job.
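A quick back-of-the-envelope check shows why 100 VOIs stays within the limit, assuming int16 voxels and the (128, 256, 256) VOI shape (actual .mha header overhead is ignored):

```python
# Size of one VOI: 128 * 256 * 256 voxels at 2 bytes (int16) each.
voxels_per_voi = 128 * 256 * 256
bytes_per_voi = voxels_per_voi * 2   # 16 MiB per VOI

n_vois = 100
total_gib = n_vois * bytes_per_voi / 1024**3
print(f"{total_gib:.2f} GiB")  # 1.56 GiB, safely under the 4 GB limit
```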

Common Error Messages

Please contact the editors if you receive error messages during usage of the algorithm.

Information on this algorithm has been provided by the Algorithm Editors, following the Model Facts label guidelines from Sendak, M.P., Gao, M., Brajer, N. et al. Presenting machine learning model information to clinical end users with model facts labels. npj Digit. Med. 3, 41 (2020). doi:10.1038/s41746-020-0253-3