
FAQ Summary

What is a MONAI Label App?

It is a software application designed to run on the MONAI Label server. It is where researchers and developers define their own pipelines to facilitate the image annotation process. They can use the provided Slicer MONAI Label plugin, or customize their own, to process the inputs and outputs sent to the App. An example of this is DeepEdit, which uses clicks provided by the user/clinician to facilitate image annotation.

What is the difference between DeepGrow and DeepEdit?

DeepGrow is a click-based interactive segmentation model in which the user guides the segmentation with positive and negative clicks. Positive clicks steer the segmentation towards the region of interest, while negative clicks exclude background regions (Sakinis, T. et al., "Interactive segmentation of medical images through fully convolutional neural networks", arXiv e-prints, 2019).

DeepEdit is an algorithm that combines DeepGrow (an interactive model) with a standard segmentation model that uses only the images to segment the region of interest. This means DeepEdit allows the user to perform both interactive and standard segmentation.

Does MONAI Label work for multiple labels?

Yes, MONAI Label supports multi-label segmentation algorithms (please see the heart ventricles App). The provided endpoints allow users to create dynamic UIs to interact with their own App. However, the available interactive paradigms, such as DeepGrow and DeepEdit, currently work on single-label annotation.

Which image modalities does MONAI Label support?

Users can develop MONAI Label Apps for any modality they want. Version 0.1 of MONAI Label shipped Apps that work on CT and MR images. There is also a use case showing how MONAI Label works on ultrasound images (Link 1 and Link 2).

Where should the dataset be located: on the server or the client side?

Researchers/clinicians can place their images/studies either in the file archive or on the DICOMweb server (Orthanc). When using the file archive, the dataset can live on either side: the MONAI Label server or the client. Currently, the Slicer plugin allows users to upload both images and manual labels to the server. If the dataset is in the file archive, researchers should place the labels in a subfolder called labels/final, using the same file names as the corresponding images.
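For illustration, a minimal sketch of a file-archive layout following this convention (the dataset folder and image file names here are hypothetical):

```
my_dataset/
├── image_01.nii.gz
├── image_02.nii.gz
└── labels/
    └── final/
        ├── image_01.nii.gz    # label for image_01, same file name
        └── image_02.nii.gz    # label for image_02, same file name
```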

Does MONAI Label support multimodality images?

Yes, but this depends more on the UI (viewer) than on the MONAI Label Apps themselves. Users can easily create Apps that process multimodality images; the challenge is finding a viewer that can visualize multimodality images.

What is the difference between MONAI Label and the NVIDIA Clara AI-Assisted Annotation (AIAA) tool?

The main difference is that MONAI Label allows users to create Apps to train and test their own models or workflows, whereas NVIDIA Clara AIAA was mainly developed to perform inference. Another difference is that MONAI Label targets a single researcher/clinician working on a workstation, whereas NVIDIA Clara AIAA can scale to multiple annotators served from the same annotation server using different clients.

Can I use libraries other than MONAI to create a MONAI Label App?

Yes, researchers can use any library they like. However, the MONAI Label team encourages the use of MONAI as the primary library for creating AI models.

Does MONAI Label support other inputs such as ROI, Line, or closed curves from Slicer?

Yes, users can send the server any file their App uses as input: points, segmentation nodes, ROIs, etc. To do this, users can start from the current Slicer module and create a custom Slicer module that communicates with their MONAI Label App. See the How to create more dynamic Slicer plugins page.
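Independently of the Slicer side, these extra inputs ultimately reach the App through the server's REST API. Below is a hedged sketch, not taken from this wiki, of posting foreground/background click points to an infer endpoint with plain Python requests. The server URL, model name, image id, and the foreground/background parameter keys are assumptions modeled on the DeepGrow/DeepEdit convention; check the interactive API docs of your running server (its /docs page) and your App's code for the exact shape.

```python
import json

import requests

SERVER = "http://127.0.0.1:8000"  # hypothetical MONAI Label server address

# Extra inputs for the App, serialized as JSON in the 'params' form field.
# DeepGrow/DeepEdit-style Apps typically read click coordinates from keys
# like these; your custom App can define whatever keys it needs (ROIs, etc.).
params = {
    "foreground": [[66, 180, 105]],  # clicks inside the region of interest
    "background": [],                # clicks marking background, if any
}

# POST /infer/{model}?image={image_id}; 'deepedit' and 'image_01' are
# placeholders for your model name and an image id known to the datastore.
resp = requests.post(
    f"{SERVER}/infer/deepedit?image=image_01",
    files={"params": (None, json.dumps(params))},
)
resp.raise_for_status()

# The response carries the predicted label; depending on the server's
# 'output' option it may be a raw label file or a multipart payload.
with open("result_label.nii.gz", "wb") as f:
    f.write(resp.content)
```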

How do I create a fixed validation split while training and updating models?

Currently, the 'Model Update' process randomly selects the 3D volumes for validation based on the validation-split hyper-parameter. However, a user/developer may want to fix the validation split, for example to observe consistent performance improvements over a fixed set.

This can be done easily in the custom App being created by the user. The template for a custom App always has a main.py. For example, see the following line of code in the interface app.py:

```python
# Datalist for train/validation
partition_dataset(datalist, ratios=[(1 - val_split), val_split], shuffle=shuffle)
```

By overriding the partition_datalist method in the App class in main.py, users can assign any split for training and validation, e.g. by supplying their own datalist of the form [{'image': ImageTensor1, 'label': LabelTensor1}, {'image': ImageTensor2, 'label': LabelTensor2}] for both train and validation.
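A minimal sketch of such an override, assuming an App class that inherits from MONAILabelApp and a partition_datalist(self, datalist, val_split, shuffle) signature matching the interface line quoted above (the exact signature may differ across MONAI Label versions); the validation file names are hypothetical:

```python
import os

from monailabel.interfaces.app import MONAILabelApp


class MyApp(MONAILabelApp):
    def partition_datalist(self, datalist, val_split, shuffle=True):
        # Ignore val_split/shuffle and pin a hand-picked validation set,
        # so every 'Model Update' validates on the same volumes.
        val_images = {"volume_10.nii.gz", "volume_11.nii.gz"}  # hypothetical
        train = [d for d in datalist if os.path.basename(d["image"]) not in val_images]
        val = [d for d in datalist if os.path.basename(d["image"]) in val_images]
        return train, val
```

Because the same volumes are returned for validation on every call, the validation metrics reported across successive model updates are directly comparable.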