Dev clean tta #134
Conversation
- speed up TTA
- refactor TTA in pipelines
src/pipelines.py (Outdated)

-    cache_dirpath=config.env.cache_dirpath,
-    save_output=save_output,
-    cache_output=True)
+    cache_dirpath=config.env.cache_dirpath, **kwargs)
@jakubczakon Here we would like to cache the mask_resize output, and only this output, because we reuse it in score_builder. Other caches would only occupy memory. That's why I think we should set the cache_output parameter separately, especially for mask_resize.
@taraspiotr agreed, on it.
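A minimal sketch of the idea being discussed (the `Step` class below is a simplified stand-in for the repo's pipeline abstraction, not its actual implementation; the names `cache_output`, `mask_resize`, and `score_builder` follow the comment above, everything else is illustrative): only the step constructed with `cache_output=True` keeps its result in memory, so a downstream consumer such as score_builder can reuse it while every other step recomputes.

```python
# Simplified stand-in for a pipeline Step, illustrating per-step output
# caching: only steps with cache_output=True keep their result in memory
# for reuse by downstream consumers; all others recompute on each call.
class Step:
    def __init__(self, name, transform, inputs=(), cache_output=False):
        self.name = name
        self.transform = transform
        self.inputs = list(inputs)
        self.cache_output = cache_output
        self._cache = None

    def fit_transform(self, data):
        if self.cache_output and self._cache is not None:
            return self._cache  # reused, e.g. by score_builder
        upstream = [step.fit_transform(data) for step in self.inputs]
        output = self.transform(*(upstream or [data]))
        if self.cache_output:
            self._cache = output  # only this step occupies memory
        return output


calls = []

def resize(x):
    calls.append('mask_resize')  # track how often the step really runs
    return x * 2

# Only mask_resize caches; score_builder recomputes but reuses its input.
mask_resize = Step('mask_resize', resize, cache_output=True)
score_builder = Step('score_builder', lambda m: m + 1, inputs=[mask_resize])

score_builder.fit_transform(10)  # computes mask_resize once
score_builder.fit_transform(10)  # reuses the cached mask_resize output
```

Setting `cache_output=True` on every step would pin every intermediate result in memory; flagging just the shared step keeps the reuse without the footprint.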
'unet_padded': {'inference': unet_padded,
                },
'unet_padded_tta': {'inference': unet_padded_tta,
                    },
@jakubczakon I think the unet_padded_tta pipeline is still going to be used. Right now it gives worse results because of scale problems, but for generalization, especially if we add scaling to TTA, I think padding will still improve our score.
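For context, padded TTA inference can be sketched roughly like this (illustrative names only, not the repo's actual API; the geometric-mean aggregation mirrors the gmean TTA mentioned in the commit history): reflect-pad the image so border nuclei see context, predict on flipped copies, undo each flip, crop the padding back off, and aggregate.

```python
import numpy as np

def predict_tta(model, image, pad=8):
    """Reflect-pad, predict on flipped copies, undo the flips, crop,
    and aggregate with a geometric mean (illustrative sketch only)."""
    padded = np.pad(image, pad, mode='reflect')
    preds = []
    for flip_axis in (None, 0, 1):
        aug = padded if flip_axis is None else np.flip(padded, axis=flip_axis)
        pred = model(aug)
        if flip_axis is not None:
            pred = np.flip(pred, axis=flip_axis)  # undo the augmentation
        preds.append(pred[pad:-pad, pad:-pad])  # crop the padding back off
    # geometric mean damps single over-confident views more than the
    # arithmetic mean does
    return np.exp(np.mean(np.log(np.clip(preds, 1e-7, 1.0)), axis=0))
```

Adding scaled copies to the flip loop (resize before prediction, resize back after) would extend the same scheme to the scale problems mentioned above.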
* initial restructure
* clean structure (#126)
* clean structure
* correct readme
* further cleaning
* Dev apply transformer (#131)
* resizer docstring
* couple docstrings
* make apply transformer, memory cache
* fixes
* postprocessing docstrings
* fixes in PR
* Dev repo cleanup (#132)
* cleanup
* remove src.
* Dev clean tta (#134)
* added resize padding, refactored inference pipelines
* refactored pipelines
* added color shift augmentation
* reduced caching to just mask_resize
* updated config
* Dev-repo_cleanup models and losses docstrings (#135)
* models and losses docstrings
* small fixes in docstrings
* resolve conflicts with TTA PR (#137)