
Dev clean tta #134

Merged: 6 commits merged into dev-repo_cleanup from dev-clean_tta on Jun 15, 2018

Conversation

jakubczakon (Collaborator)
  • speed up tta
  • refactored tta in pipelines
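
For context, test-time augmentation (TTA) here means predicting on several transformed copies of each image and averaging the results. The sketch below is a generic flip-based illustration only, not the repo's implementation; predict_proba is a hypothetical single-image inference function standing in for the network wrapped by the pipeline.

    import numpy as np

    def predict_with_tta(predict_proba, image):
        """Average predictions over flipped copies of `image` (H x W x C),
        undoing each flip on the predicted mask (H x W) before averaging."""
        transforms = [
            (lambda x: x,             lambda y: y),              # identity
            (lambda x: x[:, ::-1],    lambda y: y[:, ::-1]),     # horizontal flip
            (lambda x: x[::-1, :],    lambda y: y[::-1, :]),     # vertical flip
            (lambda x: x[::-1, ::-1], lambda y: y[::-1, ::-1]),  # both flips
        ]
        predictions = [inverse(predict_proba(forward(image)))
                       for forward, inverse in transforms]
        return np.mean(predictions, axis=0)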

src/pipelines.py (outdated)

    cache_dirpath=config.env.cache_dirpath,
    save_output=save_output,
    cache_output=True)
    cache_dirpath=config.env.cache_dirpath, **kwargs)
taraspiotr (Contributor)

@jakubczakon Here we would like to cache the mask_resize output, and only this output, because we reuse it in score_builder. Other caches would only occupy memory. That's why I think we should set the cache_output parameter separately, especially for mask_resize.

jakubczakon (Collaborator, Author)

@taraspiotr agreed, on it.
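
A minimal sketch of the suggested change, assuming the Step constructor already used in src/pipelines.py (its cache_dirpath, save_output and cache_output keyword arguments appear in the hunk above); the transformer classes and neighbouring step names are placeholders, not the repo's exact code. Only mask_resize caches its output, since score_builder reuses it; other steps leave caching off to save memory.

    # Hypothetical wiring; assumes the Step class and transformers are already
    # imported in pipelines.py, as in the rest of the file.
    mask_resize = Step(name='mask_resize',
                       transformer=Resizer(),                   # placeholder transformer
                       input_steps=[mask_postprocessing],
                       cache_dirpath=config.env.cache_dirpath,
                       save_output=save_output,
                       cache_output=True)                       # reused by score_builder

    score_builder = Step(name='score_builder',
                         transformer=ScoreBuilder(),            # placeholder transformer
                         input_steps=[mask_resize],
                         cache_dirpath=config.env.cache_dirpath,
                         save_output=save_output,
                         cache_output=False)                    # not reused, skip caching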

    'unet_padded': {'inference': unet_padded,
                    },
    'unet_padded_tta': {'inference': unet_padded_tta,
                        },
taraspiotr (Contributor)

@jakubczakon I think the unet_padded_tta pipeline is still going to be used. Right now it gives worse results because of scale problems, but for generalization, especially if we add scaling to TTA, padding should still improve our score.

jakubczakon (Collaborator, Author)

@taraspiotr you can still use it by setting

    loader_mode: crop_and_pad

in neptune.yaml.
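
For illustration only (the function, stride multiple, and padding mode below are assumptions, not the repo's code): the crop_and_pad loader mode corresponds to pad-then-crop inference, where each image is padded up to a size the network accepts, predicted on at full resolution, and the mask is cropped back to the original shape.

    import numpy as np

    def predict_padded(predict_proba, image, multiple=64):
        """Pad H and W up to a multiple of `multiple` with reflect padding,
        run inference, then crop the prediction back to the original size."""
        h, w = image.shape[:2]
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode='reflect')
        prediction = predict_proba(padded)   # hypothetical single-image inference call
        return prediction[:h, :w]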

jakubczakon merged commit 7d9a08c into dev-repo_cleanup on Jun 15, 2018
jakubczakon added a commit that referenced this pull request Jun 15, 2018
* initial restructure

* clean structure (#126)

* clean structure

* correct readme

* further cleaning

* Dev apply transformer (#131)

* clean structure

* correct readme

* further cleaning

* resizer docstring

* couple docstrings

* make apply transformer, memory cache

* fixes

* postprocessing docstrings

* fixes in PR

* Dev repo cleanup (#132)

* cleanup

* remove src.

* Dev clean tta (#134)

* added resize padding, refactored inference pipelines

* refactored pipelines

* added color shift augmentation

* reduced caching to just mask_resize

* updated config

* Dev-repo_cleanup models and losses docstrings (#135)

* models and losses docstrings

* small fixes in docstrings

* resolve conflicts with TTA PR (#137)
jakubczakon deleted the dev-clean_tta branch on June 15, 2018, 14:24
jakubczakon added a commit that referenced this pull request Jun 19, 2018
* added gmean tta, experimented with thresholding (#125)

* Dev repo cleanup (#138)

* initial restructure

* clean structure (#126)

* clean structure

* correct readme

* further cleaning

* Dev apply transformer (#131)

* clean structure

* correct readme

* further cleaning

* resizer docstring

* couple docstrings

* make apply transformer, memory cache

* fixes

* postprocessing docstrings

* fixes in PR

* Dev repo cleanup (#132)

* cleanup

* remove src.

* Dev clean tta (#134)

* added resize padding, refactored inference pipelines

* refactored pipelines

* added color shift augmentation

* reduced caching to just mask_resize

* updated config

* Dev-repo_cleanup models and losses docstrings (#135)

* models and losses docstrings

* small fixes in docstrings

* resolve conflicts with TTA PR (#137)

* refactor in stream mode (#139)

* hot fix of mask_postprocessing in tta with new make transformer

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* local

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Dev preparation path fix (#140)

* local

* cleaned up paths in the masks and metadata generation

* dropped debug stuff

* Dev non trainable transformer flag (#141)

* local

* added is_trainable flag to models
jakubczakon pushed a commit that referenced this pull request Jun 21, 2018
* initial restructure

* thresholds on unet output

* added gmean tta, experimented with thresholding (#125)

* feature extractor and lightgbm

* pipeline is running ok

* tmp commit

* lgbm ready for tests

* tmp

* faster nms and feature extraction

* small fix

* cleaning

* Dev repo cleanup (#138)

* initial restructure

* clean structure (#126)

* clean structure

* correct readme

* further cleaning

* Dev apply transformer (#131)

* clean structure

* correct readme

* further cleaning

* resizer docstring

* couple docstrings

* make apply transformer, memory cache

* fixes

* postprocessing docstrings

* fixes in PR

* Dev repo cleanup (#132)

* cleanup

* remove src.

* Dev clean tta (#134)

* added resize padding, refactored inference pipelines

* refactored pipelines

* added color shift augmentation

* reduced caching to just mask_resize

* updated config

* Dev-repo_cleanup models and losses docstrings (#135)

* models and losses docstrings

* small fixes in docstrings

* resolve conflicts with TTA PR (#137)

* refactor in stream mode (#139)

* hot fix of mask_postprocessing in tta with new make transformer

* finishing merge

* finishing merge v2

* finishing merge v3

* finishing merge v4

* tmp commit

* lgbm train and evaluate pipelines run correctly

* something is not right

* fix

* working lgbm training with ugly train_mode=True

* back to pipelines.py

* small fix

* preparing PR

* preparing PR v2

* preparing PR v2

* fix

* fix_2

* fix_3

* fix_4