Open solution to the CrowdAI Mapping Challenge
An open source, ready-to-use, and extendable solution to this competition. By itself it should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments.
- clone this repository:
git clone https://github.com/neptune-ml/open-solution-mapping-challenge.git
- install requirements
- register on Neptune (if you wish to use it) and log in via:
$ neptune login
- download/upload the competition data and change data-related paths in the configuration file `neptune.yaml`
- Prepare the target masks and metadata:
$ neptune experiment run main.py prepare_masks
$ neptune experiment run main.py prepare_metadata \
--train_data \
--valid_data \
--test_data
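The `prepare_masks` step turns the building annotations into binary target masks for training. A minimal sketch of the underlying idea, using a hypothetical pure-Python point-in-polygon rasterizer (`polygon_to_mask` is illustrative only; the repository's actual implementation differs):

```python
def polygon_to_mask(polygon, height, width):
    """Rasterize a polygon (list of (x, y) vertices) into a binary mask.

    Uses an even-odd ray-casting test per pixel center; illustrative
    only -- real pipelines use optimized rasterizers.
    """
    mask = [[0] * width for _ in range(height)]
    n = len(polygon)
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # test the pixel center
            inside = False
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                # does a horizontal ray from the pixel center cross this edge?
                if (y1 > py) != (y2 > py):
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < x_cross:
                        inside = not inside
            if inside:
                mask[y][x] = 1
    return mask

# a 4x4 building footprint inside an 8x8 tile
mask = polygon_to_mask([(2, 2), (6, 2), (6, 6), (2, 6)], 8, 8)
print(sum(sum(row) for row in mask))  # 16
```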
- Put your competition API key in the configuration file
- run the experiment (for example via Neptune):
$ neptune experiment run \
main.py train_evaluate_predict --pipeline_name unet --chunk_size 5000 --submit
- check your leaderboard score!
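The `--submit` flag used above sends the predictions to the competition grader. Predicted instance masks for challenges like this are commonly serialized with run-length encoding; `rle_counts` below is a hypothetical illustration of that encoding, not the repository's submission code:

```python
def rle_counts(flat_mask):
    """Run-length encode a flat binary mask as alternating run counts.

    Follows the COCO convention of starting with the count of background
    (zero) pixels, which may be 0 if the mask starts with foreground.
    Illustrative only.
    """
    counts = []
    current, run = 0, 0
    for pixel in flat_mask:
        if pixel == current:
            run += 1
        else:
            counts.append(run)
            current, run = pixel, 1
    counts.append(run)
    return counts

print(rle_counts([0, 0, 1, 1, 1, 0, 1]))  # [2, 3, 1, 1]
```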
Installation
- clone this repository:
git clone https://github.com/neptune-ml/open-solution-mapping-challenge.git
- install PyTorch and torchvision
- install requirements:
pip3 install -r requirements.txt
- register on Neptune (if you wish to use it) and log in via:
$ neptune login
- open Neptune and create a new project called `Mapping Challenge` with project key `MC`
- download the data from the competition site
- upload the data to Neptune (if you want to run computations in the cloud) via:
$ neptune data upload YOUR/DATA/FOLDER
- change the paths in `neptune.yaml`:
data_dir: /path/to/data
meta_dir: /path/to/data
masks_overlayed_dir: /path/to/masks_overlayed
experiment_dir: /path/to/work/dir
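Neptune reads these values itself; if you want to double-check the paths from Python, a tiny parser for this flat `key: value` layout can look like the sketch below (`read_flat_yaml` is a hypothetical helper; use a proper YAML library such as PyYAML for anything beyond this flat case):

```python
def read_flat_yaml(text):
    """Parse flat 'key: value' lines into a dict.

    Skips blank lines and comments; handles only the flat mapping
    shown above, not full YAML.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

sample = """\
data_dir: /path/to/data
meta_dir: /path/to/data
masks_overlayed_dir: /path/to/masks_overlayed
experiment_dir: /path/to/work/dir
"""
cfg = read_flat_yaml(sample)
print(cfg["data_dir"])  # /path/to/data
```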
- run experiment:
- local machine with neptune
$ neptune login
$ neptune experiment run \
main.py -- train_evaluate_predict --pipeline_name unet --chunk_size 5000
- cloud via neptune
$ neptune login
$ neptune experiment send --config neptune.yaml \
--worker gcp-large \
--environment pytorch-0.2.0-gpu-py3 \
main.py -- train_evaluate_predict --pipeline_name solution_1 --chunk_size 5000
- local pure python
$ python main.py train_evaluate_predict --pipeline_name unet --chunk_size 5000
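In all the commands above, `--chunk_size 5000` makes the pipeline process the test images in fixed-size batches so predictions fit in memory. The batching idea can be sketched as follows (`chunks` is a hypothetical helper, not the repo's code):

```python
def chunks(items, chunk_size):
    """Yield consecutive fixed-size slices of items; the last may be shorter."""
    for start in range(0, len(items), chunk_size):
        yield items[start:start + chunk_size]

# e.g. 12 test image ids in chunks of 5
test_ids = list(range(12))
sizes = [len(c) for c in chunks(test_ids, 5)]
print(sizes)  # [5, 5, 2]
```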
There are several ways to seek help:
- CrowdAI discussion is our primary way of communication.
- You can submit an issue directly in this repo.
- Check CONTRIBUTING for more information.
- Check issues and projects to see if there is something you would like to contribute to.