
PaperEdge


The code and the DIW dataset for "Learning From Documents in the Wild to Improve Document Unwarping" (SIGGRAPH 2022)

[paper] [supplementary material]

Documents In the Wild (DIW) dataset (2.13GB)

link

Pretrained models (139.7MB each)

Enet

Tnet

DocUNet benchmark results

docunet_benchmark_paperedge.zip

The last row of adres.txt contains the aggregate evaluation results; the values in its last three columns are AD (Aligned Distortion), MS-SSIM (Multi-Scale Structural Similarity), and LD (Local Distortion).
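
To read those numbers programmatically, a minimal sketch is shown below. It assumes adres.txt is a plain whitespace-separated text file with the aggregate scores in the last row, which matches the description above but is otherwise an assumption; adjust the path to wherever you extracted the zip.

    from pathlib import Path

    def read_summary_metrics(path="adres.txt"):
        # Last line holds the aggregate results; the last three columns are
        # assumed to be AD, MS-SSIM, and LD, as described above.
        last_row = Path(path).read_text().strip().splitlines()[-1].split()
        ad, ms_ssim, ld = (float(v) for v in last_row[-3:])
        return {"AD": ad, "MS-SSIM": ms_ssim, "LD": ld}

    if __name__ == "__main__":
        print(read_summary_metrics())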

Inference on a single image

  1. Download the pretrained models (Enet and Tnet) to the models directory.
  2. Run demo.py with the following command:
    $ python demo.py --Enet_ckpt 'models/G_w_checkpoint_13820.pt' \
                     --Tnet_ckpt 'models/L_w_checkpoint_27640.pt' \
                     --img_path 'images/1.jpg' \
                     --out_dir 'output'
  3. The unwarped result is written to the output directory (see the comparison image).
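
To unwarp a whole folder of images, one option is to invoke demo.py once per file with the same flags as in step 2. The sketch below reuses the checkpoint paths shown above; the input and output directory names are illustrative, not fixed by the repository.

    # Batch inference sketch: call demo.py once per input image.
    import subprocess
    from pathlib import Path

    IN_DIR = Path("images")   # illustrative input folder
    OUT_DIR = Path("output")  # matches the --out_dir value used above
    OUT_DIR.mkdir(exist_ok=True)

    for img in sorted(IN_DIR.glob("*.jpg")):
        subprocess.run(
            [
                "python", "demo.py",
                "--Enet_ckpt", "models/G_w_checkpoint_13820.pt",
                "--Tnet_ckpt", "models/L_w_checkpoint_27640.pt",
                "--img_path", str(img),
                "--out_dir", str(OUT_DIR),
            ],
            check=True,  # stop if any invocation fails
        )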
