Core codes for DUAL-HEAD FUSION NETWORK FOR IMAGE ENHANCEMENT


zyhrainbow/DFE


Overview

The framework of our proposed method consists of:

  • a dual-head feature extraction module
  • a global color rendering module
  • a context-aware retouching module
  • a spatial attention based fusion

All modules are jointly learned from the annotated data in an end-to-end manner.
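To make the fusion step concrete, here is a minimal NumPy sketch of spatial-attention-based blending of the two heads' outputs. The function and variable names (`spatial_attention_fusion`, `color_out`, `retouch_out`, `attn_logits`) are illustrative assumptions, not the authors' code; the actual modules are learned networks, whereas this sketch only shows how a per-pixel attention map combines the two branches.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(color_out, retouch_out, attn_logits):
    """Blend the two heads' outputs with a per-pixel attention map.

    color_out, retouch_out: (H, W, 3) enhanced images from the two branches.
    attn_logits: (H, W, 1) raw scores predicted by the fusion branch.
    """
    a = sigmoid(attn_logits)               # per-pixel weights in (0, 1)
    return a * color_out + (1.0 - a) * retouch_out

# Toy example: attention strongly favours the color-rendering head.
h, w = 4, 4
color = np.ones((h, w, 3))
retouch = np.zeros((h, w, 3))
logits = np.full((h, w, 1), 10.0)          # sigmoid(10) is close to 1
fused = spatial_attention_fusion(color, retouch, logits)
```

Because the weights broadcast over the channel axis, the same attention map gates all three color channels at each pixel.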

Preparation

Environment

pip install -r requirements.txt

Data

Prepare the dataset in the following format, so that the provided FiveK Dataset class can be used:

- <data_root>
    - input_train
    - input_test
    - target_train
    - target_test

Otherwise, implement your own dataset class for your custom data format / directory arrangement.

Training

Default settings for most hyper-parameters are defined in the parameter.py file. To get started as quickly as possible (with the FiveK dataset), only 'data_root' needs to be modified before training.

python train.py --data_root <path>

By default, the images, models, and logs generated during training are saved in save_root/dataset/name.

Evaluation

To evaluate a trained model at a specific epoch, specify the epoch and keep the other parameters the same as in training.

For example,

python evaluate.py --model *** --epoch 397
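FiveK enhancement results are conventionally reported in PSNR (and often SSIM). The exact metrics computed by evaluate.py are not stated here, so as an assumption, this is a minimal NumPy PSNR implementation for checking a model's output against the target images:

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Higher is better; for 8-bit images `max_val` is 255, while networks that output in [0, 1] should use `max_val=1.0`.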

Visualization & Analysis

  • Comparison with other methods

  • For video results, see ./video
