Training recipe for specialized model? #5

Closed
Dundalia opened this issue Aug 2, 2023 · 2 comments
Dundalia commented Aug 2, 2023

In the paper you claim:

During training, we employ a batch size of 32 and utilize the AdamW optimizer with a constant learning rate of 5e-5. The model is trained for two epochs.

But in the specialization.py script, the gradient_accumulation_steps parameter is set to 8 and the model is trained for three epochs. Which is the correct training recipe for the deberta-10k-rank_net model?

sunnweiwei (Owner) commented

The training recipe for deberta-10k-rank_net is as described in the paper. We train on 4 GPUs, so the global batch size is 4 * 8 = 32. We evaluate the checkpoint after 2 epochs.
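
For readers mapping the answer above onto a standard Hugging Face Trainer setup, here is a minimal sketch of the stated recipe. The TrainingArguments fields are standard Transformers parameters, but the per-device batch size of 1 is only inferred from the 4 GPUs * 8 accumulation steps = 32 arithmetic, and the output directory is hypothetical; the actual specialization.py may be configured differently.

```python
# Minimal sketch (not the repository's actual script): the recipe described
# above expressed as Hugging Face TrainingArguments. The per-device batch size
# of 1 is inferred from 4 GPUs * 8 accumulation steps = 32 global batch size;
# the output path is hypothetical.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deberta-10k-rank_net",   # hypothetical output directory
    per_device_train_batch_size=1,       # assumption: 4 GPUs * 1 * 8 accumulation = 32
    gradient_accumulation_steps=8,       # value reported for specialization.py
    learning_rate=5e-5,                  # constant learning rate from the paper
    lr_scheduler_type="constant",
    optim="adamw_torch",                 # AdamW optimizer
    num_train_epochs=2,                  # the 2-epoch checkpoint is the one evaluated
)
```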

Dundalia (Author) commented Aug 2, 2023

Thanks for the quick response! Crystal clear!
