tip: 635
title: Optimize Reward Withdrawal To Improve TPS
author: halibobo1205@gmail.com
discussions-to: https://github.com/tronprotocol/tips/issues/635
category: Core
status: Final
type: Standards Track
created: 2023-12-21

Simple Summary

This TIP aims to optimize the voting reward withdrawal algorithm used in Phase1 (since TIP-53, before TIP-465 took effect) to improve the TPS of the transactions that trigger it.

Abstract

There are currently two phases of the TRON protocol voting reward:

  • Phase1: Since TIP-53 and before TIP-465 took effect, the voting reward withdrawal is calculated accumulatively, and the time complexity is O(n).
  • Phase2: Since TIP-465 took effect, the voting reward withdrawal adopts a more efficient calculation method, and the time complexity is O(1).

Due to the poor performance of the current Phase1 voting reward withdrawal algorithm, network performance degrades whenever a user who still holds Phase1 rewards initiates a transaction that triggers the calculation.

This TIP proposes a scheme that reduces the Phase1 calculation complexity to O(1) without adding extra reward withdrawal logic, greatly improving the calculation performance of Phase1 rewards and therefore the TPS.

Motivation

Performance

According to statistics, about 160,000 users on the TRON mainnet currently have Phase1 rewards yet to be withdrawn. Comparing the performance of the Phase1 reward algorithm before and after optimization in a test environment, this optimization is expected to improve performance by more than 20 times.

User Reward

The current Phase1 voting reward calculation truncates the decimal part in every maintenance cycle, so the actual withdrawn value is slightly smaller than the expected value. The optimized algorithm uses higher calculation precision, so the result is closer to the expected value. The difference between the results before and after optimization is kept within 1 TRX, which has almost no impact on the TRX inflation of the entire network or on user rewards.
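
To make the precision point concrete, the following self-contained Java snippet (with made-up numbers, not mainnet data) compares truncating in every cycle with accumulating a scaled per-vote value and dividing only once at the end:

```java
import java.math.BigInteger;

// Illustrative only: shows how truncating integer division in every maintenance
// cycle (phase1.algorithm1 style) yields slightly less than accumulating a
// scaled per-vote reward first and dividing once at the end. All numbers are
// made up for the example and are not mainnet data.
public class TruncationExample {
    public static void main(String[] args) {
        long userVote = 7;                                        // hypothetical votes for one SR
        long[] totalVote = {900, 1100, 1300, 600};                // hypothetical totalVote(i, s)
        long[] totalReward = {160_000_000L, 160_000_000L,
                              160_000_000L, 160_000_000L};        // hypothetical totalReward(i, s), in sun

        // Per-cycle truncation: each integer division drops the fractional part.
        long truncated = 0;
        for (int i = 0; i < totalVote.length; i++) {
            truncated += userVote * totalReward[i] / totalVote[i];
        }

        // Higher precision: accumulate the reward per vote scaled by a large
        // factor and divide only once at the end (the factor here is arbitrary).
        BigInteger scale = BigInteger.TEN.pow(18);
        BigInteger perVoteAccum = BigInteger.ZERO;
        for (int i = 0; i < totalVote.length; i++) {
            perVoteAccum = perVoteAccum.add(
                BigInteger.valueOf(totalReward[i]).multiply(scale)
                    .divide(BigInteger.valueOf(totalVote[i])));
        }
        long precise = perVoteAccum.multiply(BigInteger.valueOf(userVote))
                .divide(scale).longValueExact();

        System.out.println("truncated per cycle: " + truncated + " sun");
        System.out.println("higher precision:    " + precise + " sun");
    }
}
```

With these example inputs, the per-cycle truncation result comes out a couple of sun lower than the higher-precision one; the TIP bounds the overall before/after difference at 1 TRX.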

Specification

Glossary

  • maintenance: the maintenance period, during which logic such as vote counting, voting reward calculation, and proposal validation for the current cycle is processed. Each maintenance period on the mainnet lasts 6 hours.
  • phase1.algorithm1: current reward withdrawal algorithm for Phase1
  • phase1.algorithm2: new reward withdrawal algorithm for Phase1 after optimization
  • phase2.algorithm: reward withdrawal algorithm for Phase2
  • rewardViSet: the set of accumulated reward-per-vote values for each SR in Phase1
  • rewardViRootHash: the root hash of all data in rewardViSet
  • rewardViStore: the database that stores rewardViSet
  • SRStore: the SR database
  • rewardViCalService: the service that monitors the calculateRewardViTask status
  • calculateRewardViTask: the task that calculates rewardViSet and stores the result in rewardViStore
  • accumulateRewardPerVote[i, s]: the accumulated reward per vote for SR s after maintenance i
  • totalReward(i, s): the total voting rewards of voters for SR s at maintenance i
  • totalVote(i, s): the total votes for SR s at maintenance i
  • finishFlag: the flag indicates whether the calculateRewardViTask is completed
  • newRewardCycle: the maintenance cycle at which the phase2.algorithm proposal (TIP-465) is activated, which is 4708 on the mainnet
  • startCycle: the maintenance when the user starts to vote
  • endCycle: the maintenance when the user cancels the voting
  • userVote(s): the number of the user's votes for SR s
  • userReward(s): the user's reward for voting for SR s
  • newRewardCalculationAlgorithmProposal: new voting reward algorithm proposal for TIP-465
  • phase1OptProposal: the new proposal for phase1.algorithm2

In Phase1 and Phase2, the calculation logic of voting reward for a single SR vote is as follows:

  • phase1.algorithm1:

    userReward(s) = 0
    for i in [startCycle, newRewardCycle) {
      userReward(s) += userVote(s) / totalVote(i, s) * totalReward(i, s)
    }

  • phase2.algorithm:

    userReward(s) = userVote(s) * (accumulateRewardPerVote[endCycle - 1, s] - accumulateRewardPerVote[startCycle - 1, s])

phase1.algorithm1 iterates over every maintenance cycle in [startCycle, newRewardCycle) and accumulates the per-cycle reward, so the algorithm complexity is O(n).

phase2.algorithm directly multiplies userVote(s) by the delta between accumulateRewardPerVote[endCycle - 1, s] and accumulateRewardPerVote[startCycle - 1, s], so the algorithm complexity is O(1). For more information please refer to TIP-465.
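
The two formulas can be rendered as the Java sketch below. The maps stand in for on-chain state and follow the TIP's notation (totalVote(i, s), totalReward(i, s), accumulateRewardPerVote[i, s]); they are not the actual java-tron interfaces, and the precision scaling used on-chain is omitted.

```java
import java.util.Map;

// Sketch of the two formulas above using the TIP's notation; the maps stand in
// for on-chain state and are not the actual java-tron interfaces. Precision
// scaling of the accumulated values is omitted.
public class RewardFormulas {

    // phase1.algorithm1: iterate over every maintenance cycle in
    // [startCycle, newRewardCycle), so the cost is O(n) in the number of cycles.
    static long phase1Algorithm1(long userVote, long startCycle, long newRewardCycle,
                                 Map<Long, Long> totalVote,    // totalVote(i, s)
                                 Map<Long, Long> totalReward)  // totalReward(i, s)
    {
        long reward = 0;
        for (long i = startCycle; i < newRewardCycle; i++) {
            long votes = totalVote.getOrDefault(i, 0L);
            if (votes > 0) {
                // multiply before dividing so integer division does not truncate to zero
                reward += userVote * totalReward.getOrDefault(i, 0L) / votes;
            }
        }
        return reward;
    }

    // phase2.algorithm: two lookups and one multiplication, so the cost is O(1).
    static long phase2Algorithm(long userVote, long startCycle, long endCycle,
                                Map<Long, Long> accumulateRewardPerVote) // for SR s
    {
        long delta = accumulateRewardPerVote.getOrDefault(endCycle - 1, 0L)
                   - accumulateRewardPerVote.getOrDefault(startCycle - 1, 0L);
        return userVote * delta;
    }
}
```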

phase1.algorithm2 adopts the same calculation method as phase2.algorithm. Based on the previous analysis, this optimization includes the following steps:

  1. Pre-calculate rewardViSet and persist it into rewardViStore.
  2. Use phase1OptProposal to control whether phase1.algorithm2 is adopted.
  3. After phase1OptProposal is passed, the reward calculation of transactions involving Phase1 rewards uses phase1.algorithm2, which looks up rewardViSet and computes the result with O(1) performance (see the sketch below). The difference between phase1.algorithm1 and phase1.algorithm2 is within 1 TRX.
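
Once rewardViSet is available, the Phase1 reward for a withdrawal reduces to a single delta lookup. A minimal sketch of phase1.algorithm2 follows, assuming a per-SR map in place of the real rewardViStore access:

```java
import java.util.Map;

// Sketch of phase1.algorithm2: once rewardViSet has been pre-calculated and
// persisted, the Phase1 reward becomes a single delta lookup, the same shape as
// phase2.algorithm. The map stands in for the per-SR rewardViSet; it is an
// assumption for illustration, not the java-tron storage API.
public class Phase1Algorithm2 {

    static long phase1Algorithm2(long userVote, long startCycle, long newRewardCycle,
                                 Map<Long, Long> rewardViSet) { // rewardVi per cycle for SR s
        // Phase1 rewards cover [startCycle, newRewardCycle), so the delta is
        // taken between newRewardCycle - 1 and startCycle - 1.
        long delta = rewardViSet.getOrDefault(newRewardCycle - 1, 0L)
                   - rewardViSet.getOrDefault(startCycle - 1, 0L);
        return userVote * delta;
    }
}
```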

Note: The voting reward withdrawal can be triggered by the following transactions:

  • VoteWitnessContract
  • UnfreezeBalanceContract
  • UnfreezeBalanceV2Contract
  • WithdrawBalanceContract

Rationale

Impacts on Ecology

  • For the total supply of TRX: the core calculation logic of phase1.algorithm2 is consistent with that of phase1.algorithm1; this optimization only improves the performance and calculation accuracy of the Phase1 reward withdrawal. It therefore has almost no impact on the circulation and inflation of TRX.
  • User rewards: since higher precision is used, user rewards will be closer to the expected value because decimals are no longer truncated in every cycle. The difference between the results before and after optimization is small, so the impact on users is negligible.

Data Consistency

This optimization changes the voting reward calculation logic and therefore affects consensus, so a proposal is needed to control when the optimization takes effect and to ensure data consistency on the chain.

Pre-calculation of rewardViSet

To minimize the impact on node performance, the pre-calculation is executed asynchronously. However, asynchronous tasks often introduce state synchronization problems, so the execution logic of the pre-calculation task is described in detail below (a code sketch follows the list).

  1. RewardViCalService: this service starts with the node and checks every 3 seconds whether the newRewardCalculationAlgorithmProposal is enabled.
    1. if enabled: handle calculateRewardViTask asynchronously
      1. check whether calculateRewardViTask has been completed, based on the finishFlag
        1. completed: stop the monitor service, then return
        2. not completed: start calculateRewardViTask, wait for it to complete, stop the monitor service, then return
    2. not enabled: wait for the next check
  2. CalculateRewardViTask:
    1. calculate the accumulated reward per vote for each SR, from the maintenance period when TIP-53 took effect to the one when TIP-465 took effect
    2. store rewardViSet into rewardViStore; the data composition is consistent with TIP-465
    3. write the finishFlag
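
A minimal Java sketch of this flow follows. The Chain and RewardViStore interfaces, the polling wiring, and the omission of precision scaling are assumptions made for illustration; they do not mirror the actual java-tron classes.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the pre-calculation flow above. The Chain and RewardViStore
// interfaces, the method names, and the omission of precision scaling are
// assumptions for illustration; they do not mirror the real java-tron classes.
public class RewardViCalServiceSketch {

    interface Chain {                                  // assumed read access to chain state
        boolean newRewardAlgorithmEnabled();           // newRewardCalculationAlgorithmProposal status
        long tip53Cycle();                             // first Phase1 maintenance cycle
        long newRewardCycle();                         // cycle where TIP-465 took effect
        long totalReward(long cycle, byte[] sr);       // totalReward(i, s)
        long totalVote(long cycle, byte[] sr);         // totalVote(i, s)
        Iterable<byte[]> srs();                        // SRStore
    }

    interface RewardViStore {                          // assumed persistence layer (rewardViStore)
        void put(byte[] sr, long cycle, long rewardVi);
        void markDone();                               // write finishFlag
        boolean isDone();                              // read finishFlag
    }

    private final ScheduledExecutorService monitor =
        Executors.newSingleThreadScheduledExecutor();

    // RewardViCalService: starts with the node and polls every 3 seconds.
    void start(Chain chain, RewardViStore store) {
        monitor.scheduleWithFixedDelay(() -> {
            if (!chain.newRewardAlgorithmEnabled()) {
                return;                                // not enabled: check again in 3s
            }
            if (!store.isDone()) {
                calculateRewardViTask(chain, store);   // run the task and wait for it
            }
            monitor.shutdown();                        // completed: stop the monitor service
        }, 0, 3, TimeUnit.SECONDS);
    }

    // CalculateRewardViTask: accumulate the reward per vote for each SR over
    // every Phase1 maintenance cycle, persist it, then write the finish flag.
    static void calculateRewardViTask(Chain chain, RewardViStore store) {
        for (byte[] sr : chain.srs()) {
            long accumulated = 0;
            for (long cycle = chain.tip53Cycle(); cycle < chain.newRewardCycle(); cycle++) {
                long votes = chain.totalVote(cycle, sr);
                if (votes > 0) {
                    accumulated += chain.totalReward(cycle, sr) / votes;  // scaling omitted
                }
                store.put(sr, cycle, accumulated);     // accumulateRewardPerVote[cycle, sr]
            }
        }
        store.markDone();
    }
}
```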

Transaction Execution

When a reward withdrawal is triggered, check whether the account has any Phase1 voting reward (a sketch of this decision flow follows the list):

  1. if there is: check whether phase1OptProposal is activated
    1. activated: check whether calculateRewardViTask is completed
      1. completed: the transaction adopts phase1.algorithm2
      2. not completed: block and wait for calculateRewardViTask to complete, then the transaction adopts phase1.algorithm2
    2. not activated: the transaction adopts phase1.algorithm1
  2. if there is not: the transaction adopts phase2.algorithm
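
The decision flow can be sketched as follows; the RewardViTask handle and the helper method names are placeholders for illustration, not actual java-tron APIs.

```java
// Sketch of the decision flow above. The RewardViTask handle and the helper
// method names are placeholders for illustration, not real java-tron APIs.
public class RewardWithdrawalDispatch {

    enum Algorithm { PHASE1_ALGORITHM1, PHASE1_ALGORITHM2, PHASE2_ALGORITHM }

    interface RewardViTask {                       // assumed handle to calculateRewardViTask
        boolean isCompleted();                     // finishFlag written
        void awaitCompletion();                    // block until the task finishes
    }

    static Algorithm choose(boolean hasPhase1Reward,
                            boolean phase1OptProposalActivated,
                            RewardViTask task) {
        if (!hasPhase1Reward) {
            return Algorithm.PHASE2_ALGORITHM;     // no Phase1 reward left
        }
        if (!phase1OptProposalActivated) {
            return Algorithm.PHASE1_ALGORITHM1;    // proposal not passed: keep the O(n) path
        }
        if (!task.isCompleted()) {
            task.awaitCompletion();                // wait until rewardViSet is ready
        }
        return Algorithm.PHASE1_ALGORITHM2;        // O(1) path based on rewardViSet
    }
}
```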

Backward Compatibility

According to the above sections, this optimization introduces no compatibility issues. Note, however, that any third party that independently implements an off-chain voting reward estimation algorithm should adapt to these changes promptly.