
Feature Importance #128

Open
Qiwei97 opened this issue Mar 24, 2022 · 2 comments
Labels
Suggestion New feature or request

Comments

@Qiwei97
Qiwei97 commented Mar 24, 2022

Hi,

Is it possible to obtain feature importance plots from the agents? Or perhaps get it to work with the SHAP library?

Thank you!

@YangletLiu YangletLiu added the Suggestion New feature or request label Mar 24, 2022
@Yonv1943
Collaborator

Yes, it is possible.

For example, see

return obj_critic.item(), obj_actor.item(), a_std_log.item() # logging_tuple

The function agent.update_net() returns the training logging tuple.
The training process prints this information to the terminal.

print(f"{self.agent_id:<3}{self.total_step:8.2e}{self.r_max:8.2f} |"
      f"{r_avg:8.2f}{r_std:7.1f}{s_avg:7.0f}{s_std:6.0f} |"
      f"{r_exp:8.2f}{''.join(f'{n:7.2f}' for n in log_tuple)}")

################################################################################
ID     Step    maxR |    avgR   stdR   avgS  stdS |    expR   objC   etc.
6  4.09e+03  244.72 |
6  4.09e+03  244.72 |  244.72    3.5    124     2 |    0.11   0.74   0.26   0.05
6  1.21e+05  244.72 |  239.92    0.0    105     0 |    0.29   0.18  16.41   0.09
6  1.79e+05  244.72 |  193.45    0.0     92     0 |    0.31   0.29  31.29   0.16
6  2.25e+05  325.86 |
6  2.25e+05  325.86 |  325.86    2.4    144     0 |    0.30   0.34  37.28   0.18
6  2.64e+05  325.86 |  226.47    0.0    109     0 |    0.31   0.40  38.07   0.22
6  2.99e+05  558.23 |
6  2.99e+05  558.23 |  558.23    5.6    354     0 |    0.18   0.35  44.14   0.21
6  3.31e+05  558.23 |  324.83    0.0    147     0 |    0.29   0.32  47.91   0.18
6  3.64e+05 1451.47 |
6  3.64e+05 1451.47 | 1451.47  632.2    626   272 |    0.33   0.29  45.99   0.17
6  3.94e+05 2104.08 | 2104.08   10.2   1000     0 |    0.28   0.26  45.76   0.17

@Qiwei97
Author
Qiwei97 commented Mar 29, 2022

Hi,

Thank you for the reply. I was thinking of something along the lines of this: https://towardsdatascience.com/dear-reinforcement-learning-agent-please-explain-your-actions-da6635390d4d

Will it be possible to integrate the SHAP library with the agents? Or perhaps an official tutorial or demo would be good.
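
While SHAP is not integrated, a related model-agnostic technique — permutation importance — can be run against a trained actor directly: shuffle one state feature across a batch of collected states and measure how much the policy output changes. Below is a minimal sketch; the `policy` function is a hypothetical stand-in for the agent's actor forward pass (e.g. the network behind agent.act), not part of the library:

```python
import random

def policy(state):
    # Toy stand-in for a trained actor: the output depends strongly on
    # state[0], weakly on state[1], and not at all on state[2].
    return 2.0 * state[0] + 0.1 * state[1] + 0.0 * state[2]

def permutation_importance(policy, states, n_features):
    """Mean absolute change in the policy output when one feature
    is shuffled across the batch of collected states."""
    base = [policy(s) for s in states]
    importances = []
    for j in range(n_features):
        shuffled_col = [s[j] for s in states]
        random.shuffle(shuffled_col)
        perturbed = []
        for s, v in zip(states, shuffled_col):
            s2 = list(s)
            s2[j] = v  # replace only feature j, keep the rest intact
            perturbed.append(policy(s2))
        importances.append(
            sum(abs(b - p) for b, p in zip(base, perturbed)) / len(states)
        )
    return importances

random.seed(0)
states = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(256)]
imp = permutation_importance(policy, states, 3)
print(imp)  # feature 0 should dominate; feature 2 stays at zero
```

The same batch of states could also serve as the background data for shap.KernelExplainer wrapped around the actor's forward pass, which would give the per-feature attribution plots from the linked article.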
