Training and applying the human like manager autoregressive #2

Open · 7 tasks
LBrinkmann opened this issue Dec 27, 2022 · 0 comments

Our model needs to be able to reproduce the autocorrelation between punishments given to different participants by the same manager. There is (for instance, in the first round) no covariate that could induce such a correlation.

Redesign the human-like manager so that it makes punishment predictions autoregressively: first predict the punishment of a random participant, then, given that prediction, predict the punishment of a random next participant, and so on.
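The autoregressive scheme above could be sketched roughly as follows. This is a minimal illustration, not the repository's actual interface: `predict_punishments_autoregressive`, its `model` callable, and `toy_model` are all hypothetical names invented here to show the conditioning structure.

```python
import random


def predict_punishments_autoregressive(model, participants, features):
    """Predict punishments one participant at a time, each prediction
    conditioned on the punishments already predicted for the others
    (hypothetical interface, for illustration only)."""
    order = random.sample(participants, len(participants))  # random participant order
    predicted = {}
    for p in order:
        # The model sees this participant's features plus all punishments
        # predicted so far, so correlations across participants can emerge.
        predicted[p] = model(features[p], dict(predicted))
    return predicted


def toy_model(contribution, previous_punishments):
    """Toy stand-in for the learned model: punish low contributions,
    pulled toward the punishments already handed out."""
    base = max(0.0, 20.0 - contribution)
    if previous_punishments:
        mean_prev = sum(previous_punishments.values()) / len(previous_punishments)
        base = (base + mean_prev) / 2.0
    return base


preds = predict_punishments_autoregressive(
    toy_model, ["alice", "bob"], {"alice": 5, "bob": 30}
)
print(preds)
```

Because each prediction feeds into the next, even a first round with no informative covariates can produce correlated punishments across participants, which is the property the redesign is after.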

Tasks

  • Refactor evaluation f796d36
  • Refactor graph model a71ad00
  • Refactor api manager bac4029
  • Refactor data 3df766b
  • Refactor batch creation
  • Refactor graph
  • Rerun old training