
How to write a custom evaluation function that minimizes the AUC gap between the training and validation sets while maximizing the validation AUC? #98

Open
cfkstat opened this issue Dec 28, 2022 · 5 comments

Comments

cfkstat commented Dec 28, 2022

In the example, the reward_metric function seems to receive only the training set's predicted and actual values?

oaksharks (Collaborator) commented

@cfkstat This kind of requirement needs multi-objective optimization plus multiple custom objective functions. Multi-objective optimization for HyperGBM is under development; stay tuned.


cfkstat commented Dec 28, 2022

Can the model from the current iteration be passed into the evaluation function as a parameter?

oaksharks (Collaborator) commented

For custom metrics, please refer to:
https://hypergbm.readthedocs.io/en/latest/how_to/customize_reward_metric.html
Currently, the arguments of a custom metric do not include the model of the current iteration.
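
For reference, the linked docs describe a custom metric as a plain callable over ground truth and predictions. The sketch below is illustrative only (auc_metric and the commented make_experiment wiring are assumptions to be checked against those docs, not an exact copy of them) and shows why the current trial's model cannot be accessed from inside the metric:

from sklearn.metrics import roc_auc_score

def auc_metric(y_true, y_pred):
    # Only the labels and the predictions are available here; the trial's
    # model is not among the arguments, as noted above.
    return roc_auc_score(y_true, y_pred)

# Illustrative usage, assuming make_experiment accepts a callable reward metric:
# experiment = make_experiment(train_data, target='y', reward_metric=auc_metric)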


cfkstat commented Dec 28, 2022

I modified the hypernets evaluation function and it works. How can I get the model results and parameters of each iteration?

oaksharks (Collaborator) commented

"Currently, the arguments of a custom metric do not include the model of the current iteration."

However, after training completes you can retrieve the models from the trial history:

hk = HyperGBM(...)
...
hk.history.trials  # get the trials in the iteration history
hk.history.trials[0].model_file  # get the model file of a trial
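
As a follow-up, a hedged sketch of walking that history; trial_no, reward, and the pickle-based loading are assumptions about Hypernets' Trial objects and serialization, not something confirmed in this thread:

import pickle

for trial in hk.history.trials:
    # trial_no and reward are assumed attributes of a Trial object
    print(trial.trial_no, trial.reward, trial.model_file)

first_trial = hk.history.trials[0]
with open(first_trial.model_file, 'rb') as f:  # assumes the model file is a pickle
    estimator = pickle.load(f)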
