Human Feedback is not Gold Standard #2114

Open
icoxfog417 opened this issue Oct 28, 2023 · 0 comments
In one sentence

A study showing that human evaluation of LLM outputs is not as reliable as commonly assumed. By collecting error-type ratings and overall ratings separately, the authors found that factuality and inconsistency have little influence on the overall rating. They also point out that outputs written in a confident tone can sway factuality judgments.

Paper link

https://arxiv.org/abs/2309.16349

Authors / Affiliations

Tom Hosking, Phil Blunsom, Max Bartolo

  • University of Edinburgh
  • Cohere

Submission date (yyyy/MM/dd)

2023/09/28

Overview

Novelty / Differences

Method

Results

Comments
