What to improve further? #6

Open
xHansonx opened this issue Jul 12, 2018 · 1 comment

@xHansonx
Based on the results you showed in the gists, it looks like the training and validation losses are diverging instead of converging as the number of epochs increases. So, what's the problem here? I've seen other text summarization models build their own dictionary and feed in something like a one-hot encoded vector for each word. Does that step make a difference? Thank you!
I'd be glad to talk more over WeChat (ID: XHansonX).
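To illustrate what I mean by building a dictionary and one-hot encoding each word, here is a minimal sketch (the function names and toy corpus are just illustrative, not taken from this repo):

```python
import numpy as np

def build_vocab(sentences):
    """Map each unique word to an integer index (a simple dictionary)."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def one_hot(word, vocab):
    """Encode a single word as a one-hot vector over the vocabulary."""
    vec = np.zeros(len(vocab), dtype=np.float32)
    vec[vocab[word]] = 1.0
    return vec

corpus = ["the cat sat", "the dog ran"]
vocab = build_vocab(corpus)
print(one_hot("cat", vocab))  # -> [0. 1. 0. 0. 0.]
```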

@rmahfuz commented Jul 31, 2019

I agree that it is strange that the validation loss increases instead of decreasing. Does anyone have suggestions about how this can be fixed? Thanks!
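A validation loss that rises while the training loss keeps falling usually points to overfitting, so one common mitigation is early stopping with dropout. A minimal sketch, assuming a Keras setup (the toy model and data below are placeholders, not this repo's actual code):

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

# Toy data just to make the sketch runnable; substitute the real dataset.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),  # dropout also helps curb overfitting
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop once validation loss has not improved for 3 consecutive epochs,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[early_stop])
```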
