On the issue of Chinese full-text search matching #4324
Unanswered
jerryyang-git asked this question in Q&A
Replies: 1 comment · 3 replies
-
Hello @jerryyang-git, thank you for your issue,
-
I deployed with Docker's latest image.
My data looks like this:
I set `alias` as the searchable attribute and used `/indexes/{index_uid}/settings/separator-tokens` to set `\n` as a separator.
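For reference, here is a minimal sketch of that configuration using the official `meilisearch` Python client. The host, API key, and index name are placeholders, and `separatorTokens` assumes Meilisearch >= 1.4:

```python
import meilisearch

# Placeholder host, key, and index name; adjust to your Docker deployment.
client = meilisearch.Client("http://localhost:7700", "masterKey")
index = client.index("my-index")

# Make `alias` searchable and treat newline as an extra word separator.
task = index.update_settings({
    "searchableAttributes": ["alias"],
    "separatorTokens": ["\n"],
})

# Settings updates are asynchronous; wait for the task to finish.
client.wait_for_task(task.task_uid)
```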
But when I search for 水仙, this document is not returned; yet if I add extra characters to the query, such as キ水仙, it is found. I don't understand what the problem is.
I have another piece of data in a similar form, and when I search for 死馆, it matches accurately.
At present, I suspect that Meilisearch is segmenting the query terms incorrectly during Chinese search. However, I am just a beginner in Python and I am confused about this. I need help.
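To help isolate the behavior, a minimal reproduction sketch follows. The index name and sample text are made up to approximate the data described above; if 水仙 returns no hits while キ水仙 does, query-side tokenization is the likely suspect:

```python
import meilisearch

client = meilisearch.Client("http://localhost:7700", "masterKey")
index = client.index("tokenizer-test")  # hypothetical throwaway index

# One document whose `alias` field approximates the described data,
# including the newline that the separator-tokens setting targets.
task = index.add_documents([{"id": 1, "alias": "キ水仙\n死馆"}])
client.wait_for_task(task.task_uid)

# Compare the failing and the working queries side by side; enabling
# showMatchesPosition also exposes which substrings actually matched.
for query in ["水仙", "キ水仙", "死馆"]:
    result = index.search(query, {"showMatchesPosition": True})
    print(query, "->", len(result["hits"]), "hit(s)")
```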