
[Bug]: Knowledge base not referred to when using local Ollama #834

Closed
1 task done
teohhc opened this issue May 19, 2024 · 5 comments
Assignees
Labels
bug Something isn't working

Comments

@teohhc

teohhc commented May 19, 2024

Is there an existing issue for the same bug?

  • I have checked the existing issues.

Branch name

main

Commit ID

673a28e

Other environment information

vendor_id	: GenuineIntel
cpu family	: 6
model		: 191
model name	: 13th Gen Intel(R) Core(TM) i5-13400F
16GB RAM

Graphics
--------
NVIDIA-SMI 550.67                 
Driver Version: 550.67         
CUDA Version: 12.4
8GB RAM

OS
--
Arch Linux 6.8.5-arch1-1
x86_64

Actual behavior

Ran two Ollama instances: one for embedding (port 11435) and one for chat (port 11434).
Screenshot from 2024-05-19 14-25-21

My environment setup
Screenshot from 2024-05-19 14-30-07

Testing Steps:

  1. Ollama models configured without problems
    Screenshot from 2024-05-19 14-33-46

  2. Change the system model settings
    Screenshot from 2024-05-19 14-50-30

  3. Knowledge base created
    Screenshot from 2024-05-19 14-32-49

  4. Retrieval testing without problems
    Screenshot from 2024-05-19 14-35-38

  5. Chat configuration
    Screenshot from 2024-05-19 14-38-44
    Screenshot from 2024-05-19 14-40-23
    Screenshot from 2024-05-19 14-40-44

  6. Chat fails to retrieve from knowledge base
    Screenshot from 2024-05-19 14-41-56

Checked ragflow-logs/api/*.log but couldn't find any useful hints.

Expected behavior

It should refer to the knowledge base for answers similar to the retrieval testing.

Steps to reproduce

I've added phpmyadmin just to check the database.
1. docker-compose up -d ragflow-mysql ragflow-redis ragflow-phpmyadmin ragflow-es-01 minio
2. docker-compose up -d ragflow
3. Refer to https://hub.docker.com/r/ollama/ollama to run both Ollama instances on different ports: llama3 on 11434 and nomic-embed-text on 11435
4. Follow the "Testing Steps" given in the "Actual behavior" section
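Step 3 can be sketched with the stock ollama/ollama image. This is a minimal sketch, not the exact setup used above: the container names are illustrative, and each container still listens on 11434 internally, with only the host port differing.

```shell
# Two Ollama instances from the stock image; each listens on 11434
# inside its container, only the host port differs.
CHAT_PORT=11434    # serves llama3
EMBED_PORT=11435   # serves nomic-embed-text

if command -v docker > /dev/null 2>&1; then
  docker run -d --name ollama-chat  -p "${CHAT_PORT}:11434"  ollama/ollama
  docker run -d --name ollama-embed -p "${EMBED_PORT}:11434" ollama/ollama

  # Pull a model into each instance
  docker exec ollama-chat  ollama pull llama3
  docker exec ollama-embed ollama pull nomic-embed-text
fi
```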

Additional information

Elasticsearch is still alive and healthy, and about 1 GB of free memory remains. Neither Ollama instance reported any errors. I retried after clearing the Docker volumes, recreating the containers, and making sure all other services were running before starting the ragflow main service.
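A quick way to rule out connectivity problems is to probe both instances directly. A minimal sketch, assuming the default localhost binding; the `ollama_alive` helper is illustrative, not part of RAGFlow:

```shell
# Probe an Ollama instance via /api/tags, which lists installed
# models and doubles as a cheap liveness check.
ollama_alive() {
  if curl -sf "http://localhost:${1}/api/tags" > /dev/null 2>&1; then
    echo "port ${1}: ok"
  else
    echo "port ${1}: unreachable"
  fi
}

ollama_alive 11434   # chat instance (llama3)
ollama_alive 11435   # embedding instance (nomic-embed-text)
```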

@teohhc teohhc added the bug Something isn't working label May 19, 2024
@fangxingSR

Same experience here.

@DillionApple

+1

@XiaoCC

XiaoCC commented May 20, 2024

xinference +1

@KevinHuSh KevinHuSh mentioned this issue May 20, 2024
1 task
KevinHuSh added a commit that referenced this issue May 20, 2024
### What problem does this PR solve?

#834 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
@guoyuhao2330
Contributor

The bug is fixed; please upgrade to the dev version.

@teohhc
Author

teohhc commented May 20, 2024

I've tested it, and it is now working as expected.


@teohhc teohhc closed this as completed May 20, 2024

5 participants