
Cannot use Agent or QA on Chat tab #1350

Open
AriaShishegaran opened this issue Oct 6, 2024 · 10 comments

@AriaShishegaran

AriaShishegaran commented Oct 6, 2024

Describe the bug
I've installed the new full version, and my command-line r2r is now 3.2.8. My installation follows the full local LLM config.

The embedding model and Llama are pulled and served through Ollama.

For some strange reason the chat is not working at all. Whatever I type into it, it returns:

2024-10-06 13:53:06 INFO:     192.168.65.1:64077 - "OPTIONS /v2/documents_overview HTTP/1.1" 200 OK
2024-10-06 13:53:06 INFO:     192.168.65.1:16286 - "GET /v2/collections_overview HTTP/1.1" 200 OK
2024-10-06 13:53:06 INFO:     192.168.65.1:64077 - "GET /v2/documents_overview HTTP/1.1" 200 OK
2024-10-06 13:53:06 INFO:backoff:Backing off send_request(...) for 0.2s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff6ecb3a60>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:06 INFO:backoff:Backing off send_request(...) for 1.2s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff6d0878e0>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:07 ERROR:backoff:Giving up send_request(...) after 4 tries (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee620>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:07 ERROR:posthog:error uploading: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee620>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-10-06 13:53:08 INFO:backoff:Backing off send_request(...) for 0.5s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542eda80>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:08 INFO:backoff:Backing off send_request(...) for 1.4s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ec340>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:09 INFO:     192.168.65.1:16286 - "OPTIONS /v2/agent HTTP/1.1" 200 OK
2024-10-06 13:53:09 INFO:     192.168.65.1:64077 - "POST /v2/agent HTTP/1.1" 200 OK
2024-10-06 13:53:10 INFO:backoff:Backing off send_request(...) for 2.1s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542fe500>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:11 2024-10-06 11:53:11,573 - WARNING - core.base.providers.llm - Streaming request failed (attempt 1): sequence item 53: expected str instance, NoneType found
2024-10-06 13:53:12 ERROR:backoff:Giving up send_request(...) after 4 tries (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee470>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:12 ERROR:posthog:error uploading: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee470>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-10-06 13:53:12 INFO:backoff:Backing off send_request(...) for 0.7s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542fda80>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:13 INFO:backoff:Backing off send_request(...) for 0.6s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff6d087700>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:13 2024-10-06 11:53:13,686 - WARNING - core.base.providers.llm - Streaming request failed (attempt 2): sequence item 53: expected str instance, NoneType found
2024-10-06 13:53:14 INFO:backoff:Backing off send_request(...) for 0.3s (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee800>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:14 ERROR:backoff:Giving up send_request(...) after 4 tries (requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee260>: Failed to establish a new connection: [Errno 111] Connection refused')))
2024-10-06 13:53:14 ERROR:posthog:error uploading: HTTPSConnectionPool(host='us.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff542ee260>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-10-06 13:53:16 2024-10-06 11:53:16,308 - WARNING - core.base.providers.llm - Streaming request failed (attempt 3): sequence item 23: expected str instance, NoneType found
2024-10-06 13:53:20 2024-10-06 11:53:20,967 - WARNING - core.base.providers.llm - Streaming request failed (attempt 4): sequence item 23: expected str instance, NoneType found
2024-10-06 13:53:29 2024-10-06 11:53:29,633 - WARNING - core.base.providers.llm - Streaming request failed (attempt 5): sequence item 23: expected str instance, NoneType found
2024-10-06 13:53:46 2024-10-06 11:53:46,915 - WARNING - core.base.providers.llm - Streaming request failed (attempt 6): sequence item 53: expected str instance, NoneType found

I'm on macOS Sequoia, M3 Pro Max.

My R2R config:
(seven screenshots of the R2R configuration)

@NolanTrem
Collaborator

Hey @AriaShishegaran, based on the server logs this seems to be an issue with the connection to your Ollama instance. Are you able to ingest documents, or do you get a connection error there as well when calls are made to Ollama?

@AriaShishegaran
Author

AriaShishegaran commented Oct 8, 2024

@NolanTrem I'm completely able to ingest them, both through the interface and the command line: they go through unstructured.io and come back as ingested, and the list shows all the docs. But the chat doesn't work at all. Ollama is running and all the models are, of course, available.

(screenshot of the ingested documents list)

@AriaShishegaran
Author

@NolanTrem Also, I should mention that while a query is being processed on the Chat tab, the rest of the tabs become unresponsive: if I say "Hi" in the chat and then try to navigate to "Documents", nothing loads and the loader just keeps spinning. It seems the whole application halts until that request times out (since the chat is not working for me). This could be a different problem; let me know if you'd like me to open another issue for it.

@NolanTrem
Collaborator

NolanTrem commented Oct 8, 2024

@AriaShishegaran I'll be putting some cycles in today to see if I can replicate this issue with Ollama models.

As for the second issue you mentioned: it's separate, but a really good catch. We have a _execute_with_backoff_async_stream method that isn't async, which is causing this. I'll make sure we address that as well.
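For context, a retry wrapper that sleeps synchronously blocks the event loop between attempts, which would freeze the other tabs exactly as described. A minimal sketch of a properly async variant (hypothetical names and signature, not R2R's actual code):

```python
import asyncio
import random

async def execute_with_backoff_async_stream(make_stream, max_retries=3, base_delay=0.5):
    """Retry starting an async stream with exponential backoff, then yield its chunks.

    `make_stream` is a zero-argument callable returning an async iterator.
    Because this function is itself an async generator and uses
    `await asyncio.sleep(...)`, waiting between retries yields control back
    to the event loop instead of blocking the whole server.
    """
    for attempt in range(max_retries):
        try:
            async for chunk in make_stream():
                yield chunk
            return
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with a little jitter.
            delay = base_delay * 2 ** attempt + random.uniform(0, 0.1)
            await asyncio.sleep(delay)
```

One caveat of this simple sketch: if the stream fails after yielding some chunks, the retry restarts it from the beginning, so callers may see duplicates.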

Will put an update here with the Ollama issue shortly.

@NolanTrem
Collaborator

NolanTrem commented Oct 11, 2024

Some good news and bad news @AriaShishegaran:

I have a fix for the issue you found, but it turns out to be an issue with LiteLLM, which is itself the result of Ollama changing the way responses come back from models. I'm hoping they're quick to accept my PR to fix this: BerriAI/litellm#6155
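For reference, the "expected str instance, NoneType found" error in the logs is the classic symptom of joining streamed deltas when some chunks carry a content of None (e.g. role-only or tool-call deltas). A tiny illustration with hypothetical data, not LiteLLM's actual code:

```python
# Streamed chat deltas; some chunks legitimately have no text content.
chunks = ["Hel", "lo", None, " world"]

try:
    "".join(chunks)
except TypeError as exc:
    # sequence item 2: expected str instance, NoneType found
    print(exc)

# Defensive variant: drop None deltas before joining.
text = "".join(c for c in chunks if c is not None)
print(text)
```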

In the meantime, it seems we're still able to use the rag endpoint (just not the agent). There are a few bugs on the frontend/backend with the rag query that I will clean up today to make this work, but the CLI is working.
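Calling the rag endpoint directly over HTTP can be sketched roughly as below; note the /v2/rag path, port, and payload keys here are assumptions mirroring the /v2/agent calls in the logs, not R2R's documented schema:

```python
import json
import urllib.request

# Hypothetical payload; key names are assumptions, not a documented schema.
payload = {"query": "What do my documents say about X?", "stream": False}

req = urllib.request.Request(
    "http://localhost:7272/v2/rag",  # host/port/path are assumptions
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment against a running server
```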

(screenshot of the CLI rag output)

@AriaShishegaran
Author

@NolanTrem Hey Nolan, thank you very much for this thorough investigation. It seems like a big issue impacting other services, which is a bit sad, to be honest.
My question is: can R2R connect to LM Studio instead of Ollama as a fallback? Do you have integrations or compatibility with other LLM servers?

And regarding the terminal interface, is there any command that turns the final answer into a well-formatted, easy-to-read answer like the one in the web UI, or are we stuck with the JSON output? Couldn't the stream just be you going back and forth with your docs over the terminal?

@AriaShishegaran
Author

@NolanTrem By the way, it seems the PR has been merged. What would be the next course of action for us to get the system working again?

@NolanTrem
Collaborator

We'll update the package and then it should be good to go!

@Danielmoraisg

Danielmoraisg commented Nov 10, 2024

Hey there 👋, any news on this? I believe I'm running into the same problem 😢. I saw that there was a new release of r2r, but since this issue is still open, I'm asking here whether it's fixed.

@NolanTrem
Collaborator

Hey there, @Danielmoraisg I made a PR into LiteLLM to solve this issue, and it fixed it for a few weeks before they refactored some things and broke it again… This is actually more of an Ollama issue. I suspect that if you switch to the Question and Answer mode, it will work fine for you.

I am waiting on them to better support LM Studio, which would fix this issue entirely (see #1538) but they're being a bit slow to release the changes that I've requested. Will bump them to see if we can get this out sooner!
