Bot throwing errors, disconnecting then reconnecting, and taking more time to respond with each reply #9
Comments
text-generation-webui has made a lot of commits changing things recently. I think you cloned the repository at one of the commits where it was broken. Try updating text-generation-webui again; I've just updated the bot's code so it's compatible with this latest version of text-generation-webui.
I just updated to the latest text-generation-webui commit and pulled bot.py again, but nothing changed: it still blocks until it disconnects, then reconnects and sends the reply.
What model are you using, and what command are you using to run the bot? Have you checked whether this error also happens while running the webui on its own? The problem could also be a webui library, so you could try using the …
You are right, the webui was super slow (0.03 tokens/s). I was able to get it back to over 2 tokens/s. I am using llamacpp models to run on CPU and I think it's related to this issue: oobabooga/text-generation-webui#866. So now the webui is working, however the bot now throws errors and won't reply:
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
I think I know the problem. ooba has broken the API again, lol: oobabooga/text-generation-webui@0f21209#diff-78eb3bd39cd9ce0f38f5648368b3c258b8aab36039ec050f41eabe1497d46e1cR108. I'll release a fix later today.
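(For context, the linked commit touches modules/chat.py, so the break is presumably in what the chat entry point expects from callers. Below is a rough, hypothetical sketch of the kind of compatibility shim a bot can carry while the webui API is in flux; both calling conventions are assumptions for illustration, not the actual signatures at that commit.)

```python
# Hypothetical compatibility shim; both calling conventions below are assumptions
# for illustration, not the real modules/chat.py signatures at any specific commit.
from modules.chat import chatbot_wrapper


def stream_chat_reply(user_input: dict):
    """Yield streamed replies, tolerating a change in chatbot_wrapper's signature."""
    try:
        generator = chatbot_wrapper(**user_input)       # keyword-argument style (assumed)
    except TypeError:
        text = user_input.get("text", "")
        generator = chatbot_wrapper(text, user_input)   # text-plus-state-dict style (assumed)
    for reply in generator:
        yield reply
```

The fallback relies on the fact that calling a Python function with the wrong arguments raises TypeError immediately, so the shim degrades gracefully whichever signature is in use.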
I've updated the bot. It should work correctly now.
I'm going to close this, but feel free to open it again if the issue is still happening. |
I am getting these errors now. The bot does reply, but it takes longer with each reply, and it disconnects and reconnects:
WARNING discord.gateway Shard ID None heartbeat blocked for more than 180 seconds.
Loop thread traceback (most recent call last):
File "/home/user/text-generation-webui/bot.py", line 254, in <module>
client.run(bot_args.token if bot_args.token else TOKEN, root_logger=True)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/client.py", line 860, in run
asyncio.run(runner())
File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
self.run_forever()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once
handle._run()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/tree.py", line 1089, in wrapper
await self._call(interaction)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/tree.py", line 1248, in _call
await command._invoke_with_namespace(interaction, namespace)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/ext/commands/hybrid.py", line 438, in _invoke_with_namespace
value = await self._do_call(ctx, ctx.kwargs) # type: ignore
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/commands.py", line 842, in _do_call
return await self.callback(interaction, **params) # type: ignore
File "/home/user/text-generation-webui/bot.py", line 203, in reply
await llm_gen(ctx, queues)
File "/home/user/text-generation-webui/bot.py", line 124, in llm_gen
for resp in chatbot_wrapper(**user_input):
File "/home/user/text-generation-webui/modules/chat.py", line 143, in chatbot_wrapper
for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", generate_state, eos_token=eos_token, stopping_strings=stopping_strings):
File "/home/user/text-generation-webui/modules/text_generation.py", line 53, in generate_reply
for reply in shared.model.generate_with_streaming(context=question, **generate_params):
File "/home/user/text-generation-webui/modules/llamacpp_model_alternative.py", line 61, in generate_with_streaming
for token in generator:
File "/home/user/text-generation-webui/modules/callbacks.py", line 85, in __next__
obj = self.q.get(True, None)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/queue.py", line 11, in get
self.not_empty.wait()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/threading.py", line 320, in wait
waiter.acquire()
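That warning means the gateway heartbeat could not be sent because the asyncio event loop itself was blocked: the frames above show the async command handler driving chatbot_wrapper's blocking generation loop directly on the loop, which matches the disconnect/reconnect cycle and the growing delay per reply. Below is a minimal sketch of the usual fix, pushing the blocking generation into a worker thread; the llm_gen shown here is a simplified stand-in for the bot's real one, not its actual code.

```python
import asyncio

from modules.chat import chatbot_wrapper  # same entry point the traceback goes through


def generate_blocking(user_input: dict):
    """Run the blocking, token-by-token generation; safe because this runs in a worker thread."""
    result = None
    for resp in chatbot_wrapper(**user_input):  # same call shape as the bot.py llm_gen frame
        result = resp  # keep only the final streamed value for simplicity
    return result


async def llm_gen(ctx, user_input: dict):
    # asyncio.to_thread (Python 3.9+) hands the blocking loop to a thread pool,
    # so discord.py can keep sending gateway heartbeats while the model generates.
    result = await asyncio.to_thread(generate_blocking, user_input)
    await ctx.send(str(result))
```

asyncio.to_thread is available in Python 3.9+ (the traceback shows 3.10); on older versions, loop.run_in_executor(None, ...) does the same job.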