Added support for 'Text generation web UI' as backend for text generation #63
Conversation
I think this would be quite useful.
I've pulled your branch and it has some problems: it saves the params in settings.json as strings, which makes the server bug out if you don't manually edit them.
@clementine-of-whitewind Interesting, I haven't noticed that. Which params?
For example, if I change the temperature, the ooba server bugs out because it didn't expect a string when doing some comparison operation. I've run ooba with llama-7b. Manually removing the quotes around temperature and the other parameters in settings.json restored functionality.
Okay. It seems to work with strings for me as well, but storing them that way was certainly a bug, which should now be fixed in this PR branch.
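For illustration, a minimal sketch of the kind of numeric coercion that avoids this class of bug; the key names and helper below are hypothetical, not the exact code in the PR:

```js
// Sketch only: coerce numeric generation parameters before they are persisted
// to settings.json, so the backend never receives "0.7" instead of 0.7.
// The list of keys here is an assumption for illustration.
const NUMERIC_KEYS = ['temperature', 'top_p', 'top_k', 'rep_pen', 'max_length'];

function normalizeSettings(settings) {
    const normalized = { ...settings };
    for (const key of NUMERIC_KEYS) {
        const value = normalized[key];
        if (typeof value === 'string' && value.trim() !== '') {
            const parsed = Number(value);
            if (!Number.isNaN(parsed)) {
                normalized[key] = parsed;
            }
        }
    }
    return normalized;
}
```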
Hey, @im-not-tom |
@SillyLossy I don't mind at all.
It is excellent, probably the simplest way to get LLaMa+GPTQ working.
Looks like the API usage with streaming is different: https://github.com/oobabooga/text-generation-webui/blob/main/api-example-stream.py
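For reference, a rough sketch of what consuming a websocket-style streaming endpoint could look like on the Node side. The URL, payload shape, and event names below are assumptions made for illustration, not the documented text-generation-webui protocol; the linked api-example-stream.py is the authoritative reference:

```js
// Sketch only: endpoint, payload, and message format are assumptions.
const WebSocket = require('ws');

function streamGeneration(prompt, params, onToken, onDone) {
    // Hypothetical streaming URL used purely for illustration.
    const socket = new WebSocket('ws://127.0.0.1:5005/api/v1/stream');

    socket.on('open', () => {
        socket.send(JSON.stringify({ prompt, ...params }));
    });

    socket.on('message', (data) => {
        const message = JSON.parse(data.toString());
        if (message.event === 'text_stream') {
            onToken(message.text);   // partial chunk of generated text
        } else if (message.event === 'stream_end') {
            onDone();
            socket.close();
        }
    });
}
```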
FYI: the other way around is now possible, i.e. making text-generation-webui pretend to be Kobold while leaving TavernAI unchanged.
In my experience, this PR worked a bit better than the KoboldAI API implementation on text-generation-webui. For example, with …
That being said, singleline mode does not seem to work with the KoboldAI API on text-generation-webui either, though this may be an issue on TavernAI's side. I have multigen, free name mode, and singleline mode enabled. The behavior I am getting is that TavernAI does not stop generating even after the line has been completed, which produces lines that are completely irrelevant to the dialogue and, of course, wastes time.
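One possible client-side guard for singleline mode, sketched here as a hypothetical helper rather than actual TavernAI code, is to simply truncate the reply at the first line break:

```js
// Illustrative only: discard everything after the first line break when
// singleline mode is enabled, regardless of what the backend returned.
function applySinglelineMode(generatedText, singlelineEnabled) {
    if (!singlelineEnabled) {
        return generatedText;
    }
    const firstBreak = generatedText.indexOf('\n');
    return firstBreak === -1 ? generatedText : generatedText.slice(0, firstBreak);
}
```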
Hello,
this PR allows using oobabooga/text-generation-webui as a backend for text generation, so Tavern can be used as an alternative to oobabooga's web UI.
It adds an option to the API combobox, UI for its settings, and code in both server.js and index.html to handle the API provided by text-generation-webui's server.py.
It works best when text-generation-webui's server.py is started without the `--chat` or `--cai-chat` arguments, for example as in this colab. I've tried to match your code style, so I believe this change should not be too intrusive.
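For context, a minimal sketch of the kind of request forwarding such a backend integration involves on the server.js side. The endpoint path, port, and response shape below are assumptions for illustration, not a verbatim excerpt from this PR:

```js
// Sketch only: URL and response shape are assumptions, not the PR's actual code.
// Requires Node 18+ for the global fetch API.
async function generateViaTextGenWebUI(prompt, params) {
    const response = await fetch('http://127.0.0.1:5000/api/v1/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt, ...params }),
    });
    if (!response.ok) {
        throw new Error(`Backend returned HTTP ${response.status}`);
    }
    const data = await response.json();
    // Assumed response shape: { results: [{ text: "..." }] }
    return data.results[0].text;
}
```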