[fix] for issue #2447 (OpenAI-like API fails when a redundant, empty assistant message is sent over) #2453
Conversation
This should already be fixed in …
Oh? But I checked right before writing my fix, so I'm not sure what happened; I'm just grateful it works now. Could you still take a moment to review it? I don't want to be stuck on the current version over something this small. Or, as you said, I could work around it by enabling memory, which I'd rather not do, but it might fix it. Thank you!
I'm fairly certain this is the same issue I'm thinking of; if enabling memory makes things work normally again with unmodified code, then that's probably the case. It was supposed to have been fixed already, but it's possible we missed something. Let us know if you can test a re-pulled …
Hi, I have the same issue on the current main (3355577), with memory both ON and OFF.
LGTM, thanks!
Pull Request Checklist
Note to first-time contributors: Please open a discussion post in Discussions and describe your changes before submitting a pull request.
Before submitting, make sure you've checked the following:
The pull request targets the dev branch.
Changelog Entry
Description
I noticed an issue with the UI described in issue #2447. After fiddling around for some time, I came up with a very simple fix that doesn't require any major code refactoring or changes to the testing methods: I added a rule that filters the messages before they are sent to the OpenAI API and removes the empty ones. I can't confirm whether the official OpenAI API accepts empty messages, since I don't have paid API access to test against, but based on monitoring ChatGPT's network traffic on the free tier it should work fine.
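For context, here is a minimal sketch of the filtering idea. It is not the actual diff in this PR; the function name and the payload shape (a dict with a `messages` list of role/content entries) are assumptions based on the OpenAI chat-completions format.

```python
# Sketch only (assumed names, not the real PR code): drop messages whose
# content is empty before forwarding the payload to an OpenAI-compatible endpoint.
def strip_empty_messages(payload: dict) -> dict:
    """Return a copy of the payload without empty-content messages."""
    filtered = [
        m for m in payload.get("messages", [])
        if (isinstance(m.get("content"), str) and m["content"].strip())
        or (isinstance(m.get("content"), list) and m["content"])  # keep non-empty multimodal parts
    ]
    return {**payload, "messages": filtered}

if __name__ == "__main__":
    example = {
        "model": "local-model",
        "messages": [
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": ""},  # the redundant empty message from #2447
        ],
    }
    print(strip_empty_messages(example))  # the empty assistant message is dropped
```

Dropping the empty entries right before the request is built keeps OpenAI-compatible backends such as LM Studio from choking on the payload, without changing how the conversation is stored or rendered in the UI.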
I've tested it against my LM Studio OpenAI-compatible server and it works like a charm now. I'll keep using my locally built Docker image (which built successfully, by the way) until this change is hopefully merged upstream.
Added
Changed
Deprecated
Removed
Fixed
Security
Breaking Changes
Additional Information
Screenshots or Videos
BEFORE:
AFTER: