Is your feature request related to a problem? Please describe:
Very often the files I work on are significantly larger than the maximum output limit of even the newest Gemini models (despite keeping the code as modular as possible). Because of that, a given file is never finished within the project structure, and the only workaround I see is to ask the LLM to answer directly in chat.
Describe the solution you'd like:
Consider adjusting the LLM prompt along with the app code so that when the output token limit is close to being reached, the LLM can emit a defined marker, allowing the application to detect it and let the LLM continue from that point in the next output.
Maybe the app itself could also mark the unfinished file - for example, by defining a rule that if a file is no longer being changed while the creation wheel is still spinning, the app could mark that spot in a specific way, allowing the LLM to resume from that exact point.
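As a rough illustration of the first idea, here is a minimal sketch of what the app-side loop could look like. The marker string, the `generate_chunk` callable, and the resume-context size are all hypothetical placeholders, not an existing convention or API:

```python
# Hypothetical sketch: detect a continuation marker at the end of an
# LLM response and stitch chunks together until the file is complete.
CONTINUE_MARKER = "<!-- LLM:CONTINUE -->"  # assumed marker, not a real convention


def assemble_file(generate_chunk) -> str:
    """generate_chunk(resume_from) -> str is a stand-in for whatever
    call the app makes to the model; resume_from is trailing context
    from the previous chunk, or None on the first call."""
    parts = []
    resume_from = None
    while True:
        chunk = generate_chunk(resume_from)
        if chunk.rstrip().endswith(CONTINUE_MARKER):
            # Strip the marker and ask for a continuation next iteration.
            body = chunk.rstrip()[: -len(CONTINUE_MARKER)]
            parts.append(body)
            resume_from = body[-200:]  # give the model some trailing context
        else:
            parts.append(chunk)
            return "".join(parts)
```

For example, if the model returns `"part1\n<!-- LLM:CONTINUE -->"` followed by `"part2"`, the loop assembles `"part1\npart2"`.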