
add capacity for self-hosted llama model #57

Merged: 2 commits into HunterGerlach:main, Oct 10, 2023

Conversation

MichaelClifford (Collaborator)

PR Type

  • Bugfix
  • Feature
  • Code Style Update
  • Refactor
  • Deployment
  • Security Patch
  • Documentation Update

Summary

Added the capacity to utilize a self-hosted llama2 model.
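The diff itself is not shown in this conversation, but "utilizing a self-hosted llama2 model" typically means pointing the application at a local inference server rather than a hosted API. As a minimal sketch of what such a client-side call might look like (the endpoint path `/v1/completions`, the model name, and the payload fields below are assumptions modeled on common OpenAI-compatible llama servers, not details taken from this PR):

```python
import json

def build_completion_request(base_url: str, prompt: str,
                             max_tokens: int = 256,
                             temperature: float = 0.7):
    """Construct the URL and JSON body for a completion call against a
    hypothetical self-hosted llama2 endpoint (field names are assumed)."""
    url = base_url.rstrip("/") + "/v1/completions"
    payload = {
        "model": "llama-2",        # assumed model identifier
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return url, json.dumps(payload)

# Build a request against a locally running server (port is illustrative).
url, body = build_completion_request("http://localhost:8000", "Hello")
```

In practice the request body would then be POSTed to the self-hosted server with an HTTP client such as `requests` or `urllib`; the actual endpoint and configuration keys used by this PR would be defined in its diff.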

Description

Checklist

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • I have run make lint, make test, and make test-api locally and resolved any errors

Security Checklist

  • Data encryption
  • No hardcoded secrets
  • Proper access controls

Additional Context (if applicable)

  • Kubernetes/OpenShift Version:
  • LLM & Relevant Configuration:

Screenshots (if applicable)

Related Tickets

#25

@HunterGerlach HunterGerlach changed the title add capacity for self-hosted llama model Draft: add capacity for self-hosted llama model Oct 5, 2023
@hemajv (Collaborator) left a comment


@MichaelClifford your code logic looks good to me 👍
I'm having some trouble testing it out locally due to some config issues on my system, but didn't want the PR to be blocked from merging before tomorrow's demo. @HunterGerlach please feel free to merge it and I will figure out the errors I'm facing and maybe create separate issues for them if required.

@hemajv hemajv changed the title Draft: add capacity for self-hosted llama model add capacity for self-hosted llama model Oct 5, 2023
@hemajv (Collaborator) commented Oct 9, 2023

@MichaelClifford @HunterGerlach I was able to test this out locally and resolve all my errors. Worked fine for me! 👍

/lgtm 🚢

@hemajv hemajv self-requested a review October 9, 2023 22:19
@HunterGerlach HunterGerlach merged commit d52c098 into HunterGerlach:main Oct 10, 2023