
Consuming the obtained inference in production #7

Open
SangeethaVenkatesan opened this issue Feb 17, 2022 · 3 comments


@SangeethaVenkatesan

Hello Team,
Amazing model and very good sample results. I would like to know how the team recommends using the model's output in a production application. We would like to consume the obtained inference in production. On Hugging Face there is only a Gradio demo, but we would like to integrate this model's results into our production pipeline. Please let me know how we can go about it!

@RayeRen
Member

RayeRen commented Feb 19, 2022

Sure, you can convert our models to ONNX and deploy them with ONNXRuntime.

@AK391
Contributor

AK391 commented Feb 19, 2022

@SangeethaVenkatesan the Gradio demos also have an API, if you're interested: https://hf.space/gradioiframe/NATSpeech/PortaSpeech/api
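A hedged sketch of calling such a Gradio API with only the standard library. The `/api/predict` path and the `{"data": [...]}` payload follow Gradio's classic HTTP convention and are assumptions here; the linked `/api` page documents the exact endpoint and schema for this Space:

```python
# Sketch of a Gradio-style API request using only the standard library.
# The endpoint URL and payload shape are assumptions based on Gradio's
# classic /api/predict convention; check the Space's /api page.
import json
import urllib.request

API_URL = "https://hf.space/gradioiframe/NATSpeech/PortaSpeech/api/predict"  # assumed endpoint

def build_tts_request(text: str, url: str = API_URL) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request for the demo."""
    payload = json.dumps({"data": [text]}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_tts_request("Hello world")
# To actually run inference (requires network access):
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)["data"]
```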

@SangeethaVenkatesan
Author

> @SangeethaVenkatesan the gradio demos also have a api if your interested https://hf.space/gradioiframe/NATSpeech/PortaSpeech/api

Hey, thank you for the response! We would like to use the model in our production pipeline; relying on the Gradio API wouldn't be a production-ready solution, since we have to consider rate limits on the incoming production API requests the model has to serve. Would it be possible to publish it on the Hugging Face model hub, so that inference would be a standard procedure and also reliable in production?
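The rate-limit concern raised above is often handled client-side with a token bucket. A minimal sketch, with all names illustrative and unrelated to NATSpeech:

```python
# Minimal token-bucket rate limiter for client-side throttling of
# inference requests. Purely illustrative; not part of NATSpeech.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed now, consuming one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # 5 requests/sec, burst of 2
results = [bucket.allow() for _ in range(4)]  # first 2 pass, the rest are throttled
```

Requests that return `False` would be queued or rejected with a back-off, keeping the inference service within its limits.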
