whisper : add Metal support in the Decoder #1047
GPU inference on Apple Silicon via the Metal backend was recently added to llama.cpp: ggerganov/llama.cpp#1642

We should port the changes to whisper.cpp and allow the Decoder to run on the GPU in a similar way.
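For reference, a minimal sketch of what such a port could look like, assuming the ggml-metal API that llama.cpp exposed at the time (ggml_metal_init, ggml_metal_add_buffer, ggml_metal_graph_compute, ggml_metal_get_tensor); the decoder-side names here (decoder_init_metal, decoder_compute) are hypothetical placeholders, not the actual whisper.cpp implementation:

```c
// Hypothetical sketch: routing the whisper.cpp Decoder graph through the
// ggml-metal backend, modeled on the llama.cpp integration (llama.cpp#1642).
// The ggml-metal signatures follow the API as it existed in mid-2023 and may
// differ in later versions; all whisper-side names are placeholders.
#include "ggml.h"
#include "ggml-metal.h"

static struct ggml_metal_context * g_ctx_metal = NULL;

// Called once at model load: map the weights and the KV cache into Metal
// buffers so the GPU can access them without per-token copies.
static void decoder_init_metal(void * model_data, size_t model_size,
                               void * kv_data,    size_t kv_size) {
    g_ctx_metal = ggml_metal_init();

    ggml_metal_add_buffer(g_ctx_metal, "model", model_data, model_size);
    ggml_metal_add_buffer(g_ctx_metal, "kv",    kv_data,    kv_size);
}

// Called every decode step: evaluate the graph on the GPU and read back
// only the logits tensor.
static void decoder_compute(struct ggml_cgraph * gf,
                            struct ggml_tensor * logits) {
    if (g_ctx_metal) {
        ggml_metal_graph_compute(g_ctx_metal, gf);
        ggml_metal_get_tensor(g_ctx_metal, logits);
        return;
    }
    // otherwise fall back to the existing CPU path (ggml_graph_compute)
}
```

As in llama.cpp, the key point is that the weights and KV cache stay resident in shared Metal buffers, so each decode step only transfers the logits back to the host. The actual change landed later via #1270.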
ggerganov added the performance (CPU and memory usage - results and comparisons) and decoding (Decoding related issues) labels on Jun 25, 2023.
ggerganov added a commit that referenced this issue on Jun 25, 2023.
Done via #1270.
What are the instructions for running this (Metal, or Metal + non-ANE Core ML)? Sorry, I couldn't figure out from the changes whether the most performant configuration is now the default, or how to configure it to use this new addition. Thanks.
jacobwu-b pushed commits to jacobwu-b/Transcriptify-by-whisper.cpp that referenced this issue on Oct 24, 2023.
landtanin pushed a commit to landtanin/whisper.cpp that referenced this issue on Dec 16, 2023.
iThalay pushed a commit to iThalay/whisper.cpp that referenced this issue on Sep 23, 2024.