
Add AWQ (Activation-aware Weight Quantization) for llama, llama2, mpt, and mistral models #4593

Merged 34 commits on Dec 27, 2023

Changes from 1 commit

Commits (34)
2ea3934
update: awq support llama-7b model
Dec 14, 2023
8a3cece
update: change order
Dec 14, 2023
0adf4c7
update: benchmark results for llama2-7b
Dec 16, 2023
e851199
update: mistral 7b v1 benchmark
Dec 18, 2023
eb9a790
update: support 4 models
Dec 18, 2023
576d28b
fix: Readme
Dec 18, 2023
4cad8d7
update: ready for PR
Dec 19, 2023
f97c587
update: readme
Dec 19, 2023
ef61a66
fix: readme
Dec 19, 2023
f8cf783
update: change order import
Dec 19, 2023
1b300cb
black
Dec 19, 2023
8fece75
format code
Dec 19, 2023
8177ad4
update: work for bot mpt and awqmpt
Dec 19, 2023
d2e9d00
update: readme
Dec 19, 2023
0610672
Rename to llm_build_ffn_mpt_awq
Dec 20, 2023
c02f6df
Formatted other files
Dec 20, 2023
71c0a27
Fixed params count
Dec 20, 2023
741b7fb
Merge branch 'github' of https://gitlab.vinai.io/mlbooster/llama.cpp …
Dec 20, 2023
e04b8f0
fix: remove code
Dec 22, 2023
48cd819
update: more detail for mpt
Dec 22, 2023
6fcdb07
fix: readme
Dec 22, 2023
b00e2d9
fix: readme
Dec 22, 2023
440cc2f
update: change folder architecture
Dec 22, 2023
00f48ad
fix: common.cpp
Dec 22, 2023
9b742c5
fix: readme
Dec 22, 2023
e8fae2d
Merge branch 'master' of https://github.com/ggerganov/llama.cpp into …
Dec 22, 2023
a600c61
fix: remove ggml_repeat
namtranase Dec 22, 2023
2187a8d
update: cicd
namtranase Dec 22, 2023
e9ad5fe
update: cicd
namtranase Dec 23, 2023
13f60c4
uppdate: remove use_awq arg
namtranase Dec 25, 2023
44f4ce2
Merge branch 'master' of https://github.com/namtranase/llama.cpp
namtranase Dec 25, 2023
d089842
update: readme
namtranase Dec 25, 2023
278f3e9
Merge branch 'master' into HEAD
ggerganov Dec 27, 2023
9174699
llama : adapt plamo to new ffn
ggerganov Dec 27, 2023
fix: common.cpp
Trần Đức Nam committed Dec 22, 2023
commit 00f48ade6afdbb709bfa3e747a213dd266685161
4 changes: 2 additions & 2 deletions common/common.cpp
@@ -149,7 +149,7 @@ bool gpt_params_parse_ex(int argc, char ** argv, gpt_params & params) {
                 break;
             }
             params.seed = std::stoul(argv[i]);
-        } else if (arg == "-awq" || arg == "--use-awq") {
+        } else if (arg == "--use-awq") {
             params.use_awq = true;
         } else if (arg == "-t" || arg == "--threads") {
             if (++i >= argc) {
@@ -811,7 +811,7 @@ void gpt_print_usage(int /*argc*/, char ** argv, const gpt_params & params) {
     printf("  (can be specified more than once for multiple prompts).\n");
     printf("  --color  colorise output to distinguish prompt and user input from generations\n");
     printf("  -s SEED, --seed SEED  RNG seed (default: -1, use random seed for < 0)\n");
-    printf("  -awq, --use-awq  Using AWQ quantization model in inferences\n");
+    printf("  --use-awq  Using AWQ quantization model in inferences\n");
     printf("  -t N, --threads N  number of threads to use during generation (default: %d)\n", params.n_threads);
     printf("  -tb N, --threads-batch N\n");
     printf("  number of threads to use during batch and prompt processing (default: same as --threads)\n");