Update README.md
victorsungo authored Aug 17, 2023
1 parent 36e6a73 commit 633f0e2
Showing 1 changed file with 1 addition and 1 deletion.
WizardMath/README.md: 2 changes (1 addition, 1 deletion)
@@ -118,7 +118,7 @@ The following table clearly demonstrates that our **WizardMath** exhibits a subs
## Training
### Supervised fine-tuning

- We supervised fine-tune WizardMath using the modified code `WizardMath/train/train_wizardmath.py` from [Llama-X](https://github.com/AetherCortex/Llama-X), which uses the open-source friendly [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE).
+ In the SFT stage, we train WizardMath with the code `WizardMath/train/train_wizardmath.py` from [Llama-X](https://github.com/AetherCortex/Llama-X), which uses the open-source friendly [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE).
We supervised fine-tune WizardMath-13B with the following hyperparameters:

| Hyperparameter | LLaMA 2 13B |
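The hyperparameter table is truncated in this diff, so the exact values are not reproduced here. As a rough, illustrative sketch only (not the actual `WizardMath/train/train_wizardmath.py`), the snippet below shows what a minimal Hugging Face `Trainer`-based supervised fine-tuning loop of this kind can look like; the base model name, data file, field names, and all hyperparameter values are placeholder assumptions, not WizardMath's settings.

```python
# Minimal SFT sketch (illustrative only; not the actual train_wizardmath.py).
# Assumes `transformers` and `datasets` are installed; model/data paths and
# hyperparameter values below are placeholders, not WizardMath's settings.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "meta-llama/Llama-2-13b-hf"   # placeholder base model
DATA_PATH = "math_sft_data.json"           # placeholder instruction data

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def format_and_tokenize(example):
    # Concatenate instruction and response into a single training sequence.
    text = f"{example['instruction']}\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=2048)

dataset = load_dataset("json", data_files=DATA_PATH, split="train")
dataset = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="wizardmath-sft",
        num_train_epochs=3,                 # placeholder values, not the
        per_device_train_batch_size=4,      # hyperparameters from the table
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
        save_strategy="epoch",
    ),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (shifted input_ids).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice a run like this would typically be launched with DeepSpeed or `torchrun` across multiple GPUs, as in the Llama-X setup the README points to; consult the actual script and table for the values used for WizardMath-13B.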
