Commit

update demo links
ChiYeungLaw authored Jun 28, 2023
1 parent 30e2cc8 commit 656f201
Showing 2 changed files with 3 additions and 3 deletions.
README.md: 2 changes (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ At present, our core contributors are preparing the **65B** version and we expect

- 🔥🔥🔥 We released the latest optimized version of the Evol-Instruct training data for the WizardLM model. Please refer to [this HuggingFace Repo](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) to download it.

-- 🔥🔥🔥 We released **WizardCoder-15B-V1.0** (trained with **78k** evolved code instructions), which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details ([Paper](https://arxiv.org/abs/2306.08568), [Demo (Only support code-related English instructions now.)](https://fa6d89fdc546d594.gradio.app/), [Backup Demo1](https://a7a1ac225062217b.gradio.app/), [Backup Demo2](https://fc4e35bf3ef95f73.gradio.app/), [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)), please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
+- 🔥🔥🔥 We released **WizardCoder-15B-V1.0** (trained with **78k** evolved code instructions), which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details ([Paper](https://arxiv.org/abs/2306.08568), [Demo (Only support code-related English instructions now.)](https://e7e16a48b3ed6f9b.gradio.app/), [Backup Demo1](https://8be3f4e8521d3243.gradio.app/)), please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).

- 🔥 Our WizardLM-13B-V1.0 model ranks 1st among the open-source models on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/).
- 📣 Please follow our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM . We will announce every new release there first.
WizardCoder/README.md: 4 changes (2 additions, 2 deletions)
@@ -9,7 +9,7 @@ To develop our WizardCoder model, we begin by adapting the Evol-Instruct method
## News

- 🔥 Our **WizardCoder-15B-V1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
-- 🔥 We released **WizardCoder-15B-V1.0** trained with **78k** evolved code instructions. Please checkout the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), [Demo (Only support code-related English instructions now.)](https://fa6d89fdc546d594.gradio.app/), [Backup Demo1](https://a7a1ac225062217b.gradio.app/), [Backup Demo2](https://fc4e35bf3ef95f73.gradio.app/), and [Paper](https://arxiv.org/abs/2306.08568).
+- 🔥 We released **WizardCoder-15B-V1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), [Demo (Only support code-related English instructions now.)](https://e7e16a48b3ed6f9b.gradio.app/), [Backup Demo1](https://8be3f4e8521d3243.gradio.app/), and [Paper](https://arxiv.org/abs/2306.08568).
- 📣 Please follow our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM . We will announce every new release there first.


@@ -21,7 +21,7 @@ To develop our WizardCoder model, we begin by adapting the Evol-Instruct method
<a ><img src="imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>

-❗❗❗**Note: This performance is 100% reproducible! If you canot reproduce it, please follow the steps in [Evaluation](#evaluation).**
+❗❗❗**Note: This performance is 100% reproducible! If you cannot reproduce it, please follow the steps in [Evaluation](#evaluation).**

**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem with a *single attempt*, and the resulting pass rate percentage is reported. Our *WizardCoder* generates answers using greedy decoding and is tested with the same [code](https://github.com/evalplus/evalplus).**
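
To make the setup above concrete, here is a minimal sketch of single-attempt pass@1 with greedy decoding via the Hugging Face transformers API. The model ID matches the released weights; the prompt handling, the `max_new_tokens` budget, and the `run_tests` hook are illustrative assumptions, not the repository's actual evaluation harness (see [Evaluation](#evaluation) for that).

```python
# Sketch only: single-attempt pass@1 with greedy decoding.
# The run_tests hook and prompt format are assumed for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "WizardLM/WizardCoder-15B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def generate_one(prompt: str) -> str:
    """Produce exactly one completion per problem (the 'single attempt')."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # do_sample=False makes generate() pick the argmax token at every step,
    # i.e. greedy decoding, so the single attempt is deterministic.
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

def pass_at_1(problems, run_tests) -> float:
    """With one attempt per problem, pass@1 reduces to the fraction of
    problems whose lone completion passes its unit tests."""
    passed = sum(bool(run_tests(p, generate_one(p["prompt"]))) for p in problems)
    return passed / len(problems)
```

Under this single-attempt setting, a reported 57.3 pass@1 means the one greedy completion passed the tests on 57.3% of the HumanEval problems.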

