Update flash_attention_En.md
yangapku authored Feb 20, 2023
1 parent 006a981 commit 2d3e95d
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions flash_attention_En.md
@@ -7,7 +7,7 @@ Chinese-CLIP now supports the acceleration of training process through [FlashAtt
## Environmental Preparation

+ Nvidia GPUs **with Turing or Ampere architecture** (such as A100, RTX 3090, T4, and RTX 2080). Please refer to [this document](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) for the GPUs corresponding to each Nvidia architecture.
+ CUDA 11NVCC
+ CUDA 11, NVCC
+ **FlashAttention**: Install FlashAttention by executing `pip install flash-attn`. Please refer to the [FlashAttention project repository](https://github.com/HazyResearch/flash-attention). A minimal environment check is sketched below.
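
The following snippet is a minimal sketch, not part of the original document, of how one might verify the requirements above. It assumes PyTorch is already installed; the specific checks and messages are illustrative only.

```python
# Hypothetical environment check (illustrative, not from the Chinese-CLIP docs):
# confirms a CUDA GPU is visible, reports its compute capability, and verifies
# that the flash-attn package can be imported.
import torch

assert torch.cuda.is_available(), "A CUDA-capable GPU is required."
major, minor = torch.cuda.get_device_capability()
print(f"GPU: {torch.cuda.get_device_name(0)}, compute capability {major}.{minor}")
print(f"CUDA version used by PyTorch: {torch.version.cuda}")

try:
    import flash_attn  # installed via `pip install flash-attn`
    print("flash-attn import OK")
except ImportError:
    print("flash-attn is not installed; run `pip install flash-attn`")
```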

## Use it in Chinese-CLIP!
@@ -67,4 +67,4 @@ We present the comparison of the batch time and memory usage of FP16 precision f
<td width="120%">CN-CLIP<sub>ViT-H/14</sub></td><td>64*8</td><td>76</td><td>57</td>
</tr>
</table>
<br>
<br>
