Stars
CVPR and NeurIPS poster examples and templates. May we have in-person poster sessions again soon!
PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI
Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding**
calflops is designed to calculate FLOPs, MACs, and parameters in various neural networks, such as Linear, CNN, RNN, GCN, and Transformer models (BERT, LLaMA, and other large language models)
Layer-Condensed KV cache with a 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance. Accepted to ACL 2024.
Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Instruct-tune LLaMA on consumer hardware
🛠️ Class-imbalanced Ensemble Learning Toolbox | A library for class-imbalanced/long-tailed machine learning
Top-level Conference Publications on Knowledge Graph
GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia for the evaluation of coreference resolution in practical applications
Multiple-Relations-Extraction-Only-Look-Once. Look at the sentence just once and extract multiple pairs of entities and their corresponding relations. An end-to-end joint multi-relation extraction model, applicable to http://lic2019.ccf.org.cn…
PyTorch code for SpERT: Span-based Entity and Relation Transformer
Event extraction on the ACE 2005 dataset using Transformer-based pre-trained models
An implementation of the EDA paper for Chinese corpora: an EDA data-augmentation tool for Chinese text, NLP data augmentation, and paper-reading notes
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models
Graphical Java application for managing BibTeX and biblatex (.bib) databases
This project is an attempt at building knowledge graphs with deep learning techniques, framed as open-domain relation extraction; the author considers it a novel effort, since papers and projects in this area are currently scarce.
Some examples of fine-tuning BERT in Keras
A Lite BERT for Self-Supervised Learning of Language Representations (ALBERT), with large-scale Chinese pre-trained ALBERT models