🧠 A Journey into customised LLM

A Large Language Model (LLM), such as ChatGPT, is an advanced artificial intelligence program that understands and generates human-like text. In my opinion, LLMs will exceed the limitations of traditional chatbots and surpass recommendation systems, playing a pivotal role in enhancing human-AI interaction and automating complex tasks.

This project aims to explore the development of personalised Large Language Models, documented on Medium.

Image generated by ChatGPT-4 (DALL-E) in November 2023

About the project

This project provides setup instructions and demo applications (CLI and web UI) for running Llama 2 models locally.

Setup Instructions

Pre-requisite

  • A Linux system (preferably Ubuntu 20.04 or later)
  • Anaconda or any conda flavour (e.g. Miniconda) installed

Installation

1.) Clone the git repository

git clone https://github.com/benjamincham/Journery_Customised_LLM.git

2.) Create a conda environment. In this example, it is named 'Journery_Customised_LLM' and uses Python 3.10.

conda create -n Journery_Customised_LLM python=3.10

3.) Grant execute permission for the scripts

chmod +x setup.sh
chmod +x models/download_models.sh

4.) Execute the setup.sh script to install all the necessary dependencies and download the Llama 2 models.

Note: if your system has the NVIDIA CUDA driver installed, the script will build with OpenBLAS / cuBLAS / CLBlast support for acceleration.

./setup.sh --download-models # include the --download-models flag if you wish to download the models
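For reference, the sketch below shows one way the model download step could be done from Python instead of the shell script. It assumes the GGUF weights come from TheBloke's Llama 2 conversions on Hugging Face; the repository IDs, filenames, and target directory are assumptions, and the actual download_models.sh may use a different source.

```python
# download_models_sketch.py -- hypothetical Python alternative to models/download_models.sh
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Assumed sources: TheBloke's GGUF conversions of Llama 2 on Hugging Face.
MODELS = [
    ("TheBloke/Llama-2-7B-GGUF", "llama-2-7b.Q6_K.gguf"),
    ("TheBloke/Llama-2-13B-GGUF", "llama-2-13b.Q2_K.gguf"),
]

def download_all(target_dir: str = "./models") -> None:
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    for repo_id, filename in MODELS:
        # hf_hub_download fetches the file and places a copy under target_dir.
        path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target_dir)
        print(f"Downloaded {filename} to {path}")

if __name__ == "__main__":
    download_all()
```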

How to run the demo

Running the CLI Demo App

To run the demo from the terminal, run demo_llama.py with Python and use the '--model' argument to specify which weights to use.

In the example below, the Llama 2 13B weights (Q2_K quantisation) are used.

python ./demo/demo_llama.py --model ./models/llama-2-13b.Q2_K.gguf
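For context, here is a minimal sketch of what a CLI demo along these lines might look like. It assumes the project uses the llama-cpp-python bindings to load GGUF weights; the actual demo_llama.py may differ, and the prompt loop here is only illustrative.

```python
# Minimal sketch of a CLI demo (assumption: llama-cpp-python is used to load GGUF weights).
import argparse

from llama_cpp import Llama  # pip install llama-cpp-python

def main() -> None:
    parser = argparse.ArgumentParser(description="Simple Llama 2 CLI demo")
    parser.add_argument("--model", required=True, help="Path to a GGUF model file")
    args = parser.parse_args()

    # n_ctx sets the context window; n_gpu_layers=-1 offloads all layers to the GPU if available.
    llm = Llama(model_path=args.model, n_ctx=2048, n_gpu_layers=-1)

    while True:
        prompt = input("You: ")
        if prompt.strip().lower() in {"exit", "quit"}:
            break
        output = llm(f"Q: {prompt} A:", max_tokens=256, stop=["Q:"])
        print("LLM:", output["choices"][0]["text"].strip())

if __name__ == "__main__":
    main()
```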


To run the demo using the web UI, run demo_llama_webui.py with Python and use the '--model' argument to specify which weights to use.

In the example below, the Llama 2 7B weights (Q6_K quantisation) are used. Open your web browser and navigate to the default URL http://0.0.0.0:8090/ to interact with the LLM.

python ./demo/demo_llama_webui.py --model ./models/llama-2-7b.Q6_K.gguf
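As a rough illustration, the sketch below shows how such a web UI demo could be put together, assuming Gradio is used to serve a chat interface; the framework choice, port 8090 (taken from the URL above), and function names are assumptions rather than the project's actual implementation.

```python
# Minimal sketch of a web UI demo (assumption: Gradio serves the chat interface on port 8090).
import argparse

import gradio as gr  # pip install gradio
from llama_cpp import Llama

def build_app(model_path: str) -> gr.ChatInterface:
    llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1)

    def respond(message, history):
        # History is ignored for simplicity; a real demo would fold it into the prompt.
        output = llm(f"Q: {message} A:", max_tokens=256, stop=["Q:"])
        return output["choices"][0]["text"].strip()

    return gr.ChatInterface(fn=respond, title="Journey into customised LLM")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", required=True, help="Path to a GGUF model file")
    args = parser.parse_args()
    # server_name 0.0.0.0 makes the UI reachable from other machines; the port matches the README URL.
    build_app(args.model).launch(server_name="0.0.0.0", server_port=8090)
```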

