
MPT 30B inference code using CPU

Run inference on the latest MPT-30B model using your CPU. This inference code uses a ggml quantized model. To run the model, we use the ctransformers library, which provides Python bindings to ggml.
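To get a feel for why a quantized model fits in 32GB of RAM, here is a simplified illustration of the idea behind ggml's 4-bit block quantization. This is not the exact q4_0 on-disk format, just a sketch of the scheme: weights are grouped into blocks, each block stores one float scale plus 4-bit integers.

```python
def quantize_block(values):
    """Quantize a block of floats to 4-bit integers plus one shared scale."""
    # One scale per block; the "or 1.0" avoids division by zero on all-zero blocks.
    scale = max(abs(v) for v in values) / 7 or 1.0
    # Each weight becomes an integer in [-7, 7]: ~4 bits per value instead
    # of 32, plus a single scale for the whole block.
    return scale, [max(-7, min(7, round(v / scale))) for v in values]

def dequantize_block(scale, quants):
    """Recover approximate floats from the quantized block."""
    return [q * scale for q in quants]

block = [0.12, -0.53, 0.98, -0.07]
scale, quants = quantize_block(block)
approx = dequantize_block(scale, quants)
```

The trade-off is a small rounding error per weight (bounded by half the scale) in exchange for roughly an 8x size reduction versus float32, which is what brings a 30B-parameter model down to ~19GB.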

Turn-based chat with history on the latest commit:

Inference Chat

Video of initial demo:

Inference Demo

Requirements

I recommend using Docker for this model; it will make everything easier. Minimum specs: a system with 32GB of RAM. Tested on a system with an AMD Epyc CPU and Python 3.10.

Setup

First, create and activate a virtual environment.

python -m venv env && source env/bin/activate

Next, install the dependencies.

pip install -r requirements.txt

Next, download the quantized model weights (about 19GB).

python download_model.py
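The download script caches the weights in the Hugging Face cache folder, so rerunning it does not download the ~19GB file a second time. A minimal sketch of that approach, assuming huggingface_hub is available; the repo_id and filename below are hypothetical placeholders, and the real values live in download_model.py:

```python
def download_weights():
    # Local import so the sketch can be read without the package installed.
    from huggingface_hub import hf_hub_download

    # hf_hub_download stores the file in the HF cache (~/.cache/huggingface)
    # and symlinks it into local_dir, so a rerun reuses the cached copy
    # instead of downloading the weights twice.
    # NOTE: repo_id and filename are hypothetical placeholders.
    return hf_hub_download(
        repo_id="TheBloke/mpt-30B-GGML",
        filename="mpt-30b.ggmlv0.q4_0.bin",
        local_dir="models",
        local_dir_use_symlinks=True,
    )

if __name__ == "__main__":
    print(download_weights())
```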

Ready to rock: run inference.

python inference.py

Finally, modify the prompt and generation parameters in the inference script to suit your use case.
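The two things worth tuning are the prompt template and the sampling parameters. A sketch of both, assuming a ChatML-style turn format (which MPT-30B-Chat uses); the exact template and the parameter names accepted by inference.py may differ:

```python
# Sampling parameters commonly exposed by ggml-based runners; the exact
# names accepted by inference.py may differ.
GENERATION_PARAMS = {
    "temperature": 0.8,        # higher = more random sampling
    "top_k": 40,               # sample only from the 40 most likely tokens
    "top_p": 0.95,             # nucleus sampling cutoff
    "repetition_penalty": 1.1, # discourage repeated phrases
    "max_new_tokens": 512,     # cap on generated length
}

def build_chat_prompt(history, user_message):
    """Assemble a ChatML-style prompt from (user, assistant) turn history."""
    parts = []
    for user_turn, assistant_turn in history:
        parts.append(f"<|im_start|>user\n{user_turn}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{assistant_turn}<|im_end|>")
    parts.append(f"<|im_start|>user\n{user_message}<|im_end|>")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)
```

Keeping the full turn history in the prompt is what gives the chat its memory; trimming the oldest turns when the prompt approaches the context limit is a common refinement.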