MPT 30B inference code using CPU

Run inference on the latest MPT-30B model using your CPU. This inference code uses a ggml-quantized model. To run the model we'll use a library called ctransformers, which provides Python bindings to ggml.
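To make the ctransformers/ggml flow concrete, here is a minimal sketch of loading a quantized MPT model and generating text. The model filename and the ChatML-style prompt wrapper are assumptions (the chat variants of MPT use that format; a base model takes raw text) — check the actual inference script for the real values.

```python
def build_prompt(user_message: str) -> str:
    # ChatML-style wrapper used by MPT chat variants (assumption; base MPT
    # models take plain text with no special tokens).
    return f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant\n"

def main() -> None:
    # Imported lazily so the prompt helper works without ctransformers installed.
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(
        "models/mpt-30b.ggmlv0.q5_1.bin",  # hypothetical local filename
        model_type="mpt",                   # tells ggml which architecture to use
    )
    print(llm(build_prompt("What is quantization?"), max_new_tokens=128))

if __name__ == "__main__":
    main()
```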

I recommend a system with 32 GB of RAM.

Inference Demo

Requirements

I recommend using Docker for this model; it will make setup much easier. Tested on an AMD EPYC CPU.

Setup

First, create a virtual environment (venv).

python -m venv env && source env/bin/activate

Next install dependencies.

pip install -r requirements.txt

Next, download the quantized model weights (about 19 GB).

python download_model.py
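For reference, a download script like this typically just fetches one large ggml file from the Hugging Face Hub. The sketch below shows the idea with only the standard library; the repo ID and filename are hypothetical — the real ones live in download_model.py.

```python
import urllib.request
from pathlib import Path

def hf_file_url(repo_id: str, filename: str) -> str:
    # The Hugging Face Hub serves raw files under /resolve/<revision>/.
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

def download(repo_id: str, filename: str, dest_dir: str = "models") -> Path:
    dest = Path(dest_dir) / filename
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():  # skip the ~19 GB transfer if it's already on disk
        urllib.request.urlretrieve(hf_file_url(repo_id, filename), dest)
    return dest

if __name__ == "__main__":
    # Hypothetical names -- check download_model.py for the real repo and file.
    download("TheBloke/mpt-30B-GGML", "mpt-30b.ggmlv0.q5_1.bin")
```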

Ready to rock: run inference.

python inference.py

Finally, modify the prompt and generation parameters in the inference script to suit your use case.
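The knobs you will most likely want to tune look something like this. The specific values are illustrative, not recommendations from the repo; the keyword names follow ctransformers' generation API.

```python
# Illustrative generation settings; keyword names follow ctransformers'
# model call signature.
GENERATION_KWARGS = {
    "max_new_tokens": 256,       # cap on generated length
    "temperature": 0.7,          # lower = more deterministic output
    "top_k": 40,                 # sample from the 40 most likely tokens
    "top_p": 0.95,               # nucleus sampling cutoff
    "repetition_penalty": 1.1,   # discourage repeated phrases
}

def generate(llm, prompt: str) -> str:
    # llm is a ctransformers model loaded elsewhere in inference.py
    return llm(prompt, **GENERATION_KWARGS)
```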