# MPT 7B inference code using CPU
Run inference on the latest MPT-7B model using your CPU and just 8 GB of RAM. If you have more RAM (32 GB), you should check out the [original repo](https://github.com/abacaj/mpt-30B-inference), which runs a much larger LLM. This inference code uses a [ggml](https://github.com/ggerganov/ggml) quantized model. To run the model we'll use [ctransformers](https://github.com/marella/ctransformers), a Python library with bindings to ggml.
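
For a sense of what ctransformers looks like in practice, here is a minimal loading sketch. The model path is a placeholder, not the exact filename used by `inference.py`:

```python
from ctransformers import AutoModelForCausalLM

# Path is a placeholder; point it at wherever download_model.py
# saves the quantized ggml file on your machine.
llm = AutoModelForCausalLM.from_pretrained(
    "models/mpt-7b.ggmlv3.q4_0.bin",
    model_type="mpt",
)

# Run a single completion on the CPU.
print(llm("What is the capital of France?", max_new_tokens=64))
```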
Turn-style chat with conversation history, as of the latest commit:
![Inference Chat](https://user-images.githubusercontent.com/7272343/248859199-28a82f3d-ee54-44e4-b22d-ca348ac667e3.png)
Video of initial demo:
[Inference Demo](https://github.com/abacaj/mpt-30B-inference/assets/7272343/486fc9b1-8216-43cc-93c3-781677235502)
## Requirements
I recommend using Docker for this model; it will make everything easier for you. Minimum spec is a system with 8 GB of RAM. `python 3.10` is recommended.
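
If you go the Docker route, a minimal CPU-only image could look like the sketch below. This is an assumption, not an official Dockerfile from this repo, so check the repository for one before rolling your own:

```dockerfile
# Minimal sketch of a CPU-inference image (assumed, not the repo's
# official Dockerfile).
FROM python:3.10-slim

WORKDIR /app

# Install Python dependencies first so this layer is cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the code and fetch the quantized weights at build time.
COPY . .
RUN python download_model.py

CMD ["python", "inference.py"]
```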
## Tested working on
AMD Ryzen 3750H with 16 GB RAM, running Ubuntu 22.04 LTS. Runs fine, if not the fastest.
## Setup
First, create and activate a venv.
```sh
python -m venv env && source env/bin/activate
```
Next, install the dependencies.
```sh
pip install -r requirements.txt
```
Next, download the quantized model weights (about 4 GB).
```sh
python download_model.py
```
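Scripts like this typically just pull a single ggml file from the Hugging Face Hub. A rough sketch of the idea, with a hypothetical repo id and filename (the real values live in `download_model.py`):

```python
from huggingface_hub import hf_hub_download

# Repo id and filename are hypothetical placeholders; the real
# values are defined in download_model.py.
path = hf_hub_download(
    repo_id="TheBloke/MPT-7B-GGML",
    filename="mpt-7b.ggmlv3.q4_0.bin",
    local_dir="models",
)
print(f"Model downloaded to {path}")
```
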
Ready to rock: run inference.
```sh
python inference.py
```
Finally, edit the prompt and generation parameters in the inference script to fit your use case.
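
ctransformers exposes the usual sampling knobs directly on the model call, so tweaking generation might look like the sketch below. The parameter values are illustrative, not the defaults from `inference.py`:

```python
from ctransformers import AutoModelForCausalLM

# Path is a placeholder; point it at your downloaded ggml file.
llm = AutoModelForCausalLM.from_pretrained(
    "models/mpt-7b.ggmlv3.q4_0.bin", model_type="mpt"
)

# Sampling parameters are illustrative; tune them to taste.
output = llm(
    "Write a haiku about CPUs.",
    max_new_tokens=128,      # cap on generated tokens
    temperature=0.8,         # higher = more random
    top_k=40,                # sample from the 40 most likely tokens
    top_p=0.95,              # nucleus sampling threshold
    repetition_penalty=1.1,  # discourage repeated phrases
)
print(output)
```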