# MPT 7B inference code using CPU
Run inference on the latest MPT-7B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/ggml) quantized model. To run the model we'll use [ctransformers](https://github.com/marella/ctransformers), a library that provides Python bindings to ggml.
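As a minimal sketch of what loading a ggml model through ctransformers looks like (the model path here is a placeholder; the actual path comes from `download_model.py`):

```python
def load_model(model_path: str):
    """Load a ggml-quantized MPT model (requires ctransformers and the downloaded weights)."""
    from ctransformers import AutoModelForCausalLM  # deferred import: heavy dependency

    return AutoModelForCausalLM.from_pretrained(model_path, model_type="mpt")


def generate(llm, prompt: str, max_new_tokens: int = 128) -> str:
    # ctransformers models are callable and return the generated text as a string
    return llm(prompt, max_new_tokens=max_new_tokens)


# Usage (after running download_model.py; the filename is hypothetical):
#   llm = load_model("models/mpt-7b.ggmlv3.q4_0.bin")
#   print(generate(llm, "What is quantization?"))
```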
Turn-style chat with history (as of the latest commit):
![Inference Chat](https://user-images.githubusercontent.com/7272343/248859199-28a82f3d-ee54-44e4-b22d-ca348ac667e3.png)
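A turn-style chat boils down to folding prior turns into the prompt on every request. A minimal sketch (the `user:`/`assistant:` template is an assumption; the repo's `inference.py` may use a different format):

```python
def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Fold prior (user, assistant) turns plus the new message into one prompt."""
    lines = []
    for user, assistant in history:
        lines.append(f"user: {user}")
        lines.append(f"assistant: {assistant}")
    lines.append(f"user: {user_message}")
    lines.append("assistant:")  # cue the model to answer as the assistant
    return "\n".join(lines)
```

Each model reply gets appended to `history`, so the next call sees the full conversation.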
Video of initial demo:
[Inference Demo](https://github.com/abacaj/mpt-30B-inference/assets/7272343/486fc9b1-8216-43cc-93c3-781677235502)
## Requirements
I recommend using Docker for this model; it will make everything easier. Minimum specs: a system with 16GB of RAM. `python 3.10` is recommended.
## Tested working on
Nothing yet!
## Setup
First, create a venv and activate it.
```sh
python -m venv env && source env/bin/activate
```
Next, install the dependencies.
```sh
pip install -r requirements.txt
```
Next, download the quantized model weights (about 4GB).
```sh
python download_model.py
```
Ready to rock, run inference.
```sh
python inference.py
```
Finally, modify the prompt and generation parameters in the inference script to suit your use case.
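For reference, these are the kinds of sampling knobs ctransformers exposes; the values below are illustrative defaults, not the repo's actual settings:

```python
# Illustrative generation parameters for a ctransformers model call
# (values are assumptions; tune them for your use case)
GENERATION_PARAMS = {
    "max_new_tokens": 512,       # cap on how many tokens to generate
    "temperature": 0.8,          # higher = more random sampling
    "top_k": 40,                 # sample only from the 40 most likely tokens
    "top_p": 0.95,               # nucleus sampling probability threshold
    "repetition_penalty": 1.1,   # >1.0 discourages repeated tokens
}

# These would be passed to the model call, e.g. llm(prompt, **GENERATION_PARAMS)
```

Lower `temperature` (or `top_p`) makes output more deterministic; raising `repetition_penalty` helps if the model loops.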