update readme

This commit is contained in:
Anton Bacaj 2023-06-26 15:53:28 +00:00
parent 604519a319
commit 5029906cc2


Run inference on the latest MPT-30B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/llama.cpp) quantized model. To run the model we'll use a library called [ctransformers](https://github.com/marella/ctransformers) that has bindings to ggml in python.
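The loading-and-generation flow described above can be sketched with ctransformers' `AutoModelForCausalLM` API. The model path and the chat-prompt format below are placeholders for illustration; the repo's actual template and filenames may differ.

```python
import os


def build_prompt(history, user_message):
    """Flatten a chat history into a single prompt string.

    Illustrative turn format only; the repo's actual template may differ.
    `history` is a list of (user, assistant) message pairs.
    """
    turns = [f"user: {u}\nassistant: {a}" for u, a in history]
    turns.append(f"user: {user_message}\nassistant:")
    return "\n".join(turns)


# Placeholder path to a ggml-quantized model file (adjust to your download).
MODEL_PATH = "models/mpt-30b-chat.ggmlv0.q4_0.bin"

if os.path.exists(MODEL_PATH):
    from ctransformers import AutoModelForCausalLM

    # model_type="mpt" tells ctransformers which ggml architecture to use.
    llm = AutoModelForCausalLM.from_pretrained(MODEL_PATH, model_type="mpt")
    print(llm(build_prompt([], "Hello!"), max_new_tokens=64))
```

The guard on `MODEL_PATH` keeps the sketch importable without the (large) quantized weights present.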
Turn-based chat with history, as of the latest commit:
[Inference Chat](https://user-images.githubusercontent.com/7272343/248859199-28a82f3d-ee54-44e4-b22d-ca348ac667e3.png)
Video of initial demo:
[Inference Demo](https://github.com/abacaj/mpt-30B-inference/assets/7272343/486fc9b1-8216-43cc-93c3-781677235502)
## Requirements