readme change

This commit is contained in:
Anton Bacaj 2023-06-26 05:43:31 +00:00
parent 89c733223c
commit 72e38eb4af


@@ -1,6 +1,8 @@
 # MPT 30B inference code using CPU
-Run inference on the latest MPT-30B model using your CPU.
+Run inference on the latest MPT-30B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/llama.cpp) quantized model. To run the model we'll use a library called [ctransformers](https://github.com/marella/ctransformers), which provides Python bindings to ggml.
+I recommend a system with 32 GB of RAM.
 [Inference Demo](https://github.com/abacaj/mpt-30B-inference/assets/7272343/486fc9b1-8216-43cc-93c3-781677235502)