diff --git a/README.md b/README.md
index a1a146c..3c9728e 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
 # MPT 30B inference code using CPU
 
-Run inference on the latest MPT-30B model using your CPU.
+Run inference on the latest MPT-30B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/llama.cpp) quantized model. To run the model, we'll use [ctransformers](https://github.com/marella/ctransformers), a library that provides Python bindings to ggml.
+
+I recommend a system with 32 GB of RAM.
 
 [Inference Demo](https://github.com/abacaj/mpt-30B-inference/assets/7272343/486fc9b1-8216-43cc-93c3-781677235502)
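Since the diff adds a 32 GB RAM recommendation, a quick way to check whether a machine meets it is a short stdlib-only Python snippet (a sketch; it uses POSIX `os.sysconf`, so it works on Linux and macOS but not Windows):

```python
import os

# Total physical memory = page size * number of physical pages (POSIX sysconf).
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
num_pages = os.sysconf("SC_PHYS_PAGES")   # total physical pages
total_gib = page_size * num_pages / (1024 ** 3)

print(f"Total RAM: {total_gib:.1f} GiB")
if total_gib < 32:
    # Below the recommended 32 GB, the quantized MPT-30B model may not fit in memory.
    print("Warning: less than the recommended 32 GB of RAM.")
```

This only reports physical memory; actual headroom also depends on what else is running, so treat it as a rough check.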