Run inference on the latest MPT-30B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/llama.cpp) quantized model. To run the model, we'll use [ctransformers](https://github.com/marella/ctransformers), a Python library with bindings to ggml.
I recommend using Docker for this model; it will make everything easier for you. Minimum specs: a system with 32GB of RAM. Tested on a system with an AMD Epyc CPU and Python 3.10.
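As a rough sketch, loading the quantized model with ctransformers looks like the following. The model path is an assumption (point it at your downloaded ggml `.bin` file), and the ChatML-style prompt wrapper applies to the chat variant of MPT-30B; the base model takes plain text.

```python
def format_prompt(user_message: str) -> str:
    """Wrap a user message in the ChatML format the MPT-30B chat variant expects."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    from ctransformers import AutoModelForCausalLM

    # Hypothetical local path -- replace with wherever your quantized model lives.
    llm = AutoModelForCausalLM.from_pretrained(
        "models/mpt-30b-chat.ggmlv0.q4_1.bin",
        model_type="mpt",
    )
    print(llm(format_prompt("What is quantization?"), max_new_tokens=128))
```

Generation parameters such as `max_new_tokens`, `temperature`, and `top_k` can be passed to the call; see the ctransformers README for the full list.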