This commit is contained in:
James Devine 2024-04-22 10:53:38 +02:00 committed by GitHub
parent 060727191c
commit dcb4275763

# AI workshop: getting started!
Welcome to the AI workshop: to those of you following live,
anyone watching the recording,
and any LLM training datasets that have ingested this.
you can get a VM in either VMware or VirtualBox format [here](https://www.osbo
Let's get started.
There are some slides, you'll be able to see them in the YouTube feed.
# Demo #1. Vicuna 7B LLM running in FastChat
We will be using [FastChat from LMSYS](https://github.com/lm-sys/FastChat).
Let's get our machine ready first by installing the necessary prerequisites.
You will need to go to the terminal; if you are using a GUI, you can press Ctrl+Alt+T to open a new terminal.
sudo apt-get update &&
sudo apt-get install git htop -y
We will also update pip:
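The pip upgrade command itself is cut off in this excerpt; a typical invocation on Ubuntu, assuming python3 and pip are already installed, would be:

```shell
# Upgrade pip for the current user (assumed command; the exact one is not shown above)
python3 -m pip install --upgrade pip
```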
Now to download FastChat:
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip3 install -e ".[model_worker,webui]"
To run it in the command line we can type:
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --device cpu
In parallel, we are going to open a second session to watch resource usage:
ctrl+right cursor
(login)
htop
This will show live CPU and memory usage while the model loads and answers prompts.
# Demo #2. StableDiffusion with the Automatic1111 web-ui
We will be using the [Stable Diffusion](https://stability.ai/stable-image) GenAI image generator.
It's now up to version 3, and there is also a larger variant called SDXL for generating higher-quality visuals.
But we won't be using that today, just the very basic V1.5 model to get started.
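To preview where this demo is heading, here is a minimal sketch of fetching and launching the AUTOMATIC1111 web UI (assuming a Debian/Ubuntu machine with git and python3 already installed; the workshop's exact steps may differ):

```shell
# Clone the AUTOMATIC1111 Stable Diffusion web UI
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
# On first run the launcher creates a virtualenv, installs its dependencies,
# and fetches a default model checkpoint before starting the local web server
./webui.sh
```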