# AI workshop: getting started!
Welcome to the AI workshop, whether you are following live,
watching the recording,
or an LLM training dataset that has ingested this.
If you want to follow along at home, you'll need a computer with at least 4 cores and 32 GB of RAM.
The demos will be running on my home server, which is a Xeon E5-2660 v4 with 32 GB of RAM.
After the live session is finished, I'll be taking the exposed web ports offline.
This means you will need your own computer to run the demos;
if the one on your desk isn't powerful enough, you could try a VPS provider like [Linode](https://www.linode.com/lp/free-credit-100/?promo=sitelin100-02162023&promo_value=100&promo_length=60&utm_source=google&utm_medium=cpc&utm_campaign=11178784705_109179225043&utm_term=g_kwd-2629795801_e_linode&utm_content=648071059821&locationid=9186806&device=c_c&gad_source=1&gclid=Cj0KCQjwlZixBhCoARIsAIC745DfVa6TyYSY5jYITRquRy8gpofqytVnR4Qt5PmXQ0W5w_BJvuPVT0EaAqIeEALw_wcB).
A GPU isn't necessary for any of these demos, though if you have one everything will run a lot faster.
All the demos will be run in Ubuntu 22.04 Jammy Jellyfish, server version (no GUI).
If you are running something else and don't want to change your OS,
you can get a VM in either VMware or VirtualBox format [here](https://www.osboxes.org/ubuntu/).
Let's get started.
There are some slides, you'll be able to see them in the YouTube feed.
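Before we begin, here is a quick sanity check against the suggested minimum of 4 cores and 32 GB of RAM. This is just my own sketch, not part of any of the tools we'll use, and it is Linux-only since it reads `/proc/meminfo`:

```python
import os

# A quick sanity check against the suggested workshop minimum of
# 4 CPU cores and 32 GB of RAM. Linux-only: reads /proc/meminfo.
cores = os.cpu_count() or 1
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])  # first line is MemTotal
mem_gb = mem_kb / 1024 / 1024
print(f"CPU cores: {cores}, RAM: {mem_gb:.1f} GB")
if cores >= 4 and mem_gb >= 31:  # a 32 GB machine reports a little under 32
    print("OK for the workshop demos")
else:
    print("Below the suggested 4 cores / 32 GB")
```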
# Demo #1: Vicuna 7B LLM running in FastChat
We will be using [FastChat from LMSYS](https://github.com/lm-sys/FastChat).
Let's get our machine ready first by installing the necessary prerequisites.
You will need to go to the terminal; if you are using a GUI you can press ctrl+alt+t to open a new terminal.
```shell
sudo apt-get update && sudo apt-get install git htop -y
```
We will also update pip:

```shell
python3 -m pip install --upgrade pip
```
Now to download FastChat:
```shell
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip3 install -e ".[model_worker,webui]"
```
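If you want to confirm the editable install worked before moving on, this one-liner (my own sketch, not from the FastChat docs) checks whether Python can see the package:

```python
import importlib.util

# True means the "pip3 install -e" above succeeded and the
# fastchat package is importable from this Python environment.
installed = importlib.util.find_spec("fastchat") is not None
print("fastchat importable:", installed)
```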
To run it in the command line we can type:
```shell
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --device cpu
```
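Besides the interactive CLI, FastChat can also expose an OpenAI-compatible REST API (via `fastchat.serve.openai_api_server`, which the `model_worker` extra we installed supports). As a rough sketch, this is the kind of JSON body you would POST to its `/v1/chat/completions` endpoint; the model name and sampling values are my assumptions, and nothing is actually sent here:

```python
import json

# Build (but do not send) a chat-completion request body for
# FastChat's OpenAI-compatible server. Model name and temperature
# are example assumptions.
payload = {
    "model": "vicuna-7b-v1.5",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}
body = json.dumps(payload)
print(body)
```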
In parallel, we are going to open a second session to watch resource usage.
On the server console, press ctrl+right cursor to switch to a second virtual terminal, log in, and run:

```shell
htop
```

This will show live CPU and memory usage while the model is running.
# Demo #2: Stable Diffusion with the AUTOMATIC1111 web UI
We will be using the [Stable Diffusion](https://stability.ai/stable-image) GenAI image generator.
It's now up to version 3, and there is also a variant called SDXL for generating higher-quality visuals.
But we won't be using those today, just the very basic v1.5 model to get started.
Install the prerequisites and fetch the launcher script:

```shell
sudo apt-get install wget python3 python3-venv libgl1 libglib2.0-0 -y
mkdir automatic
cd automatic
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
chmod +x webui.sh
```
```shell
./webui.sh --skip-torch-cuda-test --precision full --no-half --listen --use-cpu all
```
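If you additionally pass the `--api` flag to `webui.sh`, AUTOMATIC1111 also serves a REST API, and `/sdapi/v1/txt2img` accepts a JSON body along these lines. This is only a sketch with example values as assumptions; nothing is sent here:

```python
import json

# Build (but do not send) a txt2img request body for the
# AUTOMATIC1111 API (requires launching webui.sh with --api).
# Prompt, step count, and dimensions are example values.
request = {
    "prompt": "a watercolor painting of a home server rack",
    "steps": 20,
    "width": 512,
    "height": 512,
}
print(json.dumps(request))
```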