Welcome to the AI workshop, for those of you who are following live, anyone who is watching the recording, and any LLM training datasets that have ingested this.

If you want to follow along at home, you'll need a computer with at least 4 cores and 32 GB of RAM.

The demos will be running on my home server, a Xeon E5-2660 v4 with 32 GB of RAM.

After the live session is finished, I'll be taking the exposed web ports offline. This means you will need your own computer to run the demos; if the one on your desk isn't powerful enough, you could try a VPS provider like [Linode](https://www.linode.com/lp/free-credit-100/?promo=sitelin100-02162023&promo_value=100&promo_length=60&utm_source=google&utm_medium=cpc&utm_campaign=11178784705_109179225043&utm_term=g_kwd-2629795801_e_linode&utm_content=648071059821&locationid=9186806&device=c_c&gad_source=1&gclid=Cj0KCQjwlZixBhCoARIsAIC745DfVa6TyYSY5jYITRquRy8gpofqytVnR4Qt5PmXQ0W5w_BJvuPVT0EaAqIeEALw_wcB).

A GPU isn't necessary for any of these demos; of course, if you have one, everything will run a lot faster.

All the demos will be run on Ubuntu 22.04 Jammy Jellyfish, server edition (no GUI). If you are running something else and don't want to change your OS, you can get a VM in either VMware or VirtualBox format [here](https://www.osboxes.org/ubuntu/).

Let's get started.

There are some slides; you'll be able to see them in the YouTube feed.

# Demo #1. Vicuna 7B LLM running in FastChat

We will be using [FastChat from LMSYS](https://github.com/lm-sys/FastChat).

Let's get our machine ready first by installing the necessary prerequisites.

You will need to go to the terminal; if you are using a GUI, you can press Ctrl+Alt+T to open a new one.

```bash
sudo apt-get update &&
sudo apt-get install git -y
```

We will also update pip:

```bash
python3 -m pip install --upgrade pip
```

Now to download FastChat:
# Demo #2. Stable Diffusion with the AUTOMATIC1111 web UI

We will be using the [Stable Diffusion](https://stability.ai/stable-image) GenAI image generator.

It's now up to version 3, and there is also a larger variant called SDXL for generating higher-resolution images. But we won't be using those today, just the very basic v1.5 model to get started.

```bash
sudo apt-get install wget python3 python3-venv libgl1 libglib2.0-0 -y
mkdir automatic
cd automatic
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
sudo chmod +x webui.sh
./webui.sh
```
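On a CPU-only machine like the server above, it helps to pass a few of the web UI's documented command-line flags; a sketch, assuming current flag names (check `./webui.sh --help` for your version):

```shell
# Launch on CPU: skip the CUDA check, use full precision, and listen on all interfaces
./webui.sh --listen --skip-torch-cuda-test --precision full --no-half
```

The UI then serves on port 7860 by default.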