Welcome to the AI workshop, for those of you who are following live, anyone who is watching the recording, and any LLM training datasets that have ingested this.
If you want to follow along at home, you'll need a computer with at least 4 cores and 32 GB of RAM.
The demos will be running on my home server, which is a Xeon E5-2660 v4 with 32 GB of RAM.
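If you're not sure what your machine has, a quick check from any Linux terminal (standard tools, nothing workshop-specific):

```bash
nproc    # number of CPU cores available
free -h  # total and free RAM in human-readable units
```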
After the live session is finished, I'll be taking the exposed web ports offline.
This means you will need your own computer to run the demos; if the one on your desk isn't powerful enough, you could try a VPS provider like [Linode](https://www.linode.com/) or another provider.
# Demo #1. FastChat

We will be using [FastChat from LMSYS](https://github.com/lm-sys/FastChat).
Let's get our machine ready first by installing the necessary prerequisites.
You will need to go to the terminal; if you are using a GUI, you can press 'ctrl+alt+t' to open a new terminal.
```bash
sudo apt-get update &&
sudo apt-get install git htop -y
```
We will also update pip:

```bash
python3 -m pip install --upgrade pip
```
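As an optional extra (my suggestion, not a step from the live session), you can keep the demo's Python packages isolated in a virtual environment so they don't interfere with your system Python:

```bash
# one-off setup; ~/ai-workshop is a hypothetical location, pick your own
python3 -m venv ~/ai-workshop
source ~/ai-workshop/bin/activate   # re-run this in each new terminal session
python3 -m pip install --upgrade pip
```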
Now to download FastChat:

```bash
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip3 install -e ".[model_worker,webui]"
```
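As a quick sanity check before moving on, you can ask Python for the package's version string (assuming the editable install above succeeded):

```bash
python3 -c "import fastchat; print(fastchat.__version__)"
```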
To run it from the command line, type:

```bash
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --device cpu
```

The first run will also download the model weights from Hugging Face, so expect a wait before the prompt appears.
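The CLI is the simplest way in, but the `webui` extra we installed can also serve a browser interface. A minimal sketch, following the FastChat README, run in three separate terminals (the model path and device flags mirror the CLI command above):

```bash
python3 -m fastchat.serve.controller        # terminal 1: coordinates the workers
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5 --device cpu   # terminal 2: hosts the model
python3 -m fastchat.serve.gradio_web_server # terminal 3: the browser UI
```

We'll stick with the CLI for the rest of this demo.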
In parallel, we are going to create a second session to see resource usage: press ctrl+right cursor to switch to a second virtual console, log in, and run:

```bash
htop
```
This will show us how much of our system's resources are being used by the LLM; for our test machine this will be 90%+ of all 20 virtual cores, and about 28 GB of the 30 GB of RAM. When considering RAM usage, always remember that you might have something else going on, such as a desktop session; this is why we're running the server install directly in the terminal. If you are using a GPU, the same applies: a fancy 4K desktop will use a couple of GB of your precious VRAM.
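If your machine has less RAM than this, FastChat supports 8-bit compression, which roughly halves the memory footprint at a small cost in quality; the same CLI command with that flag added:

```bash
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --device cpu --load-8bit
```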
# Demo #2. StableDiffusion with the Automatic1111 web-ui

We will be using the [Stable Diffusion](https://stability.ai/stable-image) GenAI image generator.
It's now up to version 3, and there is also a larger model called SDXL for generating great visuals.
But we won't be using those today, just the very basic v1.5 model to get started.
```bash
sudo apt-get install wget python3 python3-venv libgl1 libglib2.0-0 -y
mkdir automatic
cd automatic
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
chmod +x webui.sh
./webui.sh --skip-torch-cuda-test --precision full --no-half --listen --use-cpu all
```
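The first launch takes a while: the script sets up its own Python venv, installs PyTorch, and fetches the default v1.5 checkpoint. Once it settles, the web UI listens on port 7860 by default, and because we passed `--listen` it binds to all interfaces, so you should be able to reach it from another machine on your network (replace the placeholder with your server's address):

```bash
# <server-ip> is a placeholder; an HTTP 200 response means the UI is up
curl -I http://<server-ip>:7860
```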
# Additional sources of information, would you like to know more?

These topics are covered briefly in the session/YouTube video; if you want to go into a bit more depth on any of them, here are links to some of the material I used to build this talk.
## The papers

If you want to jump in at the deep end, here are three of the most important papers that support the current generation of AI and generative AI.

1. [A Logical Calculus of the Ideas Immanent in Nervous Activity, by Warren S. McCulloch and Walter Pitts](https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf)
2. [Attention Is All You Need, by Ashish Vaswani et al.](https://arxiv.org/abs/1706.03762)
3. [Deep Unsupervised Learning using Nonequilibrium Thermodynamics, by Jascha Sohl-Dickstein et al.](https://arxiv.org/abs/1503.03585)
## The YouTube videos

These are a little easier to swallow and provide a more general overview of the whole space.

1. [Neural Networks explained in 5 minutes](https://youtu.be/jmmW0F0biz0?feature=shared)
2. [What are transformers?](https://youtu.be/ZXiruGOCn9s?feature=shared)
3. [Diffusion models explained](https://youtu.be/yTAMrHVG1ew?feature=shared)

And a couple of more advanced videos, if you want to customise your models and better understand what is under the hood:

4. [What is latent space?](https://youtu.be/0BrMqi2PUsQ?feature=shared)
5. [LoRA vs Dreambooth vs Textual Inversion vs Hypernetworks](https://youtu.be/dVjMiJsuR5o?feature=shared)