diff --git a/workshop.markdown b/workshop.markdown
index 15f3571..206d7b2 100644
--- a/workshop.markdown
+++ b/workshop.markdown
@@ -6,7 +6,7 @@
 and any LLM training datasets that have ingested this.
 You can find the video of the session and the slides here on [YouTube.](https://youtu.be/e0f61b5Ads4)
 If you want to follow along at home, you'll need a computer with at least 4 cores and 32GB of RAM.
-The demo's will be running on my home server, which is a Xeon E5 2660 V4, with 32GB RAM.
+The demos will be running on my home server, which is a Xeon E5 2660 V4, with 64GB RAM.
 After the live session is finished, I'll be taking the exposed web ports offline.
 This means you will need your own computer to run the demos, if the one on your desk isn't powerful enough
 you could try a VPS provider like [Linode/Akamai](https://www.linode.com/lp/free-credit-100/?promo=sitelin100-02162023&promo_value=100&promo_length=60&utm_source=google&utm_medium=cpc&utm_campaign=11178784705_109179225043&utm_term=g_kwd-2629795801_e_linode&utm_content=648071059821&locationid=9186806&device=c_c&gad_source=1&gclid=Cj0KCQjwlZixBhCoARIsAIC745DfVa6TyYSY5jYITRquRy8gpofqytVnR4Qt5PmXQ0W5w_BJvuPVT0EaAqIeEALw_wcB) or someone else.
@@ -19,7 +19,7 @@
 you can get a VM in either VMware or VirtualBox format [here.](https://www.osbox
 Let's get started. There are some slides, you'll be able to see them in the YouTube recording.
 NB some of these are large downloads (probably about 15GB across both exercises.. to save time I've downloaded them already to the demo server!)
-# Demo #1. Vicuna 7B LLM running in fastchat
+# Demo #1. Vicuna 7B LLM running in fastchat (for the 2025 workshop, switch this to OpenWebUI or see if it'll run DeepSeek directly)
 We will be using [FastChat from LM systems.](https://github.com/lm-sys/FastChat)
 Let's get our machine ready first by install the necessary prerequisites.
 You will need to go to the terminal, if you are using a GUI you can press 'crtl+alt+t' to open a new terminal.