diff --git a/README.md b/README.md
index a0a1bc4..c8db1d5 100644
--- a/README.md
+++ b/README.md
@@ -56,7 +56,7 @@ The result is saved (by default) in `results/result_voice.mp4`. You can specify
 
 Preparing LRS2 for training
 ----------
-Our models are trained on LRS2. Training on other datasets might require small modifications to the code. Changes to FPS etc. would need significant code changes.
+Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
 
 ##### LRS2 dataset folder structure
 ```
@@ -89,7 +89,7 @@ Train!
 There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
 
 ##### Training the expert discriminator
-You can download [the pre-trained weights]() if you want to skip this step. To train it:
+You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
 ```bash
 python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
 ```
@@ -101,6 +101,17 @@ python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
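
For reference, the two training steps referenced in the hunks above could be invoked roughly as follows. This is a minimal sketch using the flags shown in the diff; the `checkpoints/...` directories and the expert-discriminator checkpoint name are illustrative placeholders, not paths mandated by the repo.

```bash
# Step (i): train the expert lip-sync discriminator on the preprocessed LRS2 data.
# "checkpoints/syncnet/" is an example path; any writable directory works.
python color_syncnet_train.py \
    --data_root lrs2_preprocessed/ \
    --checkpoint_dir checkpoints/syncnet/

# Step (ii): train the Wav2Lip model, pointing --syncnet_checkpoint_path at
# either the checkpoint produced by step (i) or the downloadable pre-trained
# expert-discriminator weights.
python wav2lip_train.py \
    --data_root lrs2_preprocessed/ \
    --checkpoint_dir checkpoints/wav2lip/ \
    --syncnet_checkpoint_path checkpoints/syncnet/<expert_disc_checkpoint>.pth
```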