diff --git a/evaluation/README.md b/evaluation/README.md
index f0da2f3..04f5fc3 100644
--- a/evaluation/README.md
+++ b/evaluation/README.md
@@ -1,6 +1,6 @@
 # Novel Evaluation Framework, new filelists, and using the LSE-D and LSE-C metric.
 
-Our paper also proposes a novel evaluation framework (Section 4). To evaluate on LRS2, LRS3, and LRW, the filelists are present in the `test_filelists` folder. Please use `gen_videos_from_filelist.py` script to generate the videos. After that, you can calculate the LSE-D and LSE-C scores using the instructions below.
+Our paper also proposes a novel evaluation framework (Section 4). To evaluate on LRS2, LRS3, and LRW, the filelists are present in the `test_filelists` folder. Please use `gen_videos_from_filelist.py` script to generate the videos. After that, you can calculate the LSE-D and LSE-C scores using the instructions below. Please see [this thread](https://github.com/Rudrabha/Wav2Lip/issues/22#issuecomment-712825380) on how to calculate the FID scores.
 
 The videos of the ReSyncED benchmark for real-world evaluation will be released soon.
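
For reference, a typical end-to-end run of the workflow described in the edited paragraph might look like the sketch below. The argument names (`--filelist`, `--results_dir`, `--data_root`, `--checkpoint_path`) and the paths are illustrative assumptions, not necessarily the script's actual CLI; check the argparse definitions in `gen_videos_from_filelist.py` before running.

```bash
# Illustrative only: flag names and paths below are assumptions, not the script's confirmed CLI.

# Step 1: generate lip-synced videos for every entry in one of the provided test filelists (e.g. LRS2).
python gen_videos_from_filelist.py \
    --filelist test_filelists/lrs2.txt \
    --results_dir results/lrs2 \
    --data_root /path/to/LRS2 \
    --checkpoint_path /path/to/wav2lip_checkpoint.pth

# Step 2: score the generated videos with the SyncNet-based tooling to obtain LSE-D and LSE-C,
# following the instructions given later in this README.
```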