Introduction
Advances in automatic sign language translation (SLT) into spoken languages have mostly been benchmarked on datasets of limited size and restricted domain. Our work advances the state of the art by providing the first baseline results on How2Sign, a large and broad dataset.
We train a Transformer over I3D video features, using reduced BLEU as the reference metric for validation instead of the widely used BLEU score. We report a BLEU score of 8.03 and publish the first open-source implementation of its kind to promote further advances.
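For illustration, below is a minimal sketch of how a reduced BLEU validation metric can be computed with sacrebleu, assuming it strips punctuation and a small set of function words before scoring. The stopword list and helper names are illustrative assumptions, not the exact ones used in our implementation.

import string
import sacrebleu

# Illustrative stopword list; the actual filtered vocabulary may differ.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def reduce_text(sentence: str) -> str:
    """Lowercase, drop punctuation, and remove stopwords."""
    table = str.maketrans("", "", string.punctuation)
    tokens = sentence.lower().translate(table).split()
    return " ".join(t for t in tokens if t not in STOPWORDS)

def reduced_bleu(hypotheses, references):
    """Corpus-level BLEU computed over the reduced hypotheses and references."""
    hyps = [reduce_text(h) for h in hypotheses]
    refs = [reduce_text(r) for r in references]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

# Example: score a dev-set hypothesis against its reference; during training,
# the checkpoint maximizing this value on the validation set would be kept.
hyp = ["today we are going to learn how to tie a knot"]
ref = ["today we will learn how to tie a knot"]
print(f"reduced BLEU: {reduced_bleu(hyp, ref):.2f}")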
If you find this work useful, please cite us!
@InProceedings{slt-how2sign-wicv2023,
  author    = {Laia Tarrés and Gerard I. Gállego and Amanda Duarte and Jordi Torres and Xavier Giró-i-Nieto},
  title     = {Sign Language Translation from Instructional Videos},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year      = {2023}
}
Model
The building blocks of our implementation are depicted in the following figure, where we show an example of sign language translation:
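As a rough complement to the figure, the sketch below shows the general shape of such a pipeline in PyTorch: precomputed I3D clip features are projected into a standard Transformer encoder-decoder that generates the spoken-language translation. This is a minimal illustration, not the released implementation; the feature dimension, model size, and vocabulary size are assumptions, and positional encodings are omitted for brevity.

import torch
import torch.nn as nn

class SignTranslationTransformer(nn.Module):
    """Toy encoder-decoder over precomputed I3D features (illustrative sizes)."""
    def __init__(self, feat_dim=1024, d_model=256, vocab_size=7000,
                 nhead=4, num_layers=6):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)   # I3D features -> model dim
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.output_proj = nn.Linear(d_model, vocab_size)

    def forward(self, i3d_feats, tgt_tokens):
        # i3d_feats: (batch, num_clips, feat_dim); tgt_tokens: (batch, tgt_len)
        src = self.input_proj(i3d_feats)
        tgt = self.tgt_embed(tgt_tokens)
        # Causal mask so each target position only attends to previous tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_tokens.size(1))
        out = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.output_proj(out)  # logits over the target vocabulary

# Example forward pass with random tensors standing in for I3D features.
model = SignTranslationTransformer()
feats = torch.randn(2, 120, 1024)          # 2 videos, 120 clip features each
tokens = torch.randint(0, 7000, (2, 20))   # shifted target token ids
logits = model(feats, tokens)              # (2, 20, 7000)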
Results
Quantitative results on the How2Sign dataset with the best-performing model:
Examples
Qualitative results for the best-performing model on the How2Sign test partition:
Code
Poster
Acknowledgements
This work has been partially supported under grant agreement 2021-SGR-0047 and within the framework of project PID2019-107579RB-I00/AEI/10.13039/501100011033, as well as research grants PRE2020-094223, PID2021-126248OB-I00, and PID2019-107255GB-C21, financed by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF).