Real-Time Sign Language Detection Using Human Pose Estimation

Amit Moryossef, Ioannis Tsochantaridis, Roee Aharoni, Sarah Ebling, Srini Narayanan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

42 Scopus citations

Abstract

We propose a lightweight real-time sign language detection model, motivated by the need for such a capability in videoconferencing. We extract optical-flow features based on human pose estimation and, using a linear classifier, show that these features are meaningful, reaching 80% accuracy on the Public DGS Corpus. Using a recurrent model directly on the input, we see improvements of up to 91% accuracy, while still running in under 4 ms. We describe a demo application for sign language detection in the browser to demonstrate its potential use in videoconferencing applications.
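The pipeline described in the abstract (per-frame pose keypoints → movement-based "optical flow" feature → linear or recurrent classifier) can be illustrated with a minimal sketch. This is not the authors' code: the feature function, the keypoint count (137, as produced by a full-body pose estimator such as OpenPose), the fps scaling, and the LSTM sizes are all illustrative assumptions.

# Minimal sketch (not the authors' implementation): frame-wise signing detection
# from pose keypoints, following the idea described in the abstract.
import numpy as np
import torch
import torch.nn as nn

def pose_flow_feature(keypoints: np.ndarray, fps: float = 25.0) -> np.ndarray:
    """Mean L2 norm of frame-to-frame keypoint displacement, scaled by fps.

    keypoints: (T, K, 2) array of (x, y) positions; returns a (T,) feature vector.
    """
    deltas = np.diff(keypoints, axis=0)                 # (T-1, K, 2) displacements
    flow = np.linalg.norm(deltas, axis=2).mean(axis=1)  # mean movement per frame
    flow = np.concatenate([[0.0], flow]) * fps          # pad first frame, normalize by fps
    return flow.astype(np.float32)

class SigningDetector(nn.Module):
    """Tiny recurrent classifier over the flow feature (hypothetical sizes)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # per-frame logits: signing / not signing

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(flow.unsqueeze(-1))  # (B, T, hidden)
        return self.head(out)                  # (B, T, 2)

# Usage with random stand-in data (100 frames, 137 keypoints):
keypoints = np.random.rand(100, 137, 2)
flow = torch.from_numpy(pose_flow_feature(keypoints)).unsqueeze(0)
logits = SigningDetector()(flow)
print(logits.shape)  # torch.Size([1, 100, 2])

A linear classifier on the same scalar feature corresponds to the 80% baseline mentioned in the abstract; the recurrent variant sketched here stands in for the model reported at up to 91% accuracy.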

Original language: English
Title of host publication: Computer Vision – ECCV 2020 Workshops, Proceedings
Editors: Adrien Bartoli, Andrea Fusiello
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 237-248
Number of pages: 12
ISBN (Print): 9783030660956
DOIs
State: Published - 2020
Event: Workshops held at the 16th European Conference on Computer Vision, ECCV 2020 - Glasgow, United Kingdom
Duration: 23 Aug 2020 – 28 Aug 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12536 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Workshops held at the 16th European Conference on Computer Vision, ECCV 2020
Country/Territory: United Kingdom
City: Glasgow
Period: 23/08/20 – 28/08/20

Bibliographical note

Publisher Copyright:
© 2020, Springer Nature Switzerland AG.

Keywords

  • Sign language detection
  • Sign language processing
