Sign Language Recognition and Production System for Indonesia
[Image: Deaf & Dumb School, Saoner. Source: Rotary Club of Nagpur]
Challenge Background
Indonesia's Deaf community is large: an estimated 2.6 million people are deaf or hard of hearing. They live across the country, but barriers in accessibility, education, and employment are widespread. For example, only two-thirds of Indonesia's provinces have schools for the Deaf, and many children lack access to education tailored to their needs.
Additionally, while Indonesia has two recognized sign languages, Sistem Isyarat Bahasa Indonesia (SIBI) and Bahasa Isyarat Indonesia (BISINDO), their use and institutional support remain limited, further complicating communication and accessibility for the Deaf community.
The Problem
Current hearing-accessibility tools for sign language recognition focus primarily on alphabet (fingerspelling) recognition. Conversational fluency requires word-level recognition, which is considerably harder due to regional variation and subtle signing movements. Few AI solutions are tailored to BISINDO, so customised tools are needed to support Indonesia's Deaf community effectively.
Goal of the Project
- Extract key points from sign language videos using computer vision tools like OpenPose or MediaPipe.
- Apply data augmentation to expand sign language corpus for better accuracy.
- Train a deep learning model (e.g., CNN-LSTM) to recognize hand gestures, and enable real-time sign language recognition.
- Create algorithms for blending sign animations for natural flow, producing real-time visual output from text.
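The first goal above can be sketched as a small post-processing step: once a tool like MediaPipe Hands has produced 21 (x, y, z) hand landmarks per frame, they can be normalized into a translation- and scale-invariant feature vector for the recognition model. The landmark layout (21 points, wrist at index 0) follows MediaPipe's hand model, but the normalization scheme below is an illustrative assumption, not part of the project spec.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Convert 21 (x, y, z) hand landmarks into a normalized 63-dim vector.

    Translates so the wrist (index 0) sits at the origin, then scales by
    the largest distance from the wrist, so hand size and camera distance
    cancel out.  Layout follows MediaPipe Hands (21 points, wrist first).
    """
    pts = np.asarray(landmarks, dtype=np.float64)   # shape (21, 3)
    pts = pts - pts[0]                              # move wrist to origin
    scale = np.linalg.norm(pts, axis=1).max()       # farthest point from wrist
    if scale > 0:
        pts = pts / scale
    return pts.ravel()                              # shape (63,)
```

Per-frame feature vectors can then be stacked into a (frames, 63) array as input to the sequence model.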
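Data augmentation (the second goal) can operate directly on the extracted keypoint sequences rather than on raw video. A cheap, common pair of transforms is horizontal mirroring (left-handed vs. right-handed signing) and small Gaussian jitter; both functions below are illustrative assumptions about what the augmentation step might include, not a fixed recipe. Note that mirroring changes meaning for some signs, so it should only be applied where linguistically safe.

```python
import numpy as np

def mirror_sequence(seq):
    """Flip a keypoint sequence horizontally.

    seq: array of shape (frames, points, 3) with x, y in [0, 1]
    image coordinates; mirroring maps x -> 1 - x.
    """
    out = np.array(seq, dtype=np.float64, copy=True)
    out[..., 0] = 1.0 - out[..., 0]
    return out

def jitter_sequence(seq, sigma=0.01, rng=None):
    """Add small Gaussian noise to every coordinate to simulate
    detector noise and slight variation between signers."""
    rng = np.random.default_rng(rng)
    seq = np.asarray(seq, dtype=np.float64)
    return seq + rng.normal(0.0, sigma, size=seq.shape)
```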
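For the last goal, the simplest blending algorithm is linear interpolation between the final pose of one sign animation and the first pose of the next, inserting a few transition frames so the avatar moves smoothly instead of jumping. The sketch below assumes poses are flat keypoint vectors; a production system would likely use easing curves (ease-in/ease-out) rather than a straight lerp.

```python
import numpy as np

def blend_transition(pose_a, pose_b, n_frames=5):
    """Generate n_frames intermediate poses between pose_a and pose_b
    by linear interpolation (the two endpoint poses are excluded)."""
    pose_a = np.asarray(pose_a, dtype=np.float64)
    pose_b = np.asarray(pose_b, dtype=np.float64)
    ts = np.linspace(0.0, 1.0, n_frames + 2)[1:-1]  # skip t=0 and t=1
    return [(1 - t) * pose_a + t * pose_b for t in ts]
```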
Project Timeline
1. Gather raw data for both sign gestures and text labels.
2. Preprocess and augment the dataset to improve model robustness.
3. Begin development of the gesture recognition model.
4. Train and validate the sign-to-text model, making adjustments as needed.
5. Build the text-to-sign system, incorporating a structured corpus and animations.
6. Refine sign animations and improve text-to-sign translation.
7. Integrate the sign-to-text and text-to-sign systems into a unified platform.
8. Perform final testing, troubleshoot issues, and optimize for deployment.
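During the integration and testing steps, per-frame predictions from the recognition model tend to flicker; a simple way to stabilize the real-time output is a sliding-window majority vote over the most recent frame labels. The helper below is a hedged sketch of one such stabilizer, not part of any specific framework.

```python
from collections import Counter, deque

class PredictionSmoother:
    """Majority-vote smoother over the last `window` frame-level labels."""

    def __init__(self, window=15):
        self.buffer = deque(maxlen=window)

    def update(self, label):
        """Add one frame's predicted label; return the stabilized label."""
        self.buffer.append(label)
        return Counter(self.buffer).most_common(1)[0][0]
```

A larger window gives steadier output at the cost of added latency, so the window size is a tuning knob for the real-time system.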
What you'll learn
- Experience in data collection, preprocessing, and augmentation for video-based datasets.
- Mastery of deep learning techniques for sign language recognition.
- Development of animation systems for natural sign language production.
- Skills in creating an integrated AI system for real-time output.
First Omdena Local Chapter Project?
Beginner-friendly, but also welcomes experts
Education-focused
Duration: 4 to 8 weeks
Open-source
Your Benefits
Address a significant real-world problem with your skills
Build your project portfolio
Access paid projects (as an Omdena Top Talent)
Get hired at top organizations
Requirements
Good English
Suitable for AI/Data Science beginners as well as more senior collaborators
Learning mindset