Adaptive Input
Treats idiosyncratic gestures and non-speech sounds as valid, modelled input signals.
GestureLabs · A3CP
Communication is a human right. GestureLabs is developing open-source AI communication tools that adapt to the individual and their abilities.
Ability-adaptive communication is a new approach to Augmentative and Alternative Communication (AAC). Traditional AAC helps millions of non-speaking people, but many individuals with complex motor, sensory, or cognitive profiles cannot use these systems and are left without reliable ways to express themselves. We are building an AI-driven communication system that learns from the user’s own gestures, sounds, and patterns over time. Instead of forcing people to adapt to rigid interfaces, the system adapts to their abilities. It is designed from the start to protect privacy, support ethical use, and operate transparently.
Stores derived features instead of raw video or audio, supporting local, privacy-preserving deployments.
Designed to operate with or without internet access, for use in homes, therapy centers, schools, and wherever the user wants to be.
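The "derived features, not raw media" principle above can be illustrated with a minimal sketch. This is a hypothetical example, not the actual A3CP pipeline: simple frame statistics stand in for the richer gesture features a real system would compute, and only the small feature vector is kept while the raw frame is discarded.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Reduce a raw frame to a small derived-feature vector.

    Hypothetical features: mean intensity, intensity variance, and the
    (row, col) position of the brightest pixel. A real gesture pipeline
    would compute landmarks or motion descriptors instead.
    """
    mean = frame.mean()
    var = frame.var()
    r, c = np.unravel_index(frame.argmax(), frame.shape)
    return np.array([mean, var, r, c], dtype=np.float32)

# A raw 480x640 grayscale frame (~1.2 MB as float32); only 4 floats are stored.
frame = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(np.float32)
features = extract_features(frame)
del frame  # the raw frame is never persisted; only `features` would be saved
```

Storing only such derived vectors keeps deployments local and privacy-preserving: the original video or audio never needs to leave the device or be written to disk.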
GestureLabs is an international team of researchers, developers, academics, and product managers committed to supporting people with complex communication needs. The platform is built as public digital infrastructure: transparent, auditable, and independent of commercial lock-in.


We are looking for institutions, developers, researchers, and families interested in co-designing, creating, and evaluating a new generation of ability-adaptive communication tools.