Why GestureLabs is building the Ability-Adaptive Augmentative Communication Platform (A3CP)

A3CP is an open-source communication platform for people who cannot rely on speech. We work with families, clinicians, and care organisations to build systems that learn how each person communicates—through movement, sound, and context—rather than forcing them to adapt to rigid interfaces.

What problem A3CP is trying to solve

Across Europe, many people with severe motor or cognitive disabilities still lack reliable ways to communicate. Current assistive-communication tools often assume abilities—precise pointing, eye-tracking, symbol selection—that many of these users do not have. A3CP is our attempt to close this gap.

The challenge

Non-speaking people are often asked to fit into technology that was not designed for them. Systems built around fixed input methods can leave individuals isolated and caregivers overwhelmed, especially in understaffed welfare institutions.

Our approach

A3CP is a modular, open-source platform that learns each person’s patterns of movement, sound, and context. Caregivers remain in the loop: they label gestures, confirm meanings, and guide how the system adapts over time.

The impact we aim for

By making ability-adaptive communication infrastructure openly available, we want to enable non-speaking people to express needs and preferences, reduce caregiver stress, and give welfare organisations tools they can audit, extend, and trust.

Where A3CP came from

A3CP began with a family’s experience. Andrea’s son Eric, a bright and curious child with cerebral palsy, understands two languages and communicates through sound, movement, and expression—but does not yet speak. Everyday interactions made two things clear: the richness of his non-verbal communication, and the limits of existing assistive systems that required him to adapt to fixed input methods he could not reliably control.

Andrea’s wish to give her son a voice initiated a collaboration between technologists, designers, and researchers at The Open University (UK). Early prototypes combined simple sensors, gesture recognition, and participatory design with families and therapists. These experiments showed that even low-cost technology can capture meaningful signals when it is designed around individual ability.

GestureLabs was founded in Berlin to carry this work forward as a non-profit initiative. A3CP is now being developed as open, transparent infrastructure that care organisations, researchers, and families can freely use, improve, and share.

Image: Andrea with her son Eric during a communication session. Their experience helped shape the core ideas behind A3CP.

What we are building

The Ability-Adaptive Augmentative Communication Platform is not a single app, but a stack of components that can be deployed in schools, care homes, therapy centres, and family homes.

Inputs: gestures, sound, context

A3CP takes in signals from cameras, microphones, and other sensors, turning movement, vocalisation, and environmental context into structured representations. The goal is to work with affordable, widely available hardware.
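
As a rough illustration, one such structured representation could be a small, validated record like the sketch below. It uses Pydantic v2, which the project status section mentions, but every field name here is an assumption made for illustration rather than A3CP's actual schema.

# Illustrative sketch only: a possible shape for one structured input event.
# Field names are assumptions, not A3CP's documented schema.
from datetime import datetime, timezone
from typing import Literal

from pydantic import BaseModel, Field


class InputEvent(BaseModel):
    """One observed signal from a camera, microphone, or other sensor."""

    person_id: str
    modality: Literal["gesture", "vocalisation", "context"]
    features: list[float]  # e.g. pose keypoints or an audio embedding
    recorded_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


event = InputEvent(person_id="p-001", modality="gesture", features=[0.12, 0.87, 0.45])
print(event.model_dump_json())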

Learning with caregivers

The platform learns over time how a specific person uses gestures and sounds to express meaning. Caregivers can label new actions, correct mistakes, and see how the system is changing—keeping humans in control of interpretation.
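
One simple way to picture that feedback loop is a caregiver label that either confirms or corrects the system's suggested interpretation, roughly as in the sketch below; the record structure and names are hypothetical, not the platform's documented format.

# Illustrative sketch: a caregiver label confirming or correcting an interpretation.
# Names and fields are assumptions for illustration only.
from pydantic import BaseModel


class CaregiverLabel(BaseModel):
    event_id: str          # which observed gesture or sound this refers to
    predicted_intent: str  # what the system suggested, e.g. "drink"
    confirmed_intent: str  # what the caregiver says it actually meant
    caregiver_id: str

    @property
    def is_correction(self) -> bool:
        return self.predicted_intent != self.confirmed_intent


label = CaregiverLabel(
    event_id="evt-0142",
    predicted_intent="drink",
    confirmed_intent="music",
    caregiver_id="c-7",
)
print(label.is_correction)  # True: the system should adapt toward "music"

Corrections like this are exactly the signal that keeps interpretation under human control: the system proposes, the caregiver decides.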

Outputs and integration

Recognised intents can be rendered as speech, text, or symbols and integrated with existing AAC tools where appropriate. The modular FastAPI-based stack and documented schemas are designed for extension and audit.
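
To make the integration idea concrete, a single endpoint on such a FastAPI-based stack might accept an observed event and return a recognised intent together with a rendering hint. The route, models, and placeholder response below are assumptions for illustration, not A3CP's actual API.

# Illustrative FastAPI sketch: recognise an intent and suggest how to render it.
# Endpoint path, models, and the stub response are assumptions, not A3CP's API.
from typing import Literal

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class IntentRequest(BaseModel):
    person_id: str
    features: list[float]


class IntentResponse(BaseModel):
    intent: str
    confidence: float
    render_as: Literal["speech", "text", "symbol"]


@app.post("/intents/recognise", response_model=IntentResponse)
def recognise_intent(request: IntentRequest) -> IntentResponse:
    # A real deployment would load the person's learned profile and run
    # inference here; this stub just returns a fixed example.
    return IntentResponse(intent="drink", confidence=0.82, render_as="speech")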

What we value and how we design A3CP

Shaping an adaptive communication system for non-speaking people requires principled choices about data, design, and governance.

  1. Put the person first. We design around individual abilities and contexts, not around standardised gestures or abstract user models.
  2. Caregiver-in-the-loop. Caregivers label new actions, correct interpretations, and review how the system adapts over time.
  3. Open and inspectable. Schemas, logs, and code are open so institutions can audit behaviour and adapt the platform to their needs.
  4. Privacy and safety by design. We prioritise local processing, data minimisation, and clear documentation of how information flows.
  5. Scientific and clinical rigour. We work with researchers, clinicians, and families using transparent methods and reproducible protocols.

Where the project is today

A3CP is moving from research prototype to deployable, open-source infrastructure. Here is where we are now and what we are preparing next.

Now

  • Core architecture defined using a modular FastAPI-based stack.
  • Canonical data schemas validated with Pydantic v2 and shared with partners.
  • Adaptive memory and caregiver-feedback framework under active implementation.
  • Collaboration network forming in Berlin–Brandenburg and internationally.

Next

  • End-to-end demonstrator deployment in pilot environments.
  • Pilot studies with care and research partners planned for 2026.
  • Documentation and training materials for institutions and developers.
  • Refinement of privacy-preserving deployment options (on-premise, edge-first).

Interested in collaborating on A3CP?

We are looking for partners in care, research, engineering, and policy who want to help shape ability-adaptive communication technology as open, transparent infrastructure.