Expose any Android app's capabilities to any on-device AI assistant.

OACP gives apps a simple way to describe what they can do, and gives assistants a standard way to discover and invoke those actions on device.
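As a sketch, an app's capability manifest might look like the following. The exact oacp.json schema is not shown here, so every field name, the example package, and the `dispatch` values are illustrative assumptions, not the official format:

```json
{
  "oacp": "1.0",
  "app": "com.example.notes",
  "capabilities": [
    {
      "id": "create_note",
      "description": "Create a new note with a title and an optional body",
      "parameters": {
        "title": { "type": "string", "required": true },
        "body":  { "type": "string", "required": false }
      },
      "dispatch": "broadcast"
    }
  ]
}
```

The companion OACP.md would then describe the same actions in prose, giving the assistant's language model vocabulary to match user requests against.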


How it works

  1. Describe the app
     The app ships oacp.json and OACP.md so an assistant can understand the actions, parameters, and vocabulary.
  2. Discover capabilities
     An OACP-compatible assistant scans the device for .oacp ContentProviders and reads each manifest at runtime.
  3. Match the request
     The assistant uses its on-device pipeline to match the user's request to the best capability and extract any parameters.
  4. Invoke the app
     The assistant dispatches an Android intent: background tasks use broadcasts, and UI flows use activities.
  5. Return the result
     If the action is async, the app sends a structured result back, and the assistant can speak or display it.
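The discovery step can be sketched with standard Android APIs. The ".oacp" authority suffix comes from the protocol description above; the `manifest` content-URI path and the single-column cursor shape are assumptions about how an OACP provider exposes its oacp.json:

```kotlin
import android.content.Context
import android.net.Uri

// Assistant-side sketch: find every installed ContentProvider whose
// authority ends in ".oacp" and read the manifest each one exposes.
fun discoverOacpManifests(context: Context): List<String> {
    val pm = context.packageManager
    // Query all exported content providers on the device.
    return pm.queryContentProviders(null, 0, 0)
        .filter { it.authority?.endsWith(".oacp") == true }
        .mapNotNull { info ->
            // Hypothetical convention: the manifest lives at .../manifest.
            val uri = Uri.parse("content://${info.authority}/manifest")
            context.contentResolver.query(uri, null, null, null, null)?.use { cursor ->
                if (cursor.moveToFirst()) cursor.getString(0) else null
            }
        }
}
```

Because discovery goes through ContentProviders rather than a central registry, any app that ships the SDK and manifest is visible to any compatible assistant with no pairing step.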
 ┌──────────────────────────┐        ┌──────────────────────────┐
 │  Any Android app         │        │  Hark (voice assistant)  │
 │  + OACP Kotlin SDK       │◀──────▶│  on-device AI pipeline   │
 │  + oacp.json / OACP.md   │  OACP  │  discovers + dispatches  │
 └──────────────────────────┘        └──────────────────────────┘
              ▲                                    ▲
              └──────── OACP Protocol ─────────────┘
                   (content providers + intents)
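On the app side, a background capability dispatched as a broadcast could be handled roughly like this. The action string, extras, and the use of an ordered-broadcast result are illustrative assumptions; the OACP Kotlin SDK presumably wraps this plumbing:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// App-side sketch: receive a capability invocation and hand back a
// structured result the assistant can speak or display.
class CreateNoteReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // Parameters the assistant extracted from the user's request.
        val title = intent.getStringExtra("title") ?: return
        val body = intent.getStringExtra("body") ?: ""

        // ... perform the action, e.g. insert the note into local storage ...

        // One possible result channel: the ordered-broadcast result slot.
        resultData = """{"status":"ok","capability":"create_note","title":"$title"}"""
    }
}
```

UI-facing capabilities would instead declare an activity intent, letting the assistant hand the user off to the app's own screens.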

What Is Being Cooked

A fuller open-source AI assistant with on-device AI, a stronger protocol, more SDKs, and more real apps using OACP. The current docs stay a little unfinished on purpose so the next contributors can see where help is needed.

See the full roadmap