// News

Google Introduces AppFunctions, Betting Android's Future on On-Device Agents

Android AppFunctions lets apps expose typed, discoverable capabilities that AI agents like Gemini can invoke directly — no UI required. It's Android's answer to MCP, and it's shipping in Android 16. Here's what it means for your product.

30 March 2026 android ai-agents agentic-ai google apple ios mobile developer-tools mcp

Google has made its position clear: the next interaction layer on Android isn’t a better launcher or a smarter notification shade — it’s an AI agent that acts on your behalf, across your apps, without you ever tapping a single screen.

AppFunctions is the technical foundation for that vision. It’s a new Android 16 platform feature that lets apps declare specific, named capabilities — things your app can do — that authorised callers, including Gemini and third-party agent apps, can discover and execute directly. The app doesn’t need to open. The UI doesn’t need to render. The user just gets the result.

If you’ve been following the MCP ecosystem on the server side, this will feel immediately familiar. AppFunctions is, deliberately, the on-device equivalent.


What AppFunctions Actually Is

The core idea is straightforward: an app registers a list of capabilities with the Android system — “create a reminder”, “search photos”, “book a ride” — along with a precise description of what each capability needs as input and what it returns. The system indexes these capabilities, and authorised agents like Gemini can query that index, match a user’s intent to the right capability, and invoke it directly.

The whole interaction is structured and typed. No screen scraping. No accessibility hacks. No simulated touch events. The agent calls a well-defined function; the app executes it and returns a result.

For common categories — calendar, tasks, notes, reminders — Google provides predefined schemas that apps can implement. This standardisation is the point: an agent shouldn’t need to know the specific details of every to-do app on the Play Store. If they all implement the same “create task” contract, Gemini can route to any of them interchangeably.
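The register-discover-invoke loop described above can be modelled in a few lines of plain Java. To be clear, this is an illustrative sketch of the pattern, not the actual AppFunctions API — every type and method name here (`Capability`, `register`, `invoke`) is invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative model of the capability-broker pattern the article describes.
// None of these types exist in the real AppFunctions API; all names are hypothetical.
public class CapabilityBrokerSketch {

    // A capability: a name, a description an agent can match against,
    // and a typed handler from structured input to structured output.
    record Capability(String name,
                      String description,
                      Function<Map<String, String>, Map<String, String>> handler) {}

    // The "system index": apps register capabilities, authorised agents look them up.
    static final Map<String, Capability> registry = new HashMap<>();

    static void register(Capability c) {
        registry.put(c.name(), c);
    }

    // An agent invokes a capability directly -- no UI is launched at any point.
    static Map<String, String> invoke(String name, Map<String, String> input) {
        Capability c = registry.get(name);
        if (c == null) throw new IllegalArgumentException("unknown capability: " + name);
        return c.handler().apply(input);
    }

    public static void main(String[] args) {
        // A notes app declares what it can do and what it needs as input.
        register(new Capability(
                "notes.createNote",
                "Create a note with a title and body",
                input -> Map.of("id", "note-1", "title", input.get("title"))));

        // The agent routes a user's intent to the matching capability and
        // gets a structured result back, which it can render itself.
        Map<String, String> result =
                invoke("notes.createNote", Map.of("title", "Buy milk", "body", "2 litres"));
        System.out.println(result.get("id") + " / " + result.get("title"));
    }
}
```

The structural point the sketch makes is the one that matters: the agent never touches the app's UI, only its declared contract.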


The MCP Parallel Is Intentional

Google is explicit about the comparison: AppFunctions is positioned as the mobile equivalent of tools within the Model Context Protocol.

MCP, developed by Anthropic, standardises how AI agents interact with external systems — exposing capabilities as named, typed tools that agents can discover and call. It has become the dominant pattern for agentic backend integrations. AppFunctions applies the same architecture to the device layer.

The implication is significant. If MCP is how agents reach out to servers, AppFunctions is how agents reach into apps. Together they sketch a coherent picture of an agentic stack: a model that can orchestrate across cloud services and local applications, treating both as capability surfaces rather than UIs.


How It Ships

AppFunctions is landing in stages, and the staging is deliberate.

The first production use is Gemini on Samsung Galaxy S26 and select Pixel 10 devices, starting with calendar, notes, and task apps from a curated set of partners. The early preview is live now for developers via a beta feature in the Gemini app. The full stable API is targeting Android 17, expected mid-year 2026.

The Galaxy S26 integration with Samsung Gallery is the clearest demonstration of the end state. Ask Gemini to show you photos of your cat from Samsung Gallery. Gemini identifies the right capability, calls it, gets back a filtered result set, and displays it inline — without ever launching the app. That’s the pattern Google is betting on at scale.

For apps that haven’t implemented AppFunctions yet, Google is also building a UI automation fallback — a framework that lets agents perform tasks by simulating user interactions with the existing UI. It’s explicitly positioned as a transition mechanism, not the target architecture. The message to app teams is clear: implement AppFunctions properly, or your app will be automated around you.


What This Means for Your Product

If you’re a PM or product leader on an Android app, AppFunctions changes what “discovery” and “engagement” mean.

Visibility shifts from the app store to the agent layer. Today, users find your app through search, recommendations, and word of mouth. In an agentic model, Gemini routes tasks to apps based on what capabilities they’ve registered. If your app hasn’t declared what it can do, it’s invisible to agents — regardless of your store ranking or install base. That’s a distribution problem as much as a technical one.

The quality of your capability descriptions becomes a product decision. How precisely your app’s functions are defined — what they accept, what they return, how well they match the standard schemas — determines whether agents invoke your app reliably or pass you over for a better-specified alternative. This is a new axis of product quality that sits entirely outside the UI.
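One way to see why precision matters: a well-specified capability lets an agent validate a candidate call before invoking it, while a loosely specified one forces the agent to guess. The sketch below is hypothetical — the real AppFunctions schemas are richer than this, and the types here are invented for illustration.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a precisely specified capability is easier for an agent
// to call correctly than a loosely specified one. Not the real AppFunctions API.
public class SchemaPrecisionSketch {

    // A minimal input schema: each field has a name and a required flag.
    record Field(String name, boolean required) {}

    record InputSchema(List<Field> fields) {
        // The agent can check a candidate call against the contract
        // before ever invoking the app.
        boolean accepts(Map<String, String> input) {
            return fields.stream()
                    .filter(Field::required)
                    .allMatch(f -> input.containsKey(f.name()));
        }
    }

    public static void main(String[] args) {
        // A precise "create task" contract: title required, dueDate optional.
        InputSchema createTask = new InputSchema(List.of(
                new Field("title", true),
                new Field("dueDate", false)));

        // A well-formed call passes; a vague free-form one is rejected up front.
        System.out.println(createTask.accepts(Map.of("title", "File expenses")));
        System.out.println(createTask.accepts(Map.of("note", "free-form text")));
    }
}
```

An app whose contracts validate cleanly like this is one an agent can route to with confidence; an app whose inputs are effectively "a string of text" is one the agent will pass over.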

Early adoption has compounding value. Google is working with a limited set of app developers now to refine integrations before broader rollout. The apps that establish patterns during the preview window will shape how the ecosystem works. Being the app Gemini reliably routes calendar tasks to, rather than one of five alternatives, is a durable advantage.

Think about scope carefully. The capabilities your app exposes to agents define what agents can do on a user’s behalf. An agent that can create a calendar event shouldn’t necessarily be able to read all existing events. These boundaries are product decisions — they affect user trust, privacy positioning, and your relationship with the platforms that orchestrate on top of you.


The Security Question Nobody Is Answering Yet

The current trust model for AppFunctions is, to be blunt, still underdeveloped.

The security model is essentially twofold: only authorised callers can invoke your app’s capabilities, and users are notified of agent actions. That’s a reasonable start, but it doesn’t address the deeper problem: AI agents can hallucinate, misinterpret intent, and chain actions in ways that produce outcomes no user intended.

Structured, typed capabilities are a significant improvement over UI automation from a safety standpoint — explicit definitions constrain what an agent can do in ways that simulated touches don’t. But there’s no standardised way to express preconditions (“only create this event if the user has confirmed the details”), no clear transaction model across multi-step agent workflows, and no audit trail for after-the-fact review.
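Until the platform provides an audit trail, nothing stops an app from keeping one itself. The sketch below is a hypothetical app-side mitigation — the platform defines no such hook today, and every name in it is invented: it simply wraps each agent-invocable handler so every call is recorded for after-the-fact review.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical app-side mitigation for the missing audit trail: decorate every
// agent-invocable handler so each invocation is logged before it runs.
// Illustrative only; these names do not exist in any real platform API.
public class AuditTrailSketch {

    record AuditEntry(Instant at, String capability, Map<String, String> input) {}

    static final List<AuditEntry> auditLog = new ArrayList<>();

    // Wrap a handler so every invocation lands in the audit log first.
    static Function<Map<String, String>, String> audited(
            String capability, Function<Map<String, String>, String> handler) {
        return input -> {
            auditLog.add(new AuditEntry(Instant.now(), capability, input));
            return handler.apply(input);
        };
    }

    public static void main(String[] args) {
        var createEvent = audited("calendar.createEvent",
                input -> "event created: " + input.get("title"));

        System.out.println(createEvent.apply(Map.of("title", "Dentist")));
        System.out.println("audit entries: " + auditLog.size());
    }
}
```

It doesn’t solve preconditions or multi-step transactions, but it gives a product team something to review when an agent-driven action is disputed.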

These are solvable problems, and Android 17 is likely to address some of them. But for product teams thinking about what to expose through AppFunctions — especially in apps that handle sensitive data or irreversible actions — the current model warrants caution. Start with capabilities that are low-stakes and easy to reverse.


Apple Is Doing the Same Thing — Just More Slowly

AppFunctions isn’t the first time a major mobile platform has tried this. Apple has been working toward the same architecture through App Intents, and the parallel is worth understanding — both for what it reveals about where mobile is heading, and for what it says about Google’s execution speed.

App Intents, introduced in iOS 16, lets developers declare named actions that Siri, Shortcuts, and Spotlight can discover and invoke. The model is structurally identical to AppFunctions: you describe what your app can do, what inputs each action needs, and what it returns. The system exposes it to callers. Apple also introduced App Intent domains in iOS 18 — predefined schemas for specific categories like Books, Camera, Mail, and Spreadsheets — which mirrors exactly what Google is doing with AppFunctions’ standard schemas for common categories.

The key difference is delivery. Apple announced agent-grade capabilities — multi-step Siri actions across apps, on-screen awareness, personal context — at WWDC 2024, targeting a 2025 release. Those features slipped. Then they were retargeted to iOS 26.4 in spring 2026. iOS 26.4 has now shipped, and the upgraded Siri and cross-app App Intents integration didn’t make it. Current reporting points to iOS 26.5 (May 2026) for some features, with the full LLM-based Siri overhaul potentially not landing until iOS 27 in September.

There’s an additional wrinkle: Apple has struck a deal to use Google’s Gemini models to power the upgraded Siri. The same model family that calls AppFunctions on Android may end up calling App Intents on iOS. That’s an odd strategic position for Apple, and it underscores how far behind their internal model capabilities were relative to what their agent platform required.

For iOS product teams, the implications are essentially identical — apps that expose rich, well-defined App Intents will be more visible and useful to the agent layer than apps that don’t. The difference is timing: iOS developers have had the framework since 2022 but limited production agent traffic to optimise for. That’s about to change, but the timeline keeps shifting.

The broader picture is that both platforms are converging on the same model. The OS becomes a capability broker. Apps become capability providers. AI agents become the orchestration layer. Google is executing faster and with cleaner developer messaging right now. Apple has more framework history and a larger base of developers who’ve already adopted App Intents for Shortcuts. Neither platform has fully solved the security and trust model.


The Bigger Bet

AppFunctions isn’t just a developer API. It’s a statement about what Android thinks the OS is for in an agentic world.

The traditional mental model is: OS manages hardware resources, apps provide user-facing functionality, users navigate between apps via launchers and notifications. AppFunctions introduces a different model: the OS is a capability broker, apps are capability providers, and AI agents are the orchestration layer that routes user intent to the right functions.

That’s a genuine product shift as much as a technical one. As the Apple comparison shows, it’s a shift both major mobile platforms have independently arrived at — which suggests it reflects something real about where AI-native mobile is heading, rather than just being a Google bet.

Whether it pays off depends on two things: whether app teams adopt AppFunctions at scale before the UI automation fallback becomes the default path, and whether Google can build the trust and safety infrastructure that makes users comfortable delegating real actions to on-device agents.

Both are open questions. But for product teams building anything with meaningful user-facing functionality on Android, this is the platform shift worth getting ahead of now.


AppFunctions documentation is available at developer.android.com/ai/appfunctions. The Jetpack library releases are tracked at developer.android.com/jetpack/androidx/releases/appfunctions.