GOOGLE

This is what agentic AI actually looks like

Google has introduced the Interactions API, a new way for developers to work with Gemini models (such as Gemini 3 Pro) and agents like Gemini Deep Research.

It’s now available in public beta through Google AI Studio.

The API is designed to manage the more complex behaviour of modern AI systems, including mixed message types, model reasoning, tool use and long-running tasks.

Developers can use one endpoint to talk to either a model or an agent by specifying which one they want.

Gemini Deep Research (Preview) is included: it can run long-running research tasks and turn the results into clear, structured reports.

Google plans to add more built-in agents and eventually allow developers to create their own, all supported under the same API.

Here’s what you should know:

  • A single API to work with both Gemini models and built-in agents.

  • Made for complex, multi-step agent tasks with optional server-managed state.

  • In public beta, with broader tool support coming soon.
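To make the single-endpoint idea concrete, here is a minimal sketch of how a request might name either a model or an agent as its target. The field names (`model`, `agent`, `input`) and the model/agent identifiers are illustrative assumptions, not the published schema — only the payload is built here, nothing is sent.

```python
import json

def build_interaction(target_kind: str, target_name: str, prompt: str) -> dict:
    """Build a request body naming either a model or an agent as the target.

    Hypothetical schema: the Interactions API's real field names may differ.
    """
    if target_kind not in ("model", "agent"):
        raise ValueError("target_kind must be 'model' or 'agent'")
    return {target_kind: target_name, "input": prompt}

# Same endpoint, two kinds of target (names are illustrative):
model_req = build_interaction("model", "gemini-3-pro", "Explain MCP in one line.")
agent_req = build_interaction("agent", "deep-research", "Survey recent MCP tooling.")

print(json.dumps(model_req))
print(json.dumps(agent_req))
```

The point of the sketch is the shape, not the names: one request format, with the developer switching between a plain model call and an agent run by changing the target.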

Let the API babysit for once

The Interactions API adds features beyond standard text generation: optional server-side history management, a clearer structure for agent workflows, background processing for long-running tasks, and support for MCP tools.

It’s still in beta, so generateContent remains the recommended option for stable production work.
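For comparison, the stable generateContent path accepts a request body like the one built below. This sketch only constructs the payload (no network call is made); the model name is illustrative, and `call_gemini` assumes you supply your own API key.

```python
import json
import urllib.request

# The public v1beta generateContent REST endpoint; model name is illustrative.
MODEL = "gemini-2.0-flash"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

def build_generate_content_request(prompt: str) -> dict:
    """generateContent body: a list of contents, each holding text parts."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

payload = build_generate_content_request(
    "Summarize the Interactions API in one sentence."
)

def call_gemini(api_key: str) -> str:
    """POST the payload to the endpoint; requires a valid Gemini API key."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-goog-api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]

print(json.dumps(payload))
```

Unlike the Interactions sketch above, the client here manages its own history: each generateContent call carries the full conversation in `contents`, which is exactly the bookkeeping the new API offers to take server-side.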

Developers can try it now using a Gemini API key.

Google says it wants the API to reduce friction when building agent-based systems and will expand support across more tools in the coming months.

Finally, Google can Google things for me. Next level of laziness unlocked. - MV
