Welcome to the era of thinking APIs

OpenAI recently upgraded its Responses API, the backbone for building agent-like applications, adding powerful built-in tools such as image generation, Code Interpreter, and file search.

The Responses API dropped in March, and the new tools are now supported across the GPT-4o, GPT-4.1, and o-series models (including o3 and o4-mini), which can run tools natively during their reasoning process, making results faster, smarter, and more relevant.

There is also support for remote Model Context Protocol (MCP) servers, allowing developers to integrate models with third-party platforms like Stripe, HubSpot, and Zapier with minimal code.
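To make that concrete, here is a minimal sketch of what such a request might look like, assuming the documented `"mcp"` tool shape in the Responses API. The server label, URL, prompt, and `create_payment_link` tool name are illustrative placeholders, not guaranteed values; the function just assembles the JSON payload rather than calling the API.

```python
def build_mcp_request(server_label: str, server_url: str, prompt: str) -> dict:
    """Assemble a Responses API payload that attaches one remote MCP server."""
    return {
        "model": "gpt-4.1",
        "input": prompt,
        "tools": [
            {
                "type": "mcp",                    # remote MCP server tool type
                "server_label": server_label,
                "server_url": server_url,
                # Optionally restrict which remote tools the model may call;
                # this tool name is a hypothetical example.
                "allowed_tools": ["create_payment_link"],
            }
        ],
    }

# Example: wiring up a (hypothetical) Stripe MCP endpoint.
payload = build_mcp_request(
    "stripe", "https://mcp.stripe.com", "Create a payment link for $20"
)
```

The appeal is that the model discovers and calls the server's tools on its own; the developer only supplies the connection details.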

OpenAI has also joined MCP’s steering committee to support this growing ecosystem.

Other handy additions include a background mode for long tasks that need more time to run, reasoning summaries to help with debugging and transparency, and encrypted reasoning items that let models reuse prior reasoning between turns without any of it being stored on OpenAI's servers, ideal for anyone using Zero Data Retention.

The new tools are designed to be more flexible:

  • Image generation with real-time previews and step-by-step edits

  • Code Interpreter for maths, data analysis, and image-based reasoning

  • File search across multiple sources with smart filters and better results
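A single request can combine all three built-in tools above. The sketch below uses the tool type names from OpenAI's docs, but the vector-store ID and option values are hypothetical placeholders, and only the payload is constructed.

```python
def build_tools_request(prompt: str, vector_store_id: str) -> dict:
    """Payload sketch enabling image generation, Code Interpreter,
    and file search in one Responses API call."""
    return {
        "model": "o3",
        "input": prompt,
        "tools": [
            # Stream partial images for real-time previews.
            {"type": "image_generation", "partial_images": 2},
            # Auto-provisioned sandbox for maths and data analysis.
            {"type": "code_interpreter", "container": {"type": "auto"}},
            # Search across an uploaded document store (placeholder ID).
            {"type": "file_search", "vector_store_ids": [vector_store_id]},
        ],
    }
```

The model then decides at inference time which of these tools, if any, to invoke while reasoning through the prompt.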

No thoughts, just summaries

Everything’s now live, with no pricing changes to core tools.

Image generation and vector search are priced separately, with clear breakdowns available in OpenAI’s docs.

Still waiting on an agent to do my job for me…