The New York Times is testing AI, but journalists are still in control

AI JOURNALISM

The New York Times has introduced AI tools to help newsroom staff with editing, summarizing, coding, and writing tasks.

An internal email confirmed that editorial and product teams will receive AI training, alongside the launch of an in-house tool, Echo, designed to summarize articles, briefings, and internal updates.

New editorial guidelines allow staff to use AI for refining copy, creating social media posts, generating SEO headlines, and developing news quizzes, quote cards, and interview questions.

However, AI cannot be used to write full articles, bypass paywalls, process copyrighted content, or publish AI-generated images or videos without clear labeling.

Here’s what you should know:

  • The Times allows AI to assist with editing, summaries, and content creation but not full article writing.

  • AI cannot be used to bypass paywalls or publish AI-generated visuals without disclosure.

  • The rollout comes as The Times sues OpenAI and Microsoft for allegedly using its content without permission.

Humans remain in control

The Times has not specified how much AI-edited content will appear in published articles but insists that all reporting, writing, and editing remain under human oversight.

Its AI principles, introduced in May 2024, emphasize that journalists are responsible for all content, even when AI is involved.

Other approved AI tools include GitHub Copilot for coding, Google Vertex AI for product development, and OpenAI’s non-ChatGPT API.

These changes come as The Times continues its lawsuit against OpenAI and Microsoft, accusing them of using its content without permission to train AI models.

One minute, the NYT is suing AI. The next, it’s giving it a desk.