Breaking news

The New York Times Greenlights AI Tools For Editorial And Product Teams

In a significant move, The New York Times is giving its editorial and product teams the green light to use AI tools to enhance their workflow. According to a report by Semafor, the paper has introduced a new internal AI summary tool called Echo, alongside a suite of approved AI products to assist with tasks ranging from coding to editorial brainstorming.

What’s New At The Times?

In a recent internal email, The New York Times informed its staff about the debut of Echo, designed to generate concise AI summaries. The email also outlined several AI tools that staff can use for various functions, including the creation of web products and the development of editorial content. Notably, these AI tools are intended to help staff suggest edits, develop interview questions, and assist with research.

Editorial Guidelines For AI Use

The guidelines, however, come with clear boundaries. Staff are encouraged to use AI for tasks like suggesting edits and brainstorming, but not for drafting or making substantial revisions to articles. Additionally, confidential source information is strictly off-limits for AI input. There are also indications that The Times may leverage AI for voice-enabled articles and translations into multiple languages.

Approved AI Tools

The Times has approved several AI products for use, including GitHub Copilot for programming, Google’s Vertex AI for product development, NotebookLM, and selected Amazon AI tools. OpenAI’s API is also on the approved list, though only through a business account and excluding ChatGPT itself.

A Contradictory Situation

This AI rollout comes amidst an ongoing lawsuit that The Times has filed against OpenAI and Microsoft. The lawsuit accuses the tech giants of violating copyright law by allegedly using the publisher’s content to train their generative AI models.

The New York Times’ cautious but forward-thinking approach reflects its desire to embrace the power of AI while navigating the complex legal and ethical implications of generative technologies.

The AI Agent Revolution: Can the Industry Handle the Compute Surge?

As AI agents evolve from simple chatbots into complex, autonomous assistants, the tech industry faces a new challenge: Is there enough computing power to support them? With AI agents poised to become integral in various industries, computational demands are rising rapidly.

A recent Barclays report forecasts that the AI industry can support between 1.5 billion and 22 billion AI agents, potentially revolutionizing white-collar work. However, the increase in AI’s capabilities comes at a cost. AI agents, unlike chatbots, generate significantly more tokens—up to 25 times more per query—requiring far greater computing power.

Tokens, the fundamental units of generative AI, are small fragments of language that models consume and produce one at a time. The increase in token generation is linked to reasoning models, like OpenAI’s o1 and DeepSeek’s R1, which break tasks into smaller, manageable chunks. As AI agents process more complex tasks, the tokens multiply, driving up the demand for AI chips and computational capacity.
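To make the idea concrete, here is a minimal sketch in Python of why reasoning-style output multiplies token counts. It uses a naive whitespace tokenizer purely for illustration; real models use subword (BPE) tokenizers, and the example texts are invented, not drawn from any actual model.

```python
# Illustrative only: real models tokenize with subword vocabularies,
# not whitespace splitting, but the scaling effect is the same.
def count_tokens(text: str) -> int:
    """Naive token count: split on whitespace."""
    return len(text.split())

answer = "The capital of France is Paris."

# A simple chatbot emits only the final answer.
chatbot_tokens = count_tokens(answer)

# A reasoning agent also emits intermediate steps before the answer,
# so every query generates many more tokens.
reasoning_steps = [
    "First, identify what the question asks: a capital city.",
    "Recall that France is a country in Europe.",
    "Its capital is Paris.",
]
agent_tokens = sum(count_tokens(s) for s in reasoning_steps) + chatbot_tokens

print(chatbot_tokens)  # 6
print(agent_tokens)    # 27
```

Each extra reasoning step adds output tokens, and every output token costs a forward pass through the model, which is why agentic workloads drive chip and server demand far faster than chat traffic does.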

Barclays analysts caution that while the current infrastructure can handle a significant volume of agents, the rise of these “super agents” might outpace available resources, requiring additional chips and servers to meet demand. OpenAI’s ChatGPT Pro, for example, generates around 9.4 million tokens annually per subscriber, highlighting just how computationally expensive these reasoning models can be.

In essence, the tech industry is at a critical juncture. While AI agents show immense potential, their expansion could strain the limits of current computing infrastructure. The question is, can the industry keep up with the demand?
