As AI continues to reshape industries, newsrooms are at the forefront of exploring both its potential and its pitfalls. Lisa van der Pool, executive vice president of media strategy at Inkhouse, an Orchestra Company, examines how journalists are integrating AI into reporting, the ethical challenges that arise, and what communications professionals need to know to navigate this evolving landscape.
Journalists, like everyone else, are embracing AI — from using machine learning to analyze data to relying on AI tools for transcription, translation, and even drafting headlines and articles.
While AI certainly holds the potential to improve efficiency, it also raises a host of ethical issues for reporters. AI may be powerful, but it is hardly perfect. It has a track record of plagiarism, inaccurate source citations, and “hallucinated” information, all of which undermine a journalist’s credibility. AI has also been known to show bias, another cardinal sin in reporting.
Newsrooms using AI also risk further undermining the media’s overall credibility. According to a 2024 Gallup poll, trust in the U.S. media has been dropping steadily for fifty years. Today, roughly 35 percent of all Americans trust traditional media, down from about 70 percent in the early 1970s.
In June, the non-profit media institute Poynter released ethics guidelines to help U.S. newsrooms develop responsible policies for using artificial intelligence. The updated framework, building on last year’s version, aims to help newsrooms enhance their reporting with AI while addressing potential credibility concerns among readers.
The guidelines shed light on exactly what newsrooms are grappling with in the age of AI, suggesting that publications prioritize transparency, accuracy, human oversight, privacy and security, accountability, and exploration.
Poynter also notes that journalists should use AI only for “journalism-centered intentions” that have been cleared by the editors and reporters who oversee AI in their newsrooms. In every case, AI-generated text, data, or images should be verified by real people.
The guidelines also emphasize that reporters should avoid, at all costs, using AI for full drafts or rewrites, and should instead rely on it only for specific edits.
So, which publications are actually following Poynter’s recommendations?
Newsroom Guidelines for Implementing AI
Newsrooms today are continually assessing the best way to leverage AI without compromising journalistic integrity. Here’s a sampling of how different newsrooms — each with its own approach — are addressing the use of AI:
Like journalists, PR professionals are navigating how to use AI ethically and effectively. It’s crucial that communications teams understand the many ways newsrooms are using and experimenting with AI, because those practices will change how communications teams approach their own work.
AI is transforming the way journalists write and engage with their audiences. For companies engaging with the media, understanding how newsrooms are using this technology is more important than ever.