Facts, fiction & fake news: How could AI impact the news media?

How is AI (Artificial Intelligence) affecting the news media, and how much greater could its impact be as this rapidly advancing technology becomes even more powerful? Richard Allen, Senior Content Creator at leading content and communications agency MMC, who has over 25 years’ experience working in the media, looks at some of the key issues.

How can we be sure that the news we read, watch or listen to is fair, balanced and truthful?

This question is not new – it was as relevant in the earliest days of newspapers and radio as it is in today’s digital media world. But, even for the most media-savvy, answering it is about to get a lot more difficult as ever more powerful AI technology makes it possible to make fiction look like fact, and vice versa. In nations where a free press is considered a cornerstone of democracy, this presents some major challenges. The flip side is that AI can also make news organisations more efficient and boost profits – an important factor at a time when many businesses are dealing with rising costs and falling revenue as inflationary pressures squeeze subscribers and advertisers.

Let’s look at some of the pros and cons.

The risks

AI – with its power to create convincing but completely fabricated news, as shown by this recent Guardian example – is now forcing news media channels to confront the threat this poses to the trust and integrity of their businesses and brands. Whether it’s lone-wolf conspiracy theorists, criminals and scammers, or extremist political movements, we can all think of scenarios in which individuals or organisations seek to gain an advantage by creating and spreading fake news. We have already seen how social media can be used as a weapon to undermine trust in mainstream news media and destabilise the social and political order. AI tech that can create fake video and audio content, in the wrong hands, poses an even greater risk.

Many news media experts have voiced their concerns about the risk posed by AI. It could even lead, as one Guardian journalist recently warned, to a ‘fake news frenzy’ if AI technology isn’t adequately controlled.

At a more mundane level, some news media channels are already using apps such as ChatGPT to ‘harvest’ content from multiple sources and speed up the process of creating news and feature articles. But machine-generated copy and images raise all sorts of questions around copyright, intellectual property, plagiarism, defamation and more.
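To make this concrete, here is a minimal sketch of the kind of ‘harvesting’ workflow described above, using the OpenAI Python client. The model name, prompt and source texts are illustrative assumptions, not any newsroom’s actual pipeline:

```python
# Illustrative sketch only: how a newsroom tool might draft copy
# from multiple source texts. Model name and sources are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical source material gathered by a journalist
sources = [
    "Wire report: Council approves new housing scheme...",
    "Press release: Developer welcomes planning decision...",
    "Local blog: Residents voice concerns over traffic...",
]

prompt = (
    "Draft a short, neutral news brief based only on the sources below. "
    "Do not add facts that are not in the sources.\n\n"
    + "\n\n".join(f"Source {i + 1}: {s}" for i, s in enumerate(sources))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute as appropriate
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # a human editor would still review this before publication
```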

While ChatGPT and similar apps can generate copy that mimics the writing style required for news channels, they can’t make decisions about the legality or morality of the content they produce. For the time being at least, the decision about what to publish will continue to have a human element for all reputable news channels. That’s the view of Professor Charlie Beckett, head of the Polis/LSE JournalismAI research project, who contributed to this recent Reuters Institute discussion on ChatGPT and the news media – and it’s one that’s shared broadly across the industry in the UK, US and Europe.

The rewards

ChatGPT and similar technology can certainly make the process of creating content much faster, and therefore make news media businesses more efficient. Faster delivery of time-sensitive content can mean more clicks, and that can mean more paying subscribers and higher advertising revenue.

Perhaps the biggest long-term impact of AI, however, will be to help investigative journalists. In a recent BBC interview with Amol Rajan, Bob Woodward and Carl Bernstein (the Washington Post reporters famous for their work on the Watergate scandal) gave their view on AI’s potential impact on the news media industry.

Back in 1972, when they began investigating the burglary at the Democratic National Committee headquarters, they used the methods available to journalists at that time: attending court hearings, knocking on the doors of potential whistleblowers, phoning key contacts and, famously, ‘following the money’. The road that would ultimately lead them to the White House and President Nixon began with a paper trail of documents ranging from library cards to cheques and bank statements.

Today, investigative reporters inspired by them still use those methods from time to time – but they’re more likely to be dealing with whistleblowers leaking masses of digital data. The Panama Papers, for example, involved the release of 11.5 million documents. Reporters from the International Consortium of Investigative Journalists (ICIJ) used open-source data mining technology to help make sense of all that information and find the public-interest news stories buried within it. This is one example of how AI can be used by investigative reporters to break global news stories (ICIJ has been using machine learning – a branch of AI – since at least 2019).
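As a simplified illustration of that kind of data mining – a sketch, not ICIJ’s actual tooling – here is how scikit-learn can rank the files in a document dump against an investigator’s query. The file names, contents and query are all hypothetical:

```python
# Simplified illustration of mining a large document leak:
# rank documents against an investigator's query using TF-IDF.
# This is a sketch, not ICIJ's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for millions of leaked files
documents = {
    "memo_001.txt": "Transfer approved to offshore holding company...",
    "invoice_447.txt": "Consulting fees payable to shell entity...",
    "letter_932.txt": "Annual general meeting minutes, routine business...",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents.values())

# Score every document against the investigator's query
query = vectorizer.transform(["offshore shell company payments"])
scores = cosine_similarity(query, matrix).ravel()

# Surface the most relevant files for human review
ranked = sorted(zip(documents.keys(), scores), key=lambda x: -x[1])
for name, score in ranked:
    print(f"{score:.3f}  {name}")
```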

Regulation is on its way

In the news media industry, as in many others, there are calls for new laws and guidelines to control how AI technology develops, what it is used for, and who has access to it. Earlier this month Sam Altman, CEO of OpenAI, the company behind ChatGPT, called on US lawmakers to regulate the industry. This is happening against a backdrop of growing concern in the US about the political and societal impact of fake news spread via social media. Even more worrying for many media experts and political commentators is the decline in moral and ethical standards at major broadcast news organisations. The best (or perhaps worst) example is Fox Corp’s false claims in its news coverage about the outcome of the 2020 presidential election, which recently cost the company $787.5 million in a defamation settlement with Dominion Voting Systems.

In the UK, the Government in March this year ruled out giving responsibility for AI governance to a new regulatory body, instead setting out guidelines for covering AI within the existing regulatory framework. The European Commission also has well-advanced proposals for an AI regulatory framework.

Regulation of AI technology – which will, of course, affect how it’s used in the news media and every other sector – is on its way. Time will tell how effective it proves to be at ensuring that society reaps the rewards of AI while mitigating the risks.