News Corp Australia is producing 3,000 articles per week using generative AI. Even the august Washington Post is using its home-grown Heliograf AI tool to automatically generate short reports for its live blog.
Yet traditional media has also expressed its dread of the impact of AI on journalism. News Corp’s global CEO, Robert Thomson, is reported as warning (in made-for-tabloid-headline style):
“Instead of elevating and enhancing, what you might find is that you have this ever shrinking cycle of sanity surrounded by a reservoir of rubbish. That instead of the insight that AI can potentially bring, it will evolve into essentially a maggot-ridden mind mould.”
CNET, a popular tech news website, has had to append correction notices to feature articles generated entirely by artificial intelligence. For example, a numeracy-challenged AI ‘authored’ an article about compound interest which incorrectly stated that a $10,000 deposit bearing 3 percent interest would earn $10,300 in interest after the first year; the correct figure is $300, leaving a closing balance of $10,300.
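The arithmetic the AI fumbled is easy to check. A minimal sketch in Python, using the figures from the example above:

```python
# Checking the compound-interest figure from the CNET correction:
# a $10,000 deposit at 3 percent earns $300 in interest in year one,
# for a closing balance of $10,300 -- not $10,300 in interest.
principal = 10_000.00
rate = 0.03  # 3 percent annual interest

interest_first_year = round(principal * rate, 2)
balance_after_one_year = round(principal * (1 + rate), 2)

print(interest_first_year)     # 300.0  (the interest actually earned)
print(balance_after_one_year)  # 10300.0  (principal plus interest)
```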
Could AI have come at a better, or worse, time for traditional media?
A recent article by European academics (Andreas Opdal et al) points out that while we need trustworthy journalism more than ever, the traditional media business model on which we rely to deliver it is broken:
“The last two decades have put pressure on journalists, editors, and newsrooms. On the content side, young digital natives have different news habits from the older media consumers. They rely more on alternative and free information sources and are less likely to pay for news subscriptions. Other segments of the population shun mainstream media due to perceived political bias and distrust in authorities. These attitudes may become exacerbated by co-ordinated disinformation campaigns and amplified in sealed information enclaves, such as online echo chambers and alleged search and recommendation bubbles….On the business side, media income from advertising, subscriptions and sales has dropped due to the availability of free online news sources, social media, search engines, and other intermediaries…As broken business models lead to newsroom layoffs, these challenges become even harder to tackle, creating a vicious cycle.”
In their article, Opdal et al endeavour to present “a vision for how recent advances in AI can support trustworthy high-quality journalism at every stage of the journalistic value chain…”
Trust in journalism is a two-sided equation: the first two stages – gathering and assessing news – concern the journalist’s belief in the trustworthiness of journalistic sources and the information they provide, whereas the last two stages – creating and presenting the news – concern the reader’s or viewer’s belief in the trustworthiness of the news stories they receive.
How AI can enhance gathering of news
Information gathering is the foundation of trustworthy news production. Journalists know that placing trust in a single source is high risk, and so a journalist’s standard operating procedure should be to cross-check a story with as many other sources as possible. Often this will be a jigsaw puzzle in which different, small pieces of the story will be verified across a range of separate sources, to reach the point where the story can be judged more likely true than not.
AI, of course, is all about finding unrevealed or unobvious correlations in lakes of data. As Opdal et al say, “new uses of AI can seek to make content more trustworthy by relying on diverse and credible sources and by corroborating (or triangulating) overlapping information from independent sources.”
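At its simplest, the corroboration idea can be thought of as requiring a claim to be confirmed by a minimum number of genuinely independent sources. A toy sketch of that logic (all names and the ownership-based independence test are hypothetical, not drawn from Opdal et al):

```python
# Toy corroboration check: a claim is treated as corroborated only when it is
# confirmed by a minimum number of *independent* sources -- here, sources that
# do not share the same owner. All data and names are hypothetical.

def is_corroborated(claim_reports, min_independent=2):
    """claim_reports: list of dicts with 'source' and 'owner' keys."""
    owners = {report["owner"] for report in claim_reports}
    return len(owners) >= min_independent

reports = [
    {"source": "Wire Service A", "owner": "Group X"},
    {"source": "Local Paper B", "owner": "Group X"},  # same owner: not independent
    {"source": "Broadcaster C", "owner": "Group Y"},
]
print(is_corroborated(reports))  # True: two independent ownership groups
```

A real system would of course need a far richer notion of independence (shared funding, shared primary sources, syndication), but the counting principle is the same.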
AI also can improve the ‘breaking news’ capabilities of traditional news organisations by detecting ‘early tremors’ of news in social media, such as people posting on-the-spot photos of events in real time. Reuters has experimented with News Tracer, which uses natural language processing and machine learning to detect pre-news events, giving ‘‘our journalists anywhere from an 8- to 60-min head start’’ on ‘‘global news outlets in breaking over 50 major news stories’’.
AI can help traditional media get even further ahead of the news curve and identify topics of interest to segments of its audience, on which journalists can then focus their news gathering or investigative efforts. For example, The Atlanta Voice, the largest African American community newspaper in Georgia, USA, uses CrowdTangle to identify topics and monitor trends of special interest to the African American community:
“We use key words and Overperforming data to find trend lines that are not easily researched or discovered through traditional search or data analytics using Google or the on site platform analytics. The Instagram data we can access has also been invaluable to show growth trends on pages. Ex: When "Olde Town Road" became a hit record for Lil Nas X (a Georgia native), we were able to research and show when the account gained traction on Instagram.”
AI can be adept at verification in a multimedia environment. ‘Deep fake’ photos are a particular challenge for journalists but it can take one AI to catch another AI faker. AI also can verify text against a wide range of other data sources: for example, are the stated numbers of participants in a demonstration consistent with traffic data at the time, and are the supporting images consistent with weather data and light conditions?
A little more spookily, Opdal et al suggest that AI could be used to test the inherent trustworthiness of the person who is the source of the story:
“Machine learning techniques can be used to train models that profile human informants through measures such as their historic reputation, their social-network connections, their knowledge background, position, sponsoring organisation, and whether they are referred to by other informants. For example, post-event historical analyses of Twitter feeds can be used to identify and positively weigh accounts that have consistently reported newsworthy events early and in a trustworthy manner.”
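The weighting idea in that passage can be illustrated with a toy reputation score: an account’s standing rises with the share of its past reports that proved accurate, with a bonus for reporting early. The field names and weights below are hypothetical, chosen only to make the mechanism concrete:

```python
# Toy source-reputation score in the spirit of the quoted idea: weight an
# account by how often its past reports proved accurate, with a bonus for
# reporting early. Field names and weights are hypothetical.

def reputation_score(history, early_bonus=0.2):
    """history: list of past reports, each with 'accurate' and 'early' booleans."""
    if not history:
        return 0.0  # unknown accounts start with no standing
    score = 0.0
    for report in history:
        if report["accurate"]:
            score += 1.0 + (early_bonus if report["early"] else 0.0)
    return score / len(history)

history = [
    {"accurate": True, "early": True},
    {"accurate": True, "early": False},
    {"accurate": False, "early": True},  # inaccurate reports earn nothing
]
print(round(reputation_score(history), 2))  # 0.73
```

A production model would learn such weights from labelled historical data rather than fixing them by hand, but the intuition carries over.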
How AI can improve journalistic assessment of news
As Opdal et al point out, AI’s peerless ability to rapidly process vast lakes of data has obvious benefits for investigative journalism in our data-heavy world:
- the Panama Papers, for example, required analysis of 4.8 million emails, 3 million database entries, 2 million PDFs, one million images and 320,000 text documents, totalling 2.6 terabytes of information. Sophisticated analysis of the data was required to unpeel the deliberately complicated ownership structures used by politicians to hide their corrupt wealth.
- the analysis of such vast stores of data often involves large teams assembled across multiple media organisations: the Panama Papers involved 100 news companies and 400 journalists. Opdal et al say that “[AI] tools that analyse social networks and that connect the right people inside a distributed and possibly global news organisation..[can] ensure that each news story is backed by a team with complementary competences and to avoid duplicate or even inconsistent reports about the same event.”
- a commonplace approach of interviewees is to brazenly assert ‘alternative facts’ as the truth. Opdal et al say that “[d]uring interviews, the veracity of the claims made and information provided could be assessed in real-time…[i]nformation retrieval and NL inference techniques could be used to suggest appropriate background information and follow-up questions.”
How AI can enhance creation and distribution of news
The most obvious role AI can play in helping a journalist write an article is to generate from the source material a basic or starting narrative, which the journalist can then build on, polish and edit.
But Opdal et al say that AI can be used as a writing tool in more sophisticated ways which can enhance the quality of human-authored journalism. NewsCube, a three-dimensional storytelling tool developed by journalist Skye Doherty and winner of a Walkley grant for innovation in journalism, lets people curate complex stories and tell them from multiple perspectives. NewsCube can help the journalist, during the writing process, to identify different ‘angles’ on the story, even weakly backed or known-false positions, and to test whether they should be mentioned in the news report.
Google is reported to be developing a personal assistant for journalists, called Genesis, which automates some tasks, such as generating headlines or writing in different styles. Google sees Genesis not as replacing journalists, but as an opportunity to help “steer the publishing industry away from the pitfalls of generative AI”.
AI also can broaden and deepen content:
- “Media consumption has moved from one-dimensional linear content streams (such as linear TV, static HTML) onto multiple platforms that are capable of adaptation and interactivity (such as phones, smart speakers, smartwatches, tablets, etc.)…[and once the core story is written].. generative multimodal representation models can be used to create transmedial narratives that can be presented across several platforms of different types."
- AI can be used to translate content into multiple languages to reach a wider audience.
- AI also could be “harnessed for trustworthiness..[f]or example, while presenting a story on a TV screen, trustworthiness can be underlined by making deeper information, such as background facts, related social-media content, examples, links, and other information, available through the viewer’s mobile phone at the same time.”
Are robot journalists inevitable?
While painting this optimistic picture of the potential benefits of AI for journalism, Opdal et al are realistic enough to acknowledge that:
“[i]n a moment where many newsrooms are suffering crises of distribution and revenue — problems for which generative AI offers no obvious help — the most obvious use for any form of automation is cost reduction.”
“augmented news production pipelines [will] initiate an irreversible process, driven by business concerns, towards increasingly automated news, in which journalists are gradually turned into high-level overseers and maintainers of journalistic information flows.”
KPMG has estimated that 43% of the tasks performed by authors, writers and translators could be carried out by AI tools. BILD, Germany’s biggest tabloid newspaper, laid off 20% of its workforce, a move justified in part by its increasing use of AI.
Yet at a recent Oxford Internet Institute-Minderoo Foundation conference on AI and journalism, there was a widely held view that journalism has an inherent ‘humanness’ that AI could not replicate:
“Jobs in journalism consist of a combination of tasks, only some of which can be automated. Many speakers, for example, were convinced that AI cannot match a journalist’s writing skill (two of them called AI’s writing ‘boring’ and bland). Likewise, the ability to report or identify new facts and opinions are skills that will likely remain hard to automate. Other participants highlighted that humans are unbeatable at understanding the meaning and context of information – something that together with accuracy, however, remains the core of good journalism….. for many journalistic tasks, the human ability to think and reason –drawing intuitive inferences about other inferences – remains of the essence. Questions such as ‘Why does this story matter?’, ‘Why is this important?’, and ‘What does this mean in a given context?’ can only be asked and properly answered by a human.”
There is also a view that a decision not to use AI – identifying articles as ‘written by humans’ – could become a marker of quality journalism. The SMH reports the CEO of Crikey, Will Hayward, as saying that AI “hoovering up all the unoriginal reporting and regurgitated non journalism that so many mainstream outlets provide” will have the opposite effect: rewarding original news and information delivered by publishers direct to readers.
But this confidence in the essential, irreplaceable humanness of journalism might be misplaced: the SMH also quotes Lisa Davies, CEO of Australian Associated Press: “AI is like a cadet journalist. It’s enthusiastic and prone to mistakes, but one day it’ll be better than you.”
Read more: Trustworthy journalism through AI