Generative AI’s impact on journalism is as revolutionary as it is murky.
I’ve been writing stories my entire professional life, whether as a journalist or as a public relations professional for a company or a brand.
Authentic storytelling is close to my heart, and in my opinion, it’s the only thing that can truly move the needle.
In other words, when writing non-fiction, true stories that are well-researched and well-written are what attract readers, stir emotion, educate and encourage action.
Having said that, with the rapid emergence and widespread adoption of generative AI, there are many questions that come to the forefront:
- What is AI’s role in journalism?
- How will we distinguish between AI-generated and human-generated stories?
- How will news outlets remain ethical and accountable to their profession and their audience?
- How can media outlets ensure that they are using AI to work better and faster, without compromising quality journalism?
- Should news outlets be allowing AI tools to scrape their data?
Answering these questions involves complexities, debates, nuances and unknowns.
The more they are discussed, the more the media industry will be able to find answers that serve both readers and its business model.
There is a great chat on the BBC between Madhumita Murgia, Artificial Intelligence Editor, Financial Times; Tom Clarke, Science and Technology Editor, Sky News; Eliz Mizon, Communications Lead, The Bristol Cable; and Jackson Ryan, Science Editor, CNET, that touches upon all of the above questions.
The salient points from the chat for me were:
- Media outlets cannot ignore AI and need to make a conscious effort to be aware of what generative AI can do.
For example, AI can be good at quickly summarizing research papers or long, complex documents, and at transcribing and summarizing minutes of court cases, saving journalists a large amount of time. It can also help journalists craft their pitches. However, human oversight and editing are required irrespective…