Whether it’s a press release or a corporate statement, how credible it appears to readers depends on who they believe wrote it. If readers are told a human wrote it, they tend to find it much more trustworthy. But if they are told it was written by artificial intelligence (AI), their trust drops significantly.
This is the finding of a study conducted at the University of Kansas in the US and recently published in the international journal ‘Corporate Communications: An International Journal’.
The study was conducted by Associate Professor Cameron Piercy, PhD researcher Aiman Alhammad, and Assistant Professor Christopher Etheridge of the University of Kansas’ Department of Communication Studies.
The aim of the research was to answer an important question: does knowing whether a piece of writing was produced by a human or by AI change how readers perceive it?
AI is increasingly becoming a part of daily life, and people are constantly exploring new ways to use it. These uses can have both positive and negative impacts. Often, the use of AI is not disclosed.
The idea for this research came from a communication class at the University of Kansas, where students explored whether they could tell the difference between writing produced by AI and by humans.
Co-researcher Cameron Piercy explained, “Even if readers can’t distinguish between human and AI writing, we wanted to know whether their reaction changes if they’re told who the author is.”
Research methodology
The study used a multi-method approach. Participants were presented with a fictional corporate crisis scenario: some customers became ill after consuming products from a company named ‘Chunky Chocolate Company’. The cause was deliberate misconduct by some employees.
Participants were then given three types of press releases to read: informative, empathetic, and apologetic. In each case, they were told whether the message was written by a human or by AI.
Human-written messages more trusted
The results showed that participants who believed a human wrote the press release found the message far more credible and effective. But when told the message was generated by AI, readers tended to respond more neutrally or with suspicion.
However, the researchers found little difference in readers’ reactions based on the type of message—whether it was informative, apologetic, or empathetic.
Accountability lies with humans
The researchers noted that when the identity of the writer is not disclosed in a corporate statement, whether during a crisis or at other times, readers instinctively question who is responsible.
Co-researcher Christopher Etheridge said, “If you're using AI in writing, you must maintain transparency, take responsibility for any errors, and be prepared for the readers’ reactions.”
The researchers concluded that even if AI is used during a crisis, it cannot replace human accountability, editorial judgement or transparency.