Communications professionals should use artificial intelligence ethically
Opinion
By Ngulamu Jonathan | Jan 21, 2026
From newsroom analyses to press statements and corporate campaigns, Artificial Intelligence (AI) is everywhere. Even TIME Magazine named the “Architects of AI” its Person of the Year 2025, a clear signal of the technology’s global impact.
Yet alongside this recognition, serious debates have emerged: unedited AI outputs appearing in major publications, leaders’ remarks bearing obvious signs of AI authorship, and growing concerns about credibility and transparency. These incidents raise a critical question for our profession: Are there ethical boundaries to guide communicators as they inevitably collaborate with AI?
Studies have already shown that communication practitioners cannot avoid collaborating with AI. The rise of AI is not speculative; it is happening now, and practitioners across many organisations are using it. After all, what is the essence of technology and innovation if not to make work easier and more seamless?
Data from the Global AI Report of 2025 by the International Data Centre Authority shows that 76 per cent of organisations worldwide now use AI, and 69 per cent deploy generative AI in at least one business function.
In communications, most practitioners use it for brainstorming, content analysis, monitoring and evaluation, and proofreading. Frankly, AI is becoming a core tool in the communicator’s arsenal.
However, with great power comes great responsibility. AI at times misrepresents facts and perpetuates bias, and in doing so it risks eroding credibility, the very foundation of public relations and communications. This is where ethical responsibility comes in.
According to the Public Relations Society of America (PRSA), from which many other PR bodies borrow insights, the use of AI in communications must adhere to a Code of Ethics that emphasises transparency, accuracy and accountability.
In addition, the International Public Relations Association’s Gold Paper No. 19, released early in 2025, warned that AI tools risk amplifying bias, compromising transparency, and undermining trust if not ethically deployed. For instance, an AI tool trained on the communication norms of one region may generate content poorly suited to a communicator working in another.
There is also the risk of deceiving audiences. If a press release is generated using AI, should this be disclosed? If a chatbot helps respond to a social media query, should the audience be informed? These questions are not merely technical; they are deeply ethical. Communications is all about credibility, and credibility, even with the development of new media, remains at the heart of the profession’s mission: building trust among stakeholders through strategic engagement.
That said, certain ethical guardrails remain essential for all communications professionals. Practitioners must exercise human oversight, transparency, cultural sensitivity, and accountability even as they collaborate with AI.
On transparency, for instance, when AI is used in sensitive contexts such as content production, its use should be clearly disclosed. On human oversight, communicators must recognise that AI is not a replacement for human judgment; every task should involve a person who can assess the narrative from a genuinely human communications perspective. During crisis communication, for example, only the communicators on the ground can bring empathy, sensitivity and other human elements to the message. AI may reinforce the work, but the human element remains crucial.
As the PRSA echoes these principles in its 2025 ethics update, one thing remains clear: communicators must treat AI as a collaborator while upholding high standards of truth, respect, and fairness in their day-to-day work.
Back home, organisations such as the Public Relations Society of Kenya (PRSK) and communications agencies should continue their sensitisation campaigns, reiterating that AI is not the enemy of communications but its evolution. Importantly, communications professionals must lead this charge.