Shadow Generative AI may be putting your business at risk

Opinion
By Chris Odindo | Jun 23, 2024
Chris Odindo, an AI expert at De Montfort University in the UK.

Just when you thought it was safe to breathe again after the major AI companies' recent updates, there comes a new term to consider: Shadow AI, a hidden danger that lurks within businesses.

Even the phrase evokes a sense of dangerous darkness. Shadow AI refers to the unauthorised use of AI tools, applications, or systems by employees or departments. While the intentions behind such use might be positive, aiming to improve efficiency or solve problems, the lack of oversight can lead to serious consequences.

Imagine an AI system that recommends Kenyan traditional remedies for residents of South B while offering Bolivian traditional medical treatments to South C residents just a short distance away. Or imagine driving a car fitted with a jua kali engine, given to you as a bonus after buying a 2kg bucket of potatoes by the roadside on your way from Kinangop, and hoping for the best: it might work, or it might not.

Let me get into some specific implications for Kenya's finance and banking, healthcare, and higher education sectors.

In the financial and banking sectors, Shadow AI poses significant compliance risks. This is because unauthorised AI tools might not adhere to the strict financial regulations set by Kenyan regulators, which could result in hefty fines for a business. Furthermore, Shadow AI systems often lack the robust security protocols employed by internal IT departments and vetted systems.

This may result in data breaches exposing sensitive financial information like customer accounts or loan details, all of which could then have devastating consequences for both the business and its clients.

Algorithmic bias is another cause for concern. Unvetted AI models inherit biases from the data they're trained on, potentially leading to unfair lending practices or discriminatory treatment of customers.

Imagine an AI-powered loan approval system that unintentionally discriminates against women-owned businesses due to historical biases in loan application data. That would be wrong.

The healthcare sector is particularly vulnerable to the pitfalls of Shadow AI. Unauthorised AI used for tasks as critical as patient diagnosis or treatment planning raises serious concerns about privacy violations.

Shadow AI systems might lack the necessary safeguards to protect sensitive medical data, putting patient privacy at risk. Moreover, the use of unreliable AI tools for medical diagnosis could lead to misdiagnosis and negative patient outcomes.

Since Shadow AI operates outside official channels, there's reduced accountability for errors or malfunctions caused by these unauthorised systems.

Something else to ponder: AI systems might have to make tough ethical choices, like deciding whether to save a wealthy patient or a low-income one. Having an unsanctioned system make that decision can be as tricky as navigating Jogoo Road during rush hour.

The potential negative impacts of Shadow AI extend to Kenya's higher education institutions as well. Shadow AI could be misused for plagiarism or cheating, undermining the integrity of academic programmes and devaluing hard-earned qualifications.

Unequal access to unauthorised AI tools could exacerbate existing educational disparities among students. Students from wealthier backgrounds might have the resources to acquire these tools, giving them an unfair advantage over their less fortunate peers.

In addition, overreliance on Shadow AI for learning could stunt students' development into well-rounded graduates. AI-powered tutoring systems or automated grading might also reduce student engagement and critical thinking, leaving students less able to learn independently and solve problems creatively.

So, how can Kenyan businesses navigate the potential of generative AI while mitigating the risks associated with Shadow AI? A key answer lies in promoting AI transparency.

Businesses should establish clear guidelines and educate employees about the responsible use of AI tools. A centralised process for vetting and approving AI applications before deployment is crucial to ensure compliance and avoid security vulnerabilities.

Enforcing strong data governance practices is also essential to safeguard the security and privacy of sensitive information used in AI systems.

By acknowledging the existence of Shadow generative AI and taking proactive measures to address it, Kenyan businesses can harness the transformative power of generative AI while ensuring responsible and ethical implementation.

- The writer is an AI expert at De Montfort University, UK
