Shadow Generative AI may be putting your business at risk

Opinion
By Chris Odindo | Jun 23, 2024

Just when you thought it was safe to breathe again after the major AI companies' recent updates, there comes a new term to consider: Shadow AI, a hidden danger that lurks within businesses.

Even the phrase evokes a sense of dangerous darkness. Shadow AI refers to the unauthorised use of AI tools, applications, or systems by employees or departments. While the intentions behind it might be positive, aiming to improve efficiency or solve problems, the lack of oversight can lead to serious consequences.

Imagine an AI system that recommends Kenyan traditional remedies for residents of South B while offering Bolivian traditional medical treatments to South C residents just a short distance away. Or driving a car fitted with a jua kali-made engine, given to you as a bonus after buying a 2kg bucket of potatoes by the roadside on your way from Kinangop, and hoping for the best: it might work, it might not.

Let me get into some specific implications for Kenya's finance, health, banking, and higher education sectors.

In the financial and banking sectors, Shadow AI poses significant compliance risks. This is because unauthorised AI tools might not adhere to the strict financial regulations set by Kenyan regulators, which could result in hefty fines for a business. Furthermore, Shadow AI systems often lack the robust security protocols employed by internal IT departments and vetted systems.

This may result in data breaches exposing sensitive financial information such as customer accounts or loan details, with devastating consequences for both the business and its clients.

Algorithmic bias is another cause for concern. Unvetted AI models inherit biases from the data they're trained on, potentially leading to unfair lending practices or discriminatory treatment of customers.

Imagine an AI-powered loan approval system that unintentionally discriminates against women-owned businesses due to historical biases in loan application data. That would be wrong.

The healthcare sector is particularly vulnerable to the pitfalls of Shadow AI. Unauthorised AI used for tasks as critical as patient diagnosis or treatment planning raises serious concerns about privacy violations.

Shadow AI systems might lack the necessary safeguards to protect sensitive medical data, putting patient privacy at risk. Moreover, the use of unreliable AI tools for medical diagnosis could lead to misdiagnosis and negative patient outcomes.

Since Shadow AI operates outside official channels, there's reduced accountability for errors or malfunctions caused by these unauthorised systems.

Something else to ponder: AI systems might have to make tough ethical choices, like deciding whether to save a wealthy patient or a low-income one. Having an unsanctioned system make that decision can be as tricky as navigating Jogoo Road during rush hour.

The potential negative impacts of Shadow AI extend to Kenya's higher education institutions as well. Shadow AI could be misused for plagiarism or cheating, undermining the integrity of academic programmes and devaluing hard-earned qualifications.

Unequal access to unauthorised AI tools could exacerbate existing educational disparities among students. Students from wealthier backgrounds might have the resources to acquire these tools, giving them an unfair advantage over their less fortunate peers.

In addition, overreliance on Shadow AI for learning could hinder students' development into well-rounded graduates. AI-powered tutoring systems or automated grading might also reduce student engagement and critical thinking, leaving students less able to learn independently and solve problems creatively.

So, how can Kenyan businesses navigate the potential of generative AI while mitigating the risks associated with Shadow AI? A key answer lies in promoting AI transparency.

Businesses should establish clear guidelines and educate employees about the responsible use of AI tools. A centralised process for vetting and approving AI applications before deployment is crucial to ensure compliance and avoid security vulnerabilities.

Enforcing strong data governance practices is also essential to safeguard the security and privacy of sensitive information used in AI systems.

By acknowledging the existence of Shadow generative AI and taking proactive measures to address it, Kenyan businesses can harness the transformative power of generative AI while ensuring responsible and ethical implementation.

- The writer is AI expert at De Montfort University, UK
