Artificial Intelligence (AI) systems are increasingly tools of efficiency and innovation, but left unchecked they could also become instruments of unfairness and insecurity.
That was the key warning at a Research Masterclass organised virtually by the DIPPER Lab.
The masterclass was themed “Bias in AI at the Crossroads of Safety and Security” and brought together researchers, policy analysts, tech developers and students.
The event forms part of a larger AI research capacity-building initiative funded by the UK’s Foreign, Commonwealth and Development Office (FCDO) and Innovate UK through a Knowledge Transfer Partnership project involving KNUST, Manchester Metropolitan University and Sesi Technologies.
Delivering the keynote address was Dr. Mohammed Al-Khalidi, Co-Lead of the AI Safety and Security team at Manchester Metropolitan University’s Turing Network.
He cautioned that AI systems, if not properly audited, can be manipulated to produce biased and potentially dangerous results.
“Bias in AI isn’t just about fairness anymore. It’s about safety. It’s about security. We’ve shown how easily a model can be pushed beyond acceptable fairness thresholds through simple adversarial attacks,” Dr. Al-Khalidi said.
The masterclass also explored strategies for building more robust and fair AI models.
Dr. Al-Khalidi stressed the importance of adversarial training, which involves teaching AI systems to recognise and reject malicious or misleading inputs.
“It’s like teaching a child what’s inappropriate. You have to show it the bad examples to learn what to avoid,” he noted.
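The technique he described corresponds to a well-established approach in the machine-learning literature. The sketch below is a minimal, hypothetical illustration of FGSM-style adversarial training on a toy logistic regression; the synthetic data, perturbation budget and learning rate are all assumptions for illustration, not details of the team’s actual work.

```python
# Illustrative sketch only: FGSM-style adversarial training on a toy
# logistic regression. All data and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (hypothetical).
X = rng.normal(size=(500, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(4)
lr, eps = 0.1, 0.2  # learning rate; adversarial perturbation budget

for _ in range(200):
    # The gradient of the logistic loss w.r.t. the inputs gives the
    # direction in which small changes most increase the loss.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # d(loss)/dX, one row per sample
    X_adv = X + eps * np.sign(grad_x)    # FGSM-style worst-case inputs

    # Train on the adversarial examples: "show it the bad examples".
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc_clean = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"accuracy on clean data after adversarial training: {acc_clean:.2f}")
```

The model here repeatedly trains on the worst-case perturbed versions of its own inputs, which is the sense in which it learns "what to avoid".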
Other recommendations included regular audits of AI models, stronger data sanitisation methods, and carefully balanced explainability features, ensuring that AI remains transparent without giving hackers the tools to exploit its design.
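The masterclass did not specify what such audits look like in practice, but one common check compares a model’s positive-prediction rates across demographic groups. The following hypothetical sketch computes this "demographic parity difference"; the function name, data and threshold are assumptions, not content from the event.

```python
# Hypothetical audit check: gap in positive-prediction rates between
# two demographic groups. Names, data and threshold are assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy predictions and group labels (hypothetical).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # example threshold; real audits set this per context
    print("warning: model exceeds the fairness threshold for this audit")
```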
The session also delved into different forms of bias, including confirmation bias, selection bias, and bias arising from underrepresented demographics.
Dr. Al-Khalidi called for broader experimentation across more complex AI models and data scenarios.
He noted that his team’s study used logistic regression but said further work could test the impact of adversarial attacks on neural networks and other advanced systems.
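The article does not detail how those attacks were carried out. As a hedged illustration of the kind of test described, the sketch below perturbs inputs to a trained logistic regression and measures how accuracy degrades; the dataset, perturbation size and attack direction are assumptions, not the team’s actual experimental setup.

```python
# Minimal sketch of an evasion attack on logistic regression.
# Dataset and perturbation budget are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
model = LogisticRegression().fit(X, y)

# For logistic regression, the gradient of the decision score w.r.t. the
# input is the weight vector itself, so the worst-case small perturbation
# follows sign(w): push class-1 samples against w, class-0 samples along it.
eps = 0.5
direction = np.where(y == 1, -1, 1)[:, None]
X_adv = X + eps * np.sign(model.coef_[0]) * direction

print(f"accuracy on clean inputs:            {model.score(X, y):.2f}")
print(f"accuracy under eps={eps} perturbation: {model.score(X_adv, y):.2f}")
```

Because a linear model’s vulnerability is fully described by its weight vector, attacks like this are especially cheap, which is one reason extending the study to neural networks is a natural next step.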
“AI is here to stay. We’re not saying don’t use it. But human oversight is still essential. We need to balance automation with ethical and secure design,” he said.
Dr. Eric Tutu Tchao, Scientific Director of the DIPPER Lab and host of the event, reiterated the Lab’s commitment to advancing research in AI Safety and Security.
The DIPPER Lab Masterclass forms part of ongoing efforts to strengthen Africa’s voice in the global AI Security conversation.