Legal Scholar Calls for Robust Regulation of Artificial Intelligence to Safeguard Human Rights

As artificial intelligence (AI) becomes increasingly woven into the fabric of modern society, legal experts are urging swift and comprehensive regulation to address its ethical and legal challenges.

Associate Professor of Public Law at the University of Lille, Dr. Marcel Moritz, has highlighted the growing legal ambiguity surrounding AI, especially concerning the origin, fairness, and bias of training data.

Speaking at the 6th Eminent Legal Scholars and Lawyers Public Lecture organised by the Faculty of Law at KNUST, he warned that as AI expands in influence, questions of liability, transparency, and data integrity can no longer be overlooked.

“Artificial intelligence relies heavily on data. Where does the training data come from? What are the biases? These are questions the law must address,” Dr. Moritz stressed.

He noted that flawed or biased datasets can produce poor and even harmful AI outcomes, and called for legal safeguards to ensure fairness and accountability.

Dr. Moritz also referenced two newly adopted European AI instruments: one by the Council of Europe, focused on human rights, and another from the European Union, centred on regulating businesses.

He said these frameworks could serve as models for countries developing their own AI governance structures.

Vice-Chancellor of KNUST, Professor (Mrs.) Rita Akosua Dickson, emphasised the urgency of responsible AI regulation.

“AI is right in our communities. It is an active reality that is reshaping the way we work, communicate, and govern,” she noted.

She called for frameworks that protect human rights, dignity, and the rule of law as technology continues to evolve.