Navigating the Future: Essential Strategies for Effective AI Governance
Good morning, everyone. I am Obadare Peter Adewale, from Lagos, Nigeria. I am the most credentialed cybersecurity professional in Africa, holding more than 58 professional certifications spanning governance, risk, and compliance. In 2008, I became the first licensed penetration tester on the continent, and I currently serve as a Professor of Practice in Cybersecurity at a Nigerian university. I also lead Digital Encode, a company dedicated to cybersecurity, governance, risk, and compliance, specializing in penetration testing and vulnerability management.
Today, my focus centers on AI governance, standardization, and cybersecurity within AI platforms. My keynote addresses the pressing questions: Why is governance essential? Why is standardization important? And why does cybersecurity matter in the realm of AI?
While many are celebrating the potential of AI — and I wholeheartedly recognize its capabilities — it is crucial to understand that AI is not infallible, omnipresent, or all-knowing. AI is fundamentally a technology, a tool, and its effectiveness hinges on the quality of the data it processes and the algorithms built on that data. This reality highlights the urgent necessity for governance.
Take the Wassenaar Arrangement, which brings together 42 participating states and recognizes AI as a dual-use technology, much like missile technology, capable of serving both military and civilian ends. This classification of AI as a dual-use good starkly illustrates the immediate need for comprehensive governance.
Let’s reflect on a troubling insight: during the global pandemic of 2020, research from Check Point warned that our next major crisis could very well be a cyber pandemic. As AI assumes a greater role in our daily lives, neglecting proper governance could lead to disastrous outcomes.
The term governance originates from the Greek word “kybernáo,” meaning “to steer a ship.” To prevent our global society from veering into perilous waters, we require robust AI governance. Recent events corroborate this necessity: while we were celebrating DeepSeek, it was hit by a substantial cyberattack virtually overnight. Notably, ChatGPT also suffered a security breach in March 2023. If prominent platforms with significant resources can be attacked due to inadequate governance, we must question the security of solutions emerging from the Global South.
Fortunately, finding a solution isn’t as complex as it may appear. We need to pose three pivotal questions:
- Are we doing the right thing? (A strategic inquiry)
- Are we doing it the right way? (An architectural consideration)
- Are we completing the task effectively? (A delivery-focused question)
These questions should guide our efforts in AI governance, and numerous reference frameworks exist to help us answer them adequately. The trajectory of AI hinges on our commitment to establishing effective governance structures starting today.
We must take decisive action now to ensure that our AI systems are not only powerful but also secure, ethical, and well-governed. Thank you!