BarnOwl Info Sharing Insight: AI, Unlocking Potential and Addressing Risk. Presented by Alex Pryor, Head of Innovation, IOCO
Thank you very much Alex for your excellent presentation on ‘Artificial Intelligence (AI), Unlocking Potential and Addressing Risk’ at the BarnOwl info-sharing event held on 27th March 2025. Thank you to all those who attended the session.
Artificial Intelligence (AI) is unlocking unprecedented opportunities for efficiency, innovation, and growth. But it also introduces complex risks, ranging from ethical concerns and biases to security threats and regulatory challenges. In this engaging and practical talk, we’ll explore how to harness AI’s power while proactively managing its risks.
Slide 1: Agenda
Slide 1 YouTube
Slide 2: What is AI really?
Slide 2 YouTube
Machine Learning
Machine Learning analyzes large amounts of data to identify patterns, generate insights, and make decisions. It enables organizations to process vast datasets quickly—something humans would struggle to do manually—making it a major area of investment over the past 10–15 years.
Deep Learning
Deep Learning builds on machine learning by teaching computers to think more like humans. It mimics human thought processes to handle complex tasks like facial recognition and image understanding, enabling advancements such as computer vision that were not possible two decades ago.
Generative AI
Generative AI, like ChatGPT, creates new content by remixing patterns it has learned from vast amounts of data. While it appears intelligent and well-read, it doesn’t truly understand the content it produces—it simply generates responses based on patterns and input.
Slide 3: The potential for GOOD?
“The thing to remember is that technology, AI included, it’s not good and it’s not bad. It’s what people do with it.”
Slide 3 YouTube
Real-time fraud detection
One of the exciting developments, particularly in financial services, is the use of AI—specifically machine learning—for real-time fraud detection.
Regulatory Change Management
AI can simplify regulatory change management by automatically tracking updates. Instead of manually checking websites for new regulations, tools can perform daily web searches, summarize key changes—like AI-related regulations—and provide links, helping users stay informed with minimal effort.
The “Tasks” feature allows users to set up scheduled, automated actions. It is designed to help people stay on top of evolving topics like regulatory changes without manually checking for updates, and is especially useful for professionals who need to monitor fast-moving fields, such as AI policy, finance regulations, or compliance frameworks, without falling behind. Still in beta, the feature is already proving valuable because it acts like a personalised research assistant that never misses a day.
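As a rough illustration (not part of the presentation), the core of such a monitor can be sketched in a few lines of Python: fingerprint each regulator's page daily, and only escalate the text to an AI summariser when something has actually changed. The function names and workflow here are hypothetical.

```python
import hashlib

def text_fingerprint(text: str) -> str:
    """Return a stable fingerprint of a page's visible text."""
    # Collapse whitespace so trivial reflows don't trigger false alerts.
    normalised = " ".join(text.split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def has_changed(previous_fingerprint: str, current_text: str) -> bool:
    """Compare today's fetch against yesterday's stored fingerprint."""
    return text_fingerprint(current_text) != previous_fingerprint
```

A daily job would fetch each tracked page, call `has_changed` against the fingerprint stored from the previous run, and pass only changed pages on for summarisation, keeping the expensive AI step out of the loop on quiet days.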
Predictive analytics and risk identification
Predictive analytics uses pattern recognition to forecast future events, enabling proactive decisions. In mining, for example, it helps predict equipment maintenance needs based on usage data, shifting from reactive to predictive maintenance—saving companies millions monthly. This approach can be applied across industries to anticipate risks and optimize operations.
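A minimal sketch of the predictive idea, assuming nothing about the actual systems used in mining: flag a machine for inspection when its latest sensor reading drifts well outside its recent baseline (a simple z-score check). Real predictive-maintenance models are far more sophisticated; this only illustrates the shift from reacting to failures to anticipating them.

```python
from statistics import mean, stdev

def maintenance_flag(readings, window=10, z_threshold=3.0):
    """Flag equipment for inspection when the latest sensor reading
    deviates strongly from its recent baseline (a simple z-score check)."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = readings[-window - 1:-1]  # the window just before the latest reading
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return readings[-1] != mu  # flat baseline: any deviation is an anomaly
    return abs(readings[-1] - mu) / sigma > z_threshold
```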
AI can provide insights and recommendations, but final decisions should involve human oversight. Since AI can be rigid and lacks nuanced judgment, critical thinking and human checks are essential—especially for high-stakes decisions like approving or denying insurance policies.
Slide 4: Risks and Red flags
“So now let’s look at the risk and red flags because these are always great to think about.”
Slide 4 YouTube
Accuracy and AI hallucinations
Even top-performing AI systems have limitations, with an average accuracy of 83% according to a Stanford study. They can be wrong about 17% of the time—and sometimes, they don’t just err but actually make things up, a phenomenon known as “AI hallucinations.” This highlights the importance of verifying AI-generated information.
Underlying data quality
AI relies on quality data to provide accurate answers. If the underlying data is poor or incomplete, AI can generate incorrect or fabricated information, as seen in cases where lawyers cited non-existent legal precedents invented by AI. For reliable results, AI should be trained on accurate, relevant data.
Discrimination and bias
AI developers must address issues of discrimination and bias in their systems by implementing guardrails. For sensitive topics like vaccines, these guardrails ensure that the AI provides more scientifically grounded responses, aiming to avoid bias—whether intentional or not—while reflecting current knowledge.
Deep fakes and abuse
Deepfakes are an increasing issue, with AI being used in real-time to overlay images and voices onto conversations, raising concerns about misuse.
Over-reliance and unintentional misuse
“So this is more around where people don’t really know what they don’t know. So part of it is assuming that ChatGPT is always right. Part of it is using it for everything, but there’s also that unintentional misuse. And this is where we really need to start thinking as companies from a risk perspective.”
Over-reliance on AI, such as assuming ChatGPT is always correct or using it for everything, can lead to unintentional misuse. Companies need to address this from a risk-management perspective to avoid potential issues.
Privacy breaches
Privacy breaches can occur when staff members unknowingly input proprietary information into a public AI platform, like ChatGPT. This could lead to sensitive data being integrated into the model. It’s crucial to educate employees on the proper use of AI to prevent such risks.
Slide 5: Governance Frameworks
“How do we actually govern AI?”
Slide 5 YouTube
AI Risk Matrix
A simple governance framework for AI risk management involves using an AI risk matrix, which prioritizes risks based on their impact and likelihood. For example, a company can assess the impact and likelihood of proprietary data being exposed in AI models like ChatGPT and decide how to mitigate that risk.
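A risk matrix of this kind is easy to operationalise. The sketch below, with hypothetical register entries, scores each risk as impact × likelihood on 1-5 scales and ranks the register so mitigation effort goes to the highest scores first.

```python
# Hypothetical risk register entries: (description, impact 1-5, likelihood 1-5)
RISKS = [
    ("Proprietary data pasted into a public AI tool", 5, 4),
    ("AI hallucination reaches a client report",      4, 3),
    ("Model bias in automated screening",             4, 2),
    ("Staff over-reliance on unverified output",      3, 4),
]

def prioritise(risks):
    """Rank risks by a simple impact x likelihood score, highest first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
```

In practice the scores would come from workshops with risk owners, but even this crude ranking makes the conversation concrete: the data-leakage scenario (5 × 4 = 20) clearly outranks the others and gets mitigated first.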
NIST
NIST’s AI Risk Management Framework, built around the functions of govern, map, measure, and manage, provides organizations with a common language for AI governance and oversight. Proper AI oversight is crucial, as failing to implement the right processes could lead to financial risks, such as cyber insurance providers refusing to pay out in the event of a data breach caused by AI tools.
ISO / IEC 42001
ISO/IEC 42001 is a new standard for AI management systems that focuses on documenting, monitoring, and governing AI responsibly. It provides guidelines for ensuring proper management and oversight of AI systems.
“So there’re all of these things here that do help you to manage AI, but it really starts with having that conversation.”
Slide 6: Ethical AI
Slide 6a YouTube
Fairness
Fairness in the context of ethical AI means ensuring that AI systems make decisions impartially and without bias. It is a critical component of responsible AI development and use; ultimately, it is about creating systems that are just, unbiased, and equitable, aligning technology with societal values and human rights.
Transparency
Transparency is about understanding the path from input to output in an AI system—what’s often called the “black box problem.” If we can’t explain how the AI reached its decision, that lack of clarity can pose serious challenges and risks.
Accountability
Accountability in AI means clearly defining who is responsible when things go wrong. It’s essential to establish ownership and response plans to ensure someone is held accountable for AI-related issues.
Privacy
Privacy in AI involves ensuring that personal or sensitive data remains secure and isn’t exposed to others. It’s crucial to use AI tools that protect data and prevent potential privacy breaches, especially when handling client or proprietary information.
“You need to make sure that the AI you are using is going to keep your data private and make sure that it is in a way that it’s not going to cause data privacy breaches.”
Sustainability
Sustainability in AI concerns the significant energy consumption required for its computing power. It is important to consider whether the companies providing these AI services align with sustainability practices and global goals such as the UN Sustainable Development Goals (SDGs).
Slide 6b YouTube
Slide 7: Questions you should be asking
“So what should you be asking? I’m going to give you 7 really good questions here to start you thinking about AI and governance; to start thinking about how you actually manage risk in your organisations.”
Slide 7 YouTube
These seven questions are just a starting point—but what really matters is engaging in the conversation. Get involved in shaping your organization’s AI policy and take initiative, whether within your team or across the company. Because while AI is powerful and full of potential, it needs thoughtful guidance to be used responsibly.
“Because AI is awesome.”
Interesting comments from the audience
“And something that I’m a little bit worried about: I also work in early childhood education, and when you work with these stochastic models, for example, a very good prompt is one that doesn’t ask please, because the moment you add the word please, you open up a branch on the decision tree where it is allowed to deny your request. So I actually teach my students: don’t open up branches that you don’t want it to traverse. So don’t ever say please, because then you never create that branch.”
“But when I’m working with the toddlers now, now I have to teach them to never say please to a machine, but always say please to a human.”
“But for the very nature of this giant that we’re dealing with, it feels like such a deep and dark hole that a company can find itself in so much trouble.
So the first thing companies have to accept is that the genie is out of the bottle. You cannot put it back in.
Companies are going to run off and make policies that no one is going to read and everyone is going to ignore.
In terms of the long term, my biggest recommendation: before policies, train. Go to your staff and say, we know you’re going to use AI.
These are our preferred tools. So train them on how to use it. Train them how to ask good prompts. Tell them what they should and shouldn’t be using it for, and explain why they shouldn’t be using it for that.
And then the next thing is to have an ongoing committee where you have people, not just the execs, but the people who are using it, literally down to the interns.”
Conclusion
The advancement of AI and robotics promises a better world for all, with huge potential for improving living standards, food security, medicine, service delivery, clean energy and more. On the other hand, they also pose risks, such as increased susceptibility to cyber-attacks and the need for robust IT continuity plans. Organisations can utilise risk management software to detect threats, internal audit software to validate accuracy, and GRC software to solve governance challenges. Compliance software also assists in protecting data privacy. The challenge lies in harnessing the benefits of these technologies while mitigating the related risks. To find out how BarnOwl’s software can assist your organisation in managing AI-related risks, check out our resources page.
Presentation and video links
Please see attached presentation here, and the info sharing recording here.
You can also order Alex’s book, Risking Irrelevance, which I found insightful and extremely useful.
Contact us
Cheryl Keller | BarnOwl | cheryl@barnowl.co.za
Alex Pryor | Head of Innovation | IOCO | thatalexpryor@proton.me
Thank you
Once again, thank you Alex for your time and for your informative presentation and thank you to all those who attended our info sharing session. We look forward to seeing you at our next info sharing session. Please keep a look out for our upcoming events at:
http://www.barnowl.co.za/events/
Kind regards
Jonathan Crisp
Director – BarnOwl GRC and Audit software
About BarnOwl:
BarnOwl is a fully integrated governance, risk management, compliance and audit software solution used by over 150 organisations locally and internationally. BarnOwl is a locally developed software solution and is the preferred risk management solution for the South African public sector, supporting the National Treasury risk framework.
Please see www.barnowl.co.za for more information.