By Ryan Boyes
Artificial intelligence (AI) is rapidly becoming embedded in our everyday lives, from the apps we use to search engines, facial recognition, smart devices in our homes, and more.
However, while AI has many applications and benefits, and businesses are exploring its use in a variety of ways, it also carries real risk, particularly around the data that AI uses.
Risk management around AI is critical for any business, whether you have an AI strategy or not, because AI, simply put, is everywhere.
Global standardisation
Having an international standard in place to manage the long-term risk of AI is critical, especially in light of companies like OpenAI recently disbanding their long-term risk teams. This need is addressed by the introduction of the International Organization for Standardization's ISO 42001 standard in December 2023.
ISO 42001 provides organisations with best practices for governing AI effectively, with formalised standards around AI management systems and a focus on understanding the risk of AI. It offers a comprehensive approach to managing AI systems throughout their lifecycle.
While ISO 42001 is a separate standard and certification, it is also intrinsically linked to ISO 27001, which is the standard for information security, because AI relies on data to perform its functions. It is therefore impossible to effectively manage AI without addressing information management systems as well.
Every time anyone makes use of any AI system, whether this is part of corporate strategy or not, there is information that is used and processed. It has become imperative that this is better understood and better managed; otherwise, organisations run the risk of information leaks, compliance breaches, and other issues around data security.
Intelligence requires information
The reality is that AI and automation are frequently applied to information in today’s world, often without our being fully aware of it. For example, if you use an AI platform like ChatGPT to draft a document or help construct an email, as many people now do without thinking, what information are you inputting to do this?
If there is sensitive data like client names or company intellectual property, there is a risk of compliance breaches, as this information is no longer under your control and could be stored, processed, and used in a way that contravenes local legislation.
Even storing information in SharePoint and then using Microsoft Copilot could potentially be problematic, as the AI servers may be located outside of your jurisdiction, and this may breach laws that your company is required to adhere to. If there is an information breach, the potential implications could be dire.
Organisations today need to be aware of how to manage the risks around AI when it comes to their information, and this needs to form an intrinsic part of both compliance and cybersecurity strategy.
Not just an IT problem
Information and information security are no longer just an IT problem; everyone uses information, and it is critical that it is managed and protected effectively.
From an organisational perspective, this means businesses need to be aware of what AI tools are out there and freely available, what is being used in the company, how to manage potential risk, and, importantly, where it fits in with their overall security strategy.
The borders between roles and responsibilities are blurring, and both information and compliance officers need to understand how AI is being used and ensure appropriate security controls are in place.
While becoming certified on ISO standards is not a legal requirement, these standards do provide excellent frameworks to guide the process of risk mitigation and to ensure that effective, holistic information and cybersecurity strategies are in place.
An experienced third-party security and risk provider can be an invaluable partner on this journey, helping businesses understand risks and their impact, decide how to manage, mitigate, or accept risk, and implement the systems and controls needed to manage information security effectively as part of a holistic, overarching cybersecurity and cyber resilience strategy.
Ryan Boyes is the governance, risk and compliance officer at Galix