The recent integration of Copilot across Microsoft’s commercial and even personal interfaces, from its Bing search engine to its Teams collaboration platform, is just the latest step in the rapid expansion of artificial intelligence (AI) capabilities designed to enhance modern knowledge work.

Jaco Oosthuizen, Category Head: Mobility at Rectron, looks at some of the unintended security and regulatory implications of AI and offers some general guidelines for navigating these murky waters.

ChatGPT, Meta AI, Google AI, Amazon AI, Microsoft Copilot and even IBM’s Watson platform are just a handful of the more well-known natural language processing (NLP) and AI language model assistants making their way into the South African market.

AI has the potential to transform small and medium-sized businesses by developing both individuals and organisations. Individuals, through self-directed AI-assisted learning, can continuously develop every area of their lives. From holistic health to personal finance and academic development, AI can guide people in a way never seen before.

For organisations, leaders can use AI tools like Copilot to drastically enhance their knowledge work. Jobs in finance, regulatory compliance, marketing, strategy, programming and a variety of other fields that require highly targeted, technical and nuanced insight stand to benefit in ways that few can imagine.

As these tools are increasingly integrated into business systems, it is only a matter of time before they span every touchpoint a user has, from one device to another and from business to social media, without skipping a beat.

The natural human desire for seamless experiences, powered by synced accounts and iterative learning from a person’s lifelong usage, means this reality is inevitable. But so is the risk of sensitive work (or personal) data falling into the wrong hands.

Global threats

Already, threat actors are increasingly looking towards South Africa as an attractive hacking target, given the country’s growing strategic importance on the geopolitical and economic landscape.

Some of South Africa’s most important financial services providers, public institutions and telecommunications systems have already fallen prey to massive cyberattacks, with attacks attributed to threat groups from as far afield as Asia.

The persistence and sophistication of hacking groups, some even backed by nation states, means they are well financed, creative and highly skilled at finding and exploiting even the smallest vulnerabilities.

All of this means that individuals themselves are potential points of vulnerability, and companies need to protect them against phishing, business email compromise, ransomware and other attacks.

Licensing protections

While most free systems offer AI assistants, Copilot for Microsoft 365 is currently the only one that offers long-term iterative machine learning per user, with data stored on the local machine as well as in the cloud for an increasingly personalised and more efficient experience.

This, however, presents potential vulnerabilities, where sensitive, proprietary or client data may be compromised or exposed to unauthorised access.

Owners, organisations and users all need to ensure that they understand and comply with licence protections and make full use of security features and functionality.

Restrict access

With the Copilot for Microsoft 365 Business Premium licence, for instance, cyber threat protections are built into the solution; however, system owners need to ensure they are familiar with the various permissions included.

As Copilot is embedded into various apps, like Word, Excel and Outlook, users also need to be careful about what data they process through AI. Some organisations may even automatically prevent AI tools from accessing certain data sets, especially where there are legal implications to sharing confidential or sensitive data.
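In practice, automatically blocking sensitive data from AI tools usually means gating every submission behind a data-sensitivity check. The sketch below illustrates the idea in Python; the `Document` structure, label names and `submit_to_assistant` function are illustrative assumptions for this example, not part of any Microsoft or Copilot API.

```python
# A minimal sketch of a pre-submission guard: documents carrying a
# blocked sensitivity label are never forwarded to the AI assistant.
from dataclasses import dataclass, field

# Hypothetical labels an organisation might declare off-limits to AI tools.
BLOCKED_LABELS = {"confidential", "client-data", "regulated"}

@dataclass
class Document:
    name: str
    text: str
    labels: set = field(default_factory=set)

def may_send_to_ai(doc: Document) -> bool:
    """Return True only if the document carries no blocked sensitivity label."""
    return not (doc.labels & BLOCKED_LABELS)

def submit_to_assistant(doc: Document) -> str:
    """Gate the (hypothetical) call to the AI assistant behind the label check."""
    if not may_send_to_ai(doc):
        raise PermissionError(f"{doc.name}: blocked by data-sensitivity policy")
    return f"sent {len(doc.text)} characters for processing"
```

The design point is simply that the policy check sits in front of the AI call, so an unlabelled memo passes through while a document tagged "confidential" is rejected before any data leaves the organisation.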

Deploying judiciously

As the Copilot licence comes at a cost (US$30 per user per month, or roughly R6 500 a year), organisations should think carefully about which users would benefit most from the investment.

Fields like marketing, sales and strategy may be the first to adopt AI assistance, after which data on both usage and security impact can be measured and reported before further deployments are made.

AI tools are set to become embedded into almost every area of work, lifestyle and play.

How this adoption is managed will rely heavily on users’ understanding of their own protections under their licences, broader regulatory frameworks and the rapidly evolving capabilities of these tools.

With a firm and consistent review of the various legal and technological frameworks, organisations and individuals alike can ensure that they gain the most out of AI while shielding themselves from the inherent risks of an increasingly digitised world.