By Brian Pinnock – Is there any single technology burdened with more hype than artificial intelligence?

I recently chatted to author and technologist Mo Gawdat about his book Scary Smart. In our discussion, the former Google X chief said that, if managed correctly, AI will ultimately lead to greater efficiency and productivity, allowing humans to focus on more creative and fulfilling pursuits. He warned, however, that humans must maintain control over AI and ensure it remains aligned with human values and goals.

He predicts that AI will be ‘smarter’ than humans as early as 2029, and that by 2049 it could be a billion times more intelligent than us, making it critical that policymakers ensure the benefits of AI are distributed fairly and its negative consequences are minimised.

While I personally believe his timelines are up for debate, there is no denying that the stunning success of generative AIs like ChatGPT and text-to-image tools such as MidJourney has brought AI into mainstream use and will contribute to its rapid growth.

As users explore the potential applications of these highly accessible tools across millions of use cases, the technology will become increasingly embedded in our personal and professional lives, much like social media has become an integral part of our day-to-day existence.

This also introduces the idea of bring-your-own-AI (BYOAI) for companies, which will need monitoring and control, just as bring-your-own-device (BYOD) had to be managed to contain its risks.

 

Cyber threat landscape hastens call for AI adoption

Cybersecurity professionals are no strangers to discussions about the impact of AI, both in improving their own security strategies and in powering new and deadlier cyberattacks.

Faced with an increasingly complex and volatile threat landscape, cybersecurity professionals would be foolish to dismiss the potential of AI and machine learning to improve key security functions and add much-needed automation, especially in light of a pervasive skills shortage.

The evolving threat landscape is also placing pressure on security teams to enhance their cyber resilience strategies and deploy new tools to keep work protected from attack.

Two-thirds of South African respondents in Mimecast’s latest State of Email Security 2023 report said the cyberattacks on their organisations are becoming increasingly sophisticated.

 

AI a double-edged sword

The growing complexity and scale of cyber threats can partly be ascribed to how threat actors use AI in their own devious efforts. As is often the case with emerging technologies, cybercriminals were among the first to experiment with AI and machine learning.

The algorithms developed by threat actors have allowed them to scale their attacks and overwhelm underprepared cyber defences that held firm just a few years ago. There are also widespread fears that ChatGPT is democratising social engineering by enabling threat actors to enhance their attacks and craft far more convincing, harder-to-detect phishing emails. The chatbot can also handle follow-up questions automatically, not with a template but by generating an in-character response.

With cybercriminals using AI to boost email phishing scams and other attacks, cybersecurity leaders must fight AI with AI. Adopting the technology can help their teams process huge data volumes that exceed human capabilities. It can do so faster and, over time, make ‘smart’ decisions about that data, provided a data science team is on hand to train the models as needed.

It’s important to remember, however, that AI is not a silver bullet and can fall short of expectations. Algorithms need huge volumes of high-quality data that not all organisations have. AI tools also tend to generate too many false positives, and they remain vulnerable to reverse engineering and mimicry. Without proper maintenance, AI models risk degrading and introducing vulnerabilities, and the lack of transparency over how AI makes decisions can challenge enterprise security teams trying to make good decisions with the data an algorithm produces.

 

Best practices for deploying AI in security

Despite these challenges, AI adds a valuable layer of defence to existing security infrastructure and helps organisations ward off malicious attacks.

South African organisations have taken note of its potential as a powerful addition to their cybersecurity toolkits.

Our research found that 55% of local companies already use AI or machine learning to bolster their cybersecurity, up from only a third last year. Those who have adopted AI report benefits that include an improved ability to block threats (reported by 56%), greater accuracy with threat detection (56%), and reduced scope for human error (49%).

South African organisations also seemingly have a greater appetite for AI than their global counterparts. Nearly all local companies that formed part of our research agreed that AI systems providing real-time, contextual warnings to email and collaboration tool users would be a huge boon, compared to a global average of 81%. Nearly a third (32%) also said the benefits of such a system would revolutionise the way cybersecurity is practised, far outpacing the global average of 12%.

For those yet to add AI to their security toolkits, the following guidelines should help unlock its power:

 

Integrate AI into multi-layered defence strategies

Any AI tool should be deployed alongside existing security solutions. The outcome should be a broad, layered cyber defence system that utilises AI’s strengths while still leveraging human expertise to ensure models are properly trained and maintained.

AI models can degrade over time without the proper maintenance by expert data science teams, so keeping a watchful eye over the effectiveness of AI-enabled security is essential.
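To make that monitoring concrete, here is a minimal sketch of how a team might track a detection model’s false-positive rate against analysts’ final rulings and flag it for retraining. The Verdict structure, the threshold, and the sample data are illustrative assumptions, not any specific product’s interface.

```python
# Illustrative sketch only: tracking a detection model's false-positive rate
# against analysts' final rulings. The Verdict structure, threshold, and sample
# data are assumptions for the example, not any product's real interface.
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged_by_model: bool      # the AI model marked the message as malicious
    confirmed_malicious: bool   # an analyst's final ruling

def false_positive_rate(verdicts: list[Verdict]) -> float:
    """Share of benign messages the model wrongly flagged."""
    benign = [v for v in verdicts if not v.confirmed_malicious]
    if not benign:
        return 0.0
    return sum(v.flagged_by_model for v in benign) / len(benign)

def needs_review(verdicts: list[Verdict], fp_threshold: float = 0.05) -> bool:
    """Flag the model for the data science team when false positives exceed the agreed threshold."""
    return false_positive_rate(verdicts) > fp_threshold

# Example: a weekly batch of analyst-reviewed verdicts
weekly = [Verdict(True, True), Verdict(True, False), Verdict(False, False), Verdict(True, False)]
print(round(false_positive_rate(weekly), 2))  # 0.67
print(needs_review(weekly))                   # True: escalate for retraining
```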

 

Deploy AI where speed is key

AI is a game-changer for security where speed is of the essence, for example in quickly determining whether a URL that a user has clicked on is safe or malicious.
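As a rough illustration of what such a click-time check might look like, the sketch below combines a blocklist lookup with a stand-in scoring function. The score_url heuristics, hostnames, and threshold are hypothetical placeholders for a trained model, not any vendor’s actual implementation.

```python
# Illustrative sketch only: a click-time URL check combining a blocklist lookup
# with a stand-in scoring function. score_url and its heuristics are hypothetical
# placeholders for a trained model, not a real vendor API.
from urllib.parse import urlparse

KNOWN_BAD_HOSTS = {"login-verify-example.top", "secure-update-example.xyz"}  # placeholder data

def score_url(url: str) -> float:
    """Stand-in for an ML model; returns a risk score between 0 and 1."""
    host = urlparse(url).hostname or ""
    signals = [host.count("-") > 2, len(host) > 40, host.endswith((".top", ".xyz"))]
    return sum(signals) / len(signals)

def check_click(url: str, block_threshold: float = 0.5) -> str:
    """Decide within milliseconds whether to allow or block a clicked link."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_BAD_HOSTS:
        return "block"  # fast path: known-bad destination
    return "block" if score_url(url) >= block_threshold else "allow"

print(check_click("https://secure-update-example.xyz/reset"))  # block
print(check_click("https://www.example.com/docs"))             # allow
```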

Proactive threat hunting can also benefit from AI’s speed: instead of responding to every potential attack in the same way, security teams can leverage AI as part of their proven security tools to develop tailored responses to threats on a case-by-case basis.

This can improve the effectiveness of the response and help avoid the time sink of managing false positives.

 

Use AI to automate some tasks to relieve pressure on security teams

The cybersecurity skills shortage is well-documented. Many global organisations find it challenging to source appropriately skilled cybersecurity professionals, leaving teams under pressure and under-resourced.

AI and machine learning tools can help close the skills gap, allowing security teams to offset critical skills challenges by automating repetitive tasks, streamlining workflows, and driving greater efficiency.

This can allow strained security teams to accomplish more with less, strengthening the organisation’s overall security posture and helping keep data, systems, and employees safe.
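One illustrative example of such automation is the first-pass triage of user-reported phishing emails; the sketch below uses an invented phrase list and thresholds purely to show the shape of the workflow.

```python
# Illustrative sketch only: first-pass triage of user-reported phishing emails.
# The phrase list, thresholds, and outcomes are invented for the example.
import re

URGENT_PHRASES = ("verify your account", "password will expire", "payment overdue")

def triage(report_body: str) -> str:
    """Return 'auto-quarantine', 'analyst-review', or 'auto-close' for a reported email."""
    body = report_body.lower()
    urls = re.findall(r"https?://\S+", body)
    urgency_hits = sum(phrase in body for phrase in URGENT_PHRASES)

    if urgency_hits >= 2 and urls:
        return "auto-quarantine"   # strong signals: act without waiting for an analyst
    if urgency_hits or urls:
        return "analyst-review"    # ambiguous: queue for a human
    return "auto-close"            # nothing suspicious: close and notify the reporter

print(triage("Your password will expire today. Verify your account at http://reset.example"))
# -> auto-quarantine
```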

 

Mitigate the risk of ‘bring-your-own-AI’

Similar to how bring-your-own-device was a security nightmare for organisations until appropriate policies and controls were implemented, bring-your-own-AI will pose serious challenges to security teams. Employees using unsanctioned AI tools, or even security practitioners using unsanctioned AI security tools, could introduce new vulnerabilities.

And using non-public information in AI queries and prompts would open a veritable Pandora’s box of risks, from data leaks and losses to the exposure of employees’ and customers’ personally identifiable information, leaving the organisation liable for breaches of privacy laws such as POPIA and GDPR.
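One way to mitigate that risk, sketched below purely as an illustration, is a pre-submission filter that redacts obvious identifiers before a prompt leaves the organisation; the regular expressions are simplified stand-ins for proper DLP controls, not a complete or production-ready policy.

```python
# Illustrative sketch only: redacting likely personal identifiers before a prompt
# is sent to an external AI service. The patterns are simplified stand-ins for
# proper DLP tooling, not a complete or production-ready control.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SA_ID_NUMBER": re.compile(r"\b\d{13}\b"),        # 13-digit South African ID numbers
    "PHONE": re.compile(r"(?:\+27|0)\d{9}\b"),        # simplistic South African phone formats
}

def redact_prompt(prompt: str) -> str:
    """Replace likely PII with labelled placeholders before the prompt leaves the organisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Summarise the complaint from jane.doe@example.co.za, ID 8001015009087."))
# -> Summarise the complaint from [EMAIL], ID [SA_ID_NUMBER].
```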

 

Brian Pinnock is the Vice President: Sales Engineering EMEA at Mimecast