The development of the cloud over the last 15 years is one of the most significant convergences of computing and communications technologies in history.

By Alain Sanchez, senior evangelist, office of the CISO at Fortinet

It provides unprecedented agility and scalability for organisations and immediate access to information and transactions for individuals, and it has transformed our global economy much as smartphones and IoT devices have.

It has also resulted in one of the most unusual technology transitions in history. Traditionally, whenever a new technology arrives – whether the steam engine or the Internet – there has been an orderly and often rapid transition from old to new tools. The advent of the cloud looked to be on a similar trajectory based on the initial assumption that it would be the best choice for all IT infrastructure.

However, according to a recent IHS Markit survey sponsored by Fortinet, infrastructure, applications, and data are continually moving back and forth between on-premises physical networks and private/public cloud infrastructures as organisations try to figure out where and how it is most appropriate for them to use the cloud.


Multi-cloud is here to stay

Of the 350 companies surveyed, 74% had moved an application into the public cloud and then, for a variety of reasons and circumstances, decided to move it back into their on-premises or private cloud infrastructure. This doesn’t mean they reversed all of their cloud deployments, only that they are encountering cases that require bi-directional movement.

For example, 40% of respondents noted that, in some cases, the cloud deployments they moved back into their infrastructure were “planned temporary” deployments. This could be due to a variety of factors, such as the need to set up a temporary infrastructure during an IT transition associated with a merger or acquisition. However, there are many other issues at play that could also be responsible, including concerns about security, the need to manage costs, poor performance in the cloud, shifting regulations, development of new applications, and changes in underlying technologies.


Whatever your plans, plan for change

One crucial reality of the forces that drive these changes is that they are in constant flux, making the dynamic multi-cloud the new environment that many companies now need to live in. Companies deploying applications and other resources into the cloud, and the technology providers that help them with infrastructure, management, and security, now need to consider this new reality as a baseline condition and build products and services with bi-directional movement and co-existence in mind.

To truly take advantage of the best of the cloud, organisations need to make sure the tools and technologies they use offer consistent capabilities, the ability to automate operations, and good visibility across environments – meaning they should operate across a variety of public cloud environments, as well as in private clouds and on-premises physical networks. While moving applications and DevOps services between cloud environments is seamless and straightforward, security can be more of a challenge.


When cloud deployments keep shifting, who owns security?

The first challenge is identifying who owns security in the event of a malicious cyber incident. When asked about the factors driving them to move applications back into their infrastructure, the top two responses – each selected by 52% of respondents – were performance and security.

Who is responsible? While performance is likely to improve over time as practices for building applications in the cloud mature and organisations better establish expectations, security is a more vexing problem because many companies don’t have a good handle on who is responsible for what. In a best-case scenario, where it is clear who should be responsible (such as a vulnerability in the virtualisation/cloud platform itself), only about half of respondents were able to pin the root cause where it truly belongs: on the company that built or implemented the vulnerable technology. Of course, this is a cynical response built on long experience working with flawed technology riddled with vulnerabilities for which IT and security teams accept responsibility – and, in most cases, where they made the decision to adopt these technologies. Conversely, a high percentage of respondents incorrectly held their cloud provider responsible for higher-layer threats (such as APTs) affecting vulnerable systems they had chosen to deploy, where in fact the organisation itself is responsible.

While security responsibilities can generally be divided between the underlying cloud infrastructure (which needs to be secured by the cloud provider) and the software, data, and applications running on top of that infrastructure (which are the responsibility of the consumer), those lines aren’t always so neatly drawn – especially as we step into PaaS and FaaS. The best rule of thumb is to consult the best practices pertaining to every cloud service being consumed, and to expect the cloud provider only to deliver an isolated, available work environment in which to run those services. The cloud is a shared infrastructure, and when it comes to security events, it’s important to distinguish between the organisation’s responsibilities and those of the cloud provider in order to effectively address risk.
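One way to picture this division of responsibility is as a simple lookup table mapping each layer of the stack to the party that secures it under a given service model. The sketch below is purely illustrative – the layer names and assignments are a hypothetical simplification, not any provider’s official responsibility matrix:

```python
# Hypothetical sketch of a shared-responsibility matrix.
# Layer names and assignments are illustrative only; every cloud
# provider publishes its own official version of this model.

RESPONSIBILITY_MATRIX = {
    "IaaS": {
        "physical_infrastructure": "provider",
        "hypervisor": "provider",
        "guest_os": "customer",     # customer patches the OS
        "runtime": "customer",
        "application": "customer",
        "data": "customer",
    },
    "PaaS": {
        "physical_infrastructure": "provider",
        "hypervisor": "provider",
        "guest_os": "provider",     # platform manages the OS
        "runtime": "provider",
        "application": "customer",
        "data": "customer",
    },
    "FaaS": {
        "physical_infrastructure": "provider",
        "hypervisor": "provider",
        "guest_os": "provider",
        "runtime": "provider",
        "application": "customer",  # the function code itself
        "data": "customer",
    },
}

def who_secures(service_model: str, layer: str) -> str:
    """Return which party is responsible for securing a given layer."""
    return RESPONSIBILITY_MATRIX[service_model][layer]

print(who_secures("PaaS", "runtime"))  # provider
print(who_secures("IaaS", "guest_os"))  # customer
```

Note how the boundary shifts as you move from IaaS toward FaaS: more layers fall on the provider’s side, but the application and the data always remain the customer’s responsibility.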

The technology issue. The other challenge is that security tools, functions, policies, and protocols don’t operate similarly between different public cloud platforms, private clouds, and physical infrastructures. While moving an application or service from one environment to the next may be straightforward, many security solutions require a significant amount of IT resources to redeploy and validate – especially when workflows, applications, and data need to be inspected and secured as they flow between different environments.

Resolving this issue can be complicated. It starts with standardising on a single security vendor whose solutions run consistently across the broadest possible range of public cloud, private cloud, and physical environments. Next, these tools need to run natively in the various public cloud environments to maximise effectiveness, while seamlessly translating policies, functions, and protocols between different environments using some form of cloud-object abstraction layer. This yields the best results, as existing security operational models remain applicable across a diverse and dynamic environment.
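The idea behind such an abstraction layer can be sketched in a few lines: a policy is expressed once against logical objects, then rendered into each environment’s native form via an object map. Everything below – the class, function, and field names, and both output formats – is a hypothetical illustration, not any vendor’s actual API:

```python
# Minimal sketch of a cloud-object abstraction layer: one abstract
# firewall rule, rendered per environment through an object map.
# All names and formats here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AbstractRule:
    name: str
    source: str   # logical object, e.g. "web-tier"
    dest: str     # logical object, e.g. "db-tier"
    port: int
    action: str   # "allow" or "deny"

def to_cloud_a(rule: AbstractRule, objects: dict) -> dict:
    # A security-group-style rule keyed by group IDs.
    return {
        "GroupId": objects[rule.dest],
        "SourceGroup": objects[rule.source],
        "Port": rule.port,
        "Action": rule.action.upper(),
    }

def to_on_prem(rule: AbstractRule, objects: dict) -> str:
    # A CLI-style firewall policy line keyed by address objects.
    return (f"set policy {rule.name} from {objects[rule.source]} "
            f"to {objects[rule.dest]} port {rule.port} {rule.action}")

# The same logical rule, translated for two environments:
rule = AbstractRule("web-to-db", "web-tier", "db-tier", 5432, "allow")
print(to_cloud_a(rule, {"web-tier": "sg-web01", "db-tier": "sg-db01"}))
print(to_on_prem(rule, {"web-tier": "addr-web", "db-tier": "addr-db"}))
```

The point of the design is that the rule’s intent lives in one place; when a workload moves between environments, only the object map changes, so the security operational model stays the same.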



The transition to the cloud has been anything but an orderly process. However, for the foreseeable future, applications and services are going to constantly be moving back and forth between different environments until organisations find the combination of public, private, and on-premises solutions that work best for them. And even then, there will continue to be plenty of reasons why applications, infrastructure, and other resources will need to be moved.

In this new dynamic environment, security cannot afford to be something bolted on after the fact or implemented as a collection of disparate tools. That approach inevitably leads to issues like vendor sprawl, deployment delays, and security gaps caused by configuration incompatibilities and differences in functionality and policy enforcement between security solutions deployed in different environments.

As a result, adopting an integrated security strategy – one managed through a streamlined operational model, able to see and manage security devices and policies across the entire distributed network, running natively in different cloud environments while maintaining consistent enforcement, and adapting seamlessly as the network continues to evolve – is table stakes in today’s new digital economy.