Data is becoming ever more important to business success, while also getting harder to manage.

For large, established organisations with massive legacy databases and siloed systems, this could be a very dangerous combination.

Next year will be a milestone for many: 83% of enterprise workloads will finally have moved to the cloud, and on-premises usage will drop by 10%. As these organisations migrate to the cloud, they’ll have both the opportunity and the need to get their house in order.

As we enter a new decade, Jasmit Sagoo, senior director and head of technology UK&I at Veritas Technologies, explores how data will evolve and how organisations will transform to control and capitalise on it in 2020 and beyond.

 

IT will run itself while data acquires its own DNA

Organisations are already drowning in data, but the floodgates are about to open even wider. IDC predicts that the world’s data will grow to 175 zettabytes over the next five years. With this explosive growth comes increased complexity, making data harder than ever to manage. For many organisations already struggling, the pressure is on.

Yet the market will adjust. Over the next few years, organisations will exploit machine learning and greater automation to tackle the data deluge.

Machine learning applications are steadily improving at making predictions and taking action based on historical trends and patterns. With its number-crunching capabilities, machine learning is well suited to data management. We’ll soon see it accurately predicting outages and, in time, automating the resolution of capacity challenges. It could do this, for example, by automatically purchasing cloud storage or re-allocating volumes when it detects a workload nearing capacity.
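
To make that concrete, here is a minimal, purely illustrative sketch of the kind of trigger logic such automation might apply: a naive trend forecast over recent usage readings that requests more capacity when a volume is projected to cross a threshold. All names here (such as provision_extra_storage) are hypothetical and do not refer to any vendor’s API.

```python
# Illustrative sketch only: a naive linear forecast of storage usage that
# calls a hypothetical provisioning function when a volume nears capacity.
from statistics import mean

def forecast_usage(history_gb, days_ahead=7):
    """Project usage forward with a simple linear trend over the history."""
    xs = range(len(history_gb))
    x_bar, y_bar = mean(xs), mean(history_gb)
    slope = (
        sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history_gb))
        / sum((x - x_bar) ** 2 for x in xs)
    )
    return history_gb[-1] + slope * days_ahead

def provision_extra_storage(extra_gb):
    # Hypothetical stand-in for a real cloud provisioning call.
    print(f"Would request roughly {extra_gb:.0f} GB of additional cloud storage")

def check_volume(history_gb, capacity_gb, threshold=0.9):
    """Request extra capacity if the forecast crosses the threshold."""
    projected = forecast_usage(history_gb)
    if projected >= capacity_gb * threshold:
        provision_extra_storage(projected - capacity_gb * threshold)

# Example: five daily usage readings for one 800 GB volume.
check_volume([720, 735, 748, 760, 774], capacity_gb=800)
```

A production system would rely on far richer models and real provisioning APIs; the point is simply that the detect-and-resolve loop can run without human intervention.
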

At the same time, with recent advances in technology, we should also expect to see data becoming more intelligent, self-managing and self-protecting. We’ll see a new kind of automation where data is hardwired with a type of digital DNA. This data DNA will not only identify the data but will also program it with instructions and policies.

Adding intelligence to data will allow it to understand where it can reside, who can access it, what actions are compliant and even when to delete itself. These processes can then be carried out independently, with data acting like living cells in a human body, carrying out their hardcoded instructions for the good of the business.
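
As a purely illustrative sketch of the idea (not a description of any Veritas product, and with every field name invented), the data can be pictured as a record that carries its own policy alongside the payload and answers residency, access and expiry questions itself:

```python
# Illustrative sketch of "data DNA": a record that travels with its own policy
# and can decide where it may reside, who may read it and when it expires.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class PolicyWrappedRecord:
    payload: dict
    allowed_regions: set = field(default_factory=lambda: {"eu-west"})
    allowed_roles: set = field(default_factory=lambda: {"analyst"})
    expires_at: datetime = field(
        default_factory=lambda: datetime.utcnow() + timedelta(days=30))

    def may_reside_in(self, region: str) -> bool:
        return region in self.allowed_regions

    def may_be_read_by(self, role: str) -> bool:
        return role in self.allowed_roles

    def should_self_delete(self) -> bool:
        return datetime.utcnow() >= self.expires_at

record = PolicyWrappedRecord(payload={"customer_id": 42})
print(record.may_reside_in("us-east"))   # False: the policy travels with the data
print(record.may_be_read_by("analyst"))  # True
```
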

However, with IT increasingly able to manage itself and data management complexities resolved, what is left for the data leaders of the business? They’ll be freed from the low-value, repetitive tasks of data management and will have more time for decision-making and innovation. In this respect, AI will become an invaluable tool, flagging issues experts may not have considered and giving them options, as well as unmatched visibility and insight into their operations.

 

Attention will turn to innovating and securing the edge of the network

5G is just the beginning. It opens the door to a whole new wave of instant, rich and interactive on-demand services, processed at the edge of the network to narrow the gap between data and user, and powered by the Internet of Things (IoT).

However, will the edge be able to keep up with the explosive growth of the IoT? Gartner predicts that by the end of next year there will be 5.8 billion connected devices on the market – a 21% increase on 2019, which saw 21.5% growth from 2018. If this rate of growth continues, there will be more data on the edge of the network than at the heart of it. The micro data centres being built now to process all this data will soon become macro data processors.

Until storage capacity issues are resolved, operators and organisations at the edge will have to restrict themselves to only carrying transient data. This is information that, once generated and used for its temporary purpose, can be expunged so as not to overburden the edge network with obsolete data. However, simply because data has a sell-by date doesn’t mean it has limited value.

Such transient data is most commonly used for decision-making. It could, for example, facilitate the navigation of autonomous vehicles in a future driverless transport system. IoT-connected sensors embedded in the body of a vehicle can stream accurate, real-time updates of the vehicle’s geolocation to overhead satellites, which in turn send back instructions that help the vehicle complete its route.

Once the message is received it has no further value and can be promptly deleted. However, the simple act of receiving the information ensures the vehicle and its passengers get to their destination in the quickest and safest way possible.
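
A minimal sketch of that “use once, then expunge” pattern is shown below; all class and key names are hypothetical. Messages held at the edge carry a time-to-live and are deleted the moment they have served their purpose.

```python
# Illustrative sketch of transient edge data: values carry a time-to-live
# and are purged as soon as they have been consumed or have expired.
import time

class TransientStore:
    def __init__(self):
        self._items = {}  # key -> (value, expiry_timestamp)

    def put(self, key, value, ttl_seconds=5):
        self._items[key] = (value, time.time() + ttl_seconds)

    def consume(self, key):
        """Return the value once, then delete it (use-and-expunge)."""
        value, expiry = self._items.pop(key, (None, 0))
        return value if time.time() < expiry else None

    def purge_expired(self):
        now = time.time()
        self._items = {k: v for k, v in self._items.items() if v[1] > now}

store = TransientStore()
store.put("route-update", {"lat": 51.5, "lon": -0.12, "instruction": "turn left"})
print(store.consume("route-update"))  # used once, then gone
print(store.consume("route-update"))  # None: nothing is retained at the edge
```
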

Crucial decisions will increasingly be made off the back of this temporary data. That’s enough to make it a tantalising target for cybercriminals interested in causing trouble or holding businesses to ransom. Tampering with autonomous transport systems, for example, could cause severe traffic build-up or even dangerous accidents.

It also magnifies the disruption caused by any downtime on the edge network. We’re very focused at the moment on moving our data to the edge, but our attention will turn very quickly to ensuring its resilience. Operators will respond either by building a large number of secondary edge sites to keep their critical services and applications available, or by using the centralised network as a backup.

 

The emergence of global data standards and data-centric roles

Data bloat is only one of the challenges facing organisations in 2020. The next most pressing will be data quality and the efficiency with which it is managed. Not all companies take the same pains to optimise their data, resulting in repositories of unstructured data that are larger and less efficiently managed than they should be.

While regulations such as GDPR have started to help companies prioritise data hygiene and protection, there is no single, global framework that tells businesses how they should store, manage, classify, protect and secure their data.

It’s easy to become accustomed to the status quo, but this divergence in data practices only slows down the flow of data between organisations and forces many to waste added time and resources on data cleansing and management. Data has become the lifeblood of many sectors – we can’t afford to let it clot.

That’s why we’ll see the beginnings of a concerted movement across industries to bring in legally enforceable standards for data quality. Arguably, synthetic data will become a common mechanism for sharing intelligence without compromising the source or the subject of the data.

A single, global data standard that crosses borders remains a pipe dream, but we should expect to see many industries start to entrench good data practice for their members, regardless of their country of origin or the location of their customers.

The penalties for non-compliance may even include fines, the loss of industry accreditation or being banned from important associations.

Of course, mileage and speed will vary from sector to sector – already heavily regulated industries like banking and healthcare will likely take the lead – but any progress means better data quality and fewer data dilemmas.

The question is, who in the organisation will be charged with enforcing these new data standards? Many businesses already employ chief data officers (CDOs) and data protection officers (DPOs) to ensure their digital estate is secure and protected. However, the sheer amount of data they are responsible for, coupled with the growing awareness of data’s importance across the entire business, means we are going to see data responsibility filter out rather than become more centralised.

Rather than having a single CDO or DPO, different departments will begin to employ personnel with multiple competencies, including data expertise. Candidates with data experience in addition to the skillset traditionally expected for their role will only become more sought after as organisations hire for new hybrid roles. Other departments may take the alternative approach of hiring their own data specialist. Regardless, the time when data responsibility was passed off to IT or laid solely at the feet of the CDO will come to an end.

 

Insight is power

A combination of technology and automation will transform how organisations protect and utilise their most critical data in the future. However, companies can’t afford to neglect the basics of sound data management in the present. Many of tomorrow’s most exciting solutions depend on data that has already been centralised, cleaned up and correctly labelled. Automation may take over many of the day-to-day requirements of data management, but employees will still have to know where their company’s data is to make the most of it.

Data responsibility and best practice have to be taught first. Databases often fragment or bloat because employees lack strong guidelines – data leaders need to step in here, training employees in the correct use of metadata and discouraging unnecessary copying. Organisations should then encourage the adoption of data management tools that break down silos and help employees see what data they have and where it is at all times. Once the organisation has complete visibility, it can then look to automation tools.

In the data deluge, will organisations sink or swim? The answer depends on what they do now to deliver data protection, performance, accessibility and intelligence.