By Erica Langhi – Enterprises today are looking towards AI adoption to stay competitive, better understand their customers, and uncover efficiencies.

But while excitement continues to grow around AI’s potential, many initiatives will ultimately struggle to gain traction. A primary culprit is the lack of a collaborative platform supported by robust hybrid cloud infrastructure. Without hybrid cloud underpinning your AI strategy, success remains elusive.

The promise of AI proves hard to ignore. New AI-powered tools help enterprises work smarter by automating mundane tasks. They also provide sharper insights from data that can transform customer experiences, uncover cost savings, and reveal new opportunities. With many leading companies now touting AI capabilities, almost every CIO feels the pressure to pursue AI or risk falling behind the competition.

But the reality is that enterprises struggle to convert AI projects from pilot to production. The associated costs and complexity overwhelm data science teams that lack the right operational maturity. Infrastructure can’t meet the heavy demands of AI workloads. Silos between developers, data engineers and IT ops slow progress.


Trust Through Model Explainability

In the realm of AI, trust is paramount. Model explainability becomes a crucial factor in establishing that trust, addressing concerns about the ‘black box’ nature of large machine learning models. Many enterprises are hesitant to adopt AI due to understandable scepticism around trusting model outputs. How can you be confident that AI recommendations accurately reflect reality? This is especially concerning for risk-averse industries like healthcare and financial services.

Model explainability is not just about understanding the model’s inner workings; it’s about ensuring that the model has been trained on verified, proprietary, contextual data. The most valuable data for enterprise use cases remains the proprietary data locked away on legacy systems and in private data centres. Models trained on cleaned, validated, and enriched proprietary data assets can instil confidence that any AI outputs derive from real-world, truthful data unique to your organisation.

For example, training a customer service chatbot on years of accurately tagged customer call transcripts ensures its responses reflect real customer conversations rather than mimicking generic online dialogue. Similarly, because the models behind Ansible Lightspeed are trained on real, working Ansible playbooks, their outputs are not just theoretically sound; they are practical and workable.

The verified data flows through hybrid pipelines into the models. So when deployed AI drives decisions, provides recommendations, or even automatically generates code, you’re able to explain what factors and data trained the model. This transparency establishes justified trust and confidence in adopted AI.
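
To make this explainability concrete, here is a minimal sketch of the idea in Python, included purely for illustration: a small lineage record is written alongside a trained model so that, once deployed, its behaviour can be traced back to the verified datasets that shaped it. The function name, fields and file layout are hypothetical examples, not part of any particular product.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_training_provenance(model_name, dataset_paths):
        """Write a simple lineage record next to the model artefact.

        Illustrative sketch: in practice you would fingerprint the dataset
        contents (and their validation reports), not just the file paths.
        """
        record = {
            "model": model_name,
            "trained_at": datetime.now(timezone.utc).isoformat(),
            "datasets": [
                {
                    "path": path,
                    # stand-in fingerprint so the record can be checked later
                    "fingerprint": hashlib.sha256(path.encode("utf-8")).hexdigest(),
                }
                for path in dataset_paths
            ],
        }
        with open(f"{model_name}_provenance.json", "w") as f:
            json.dump(record, f, indent=2)
        return record

    # Example: tracing the chatbot above back to its tagged call transcripts
    record_training_provenance("support-chatbot-v3",
                               ["transcripts/2019-2023_tagged.parquet"])

However simple, a record like this is what lets you answer the question ‘what data trained this model?’ when an auditor or customer asks.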

The big problem with this approach is that many organisations, especially highly regulated ones, are hesitant to put proprietary data in the cloud. In some cases they simply cannot, owing to legal and regulatory requirements. Keeping data on premises is therefore a must.


Flexibility with Burstable Resources  

This is where we encounter the next big problem: AI model development and training soaks up massive compute cycles, well beyond the capacity of traditional data centres. The variable nature of data science work also demands that infrastructure scales flexibly up and down, which creates an undeniable need for the compute power and scalability that the public cloud offers.

Yet public cloud costs can spiral out of control without proper governance. What data science teams require is flexible access to public cloud resources that burst from a private cloud foundation. A hybrid model provides the most cost-efficient and agile training environment by eliminating unused capacity: it allows public cloud consumption only when necessary to meet temporary demands, whilst also enabling data to remain on premises.
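
The ‘burst only when necessary’ principle can be expressed very simply. The sketch below, again illustrative Python with made-up names rather than any real scheduler API, keeps regulated data on the private cloud and only sends a training job to public cloud capacity when local GPUs are exhausted.

    from dataclasses import dataclass

    @dataclass
    class TrainingJob:
        name: str
        gpus_needed: int
        data_must_stay_on_prem: bool  # e.g. regulated or proprietary datasets

    def place_job(job, on_prem_gpus_free):
        """Decide where a training job runs in a hybrid set-up.

        Regulated data never leaves the private cloud; everything else
        bursts to the public cloud only when local capacity is exhausted.
        """
        if job.data_must_stay_on_prem:
            return "private-cloud"
        if job.gpus_needed <= on_prem_gpus_free:
            return "private-cloud"
        return "public-cloud-burst"

    # A 32-GPU job on non-sensitive data bursts when only 8 local GPUs are free
    print(place_job(TrainingJob("vision-pretrain", 32, False), on_prem_gpus_free=8))

In practice this decision is made by the platform rather than hand-written code, but the policy it encodes is the same: local first, cloud only for the peaks.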

An additional benefit of the hybrid approach centres on Environmental, Social and Governance (ESG) issues. As consumers and customers become increasingly motivated by ESG concerns, they are moving their spending power to organisations with an established ESG framework. Hybrid cloud offers a balanced approach to managing costs and environmental sustainability: organisations can optimise resources based on specific project requirements, ensuring that AI initiatives remain both cost-effective and environmentally responsible. The flexibility of a hybrid cloud allows dynamic allocation of resources, preventing unnecessary expenditure and reducing the overall carbon footprint associated with AI model training.

The journey toward AI excellence involves striking a delicate balance. The era of AI demands not only technical prowess but also strategic acumen in managing proprietary data, ensuring legal compliance, and optimising resources. The hybrid cloud emerges as the linchpin in this narrative, offering a holistic solution that aligns the potential of AI with the imperatives of modern enterprise governance. As the AI landscape continues to evolve, embracing a hybrid cloud-centric strategy is not just a choice; it’s an imperative for success.


Erica Langhi is a senior solutions architect, EMEA, at Red Hat