If AI was built to simplify business and technological complexities, why is it making cybersecurity harder?
More than 30 years ago, I was fascinated by the concept of simplicity championed by the lateral thinking pioneer Edward de Bono. De Bono argued that, with the right creative and lateral thinking skills, complex problems can be simplified into elegant, manageable solutions.
However, while he acknowledged the danger of oversimplification, he probably didn’t foresee the concept that would emerge alongside it: the “simplification paradox”.
While simplicity is the desired outcome, achieving it often requires complex and thoughtful effort. In fact, the simpler we want a solution to be, the more complex the underlying thought processes, workflows and design must be.
Any oversimplification, flawed data, or poorly considered design can translate into a negative user experience.
Advanced AI technologies are the modern embodiment of de Bono’s principle. Their intricate designs and sophisticated algorithms are precisely what enable them to absorb business complexity and deliver outcomes of unprecedented simplicity and performance.
In a business landscape obsessed with simplification, leaders are constantly seeking ways to streamline operations, reduce friction, and enhance productivity.
The irony is that today’s most powerful tool for achieving this is also the biggest source of difficulty in securing business infrastructure, data and operations.
AI’s intrinsic complexity
The power of modern AI to deliver simplicity stems directly from its intrinsic complexity. Advanced AI technologies are not simple scripts; they are sophisticated systems designed to mimic and exceed human cognitive abilities on a massive scale.
At the core of AI business applications are technologies like neural networks, which process vast datasets to identify patterns invisible to the human eye, and generative AI, which creates new content by learning patterns from large datasets. Building on these, emerging tools such as agentic AI allow systems to move beyond simple automation to autonomously understand goals and execute complex, multi-step tasks.
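To make the contrast with a simple script concrete, here is a minimal, illustrative Python sketch of the agentic pattern described above: a goal is decomposed into ordered steps, and each step is dispatched to a “tool” in a loop. The planner, the tools and the example goal are hypothetical placeholders, not any specific vendor’s framework.

```python
# Illustrative sketch of an agentic loop: a goal is broken into steps,
# each step is dispatched to a "tool", and the results are collected.
# All names here (plan_steps, TOOLS, the example goal) are hypothetical.

from dataclasses import dataclass

@dataclass
class Step:
    tool: str      # which capability to invoke
    payload: str   # what to pass to it

def plan_steps(goal: str) -> list[Step]:
    """Stand-in for an AI planner: decompose a goal into ordered steps.
    A real system would call a language model here."""
    return [
        Step("search", f"background on: {goal}"),
        Step("summarize", "condense the findings"),
        Step("draft", f"write a recommendation for: {goal}"),
    ]

# Hypothetical "tools" the agent can call; a real agent would wrap APIs,
# databases or internal services behind a similar interface.
TOOLS = {
    "search": lambda payload: f"[documents found for '{payload}']",
    "summarize": lambda payload: f"[summary produced: {payload}]",
    "draft": lambda payload: f"[draft created: {payload}]",
}

def run_agent(goal: str) -> list[str]:
    """Execute each planned step and collect the results."""
    results = []
    for step in plan_steps(goal):
        results.append(TOOLS[step.tool](step.payload))
    return results

if __name__ == "__main__":
    for line in run_agent("reduce invoice-processing friction"):
        print(line)
```

Even in this toy form, the security implication is visible: every “tool” the agent can reach is another access path into data and systems, which is precisely where the complexity discussed later creeps in.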
The “black box” effect, where complex algorithms produce a simple output, is a perfect illustration of de Bono’s concept in action. This evolution marks a shift from basic automation to intelligent simplification, where technology actively reduces the mental effort required from users.
When this profound technological complexity is properly harnessed, it translates into tangible simplicity across the enterprise – whether automating workflows and processes for enhanced agility, or empowering faster and better decision-making with real-time insights, or elevating customer experience.
The financial impact is also clear, as companies investing heavily in AI report substantially higher revenue and profit compared to non-adopters. This simplification-driven performance becomes a decisive competitive advantage, allowing businesses to operate with greater speed, intelligence, and customer focus.
As Srikanth Seshadri, Chief Solution Architect, HPE, put it: “AI adoption is no longer experimental, but it’s mainstream. In today’s AI-driven era, enterprises see AI as a catalyst for productivity, agility, and innovation, making it a critical lever for sustained growth in an AI-led marketplace.”
According to Deloitte’s report, AI for Business: APAC trends in AI platform adoption, investments in AI across APAC are projected to grow at a compound annual growth rate of 24% through 2028.
The AI security paradox
“However, the real story isn’t the innovation payoff,” Srikanth continued. “It’s the security bill that comes with it. As enterprises plug in more AI models and tools, they are inadvertently multiplying risks, such as security stacks getting more complex, sensitive data being more exposed, and recovery in the wake of an attack proving more challenging than ever.”
The paradox is clear: AI is meant to simplify business, but in practice, it is complicating security. According to the 2025 Global Study on Closing the IT Security Gap by Ponemon Institute, sponsored by HPE:
- 57% of organizations cite increased security complexity as their top AI risk
- 47% fear confidential data leakage
- 44% worry about recovering lost data after an attack
“In other words, the very tools meant to give enterprises an edge are also opening fresh blind spots for attackers to exploit,” said Srikanth. “This ‘AI trust gap’ mirrors a broader industry pattern: organizations rush to adopt new technologies, only to discover that the weakest link isn’t the tool itself. It’s the governance and resilience surrounding it.”
As enterprises accelerate AI adoption, Srikanth emphasized that bridging this gap has become a strategic imperative. “It requires a multi-pronged approach: ensuring transparency in how models function, embedding strong governance frameworks, and proactively addressing bias to promote fairness.”
Some key guardrails he advised organizations to put in place include:
- Prioritize explainability to help safeguard security and privacy
- Maintain human oversight in critical decision-making (see the sketch after this list)
- Align with global standards
- Educate stakeholders to foster confidence and accountability
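As one illustration of the human-oversight guardrail above, the following minimal Python sketch gates high-impact or low-confidence AI decisions behind a human approval step before anything is executed. The thresholds, action names and the `require_human_approval` helper are hypothetical, shown only to make the pattern concrete; they are not drawn from any specific product.

```python
# Minimal human-in-the-loop guardrail sketch: AI-proposed actions that are
# high impact or low confidence are routed to a person before execution.
# Thresholds, action names and the approval step are assumed examples.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str          # e.g. "block_user_account"
    impact: str        # "low", "medium" or "high"
    confidence: float  # the model's confidence in its recommendation (0-1)

CONFIDENCE_FLOOR = 0.85   # below this, always ask a human (assumed value)
HIGH_IMPACT = {"high"}    # impact levels that always need sign-off

def require_human_approval(action: ProposedAction) -> bool:
    """Simulate a human review step; a real system would open a ticket
    or page an on-call reviewer instead of prompting on the console."""
    answer = input(f"Approve '{action.name}' (impact={action.impact}, "
                   f"confidence={action.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.name}")

def handle(action: ProposedAction) -> None:
    """Apply the guardrail: auto-execute only safe, confident decisions."""
    needs_review = (
        action.impact in HIGH_IMPACT
        or action.confidence < CONFIDENCE_FLOOR
    )
    if needs_review and not require_human_approval(action):
        print(f"Held for review: {action.name}")
        return
    execute(action)

if __name__ == "__main__":
    handle(ProposedAction("quarantine_email", "low", 0.97))     # auto-runs
    handle(ProposedAction("block_user_account", "high", 0.92))  # asks a human
```

The specific thresholds matter less than the pattern: the AI proposes, and the guardrail decides whether a person must confirm before the action takes effect.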
He concluded: “It is important to unlock the full potential of AI responsibly and sustainably; however, without these guardrails, AI could become the hacker’s best friend rather than the enterprise’s most powerful ally.”