Industry leaders and practitioners gathered at the Splunk .conf25 conference to discuss — among other key issues — agentic AI verification and ethical frameworks amid rising threats.
How can organizations verify whether their AI agents are hallucinating, or falling under the control of bad actors or insiders? In an era of exploding identity fraud and deepfake phishing, can manual oversight keep pace with machine-scale threats?
At the Splunk .conf25 event, CybersecAsia.net interviewed Hao Yang, Vice President, Artificial Intelligence, Splunk, about the mounting pressure on many organizations to deploy AI agents, chatbots and applications, as well as how they are wrestling with verifying autonomous AI agents to ensure resilience against fraud and deepfake-driven attacks.
Operational efficiency and ethical AI frameworks
In the interview, Hao Yang highlighted the core challenge: ensuring accurate, high-quality outputs without errors or manipulation.
He noted: “We deploy applications. How do we know if they are doing what they are supposed to, and not being controlled by bad actors?” This extends to operational efficiency, where firms seek to empower analysts. “Customers want to be more efficient and allow analysts to do more,” Yang explained. “That is the biggest trend among IT leaders right now.”
Drawing from interactions with global customers, Hao Yang revealed: “Whenever I talk to customers, the first question is on agentic AI and how it will work in their domains.” Large corporations are investing heavily to reshape operations through AI, and seeking help to build resilience for AI infrastructure and AI-driven system oversight. His advice? Use a dual approach to digital resilience:
- Fortify all systems for AI
- Use AI to detect anomalies
Financial firms, for instance — including community banks expanding collaboratively — rely on such tools for comprehensive monitoring, Hao Yang noted, pointing to longstanding AI use in payments, credit, and insurance. “These industries have used AI for years, but with regulations ensuring ethical application.”
The next growth phase for the industry hinges on ethical frameworks that enforce explainable decisions, from credit approvals to fraud detection, amid regulatory scrutiny.
Transparency is critical under mandates like those in the US and key financial markets in Asia Pacific, where banks must explain credit denials in plain language, not opaque algorithms. “It has to be human-understandable, not just pointing to the algorithm,” Yang said. This enforces accountability, curbing bias in high-stakes decisions.
Citing some of his own firm’s case studies in APAC, he asserted that cloud migration with hybrid visibility (i.e., full visibility into both cloud and on-premises resources) enables clients to manage their hybrid operational environments more easily.
Three pain points
One area of concern was cross-border fraud, which overwhelms manual detection due to data volume. How can fintechs tackle the problem? “Manual efforts cannot catch it all,” Yang observed. Machine learning enables real-time pattern analysis and remediation.
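Yang’s point about machine-scale pattern analysis can be illustrated with a minimal, hypothetical sketch: flagging transactions whose amounts deviate sharply from an account’s recent history, using a robust median-based score. This is a toy stand-in, not Splunk’s method; production fraud systems use far richer features and models.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag amounts far from the account's typical behavior.

    Uses the median absolute deviation (MAD), which -- unlike a
    plain mean/standard deviation -- is not itself distorted by
    the very outliers we are trying to catch.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no variation at all, nothing to score against
    return [a for a in amounts if abs(a - med) / mad > threshold]

# Mostly routine payments, plus one outsized transfer
history = [120.0, 98.5, 110.0, 105.2, 99.9, 102.3, 97.8, 25000.0]
print(flag_anomalies(history))  # -> [25000.0]
```

The same scoring runs in constant time per transaction once the baseline is computed, which is what makes this kind of check viable at volumes manual review cannot handle.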
Next, security leaders are increasingly facing AI-amplified threats such as deepfake account takeovers. Hao Yang cited a Hong Kong bank incident where fraudsters had mimicked a Chief Financial Officer convincingly in a Zoom call. “Phishing has become way more dangerous with AI, including deep fakes,” he warned. “CISOs must double-check suspicious activity using network signals like source IP.”
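Yang’s advice to “double-check suspicious activity using network signals like source IP” can be sketched as a simple cross-check of a session’s source address against known corporate ranges. The ranges below are hypothetical placeholders; in practice they would come from the organization’s own asset inventory.

```python
import ipaddress

# Hypothetical trusted ranges -- assumed for illustration only.
TRUSTED_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal network
    ipaddress.ip_network("203.0.113.0/24"),   # documentation range, stand-in for a corporate egress block
]

def is_from_trusted_network(source_ip: str) -> bool:
    """Return True if the session's source IP falls inside a known
    corporate range -- one of the network signals a CISO can use to
    corroborate (or cast doubt on) a video call or login."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_RANGES)

print(is_from_trusted_network("10.12.34.56"))   # inside 10.0.0.0/8 -> True
print(is_from_trusted_network("198.51.100.7"))  # outside both ranges -> False
```

A signal like this is cheap to automate, which matters when the fraudulent request itself (a convincing deepfake on a Zoom call) looks entirely legitimate to a human.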
Finally, there was the problem of training gig workers to handle sensitive data, a group that usually lacks data-literacy skills. To this, Hao Yang suggested: “These folks are typically not tech savvy, so you have to make things simple for them, because if you require them to do a lot of steps and processes, these folks may not be able to (cope). You ideally want to make them have basically zero efforts, zero computer and they don’t need to do anything. AI has to play a huge role there making the experience smooth among the gig workers.”
On a concluding note, Hao Yang noted the increase in the number of “Chief AI Officers” in various organizations. As fintech organizations expand their digital footprints and confront sophisticated threats, the question, Yang implied, is no longer whether AI will shape the future of finance — but how responsibly and intelligently it will be governed.