Features

Redefining the frontlines of digital defense

By Victor Ng | Tuesday, July 1, 2025, 3:16 PM Asia/Singapore

Clearing the myths and misunderstandings about AI, its role in financial cybercrime, and how it should be leveraged in digital defense and riskops.

At the recent SuperAI event held in Singapore, Nuno Sebastião, CEO and co-founder of Feedzai, and one of Europe’s leading voices on AI in financial crime prevention, participated in a panel discussion titled Cybersecurity Redefined: Digital Defense in the Age of AI.

The discussion explored how AI is reshaping the frontlines of digital defense, particularly in sectors like banking, where the stakes are high and the threats increasingly complex. Following up on the subject, we sought further insights from Sebastião in an exclusive Q&A:

How is AI reshaping the frontlines of digital defense today?

Sebastião: Across all industries, and especially in financial services, AI is enabling criminals to launch much faster and more realistic scams using deepfakes, synthetic identities, and hyper-personalized techniques. In response, banks are fighting back to protect themselves and their customers by adopting AI-native “riskops” platforms that automate decision-making for new account openings, fraud detection, and anti-money-laundering checks. These platforms help banks assess risk across trillions of transactions with minimal friction for consumers.

Yet, not all AI platforms are created equal. Banks need to be able to trust the data that their AI platforms are trained upon. They need to know their AI systems are ethical and responsible with regards to consumer protection, fair lending, and credit underwriting. Their systems need to be explainable, helping to build trust with both regulators and end users by making every decision transparent and auditable.

And as synthetic content and manipulated communications become more common, AI-native riskops platforms allow banks to spot and neutralize threats before they cause harm.

Is the fear of AI replacing humans in fraud detection a myth? What’s your perspective?

Sebastião: Part myth, part misunderstanding. Our State of AI report revealed that nearly half of fraud professionals worry about being replaced by AI. But here’s the reality: AI isn’t replacing fraud teams — it’s upgrading them.

AI takes the grunt work off their plates — reviewing transactions and triaging alerts — so analysts can focus on what humans do best: complex investigations, nuanced judgment, and strategic decision-making.

Financial institutions, especially, cannot afford to treat AI as a black box. Why is that so, and how is that impacting product design?

Sebastião: In finance, opacity is a liability. Every AI-driven decision, from declining a transaction to flagging a new customer, must be explainable. If a bank can’t justify why a system blocked someone’s account, it isn’t just risking a bad customer experience; it’s courting regulatory trouble.

That’s why explainability is no longer a “nice to have”; it’s a design mandate. The smartest AI platforms today don’t just detect risk; they show their work. Feedzai IQ, for example, gives real-time reasoning behind every decision, making it easy for teams to audit, adjust, and trust the system.
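To make the idea concrete, here is a minimal sketch of decision-level explainability for a toy fraud score. Everything here, including the feature names, weights, and threshold, is hypothetical and is not Feedzai IQ’s actual model or API; it only illustrates how each decision can carry an auditable list of the features that drove it.

```python
import math

# Hypothetical feature weights for a toy fraud-scoring model
# (illustrative only; not any vendor's real model).
WEIGHTS = {
    "amount_zscore": 1.8,  # how unusual the amount is for this customer
    "new_device": 2.1,     # 1.0 if the device was never seen before
    "night_time": 0.6,     # 1.0 if outside the customer's usual hours
    "foreign_ip": 1.4,     # 1.0 if the IP geolocates abroad
}
BIAS = -4.0
THRESHOLD = 0.5

def score_with_reasons(features: dict) -> dict:
    """Score a transaction and record each feature's contribution."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return {
        "fraud_probability": round(prob, 3),
        "decision": "decline" if prob >= THRESHOLD else "approve",
        # Contributions sorted by impact form the auditable "reasoning" trail.
        "top_reasons": sorted(contributions.items(),
                              key=lambda kv: kv[1], reverse=True),
    }

record = score_with_reasons(
    {"amount_zscore": 2.5, "new_device": 1.0,
     "night_time": 0.0, "foreign_ip": 1.0}
)
```

The point is the shape of the output: every decision ships with the ranked feature contributions behind it, so an analyst or regulator can see at a glance that, say, an unusual amount on a brand-new device drove the decline.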

We’re also seeing a shift toward privacy-preserving tech like federated learning, which lets banks collaborate on AI training without exposing sensitive data.
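Federated learning can be sketched in a few lines: each bank trains on its own private data and shares only model weights, which a coordinator averages (the FedAvg pattern). The two-bank setup, toy linear model, and synthetic data below are illustrative assumptions, not any vendor’s implementation.

```python
import random

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a bank's private data (y ≈ w·x)."""
    w = weights[:]
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_average(updates):
    """The server averages the banks' weight vectors;
    raw transaction data never leaves a bank."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Two banks with private datasets drawn from the same rule y = 2*x0 + 1*x1
random.seed(0)
def make_data(k):
    out = []
    for _ in range(k):
        x = [random.random(), random.random()]
        out.append((x, 2 * x[0] + 1 * x[1]))
    return out

global_w = [0.0, 0.0]
bank_data = [make_data(50), make_data(50)]
for _ in range(200):  # each round: local training, then weight averaging
    global_w = federated_average(
        [local_update(global_w, d) for d in bank_data]
    )
```

After a few hundred rounds the shared model recovers the underlying pattern even though neither bank ever saw the other’s records, which is the privacy property the technique is valued for.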

The key takeaway is that powerful AI alone isn’t enough. In finance, AI must also be transparent, fair, and accountable, because trust is a product requirement, not a bonus.

Responsible and ethical use of AI is a top-of-mind concern for highly regulated industries and organizations handling sensitive data. What is a practical approach to building trustworthy AI from the ground up — ensuring systems enhance security, fairness, and performance as they become central to decision-making under increasing public and regulatory scrutiny?

Sebastião: Don’t build AI like it’s a shiny object. Build it like it’s going to be subpoenaed.

In highly regulated industries, trust isn’t a buzzword — it’s a survival strategy. You’re not just building models; you’re building systems that will be questioned by regulators, tested by bad actors, and scrutinized by customers who don’t care how smart your tech is if it locks them out of their own money.

So, where do you start? With architecture, not aspiration:

  1. Design for explainability from day one. If a system can’t explain why it made a decision, it’s not a system you should trust or deploy.
  2. Make fairness and bias mitigation ongoing processes, not one-time audits. Bias creeps in silently. You need tools that detect it before regulators do.
  3. Build privacy into the DNA. Techniques like federated learning and differential privacy shouldn’t be “add-ons.” They’re the baseline in sectors handling sensitive data.
  4. Test for failure, not just success. How does your AI behave under stress, in edge cases, when the data gets weird? That’s where real trust is earned.
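As one concrete example of treating bias mitigation as an ongoing process (point 2 above), a monitoring job can recompute group approval rates on recent decisions and raise an alert when any group falls below four-fifths of the best-performing group’s rate, a common rule of thumb for disparate impact. The group labels and sample data here are made up for illustration.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_alert(decisions, threshold=0.8):
    """Flag any group whose approval rate is below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Synthetic batch: group A approved 90%, group B approved 60%.
sample = ([("A", True)] * 90 + [("A", False)] * 10
          + [("B", True)] * 60 + [("B", False)] * 40)
alerts = disparate_impact_alert(sample)
```

Run continuously over a sliding window of recent decisions rather than as a one-time audit, this is the kind of check that surfaces silently creeping bias before a regulator does.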

At Feedzai, we call this the TRUST framework — Transparent, Robust, Unbiased, Secure, and Tested. But the big idea is simpler: Trustworthy AI doesn’t happen by accident. It’s engineered. Because at the end of the day, your AI doesn’t just need to be powerful. It needs to be defensible.

Copyright © 2025 CybersecAsia All Rights Reserved.