CybersecAsia | Features
Redefining the frontlines of digital defense

By Victor Ng | Tuesday, July 1, 2025, 3:16 PM Asia/Singapore

Clearing the myths and misunderstandings about AI, its role in financial cybercrime, and how it should be leveraged in digital defense and riskops.

At the recent SuperAI event held in Singapore, Nuno Sebastião, CEO and co-founder of Feedzai – and one of Europe’s leading voices on AI in financial crime prevention – participated in a panel discussion titled Cybersecurity Redefined: Digital Defense in the Age of AI.

The discussion explored how AI is reshaping the frontlines of digital defense – particularly in sectors like banking, where the stakes are high and the threats increasingly complex. Following up on the subject, we sought further insights from Sebastião in an exclusive Q&A:

How is AI reshaping the frontlines of digital defense today?

Sebastião: Across all industries, and especially in financial services, AI is enabling criminals to launch much faster and more realistic scams using deepfakes, synthetic identities, and hyper-personalized techniques. In response, banks are fighting back to protect themselves and their customers by adopting AI-native “riskops” platforms that automate decision-making for new account openings, fraud detection, and anti-money-laundering checks. These platforms help banks assess risk so they can process trillions of transactions with minimal friction for consumers.

Yet, not all AI platforms are created equal. Banks need to be able to trust the data that their AI platforms are trained upon. They need to know their AI systems are ethical and responsible with regard to consumer protection, fair lending, and credit underwriting. Their systems need to be explainable, helping to build trust with both regulators and end users by making every decision transparent and auditable.

And as synthetic content and manipulated communications become more common, AI-native riskops platforms allow banks to spot and neutralize threats before they cause harm.

Is the fear of AI replacing humans in fraud detection a myth? What’s your perspective?

Sebastião: Part myth, part misunderstanding. Our State of AI report revealed that nearly half of fraud professionals worry about being replaced by AI. But here’s the reality: AI isn’t replacing fraud teams — it’s upgrading them.

AI takes the grunt work off their plates — reviewing transactions and triaging alerts — so analysts can focus on what humans do best: complex investigations, nuanced judgment, and strategic decision-making.

Financial institutions, especially, cannot afford to treat AI as a black box. Why is that so, and how is that impacting product design?

Sebastião: In finance, opacity is a liability. Every AI-driven decision, from declining a transaction to flagging a new customer, must be explainable. If a bank can’t justify why a system blocked someone’s account, it’s not just risking a bad customer experience; it’s courting regulatory trouble.

That’s why explainability is no longer a “nice to have”; it’s a design mandate. The smartest AI platforms today don’t just detect risk, they show their work. Feedzai IQ, for example, gives real-time reasoning behind every decision, making it easy for teams to audit, adjust, and trust the system.

We’re also seeing a shift toward privacy-preserving tech like federated learning, which lets banks collaborate on AI training without exposing sensitive data.
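The federated learning approach mentioned above can be illustrated with a minimal sketch: each bank trains a model locally on its own data, and only the model weights – never the raw transactions – are sent to a coordinator for averaging. The simple logistic model, synthetic data, and weighted-averaging scheme below are illustrative assumptions, not a description of any vendor's actual implementation.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One bank's local training step (plain gradient descent
    on a logistic loss); raw data never leaves the bank."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))        # sigmoid
        grad = X.T @ (preds - y) / len(y)       # logistic loss gradient
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Coordinator aggregates weights, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)

# Two banks with private (here, synthetic) datasets.
bank_data = [
    (rng.normal(size=(100, 3)), rng.integers(0, 2, 100)),
    (rng.normal(size=(200, 3)), rng.integers(0, 2, 200)),
]

for _ in range(5):  # a few communication rounds
    local_ws = [local_train(global_w, X, y) for X, y in bank_data]
    global_w = federated_average(local_ws, [len(y) for _, y in bank_data])

print(global_w.shape)  # a shared model, with no transactions exchanged
```

Only the weight vectors cross organizational boundaries in each round; production systems would add secure aggregation or differential-privacy noise on top of this basic loop.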

The key takeaway: powerful AI alone isn’t enough. In finance, AI must also be transparent, fair, and accountable — because trust is a product requirement, not a bonus.

Responsible and ethical use of AI is a top-of-mind concern for highly regulated industries and organizations handling sensitive data. What is a practical approach to building trustworthy AI from the ground up — ensuring systems enhance security, fairness, and performance as they become central to decision-making under increasing public and regulatory scrutiny?

Sebastião: Don’t build AI like it’s a shiny object. Build it like it’s going to be subpoenaed.

In highly regulated industries, trust isn’t a buzzword — it’s a survival strategy. You’re not just building models; you’re building systems that will be questioned by regulators, tested by bad actors, and scrutinized by customers who don’t care how smart your tech is if it locks them out of their own money.

So, where do you start? With architecture, not aspiration:

  1. Design for explainability from day one. If a system can’t explain why it made a decision, it’s not a system you should trust or deploy.
  2. Make fairness and bias mitigation ongoing processes, not one-time audits. Bias creeps in silently. You need tools that detect it before regulators do.
  3. Build privacy into the DNA. Techniques like federated learning and differential privacy shouldn’t be “add-ons.” They’re the baseline in sectors handling sensitive data.
  4. Test for failure, not just success. How does your AI behave under stress, in edge cases, when the data gets weird? That’s where real trust is earned.

At Feedzai, we call this the TRUST framework — Transparent, Robust, Unbiased, Secure, and Tested. But the big idea is simpler: Trustworthy AI doesn’t happen by accident. It’s engineered. Because at the end of the day, your AI doesn’t just need to be powerful. It needs to be defensible.
