Redefining the frontlines of digital defense

By Victor Ng | Tuesday, July 1, 2025, 3:16 PM Asia/Singapore

Clearing the myths and misunderstandings about AI, its role in financial cybercrime, and how it should be leveraged in digital defense and riskops.

At the recent SuperAI event held in Singapore, Nuno Sebastião, CEO and Co-Founder of Feedzai – and one of Europe’s leading voices on AI in financial crime prevention – participated in a panel discussion titled Cybersecurity Redefined: Digital Defense in the Age of AI.

The discussion explored how AI is reshaping the frontlines of digital defense – particularly in sectors like banking, where the stakes are high and the threats increasingly complex. Following up on the subject, we sought further insights from Sebastião in an exclusive Q&A:

How is AI reshaping the frontlines of digital defense today?

Sebastião: Across all industries, and especially in financial services, AI is enabling criminals to launch much faster and more realistic scams using deepfakes, synthetic identities, and hyper-personalized techniques. In response, banks are fighting back to protect themselves and their consumers by adopting AI-native “riskops” platforms that automate decision-making for new account openings, fraud detection, and anti-money laundering checks. These platforms help them assess risk across trillions of transactions with minimal friction for consumers.
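
To illustrate the kind of automated decisioning such a platform performs, here is a minimal, hypothetical Python sketch. The signals, weights, and thresholds are invented for illustration only and do not describe Feedzai’s products; a real riskops platform would score transactions with trained models over far richer data.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float            # transaction amount in USD
        account_age_days: int    # how long the account has existed
        is_new_device: bool      # first time this device has been seen
        country_mismatch: bool   # IP country differs from card country

    def risk_score(txn: Transaction) -> float:
        """Toy additive risk score in [0, 1]; purely illustrative."""
        score = 0.0
        if txn.amount > 5_000:
            score += 0.4
        if txn.account_age_days < 30:
            score += 0.2
        if txn.is_new_device:
            score += 0.2
        if txn.country_mismatch:
            score += 0.3
        return min(score, 1.0)

    def decide(txn: Transaction) -> str:
        """Map the score to an action, keeping friction low for low-risk users."""
        score = risk_score(txn)
        if score >= 0.7:
            return "decline"
        if score >= 0.4:
            return "step_up_authentication"  # e.g. request an OTP
        return "approve"

    print(decide(Transaction(amount=8_200, account_age_days=12,
                             is_new_device=True, country_mismatch=False)))
    # -> decline (score 0.8)

The point of the automation is the routing: the bulk of legitimate transactions flow through untouched, and only the risky minority is challenged or blocked.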

Yet, not all AI platforms are created equal. Banks need to be able to trust the data that their AI platforms are trained upon. They need to know their AI systems are ethical and responsible with regard to consumer protection, fair lending, and credit underwriting. Their systems need to be explainable, helping to build trust with both regulators and end users by making every decision transparent and auditable.

And as synthetic content and manipulated communications become more common, AI-native riskops platforms allow banks to spot and neutralize threats before they cause harm.

Is the fear of AI replacing humans in fraud detection a myth? What’s your perspective?

Sebastião: Part myth, part misunderstanding. Our State of AI report revealed that nearly half of fraud professionals worry about being replaced by AI. But here’s the reality: AI isn’t replacing fraud teams — it’s upgrading them.

AI takes the grunt work off their plates — reviewing transactions and triaging alerts — so analysts can focus on what humans do best: complex investigations, nuanced judgment, and strategic decision-making.

Financial institutions, especially, cannot afford to treat AI as a black box. Why is that so, and how is that impacting product design?

Sebastião: In finance, opacity is a liability. Every AI-driven decision, from declining a transaction to flagging a new customer, must be explainable. If a bank can’t justify why a system blocked someone’s account, it isn’t just risking a bad customer experience; it’s courting regulatory trouble.

That’s why explainability is no longer a “nice to have”; it’s a design mandate. The smartest AI platforms today don’t just detect risk, they show their work. Feedzai IQ, for example, gives real-time reasoning behind every decision, making it easy for teams to audit, adjust, and trust the system.
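
To make “showing its work” concrete, here is a minimal sketch of a decision object that records a human-readable reason for every signal that contributed to the score. The signal names and weights are hypothetical and are not a description of Feedzai IQ; they only illustrate how each decision can carry its own audit trail.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        action: str                                        # "approve", "review" or "decline"
        score: float                                       # overall risk score
        reasons: list[str] = field(default_factory=list)   # audit trail for the decision

    def score_with_reasons(signals: dict[str, float]) -> Decision:
        """Sum weighted signals and record each contribution so the outcome
        can be explained to analysts, customers, and regulators."""
        score, reasons = 0.0, []
        for name, weight in signals.items():
            score += weight
            reasons.append(f"{name} added +{weight:.2f} to the risk score")
        action = "decline" if score >= 0.7 else "review" if score >= 0.4 else "approve"
        return Decision(action=action, score=round(score, 2), reasons=reasons)

    decision = score_with_reasons({
        "velocity: 5 transfers in 10 minutes": 0.35,
        "payee never seen before": 0.25,
        "login from a new device": 0.15,
    })
    print(decision.action)            # decline
    for reason in decision.reasons:   # the trail an auditor or regulator can inspect
        print(" -", reason)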

We’re also seeing a shift toward privacy-preserving tech like federated learning, which lets banks collaborate on AI training without exposing sensitive data.
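
A minimal sketch of that idea, assuming a simple logistic-regression model and plain weight averaging (real deployments would add secure aggregation and other privacy protections): each bank trains on its own transactions and shares only model weights with a coordinator, so raw data never leaves the institution.

    import numpy as np

    def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                     lr: float = 0.1, epochs: int = 50) -> np.ndarray:
        """One bank fits a logistic-regression model on its private data
        and returns only the updated weights."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
            grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
            w -= lr * grad
        return w

    def federated_average(updates: list[np.ndarray]) -> np.ndarray:
        """The coordinator combines the banks' models without seeing their data."""
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    for _ in range(5):                              # a few federated rounds
        updates = []
        for _ in range(3):                          # three participating banks
            X = rng.normal(size=(200, 3))           # each bank's private features
            y = (X[:, 0] + X[:, 1] > 0).astype(float)
            updates.append(local_update(global_w, X, y))
        global_w = federated_average(updates)
    print("shared model weights:", np.round(global_w, 3))

Only the weight vectors cross institutional boundaries; the sensitive transaction records stay inside each bank.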

The key takeaway is that powerful AI alone isn’t enough. In finance, AI must also be transparent, fair, and accountable — because trust is a product requirement, not a bonus.

Responsible and ethical use of AI is a top-of-mind concern for highly regulated industries and organizations handling sensitive data. What is a practical approach to building trustworthy AI from the ground up — ensuring systems enhance security, fairness, and performance as they become central to decision-making under increasing public and regulatory scrutiny?

Sebastião: Don’t build AI like it’s a shiny object. Build it like it’s going to be subpoenaed.

In highly regulated industries, trust isn’t a buzzword — it’s a survival strategy. You’re not just building models; you’re building systems that will be questioned by regulators, tested by bad actors, and scrutinized by customers who don’t care how smart your tech is if it locks them out of their own money.

So, where do you start? With architecture, not aspiration:

  1. Design for explainability from day one. If a system can’t explain why it made a decision, it’s not a system you should trust or deploy.
  2. Make fairness and bias mitigation ongoing processes, not one-time audits. Bias creeps in silently. You need tools that detect it before regulators do (a minimal sketch of such a check follows this list).
  3. Build privacy into the DNA. Techniques like federated learning and differential privacy shouldn’t be “add-ons.” They’re the baseline in sectors handling sensitive data.
  4. Test for failure, not just success. How does your AI behave under stress, in edge cases, when the data gets weird? That’s where real trust is earned.
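
As a hypothetical illustration of point 2 above, the sketch below compares decline rates across customer segments on an ongoing basis and raises an alert when the gap crosses a threshold. The segments, data, and threshold are invented for illustration; a production system would use the fairness metrics appropriate to its regulatory context.

    from collections import defaultdict

    def decline_rate_gap(decisions: list[dict]) -> tuple[float, dict[str, float]]:
        """Compute decline rates per customer segment and the largest gap between them.
        Meant to run continuously on production decisions, not as a one-time audit."""
        totals: dict[str, int] = defaultdict(int)
        declines: dict[str, int] = defaultdict(int)
        for d in decisions:
            totals[d["segment"]] += 1
            declines[d["segment"]] += d["declined"]
        rates = {seg: declines[seg] / totals[seg] for seg in totals}
        gap = max(rates.values()) - min(rates.values())
        return gap, rates

    # hypothetical batch of recent decisions
    batch = (
        [{"segment": "domestic", "declined": 0}] * 90
        + [{"segment": "domestic", "declined": 1}] * 10
        + [{"segment": "cross_border", "declined": 0}] * 70
        + [{"segment": "cross_border", "declined": 1}] * 30
    )

    gap, rates = decline_rate_gap(batch)
    print(rates)    # {'domestic': 0.1, 'cross_border': 0.3}
    if gap > 0.15:  # alert threshold chosen purely for illustration
        print(f"ALERT: decline-rate gap of {gap:.0%} between segments, investigate for bias")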

At Feedzai, we call this the TRUST framework — Transparent, Robust, Unbiased, Secure, and Tested. But the big idea is simpler: Trustworthy AI doesn’t happen by accident. It’s engineered. Because at the end of the day, your AI doesn’t just need to be powerful. It needs to be defensible.
