As forensic investigators and courts confront AI-forged evidence across international borders, evidence collection and analysis protocols must be revamped.
Generative AI (GenAI) and autonomous agents have shifted the cybersecurity landscape from script-based attacks to advanced threats that destroy evidence trails or create forged footprints.
Malware can now even rewrite its own code in real time, so forensic investigators are moving away from analyzing traditional “digital fingerprints” toward correlating behavioral patterns and contextual signals for a fuller picture.
Consequently, judicial systems are shifting from analyzing isolated digital artifacts to piecing together a broader narrative of how evidence emerges. This requires greater international police cooperation and updated legislation, as automated threats cross borders rapidly and challenge digital forensics and evidence admissibility, according to Tony Anscombe, Chief Security Evangelist at ESET, in an email interview with CybersecAsia.net.
CybersecAsia: How has the definition of “admissible evidence” changed in court now that AI can easily forge digital trails?
Tony Anscombe (TA): Increasingly, legal systems around the world are becoming more cautious with digital evidence. A file, log entry, or recording alone carries little weight, as AI makes fabrication easier without obvious signs. The focus is less on the artifact in isolation and more on its provenance, access, and alignment with other data.
Investigators check for consistency across system logs, endpoint activity, network behavior, and external inputs. Confidence grows when these aspects align; if they do not, evidence is scrutinized even more.
In destructive malware cases (i.e., those involving permanent damage to or deletion of data and systems), conclusions on attribution to a particular APT group are not drawn from any single technical indicator. They have to be built up from repeated patterns, corroboration across environments, and behavior observed over time. Contextual validation now outweighs apparent authenticity.
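The corroboration approach described above can be illustrated with a minimal sketch: checking whether independent sources agree on when a claimed event occurred. The source names, timestamps, and tolerance window here are purely illustrative assumptions, not real forensic tooling.

```python
from datetime import datetime, timedelta

def corroborate(claimed_time, observations, tolerance=timedelta(minutes=5)):
    """Return the independent sources whose records fall within the
    tolerance window of a claimed event time (hypothetical example)."""
    return [src for src, t in observations.items()
            if abs(t - claimed_time) <= tolerance]

# Illustrative data: one claimed event, three independent records.
claimed = datetime(2025, 3, 1, 14, 0)
observations = {
    "system_log":   datetime(2025, 3, 1, 14, 1),
    "endpoint_edr": datetime(2025, 3, 1, 14, 2),
    "netflow":      datetime(2025, 3, 1, 9, 30),   # does not align
}

agreeing = corroborate(claimed, observations)
print(agreeing)            # sources that align with the claimed time
print(len(agreeing) >= 2)  # confidence grows as sources agree
```

The point of the sketch is the shift Anscombe describes: no single record is trusted on its own; confidence comes from how many independent sources align.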
CybersecAsia: Malware can now rewrite its own code. What does this mean for legal investigations relying on digital fingerprints for leads?
TA: A single digital fingerprint is no longer reliable. Malware evolves quickly, and static signatures are easily bypassed.
Detection now uses layered approaches, monitoring code behavior and characteristics to predict outcomes. Attackers’ habits, such as infrastructure use, target selection, timing, and small mistakes, often repeat, even if malware changes.
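A layered, behavior-based approach like the one described can be sketched as a weighted score over observed attacker habits rather than a match against a static signature. The indicator names and weights below are illustrative assumptions only.

```python
# Hypothetical behavioral indicators with illustrative weights.
# Note: small attacker mistakes (e.g., reused typos) often repeat
# even when the malware code itself changes.
BEHAVIOR_WEIGHTS = {
    "self_modifying_code": 0.4,
    "known_c2_infrastructure": 0.3,
    "off_hours_activity": 0.1,
    "reused_typo_in_artifacts": 0.2,
}

def behavior_score(observed: set) -> float:
    """Sum the weights of observed behaviors; static file hashes
    play no role in this scoring."""
    return sum(w for b, w in BEHAVIOR_WEIGHTS.items() if b in observed)

sample = {"self_modifying_code", "known_c2_infrastructure"}
score = behavior_score(sample)
print(round(score, 2))   # combined behavioral score
print(score >= 0.5)      # flag for analyst review above a threshold
```

Because the score aggregates habits rather than code artifacts, it remains useful even when the malware rewrites itself between samples.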
CybersecAsia: How are judicial systems around the world prepared to face AI-driven crimes or attacks?
TA: Preparedness varies by region and jurisdiction. A key challenge is classifying AI-driven incidents, which may not fit existing criminal categories, especially with automation or cross-border elements.
Now, courts often require external technical expertise or international input.
CybersecAsia: How do you see the need for international police cooperation when autonomous AI agents move across borders in seconds, and when crimes involve deepfakes, should investigations focus on the victim or the tools used?
TA: International cooperation is essential: cybercrime spans jurisdictions, automated attacks outpace single-nation responses, and investigations follow formal processes; no agency has full visibility alone.
Early collaboration among researchers, national teams, and international partners helps unpack incidents and limit impact. This applies directly to cases such as deepfakes, where both the victim’s account and the tools matter.
The victim’s perspective provides context on the interaction: what was presented, how the attacker engaged, and their goals. However, examining production methods, distribution, and broader patterns can reveal scale and reuse, enabling authorities to disrupt operations rather than handle cases one by one.
CybersecAsia would like to thank Tony Anscombe for sharing his professional insights with our readers.


