Find out how vulnerability research must be bolstered by collaboration at every level of industry to keep the world safe

Henry Poh, Director, Vulnerability Research, Ensign InfoSecurity

Central to combating these threats is a robust framework for vulnerability research that aims to uncover critical vulnerabilities before they can cause harm. The approach involves several stages:

    • Attack surface mapping: This stage identifies how different inputs interact with the software and pinpoints the components responsible for processing them. Analysts meticulously observe and record how these components respond to each input, and note which inputs contributed to past vulnerabilities. Components are then prioritized for further analysis based on factors such as how recently they were introduced and their history of reported vulnerabilities (a prioritization sketch appears after this list).
    • Static analysis: This stage examines the software's code to identify components that are likely to be buggy. Where source code is available, a source code audit is performed to discover security issues; where it is not, the binary is disassembled into human-readable assembly for a comprehensive binary audit. This stage also flags functions susceptible to input injection as candidates for dynamic analysis (see the audit sketch after this list).
    • Dynamic analysis: Techniques such as fuzzing send specially crafted inputs to the software to trigger unexpected behavior, a primary indicator of underlying vulnerabilities. Fuzzing is significantly more effective when informed by the static analysis phase, which has already identified candidate functions for input injection and the data structures those functions expect. In-house expertise in file-format and grammar-based fuzzing has led to proprietary tools that expedite the identification of problematic code (a minimal fuzzing loop is sketched after this list).
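
To make attack surface mapping concrete, the sketch below shows one way to prioritize components once their inputs have been inventoried. It is a minimal illustration in Python; the component names, fields, and scoring weights are assumptions for the example, not Ensign's actual tooling.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str             # software component that processes an input
        inputs: list[str]     # input vectors it handles (file, network, IPC, ...)
        recently_added: bool  # newly introduced code tends to be less battle-tested
        past_cves: int        # previously reported vulnerabilities in this component

    def priority(c: Component) -> float:
        # Weight vulnerability history, recency, and breadth of exposed inputs;
        # these weights are placeholders that a real team would calibrate.
        return 2.0 * c.past_cves + (3.0 if c.recently_added else 0.0) + len(c.inputs)

    inventory = [
        Component("image_decoder", ["file upload", "thumbnail cache"], False, 4),
        Component("license_parser", ["config file"], True, 0),
        Component("rpc_handler", ["network socket"], False, 1),
    ]

    # Highest-scoring components go to static and dynamic analysis first.
    for c in sorted(inventory, key=priority, reverse=True):
        print(f"{c.name}: score={priority(c):.1f}")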
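
For the static analysis stage, a source code audit commonly begins by flagging calls to functions with a long history of memory-safety and injection bugs. The sketch below assumes a directory of C source files as input; the pattern list is a small illustrative sample, not a complete audit ruleset.

    import re
    import sys
    from pathlib import Path

    # C library functions with a track record of buffer-overflow,
    # format-string, and command-injection vulnerabilities.
    RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets|system|memcpy)\s*\(")

    def audit(path: Path) -> None:
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if RISKY_CALLS.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        # Usage: python audit.py <source-tree>
        for source_file in Path(sys.argv[1]).rglob("*.c"):
            audit(source_file)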
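
For the dynamic analysis stage, the core fuzzing loop is short even though production fuzzers are far more sophisticated. The sketch below assumes a hypothetical local target binary ./parser that parses a file argument and a non-empty seed file seed.bin; real file-format and grammar-based fuzzers use much richer, structure-aware mutation and crash triage.

    import random
    import subprocess

    def mutate(data: bytes, flips: int = 8) -> bytes:
        # Flip a handful of random bytes; grammar-based fuzzers would instead
        # generate inputs that respect the structures the target expects.
        buf = bytearray(data)
        for _ in range(flips):
            buf[random.randrange(len(buf))] ^= random.randrange(1, 256)
        return bytes(buf)

    seed = open("seed.bin", "rb").read()  # hypothetical seed input
    for i in range(1000):
        case = mutate(seed)
        with open("case.bin", "wb") as f:
            f.write(case)
        # On POSIX, a negative return code means the target died on a signal
        # (e.g. SIGSEGV), the unexpected behavior that points to a vulnerability.
        result = subprocess.run(["./parser", "case.bin"], capture_output=True)
        if result.returncode < 0:
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(case)
            print(f"crash on iteration {i}, signal {-result.returncode}")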

Research insights also key