Foo Siang-Tse, Senior Partner (Cyber), NCS

For example, the country recently updated its National AI Strategy to “uplift and empower” its people and businesses, recognizing that AI may raise more profound challenges, including moral and ethical issues, with wide-ranging implications for regulation and governance. Also:

    • Rather than impose strict rules that may stifle innovation, the country will underscore the need to strike a pragmatic balance. To this end, it will create regulatory sandboxes as a catalyst for AI innovation to flourish, while implementing the necessary guardrails to ensure this does not result in systemic risks.
    • As AI development progresses, these rules must continue to evolve as the government adapts to technological changes.
    • Businesses in the country can also play their part by putting in place an AI and data governance framework, tailored to their unique needs, to guide their adoption of the technology, including GenAI.
    • While there is currently no regulatory framework specifically for AI in cybersecurity, organizations can refer to the National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0 and Cybersecurity Framework v1.1 to build their own AI governance model. These frameworks aim to incorporate trustworthiness considerations into the development of AI systems and provide best practices for cyber protection.
    • IT/cybersecurity vendors involved in AI projects need to work closely with clients to mitigate potential security risks during development and deployment. To do this, they should test clients’ AI systems extensively for vulnerabilities, through penetration testing and simulated adversarial attacks before and after deployment, to ensure the AI models are resilient against malicious inputs.
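
The adversarial testing described in the last point can be sketched in miniature. The example below is a hypothetical illustration, not any vendor's actual methodology: it uses a toy linear classifier as a stand-in for a deployed AI model, and a fast-gradient-sign-style perturbation to measure how accuracy degrades under small malicious input changes. The weights, labels, and perturbation budget are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deployed model: logistic regression with fixed,
# illustrative weights (not taken from any real system).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability of the positive class for one input or a batch."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, epsilon):
    """Fast-gradient-sign-style attack: push the input in the direction
    that increases the binary cross-entropy loss, within an L-inf budget."""
    grad_x = (predict(x) - y) * w  # d(loss)/d(x) for logistic regression
    return x + epsilon * np.sign(grad_x)

# Test inputs with labels derived from the model itself, so clean
# accuracy is perfect and any drop is attributable to the attack.
X = rng.normal(size=(200, 3))
y = (X @ w + b > 0).astype(float)

eps = 0.3  # illustrative perturbation budget
X_adv = np.array([fgsm_perturb(xi, yi, eps) for xi, yi in zip(X, y)])

clean_acc = float(np.mean((predict(X) > 0.5) == y))
adv_acc = float(np.mean((predict(X_adv) > 0.5) == y))
print(f"clean accuracy: {clean_acc:.2f}, under attack: {adv_acc:.2f}")
```

In practice a vendor would run this kind of check against the real model, typically with a dedicated attack library, but the structure is the same: compare accuracy on clean versus adversarially perturbed inputs, both before and after deployment.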