‘Membership inference attacks’ allow hackers to extract sensitive information about the data used to train AI systems. A new tool can now help seal up this loophole.
AI-powered applications need to be trained on relevant data before they can automate tasks or provide actionable intelligence.
For security reasons, a trained AI model does not retain any of the original training data. In principle, this ensures that even if hackers pry open the internal workings of these AI programs, they cannot harvest any sensitive information.
However, in recent years, security and privacy researchers have shown that AI models are vulnerable to inference attacks that enable hackers to extract sensitive information about training data.
Such attacks involve hackers repeatedly querying the AI service and analyzing the output for discernible patterns. Once such patterns are recognized, hackers can deduce whether a specific type of data was used to train the AI program. Using these attacks, hackers can even reconstruct the dataset that was most likely used to train the AI engine.
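To make the idea concrete, here is a minimal sketch of one such attack in its simplest form, in which the attacker guesses that a record was part of the training set whenever the model is unusually confident about it. The dataset, target model and confidence threshold are assumptions chosen purely for illustration; they are not drawn from any real attack or service.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# The dataset, target model, and threshold below are illustrative
# assumptions, not the setup of any real attack or tool.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A target model that a "victim" organization might expose as a service.
X, y = load_breast_cancer(return_X_y=True)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_member, y_member)

def true_label_confidence(model, X, y):
    """Probability the model assigns to the true label of each record."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

# The attacker only queries the model and looks for a telltale pattern:
# models tend to be more confident on records they were trained on.
threshold = 0.9  # assumed to be calibrated on the attacker's own data

member_hits = true_label_confidence(target_model, X_member, y_member) > threshold
nonmember_hits = true_label_confidence(target_model, X_nonmember, y_nonmember) > threshold

print("members flagged as 'in the training set':   ", member_hits.mean())
print("non-members flagged as 'in the training set':", nonmember_hits.mean())
```

If the first fraction is much larger than the second, the model's behavior alone is enough to separate members from non-members, which is exactly the pattern an inference attack exploits.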
In 2009, such reverse-engineering ‘inference attacks’ against the National Institutes of Health (NIH) in the United States caused the agency to change its access policies to sensitive medical data. Since then, AI reverse engineering has continued to be a concern for many organizations globally.
According to Assistant Professor Reza Shokri, School of Computing, National University of Singapore: “Inference attacks are difficult to detect as the system just assumes the hacker is a regular user while supplying information. As such, companies currently have no way to know if their AI services are at risk because there are no full-fledged tools readily available.”
ML Privacy Meter
To address this problem, Prof Shokri and his team have developed a full-fledged open-source tool that can help companies determine if their AI services are vulnerable to such inference attacks.
The analysis, based on what are known as membership inference attacks, aims to determine whether a particular data record was part of the model’s training data. By simulating such attacks, the privacy-analysis algorithm quantifies how much the model leaks about individual data records in its training set, which reflects the risk of attacks that try to reconstruct the dataset completely or partially. The tool generates extensive reports that, in particular, highlight the vulnerable areas in the training data.
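As a rough illustration of what such a per-record analysis involves, the sketch below scores every training record by its loss under the model and flags records that stand out against held-out data. The loss-based score and the percentile cut-off are illustrative assumptions, not the exact algorithm inside the ML Privacy Meter.

```python
# Rough sketch of an internal audit that quantifies per-record leakage
# by simulating a membership inference attack on its own model.
# The loss-based score and the 5th-percentile rule are illustrative
# assumptions, not the exact method used by the ML Privacy Meter.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_heldout, y_train, y_heldout = train_test_split(
    X, y, test_size=0.5, random_state=1
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

def per_record_loss(model, X, y):
    """Cross-entropy loss of each individual record under the model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

train_loss = per_record_loss(model, X_train, y_train)
heldout_loss = per_record_loss(model, X_heldout, y_heldout)

# A training record whose loss is lower than almost every held-out loss
# is easy for an attacker to recognize as a member: flag it as exposed.
risk_cutoff = np.percentile(heldout_loss, 5)
exposed = np.where(train_loss < risk_cutoff)[0]

print(f"{len(exposed)} of {len(train_loss)} training records look highly exposed")
print("most exposed record indices:", exposed[:10])
```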
From the results of this analysis, the tool produces a scorecard detailing how accurately attackers could identify the original datasets used for training. The scorecards help organizations identify weak spots in their datasets and show the effect of techniques they could adopt to pre-emptively mitigate a membership inference attack.
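One plausible way to condense simulated attack results into headline numbers for such a scorecard is sketched below. The membership scores here are synthetic placeholders standing in for the output of a simulated attack on known members and non-members.

```python
# Illustrative scorecard: aggregate simulated attack scores into headline
# numbers. The score arrays are synthetic placeholders; in a real audit
# they would come from running the simulated attack on known members and
# non-members of the training set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
member_scores = rng.normal(loc=1.0, scale=1.0, size=1000)     # higher = "looks like a member"
nonmember_scores = rng.normal(loc=0.0, scale=1.0, size=1000)

scores = np.concatenate([member_scores, nonmember_scores])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# AUC of 0.5 means the attacker does no better than random guessing;
# values close to 1.0 mean membership in the training set leaks badly.
attack_auc = roc_auc_score(labels, scores)

# Best accuracy over all decision thresholds, a simple headline number.
attack_acc = max(((scores >= t) == labels).mean() for t in np.unique(scores))

print(f"attack AUC: {attack_auc:.3f}, best attack accuracy: {attack_acc:.3f}")
```

Re-running the same audit after a mitigation is applied, for example stronger regularization or noise added during training, shows directly whether these numbers move back towards the coin-flip baseline.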
The NUS team has named the tool the “Machine Learning Privacy Meter” (ML Privacy Meter). Its core innovation is a standardized, general attack formula, which gives the underlying algorithm a framework for properly testing and quantifying various types of membership inference attacks.
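In the research literature, membership inference is usually formalized as a hypothesis test, and a general attack formula can be thought of along these lines (a sketch of the standard formulation, not necessarily the team's exact definition): given a trained model \( \theta \) and a candidate record \( x \), an attacker \( \mathcal{A} \) must decide between

\[
H_0 : x \notin D_{\text{train}} \quad \text{versus} \quad H_1 : x \in D_{\text{train}},
\]

and the leakage about individual records can be summarized by the attacker's advantage

\[
\mathrm{Adv}(\mathcal{A}) \;=\; \Pr\big[\mathcal{A}(\theta, x) = 1 \mid x \in D_{\text{train}}\big] \;-\; \Pr\big[\mathcal{A}(\theta, x) = 1 \mid x \notin D_{\text{train}}\big].
\]

An advantage of zero means the model reveals nothing about membership; an advantage close to one means membership can be inferred almost perfectly.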
The tool is based on research led by the NUS team over the last three years. Before its development, there was no standardized way to properly test and quantify the privacy risks of machine learning algorithms, which made it difficult to provide any tangible analysis.
“When building AI systems using sensitive data, organizations should ensure that the data processed in such systems are adequately protected. Our tool can help organizations perform internal privacy risk analysis or audits before deploying an AI system. Also, data protection regulations mandate the need to assess the privacy risks to data when using machine learning. Our tool can aid companies in achieving regulatory compliance by generating reports for Data Protection Impact Assessments,” explained Prof Shokri.
Moving forward, Prof Shokri is leading a team to work with industry partners on integrating the ML Privacy Meter into their AI services. His team is also developing algorithms for training AI models that preserve privacy by design.