Artificial intelligence (AI) is powerful precisely because it can combine multiple weak signals and draw unexpected conclusions. A study by Carnegie Mellon University (CMU) showed that Social Security numbers are surprisingly predictable: algorithms could reconstruct an individual's SSN from information such as date and place of birth. This makes security a central concern when outsourcing AI. Whenever data collection, storage, or analysis is handed to an external partner, security should be a top priority. To reduce the risk of attacks on AI systems, companies should adopt best practices such as assessing the risks and attack surfaces of the AI systems they deploy, implementing IT reforms that make attacks harder to execute, and creating attack response plans.
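The SSN finding illustrates how a structured identifier leaks information to anyone who can narrow its components. The sketch below is a purely hypothetical illustration of that narrowing effect: the state-to-area-number table is a made-up stand-in, not the real Social Security Administration allocation data, but it shows how knowing only a birth state collapses the candidate space.

```python
# Illustrative sketch only: pre-2011 SSNs encoded structure (area,
# group, serial numbers) that an attacker could exploit. The ranges
# below are hypothetical stand-ins, not real SSA allocation tables.
AREA_RANGES = {
    "NH": range(1, 4),   # hypothetical: area numbers 001-003
    "VT": range(8, 10),  # hypothetical: area numbers 008-009
}

def candidate_area_numbers(birth_state):
    """Return plausible SSN area numbers for a given birth state."""
    return list(AREA_RANGES.get(birth_state, []))

def candidate_space_size(birth_state):
    """Rough count of SSN candidates once the area number is narrowed:
    the remaining group (2-digit) and serial (4-digit) positions."""
    areas = candidate_area_numbers(birth_state)
    return len(areas) * 99 * 9999  # group 01-99, serial 0001-9999

# A single weak signal (birth state) shrinks the ~10^9 SSN space
# to a few million candidates in this toy model.
print(candidate_space_size("NH"))  # → 2969703
```

Combining further weak signals (date of birth, issuance patterns) shrinks the space again, which is exactly the compounding effect the CMU study exploited.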
Additionally, public policy that creates “AI security compliance” programs would reduce the risk of attacks on AI systems and minimize the impact of those that succeed. The methods underpinning next-generation AI systems are systematically vulnerable to a new class of cybersecurity attack: the “artificial intelligence attack.” These attacks are intentional manipulations of an AI system with the ultimate goal of causing it to malfunction. They are possible because the current set of state-of-the-art AI algorithms are, in essence, pattern matchers, and learned patterns can be subverted. The attack surface is widened by policy and practice alike: the Federal Government's National Artificial Intelligence Research and Development Strategic Plan explicitly calls for the open exchange of data between agencies, and exposure often arises in practice as companies offer AI as a service through public APIs.
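To make the “pattern matcher” point concrete, here is a minimal, self-contained sketch of an input-manipulation attack against a toy linear classifier. The weights and input are invented for illustration, and the gradient-sign step stands in for the far more elaborate techniques used against real deployed models; the principle, however, is the same.

```python
import numpy as np

# Toy "AI attack": the attacker knows the model weights (white-box)
# and nudges each input feature against the decision score.
w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
x = np.array([1.0, 0.2, 0.6])    # a benign input, classified as 1

def predict(x):
    """Linear classifier: class 1 if the weighted sum is positive."""
    return 1 if w @ x > 0 else 0

# Gradient-sign-style perturbation: small per-feature steps that
# together flip the classification.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 1 0
```

The perturbed input differs from the original by at most 0.5 per feature, yet the classifier's output flips, because a pattern matcher responds to the pattern, not to any underlying meaning.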
The unrestricted incorporation of artificial intelligence into critical aspects of society is weaving a fabric of future vulnerability. At the same time, AI as a service is becoming more common. These areas are attractive targets, and the growing adoption of AI for critical tasks makes them more vulnerable still. Guaranteeing that an AI system cannot reconstruct personally identifiable information (PII) remains an unsolved problem, and any solution will likely depend heavily on how the underlying data is collected and protected. In the meantime, organizations can take concrete steps to protect themselves from AI attacks by drawing on threat intelligence sources, such as those from Anomali partners, which provide information tailored to an organization's industry, location, threat types of interest, and more.
The trade-off between security and predictive power is likely to remain a difficult problem for data owners, but best-effort strategies can mitigate most practical concerns.
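One widely studied best-effort strategy for exactly this trade-off is adding calibrated noise before data leaves the owner's hands, as in differential privacy. The sketch below uses the standard Laplace mechanism; the count and epsilon values are illustrative only, and calibrating epsilon for a real deployment is precisely the difficult part the trade-off describes.

```python
import numpy as np

# Best-effort mitigation sketch: release a noisy count instead of the
# exact one. Smaller epsilon -> stronger privacy -> noisier answer,
# which is the security-vs-utility trade-off in miniature.
rng = np.random.default_rng(42)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: add noise scaled to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_count + rng.laplace(0.0, scale)

true_count = 1000  # illustrative aggregate to be released
for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(true_count, eps)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")
```

As epsilon shrinks, the released value drifts further from the truth; the data owner chooses where on that curve to sit rather than hoping a single setting satisfies both goals.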