New approach needed for defining AI standards in cybersecurity, say Oxford academics

Leading experts in cybersecurity and ethics Dr Mariarosaria Taddeo and Professor Luciano Floridi of the Oxford Internet Institute, University of Oxford, together with Professor Tom McCutcheon of the Defence Science and Technology Laboratory, believe the current approach to defining standards and certification procedures for Artificial Intelligence (AI) systems in cybersecurity is risky and should be replaced with an alternative method.

The new paper ‘Trusting Artificial Intelligence in Cybersecurity: a Double-Edged Sword’, published in the journal Nature Machine Intelligence, argues that defining standards based on placing implicit trust in AI systems to perform as expected, without any monitoring or control, could leave us at risk of new forms of AI attacks that disrupt systems and change their behaviour.

Current ‘trust’-based standards and certification procedures for AI typically see tasks being carried out with little or no control over the way the AI-driven tasks are performed.

In their paper, the cybersecurity experts present the case for developing ‘reliable’ rather than ‘trustworthy’ AI in cybersecurity. Reliable AI envisages some form of control over the execution of cybersecurity tasks. The experts argue that reliable AI has greater potential to ensure the successful deployment of AI systems for cybersecurity tasks, making them less vulnerable to cyber-attacks.

Dr Mariarosaria Taddeo, Research Fellow at the Oxford Internet Institute and lead author of the paper, said: “Cyber-attacks are among the top five most severe global risks facing the world today according to the latest report from the World Economic Forum. The current level of trust placed in AI systems to deliver robust, responsive and resilient systems to help prevent cyber-attacks is a double-edged sword. AI can certainly improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may generate new categories of vulnerabilities posing severe security threats. That is why some form of control is needed to mitigate the risks due to the lack of transparency of AI systems and the lack of predictability of their robustness.”

The cybersecurity experts suggest three new requirements focused on the design, development and deployment of AI systems, aimed at improving the robustness, response and resilience of AI systems used for cybersecurity practices.

Key requirements for developing reliable AI in cybersecurity are:

  • In-house development – reliable suppliers should design and develop their models in-house; data for system training and testing should be collected, curated and validated directly by system providers and maintained securely, ruling out likely attacks that leverage internet connections to access data and models
  • Adversarial training – training should take place in-house between AI systems to improve their robustness and identify vulnerabilities in the system
  • Parallel and dynamic monitoring – monitoring is required to ensure any divergence between the expected and actual behaviour of a system is captured early and promptly so it can be adequately addressed, as sketched in the example below
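
To make the third requirement concrete, the sketch below is a minimal, hypothetical illustration of parallel and dynamic monitoring, not code from the paper: a reference copy of a model runs alongside the deployed one, and inputs where their outputs diverge beyond a chosen threshold are flagged for review. The function name `monitor_divergence` and the threshold value are illustrative assumptions.

```python
# Minimal sketch of parallel and dynamic monitoring (illustrative only).
# A reference copy of the model runs alongside the deployed one; inputs where
# their outputs diverge beyond a threshold are flagged for human review.

import numpy as np

def monitor_divergence(deployed_outputs, reference_outputs, threshold=0.1):
    """Return indices of inputs where deployed and reference outputs differ
    by more than `threshold` (mean absolute difference per input), plus the
    per-input divergence scores."""
    deployed = np.asarray(deployed_outputs, dtype=float)
    reference = np.asarray(reference_outputs, dtype=float)
    divergence = np.mean(np.abs(deployed - reference), axis=1)
    return np.where(divergence > threshold)[0], divergence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((100, 5))   # expected behaviour (reference model outputs)
    deployed = reference.copy()
    deployed[7] += 0.5                 # simulate a drifted or manipulated response
    flagged, scores = monitor_divergence(deployed, reference)
    print("Flagged inputs:", flagged)  # -> [7]
```

In practice the divergence test would run continuously against the live system, so that unexpected behaviour is captured early rather than discovered after an attack has succeeded.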

Dr Taddeo adds: “The three requirements we advocate are preconditions for AI systems performing any of the robustness, response or resilience tasks reliably, and should become essential preconditions for AI systems deployed for the security of national critical infrastructures. The risks posed by attacks on AI systems underpinning critical infrastructures justify the need for more extensive controlling mechanisms and hence higher investments, and as such we urge policymakers to meet these recommendations when considering national security and defence cybersecurity. The sooner we focus standards and certification procedures on developing reliable AI, and the more we adopt an in-house, adversarial and always-on strategy, the safer AI applications will be.”