
Event Recap: Integrating Europe’s AI and Cybersecurity Strategies

by Daniel Castro
Photo: Panelists discussing AI and cybersecurity

Cybersecurity and AI are both high priorities for the EU, but how can policymakers’ approach to each complement the other? This was the question put to the panel at the Center for Data Innovation’s event in Brussels on September 26, 2018. The conversation centered on how AI could ease skill shortages in the cybersecurity workforce and correct human error, but also how it might create a false sense of security and open up new attack vectors.

Ilias Chantzos, senior director for government affairs and global advisor for critical infrastructure and data protection at cybersecurity firm Symantec, said that AI can help cybersecurity companies process much larger quantities of data and free up time for human security analysts. However, he also highlighted the need to build AI systems that are resilient to attacks falling outside traditional ways of thinking about cybersecurity, such as attacks that tamper with underlying data to mislead AI systems. For example, he said a small amount of duct tape on a roadside stop sign can fool a self-driving car into mistaking it for a 70 kph speed-limit sign.
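
The taped stop sign belongs to a class of attacks known in machine-learning research as adversarial examples. As a rough sketch only, and not anything Symantec or the panel presented, the following toy fast-gradient-style perturbation against a hypothetical linear classifier shows the basic mechanics: a tiny, targeted change to the input flips the model’s prediction.

```python
# Toy sketch of an adversarial perturbation, the kind of "attack on
# underlying data" Chantzos described. Everything here is hypothetical:
# a real attack on a vision system (like the taped stop sign) targets a
# deep network, not a linear model.
import numpy as np

rng = np.random.default_rng(7)
w = rng.normal(size=16)        # weights of a toy linear classifier
b = 0.1
x = w / np.linalg.norm(w)      # an input the model confidently labels "stop sign"

def predict(v):
    return "stop sign" if w @ v + b > 0 else "speed limit"

# Fast-gradient-style step: nudge each feature a small amount (epsilon)
# against the classifier's gradient, which for this linear score
# (w @ v + b) is just w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print("clean input:    ", predict(x))      # "stop sign"
print("perturbed input:", predict(x_adv))  # flips to "speed limit"
```

The perturbation is bounded by epsilon per feature, so the change to the input stays small, which is what makes such attacks hard to spot with traditional defenses.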

Vivian Loonela, member of cabinet for Vice President Andrus Ansip at the European Commission, stressed the importance of both AI and cybersecurity to the Commission’s agenda, and acknowledged the valuable role AI could play in a cybersecurity strategy. However, Loonela warned that over-reliance on any technology can create a false sense of security: improving cybersecurity with AI is good, as long as it does not lead to complacency. Conversely, she said, setting the cybersecurity bar too high would mean shutting services down altogether, because no system is ever perfect.

Discussion of human complacency and responsibility gave rise to several parallels with aviation. To illustrate her point about setting the bar too low or too high, Loonela said that if people flew airplanes with the same level of security as everyday systems, every tenth plane would crash. Nick Wallace, a senior analyst with the Center for Data Innovation, pointed out that on-board computers in airplanes are programmed to ask pilots to carry out tasks that could easily be automated, simply to keep pilots engaged and ready to deal with potential problems—suggesting that similar techniques in cybersecurity might help to avert a false sense of security. On the other hand, Chantzos pointed to fly-by-wire systems in fighter jets, which automatically make small adjustments to correct for the instability that arises from the extreme maneuvers such aircraft are designed to execute: automation might give rise to similar micro-corrections for human errors in cybersecurity, without taking away the user’s responsibility altogether.

On privacy, Roberto Cascella, senior policy manager at the European Cyber Security Organisation (ECSO), said the rise of AI heightens the risk of cyber attackers using large amounts of personal data to carry out highly targeted attacks, and argued that data protection law, particularly the EU’s General Data Protection Regulation, had an important role to play. However, Chantzos countered that hackers and cyber criminals do not follow data protection law, and said defenders need to be able to mount personalized defenses against personalized attacks.

All panelists agreed that the human element was crucial, and that both AI and cybersecurity strategies should emphasize digital skills, which are lacking across the board. No matter how well-secured systems are from a technical standpoint, humans will remain a point of failure. Loonela highlighted the problem that even the most basic digital skills are lacking among many European adults. Cascella said there is a lack not only of the necessary skills, but also of the people qualified to teach them. Chantzos emphasized that cybersecurity skills in particular are rare, highly sought, and for that reason, highly paid—urging those in the audience thinking about their futures to consider a career in cybersecurity.

However, Chantzos also said that the pace of change in how humans interact with machines means that working with coders and engineers is no longer enough for the cybersecurity industry. As connected technologies become more pervasive and embedded in more everyday items, he argued, cybersecurity companies need to work with psychologists and others from the “human sciences” to make safe computing more intuitive: most people quickly develop a feel for which streets in Brussels not to walk down at night, but the equivalent instinct in the digital space comes far less naturally. The challenge for cybersecurity, he said, is to enable users who are not cybersecurity experts to serve as the first line of defense, rather than treating them as the “first vulnerability” to be locked out of every cybersecurity problem. Loonela concurred, and spoke of the importance of “cyber-hygiene”: teaching users basic rules, such as not clicking on links in dubious emails, a habit that may have played a role in the recent “WannaCry” ransomware attacks.
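
Such cyber-hygiene rules can also be encoded in software that nudges users rather than locking them out. As a toy illustration only, not anything presented at the event, the sketch below flags links in an HTML email whose visible text shows one web address while the underlying href points somewhere else, a classic phishing tell that users are taught to check for.

```python
# Toy heuristic for one "cyber-hygiene" rule: warn when a link's visible
# text looks like a web address but the actual destination domain differs.
# A crude illustration; real mail filters use far more signals than this.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None       # href of the <a> tag currently open
        self.text = ""         # visible text collected inside that tag
        self.suspicious = []   # (shown_text, real_destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = urlparse(self.text.strip()).netloc or self.text.strip()
            real = urlparse(self.href).netloc
            # Flag only when the visible text itself looks like a web
            # address and does not match the real destination domain.
            if real and "." in shown and shown != real:
                self.suspicious.append((self.text.strip(), self.href))
            self.href = None

# Hypothetical phishing email body for demonstration.
email_body = (
    '<p>Your account is locked. Visit '
    '<a href="http://login.examp1e-bank.biz/reset">'
    'https://www.example-bank.com</a> to fix it.</p>'
)

auditor = LinkAuditor()
auditor.feed(email_body)
for shown, real in auditor.suspicious:
    print(f"Warning: link shows '{shown}' but points to '{real}'")
```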

On the policy front, the central role of the member states in both AI and cybersecurity became clear at the end, when an audience member asked about the role of national AI strategies. Loonela pointed out that while several member states are voluntarily working on national AI strategies, they are already obliged, under the EU’s Directive on security of network and information systems (NIS Directive), to have national strategies for cybersecurity. She also stressed the link between cybersecurity and defense policy, which likewise sits firmly within member states’ sphere of responsibility.

However, Chantzos argued that cybersecurity may be where the practical limits of current EU policy become clear. If service providers are held fully liable for cybersecurity risks, on the rationale that users do not understand cybersecurity, those providers will want a full view of each user’s behavior in order to control the risks arising from it, raising obvious privacy problems. Conversely, he argued, if the as-yet-unfinalized ePrivacy Regulation requires users to give their consent before critical security updates can be installed on their devices, which could include components of critical infrastructure such as smart meters, that would unavoidably shift a huge responsibility onto users who may well not understand the risks.
