The European Union Agency for Network and Information Security (ENISA) recently published its report on ‘Security and privacy considerations in autonomous agents’.
Artificial intelligence (AI) and complex algorithms offer far-reaching opportunities for innovation and interaction, but they also bring a number of challenges that should be addressed by future policy frameworks at the EU level – especially given the sheer volume of data now available.
One of the objectives of the study was to provide relevant insights for both security and privacy for future EU policy-shaping initiatives. We have summarised some of the key security and privacy recommendations from the report below.
The report identified the following key security concerns in relation to AI.
- Malicious AI. Malicious AI can be presented as legitimate AI to avoid detection. AI should be verifiable throughout its life cycle, and developers therefore need to provide the means to authenticate and authorise AI.
- Hijacking AI. The integrity of AI may be compromised in its operation. Developers need to provide evidence that a security-by-design approach has been adopted, including documenting the following: i) secure software development; ii) quality management; and iii) information security management processes.
- Interference. As well as being susceptible to hijacking, AI is vulnerable to interference. The report highlights the vulnerability of self-driving cars to contactless attacks as a notable example of the need for the ongoing verification of AI.
- Transparency and accountability. The behaviour of AI is uncertain, as autonomous agents are able to develop processing techniques and reach decisions beyond what was initially coded in the software. The report recommends that manufacturers offer “comprehensive and understandable documentation” setting out i) the overall design of the agent; ii) a description of its architecture, functionalities and protocols; iii) a description of its hardware and software components; and iv) the interfaces and interactions of components.
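The concerns above about hijacked or interfered-with AI come down to being able to verify integrity throughout the life cycle. As a minimal sketch (not from the report), integrity of a deployed model artifact could be checked against a digest published out of band before the artifact is loaded; the file contents and digest below are invented for illustration.

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(artifact: bytes, expected_digest: str) -> bool:
    """Check a model artifact against a digest published out of band.

    Hypothetical illustration only: in practice the expected digest would
    be signed and distributed separately from the artifact itself.
    """
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(sha256_of(artifact), expected_digest)

# Invented example artifact: a stand-in for serialized model weights.
model_bytes = b"...serialized model weights..."
published_digest = sha256_of(model_bytes)

print(verify_artifact(model_bytes, published_digest))   # genuine artifact
print(verify_artifact(b"tampered bytes", published_digest))  # hijacked artifact
```

A real deployment would pair this with code signing and repeated checks during operation, since the report stresses *ongoing* verification rather than a one-off check at installation.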
The report also identified the following concerns about AI and the processing of personal data.
- Data minimisation. AI utilises large data sets to process and learn. Much of this data, however, proves to be unnecessary. To reduce the risk of harm to data subjects, the report recommends that more needs to be done to limit the collection of personal data to only what is necessary.
- Data retention. Personal data is usually retained for longer than necessary. Even when deleted, such personal data may leave trace information which can be exploited. Further, researchers have established that such personal data is vulnerable to inference attacks.
- Data aggregation and repurposing. Data derived from an individual device is often transmitted back to the device manufacturer, which can analyse and process that personal data. Such data may be ‘repurposed’ and undergo processing outside of the original remit for collection, such as for marketing purposes.
- ‘Black box’ processing. Traditional data processing systems implement well-defined algorithms. Machine learning processing, however, operates as a ‘black box’ to the user. Machine learning algorithms provide no explanation for their results, which makes it difficult to demonstrate lawfulness, fairness and transparency of personal data processing.
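The data minimisation point above can be made concrete with a small sketch (not taken from the report): before any further processing, a record is stripped down to only the fields needed for the stated purpose. The field names here are invented for illustration.

```python
# Fields assumed necessary for the processing purpose (hypothetical).
REQUIRED_FIELDS = {"age_band", "region"}

def minimise(record: dict) -> dict:
    """Discard everything not strictly necessary for the stated purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {
    "name": "Alice",
    "email": "alice@example.com",
    "age_band": "30-39",
    "region": "EU",
}
minimise(raw)  # → {"age_band": "30-39", "region": "EU"}
```

Filtering at the point of collection, as here, is preferable to collecting everything and deleting later, since the report notes that even deleted personal data may leave exploitable trace information.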
Adoption of security- and privacy-by-design principles
The report emphasises the need for security and privacy to be incorporated into the design of the whole AI life cycle. This is to ensure that key security properties are maintained, including “availability, confidentiality, integrity and accountability”. These security principles should be supported by default, from initial deployment of the AI. The ongoing capability to verify these security properties is a key recommendation of the report.
Notably, the report recommends that public and private sector stakeholders foster a collaborative approach to identifying and exchanging best practices in order to develop a set of security and privacy standards. The Information Commissioner’s Office recently invited anyone involved in AI to help develop a framework for future auditing, which shows that regulators are looking to industry to shape the framework for AI.
We expect further guidance from ENISA and other relevant regulatory bodies on the issues raised in this report, so watch this space.