Protection of LEA AI solutions using AI and other forms of cybersecurity

This post marks the start of a new series for STARLIGHT, where project partners discuss key concepts and recent research outcomes from the project.

Author: Valentina Del Rio, Project Manager, Pluribus One

As the use of artificial intelligence (AI) grows, law enforcement agencies (LEAs) recognise that they can harness AI to protect themselves against cyber-attacks, and that deploying AI tools also requires robust cybersecurity practices. Recent research in STARLIGHT has focused on these two crucial elements of cybersecurity. The first provides LEAs with AI-based tools that can analyse, predict, and mitigate cyber-attacks. The second has initiated a discussion on methodologies for ensuring the security and resilience of current and future AI tools used by LEAs.

The opportunities for supporting LEAs to detect cyber threats are extensive. For example, STARLIGHT's AI capabilities can help LEAs detect and classify cyber threats from open-source intelligence, such as the Dark Web, enabling them to anticipate cyber-attacks before they occur. Advanced data analysis methods allow LEAs to spot patterns in network traffic that indicate malicious activity in real time. Furthermore, by analysing human behaviour, we can assist LEAs in identifying weaknesses in computer networks. These are only some of the many possibilities that STARLIGHT can provide.
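To make one of these capabilities concrete, the sketch below shows, in principle, how anomaly detection on network traffic can flag suspicious flows. It is not a STARLIGHT tool: the flow features and values are invented for illustration, and scikit-learn's IsolationForest stands in for the project's own detection models.

```python
# Illustrative sketch only -- not a STARLIGHT component.
# Flow features (bytes sent, bytes received, duration in seconds, packet count)
# and their values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of benign traffic used to learn what "normal" looks like
baseline_flows = np.array([
    [1200, 800, 0.5, 10],
    [1500, 900, 0.7, 12],
    [1100, 750, 0.4,  9],
    [1300, 850, 0.6, 11],
])

# Fit an unsupervised anomaly detector on the baseline traffic
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_flows)

# Score new flows as they arrive; -1 flags a potentially malicious pattern
new_flows = np.array([
    [1250,  820,  0.5,   10],   # resembles baseline traffic
    [90000,  50, 30.0, 4000],   # unusually large, long-lived flow
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "suspicious" if label == -1 else "normal"
    print(f"flow {flow.tolist()} -> {status}")
```

In a real deployment, the features, training data, and model would of course come from the operational environment rather than hard-coded examples, and flagged flows would feed into an analyst's triage workflow rather than a print statement.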

Our research has produced methodologies and building blocks for defending the AI modules developed within STARLIGHT and for ensuring they meet their security requirements. As part of our analysis of AI models, we used the Risk Assessment Methodology to identify their potential weaknesses, vulnerabilities, and limitations. As a starting point, we analysed well-defined and widely accepted threat and risk assessment methodologies in conjunction with international standards such as those developed by NIST and ISO. These frameworks were then extended ad hoc to cover AI-related threats and paired with suitable mitigation strategies. Finally, this initial analysis was tested against early versions of some STARLIGHT tools.
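As a rough illustration of how one step of such an assessment can be structured, the sketch below scores risks by estimated likelihood and impact. The scales, threat descriptions, and numbers are hypothetical; this is not the STARLIGHT methodology itself, only a minimal example of a likelihood-impact risk register.

```python
# Minimal sketch of a likelihood-impact risk register; all values are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str          # description of an AI-specific threat
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood multiplied by impact
        return self.likelihood * self.impact

register = [
    Risk("Adversarial evasion of a malware classifier", likelihood=3, impact=4),
    Risk("Poisoning of training data from open sources", likelihood=2, impact=5),
    Risk("Leakage of sensitive data via model outputs",  likelihood=2, impact=4),
]

# Rank risks so the highest-priority mitigation work comes first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.threat}")
```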

To understand how this research can be taken up by LEAs, we also strove to identify their current best practices. Building on these results, a novel threat and risk assessment framework was developed, providing a unified and agreed model for all EU LEAs. This was complemented by a categorisation of effective state-of-the-art defence measures, helping to identify the most appropriate mitigation strategies. Furthermore, this study of the state of the art informed a specific framework that enables LEAs to carry out risk assessments identifying the vulnerabilities and threats associated with the AI-based tools they use for investigations and cybersecurity.

STARLIGHT has robust ethical standards for all research activities; therefore, several ethical considerations have been incorporated into these research results. For example, even aggregated data, such as network traffic, can be sensitive; access to such tools should therefore be restricted to authorised users, and the tools made available only to organisations that comply with fundamental values and laws. Other key elements of trustworthy and ethical AI include incorporating explainable AI approaches to help operators understand the results.

Similarly, the trustworthy AI values of fairness, accountability, and transparency can also be incorporated into risk assessments from a societal perspective at a high level. Fairness requires identifying potential bias in datasets and methods and implementing mitigation measures. Accountability means that those who create, develop, deploy, and use AI technology take responsibility for their actions. Transparency, in turn, relates to various socio-technical characteristics, including explainability, which indicates how well users can understand and trust AI.
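As a small, hedged example of what a bias check might look like in practice, the sketch below compares positive-decision rates across groups. The group labels and data are invented, and demographic parity is only one of many possible fairness measures; it is shown purely to illustrate the kind of quantity such a check can surface.

```python
# Illustrative bias check on hypothetical data: demographic parity difference.
import pandas as pd

# Hypothetical model decisions with a protected attribute attached
predictions = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "flagged": [ 1,   0,   1,   0,   0,   1,   0 ],  # model's positive decision
})

# Rate of positive decisions per group
rates = predictions.groupby("group")["flagged"].mean()

# Demographic parity difference: large gaps suggest a need for mitigation measures
disparity = rates.max() - rates.min()
print(rates.to_dict())
print(f"demographic parity difference: {disparity:.2f}")
```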

As a result of this research, LEAs can improve their investigation capabilities supported by a comprehensive, safe, and effective set of methods and tools to recognise and protect against cyber-attacks.


Pluribus One is a research-intensive company based in Italy, focused on providing innovative solutions and services for cybersecurity.

Although Pluribus One was established in August 2015, its staff has more than 25 years of experience in world-class research and in providing solutions based on pattern recognition and secure machine-learning technologies for real-world applications.