How the EU can overcome obstacles to using AI in border security
From self-driving vehicles to facial recognition, AI is routinely identified as a key enabler of improvement and innovation in public policy and public services. A recent RAND Europe study examined these issues in the context of the European Border and Coast Guard – the collective of national border and coast guard authorities responsible for the integrated management of Europe’s borders.
The study identified a wide range of current and potential uses of AI by border security agencies, including automated border control gates, AI-enabled border surveillance, and machine learning-based optimization of operations.
As AI technology continues to mature, the EU has had to grapple with questions about how AI development and adoption by European organizations can be stimulated while remaining aligned with appropriate ethical and human rights safeguards. To this end, the EU recently announced plans to turn Europe into the global hub for trustworthy AI, making it the first major regulator to propose comprehensive legislation on AI.
Through this package of legislative and regulatory measures, the EU hopes to spur innovation and the adoption of AI while placing a strong emphasis on its safe and ethical use. This recognizes that making effective use of AI requires insight not only into the opportunities it offers in key sectors and public policy areas, but also into the obstacles and limitations that European organizations face when considering how to adopt the technology.
The breadth of current and potential future use cases shows that border security agencies can draw on AI to support a wide range of tasks, from operations carried out by border guards in the field to the analysis of geospatial and maritime information. However, the study also identified three key obstacles to realizing this potential.
First, issues such as the lack of transparency in AI algorithms and the potential for bias have constrained the adoption of AI. This is not only because such issues limit the effective performance of AI systems, but also because they fuel uncertainty among potential end users and the wider public about how reliable the technology really is. The EU’s new AI regulatory framework addresses these obstacles by emphasizing the importance of building trust in AI, and by setting out the policy changes and investments needed to strengthen the development of human-centered, sustainable, inclusive, secure, and trustworthy AI.
The focus on trustworthy and inclusive AI could play a particularly important role in border security, as the use of AI-based technologies in this domain has been criticized for potentially sidelining human rights and privacy safeguards. Comprehensive risk assessments are needed, and could be incentivized under the EU’s new framework, to ensure that technology developments comply with robust ethical and human rights safeguards.
Second, beyond these technological obstacles, the uptake of AI in EU border security may also be limited by how well prepared individual agencies are to adopt emerging technologies such as AI. A lack of subject matter expertise and gaps in innovation-related skills are common challenges for public sector organizations, limiting their ability to identify and fully exploit the opportunities that AI offers.
In this respect, the new EU AI framework promises to stimulate progress by fostering talent and skills relevant to AI development, while individual organizations such as border security agencies can also take a range of actions to strengthen their knowledge and skills base. These range from basic awareness training, to build a solid baseline understanding of AI technologies, to targeted recruitment campaigns to attract new talent.
Third, while there has been considerable interest in the use of AI in border security and related contexts such as law enforcement and national security, gaps in the evidence base remain. Notably, assessments of AI technologies are typically carried out in controlled settings, which makes it difficult to evaluate how these technologies perform in real-world conditions. While these gaps currently limit understanding of the tangible impact AI can have, awareness of them also gives EU organizations such as Frontex, the European Border and Coast Guard Agency, opportunities to provide thought leadership and steer research efforts toward areas of key interest, such as trustworthy and human rights-oriented AI development.
Tackling these three obstacles would require not just action by individual border security agencies, but engagement with the wider AI innovation ecosystem of technology developers, other EU bodies, policymakers, and academic institutions.
The EU’s new regulatory framework for artificial intelligence could provide a unique opportunity to stimulate this ecosystem, but individual organizations can also contribute to how effectively it enables the use of AI across all areas of the EU economy.
This could include initiatives that encourage the exchange of data and know-how among different end-user communities, directly enable collaboration between those communities, or provide incentives for innovation, for example through technology demonstrations. Such contributions, together with a shared ambition to promote AI across the EU innovation ecosystem, could be critical in helping the EU achieve its newly stated goal of becoming the global hub for trustworthy AI.