The Balance between Fundamental Rights and Technological Innovation: The Case of Artificial Intelligence

In recent history, traditional parties and the Public Administration have lost the digital battle: they have failed to quickly reap the benefits of the technological revolution.


The economy and society are constantly called upon to respond to digital challenges and Artificial Intelligence (AI) is one of them.


With the sudden and increasingly widespread use of, and investment in, Artificial Intelligence, a debate has ignited between government institutions and private actors over the ethical and legal aspects of its application. Authoritative legal scholars such as Professors Oreste Pollicino and Marco Bassini have, on several occasions, underlined how the information society has now become an algorithm society, in which the most relevant aspect is the relationship between man and machine.


The application of AI in computer and video surveillance systems, an increasingly widespread phenomenon, raises questions about the requirements, conditions and related guarantees binding the companies that use it. In particular, the use of AI in video surveillance invites both legal reflection and ethical consideration, drawing on two theories formalized in the literature: the deontological and the teleological.


The fragile balance between public interests and fundamental rights


According to deontological ethical theories, which are consistent with the principles on which the rule of law is based, a public interest encounters limits when pursuing it would harm a fundamental right of an individual. This is why concerns are growing about the threats and risks deriving from the use of biometric recognition in video surveillance systems: it allows an individual to be identified simply and unambiguously by comparing and analyzing their physiognomic characteristics. The introduction of such AI systems could therefore represent a violation of the right to privacy. Even when invoking national security, governments are not permitted to breach an individual's fundamental rights, including the right to privacy.


When public interest prevails, do the ends justify the means?


Teleological ethical theories justify the introduction of a regulatory act aimed at obtaining what the policymaker considers the best possible result, regardless of how it is achieved. A right or a freedom can therefore be limited in the name of the greater collective well-being. In the present case, where national security represents the public interest, that interest would outweigh an individual's right to privacy. One important caveat must be kept in mind, however: the precautionary principle cannot be invoked superficially, because, if not tempered by the principle of proportionality, it risks overturning the rule of law.


How to reach a solution


It is clear that public interests encounter limits when it comes to violating the fundamental rights of a person, and vice versa. The problem comes down to a balance, which must be struck through law. Here it is necessary to designate a sufficiently qualified group of public institutions to guarantee this reciprocal balance. There is a thin line between surveillance aimed at protecting public safety and biometric recognition used to access unnecessary information. Building liberal and democratic societies was a deliberate choice. Running the risk of crossing the fine line beyond which a surveillance society is established is an equally consequential choice, with the attendant danger of betraying the founding values of democratic systems.

As Professor Pasquale Annicchino has repeatedly written in the pages of the newspaper Domani, the application of Artificial Intelligence to video surveillance systems, in China and even in the United States, collects additional, controversial information when monitoring and digitally surveilling religious minorities for reasons of national security, subordinating both the right to privacy and religious freedom to that goal.

There is no pre-existing solution. It all depends on how we ultimately use the technology, on the rank of the public interest identified from time to time, and on the relationship between that public interest and the individual right at stake in specific cases. Depending on the use made of Artificial Intelligence, AI will interfere and clash with other subjective rights, all equally fundamental. Legislative and regulatory intervention must not obstruct the overwhelming force of innovation, which represents the noblest form of competition, but must be limited to defining levels of protection consistent with the different risk profiles. The companies that provide these services, for their part, are required to guarantee high standards of security and transparency in the acquisition and processing of personal data.