Good morning everyone. First of all, I would like to thank Prof. Paola Severino for this special opportunity to participate in the Women Economic Forum on behalf of the Italian National Cybersecurity Agency. I am particularly pleased to speak in front of an audience interested in social and economic innovation and in strengthening the significant role of women in the world today.
This occasion, too, testifies to how much unstoppable technological progress guides – and I will return to this word, guides, in a moment – our approach to the digital ecosystem and to many aspects of our lives, indeed most of them.
Today it is difficult to imagine a part of our life that is not intertwined with, indeed a thread in, the digital fabric on which we move. That is why it is important to remind ourselves that security, as well as the functionality, efficiency, and productivity of this ecosystem, is strongly influenced by social, economic, and ultimately geopolitical factors.
Terrible events have tragically marked recent years, first the pandemic, then the conflicts, disrupting every forecast and reshaping the structures and risks of our world. This means that today, in this historical moment, it is important to be resilient and ready to understand, anticipate, and manage the technological challenges of the future.
Speaking of challenges, we cannot fail to mention artificial intelligence. The launch of ChatGPT has made the strategic importance of artificial intelligence tangible and, above all, has made its enormous potential, with both positive and negative aspects, clear to all of us. I do not need to tell you that there is not a day when, opening a newspaper, browsing a review, or simply scrolling through social media, we do not come across articles on artificial intelligence. This potential, ranging from deep learning to natural language processing to the highly complex functions it can perform, generates both enthusiasm and concern while practically blurring the line between artificial and human products. One of the risks is precisely this indistinguishability from human intelligence, which poses a future challenge of reliability. We will increasingly wonder whether something was done by a human or a machine. Is this data, this product, real, or is it a deepfake?
However, this is not the only risk. We will face challenges on the job market: automation processes and advanced intelligence could lead to the loss of many jobs before requalification and reorganization processes can absorb the change. Another problem we are already dealing with is the issue of bias and discrimination carried by the vast amounts of data that feed artificial intelligence. In fact, biases and discrimination inherent in those data can be replicated, influencing the output of certain processes. Since we have touched on this topic, I will mention that the volume of data will not be the only element to which we must pay attention. The amount of data is fundamental for artificial intelligence, but in some sensitive fields, such as medicine, the same emphasis must be placed on data accuracy.
Then there is the issue of privacy and data security. If artificial intelligence feeds on our data, we have a problem.
Also, the complexity of artificial intelligence makes it difficult to understand how it arrives at a certain result, which elements have been considered and why. And this is a problem that, together with the issue of ethical and legal responsibility, raises huge questions. We can say we face a concern about control, and about the loss of it.
This leads us to the question of superintelligence, almost science fiction scenarios that see artificial intelligence so advanced that it surpasses human intelligence and human control, allowing the machine to take over. These scenarios do not align with my views, but they are often cited as disadvantages of artificial intelligence.
The advantages are indeed highly relevant and generate enthusiasm. Artificial intelligence can optimize repetitive and routine processes, considering how it processes a vast amount of data in a way that would be impossible for the human mind. This capability is precious, for example, in improving decision-making abilities in a company or in even more delicate sectors. Consider, for instance, public health and healthcare or how it can significantly enhance human performance – think of road safety and how a secure autonomous vehicle can reduce human errors that lead to accidents and tragedies.
Then there’s the specific aspect of cybersecurity, which, as the National Cybersecurity Agency, concerns me closely. Artificial intelligence is crucial for early threat identification, protecting networks and systems by efficiently and promptly detecting anomalies and intrusions. These are valuable features, considering that artificial intelligence is also being used by attackers to make their activities more efficient. Incidentally, it’s worth noting that attacks have increased significantly in recent years, and in this scenario it becomes evident to everyone that the real issue is not the progress of technology, nor the development of digital technology; the real issue is its governance.

And we are back to the concept of guiding that we started with. Should technological progress guide the actions of society, as often happens, or should society guide technological progress to maximize its benefits and minimize the risk factor? Risk is unavoidable, we all know that, but it should be minimized. Therefore, artificial intelligence must be designed and experienced as an enhancement, an empowerment of human capability, never becoming competitive with humans. Artificial intelligence should be a support and never a replacement for human decision-making processes. The core of governance should be the establishment of rules concerning ethics and human values, transparency, and fundamental education and training to benefit society. We need to be critically aware when discussing artificial intelligence, privacy, and security, and these are all topics on which Europe is currently working with the AI Act, which aims to classify artificial intelligence based on risk and to permit or oppose certain applications depending on the associated risk.
To get straight to the point, I want to mention just two pieces of news that have caught my attention recently. The Italian Association of Pediatric Hematology and Oncology has presented four projects based on artificial intelligence aimed at more precise diagnoses and better treatments for children affected by these pathologies. The goal is to provide personalized therapies that will significantly increase the chances of recovery. On the opposite front, some humanitarian organizations report that facial recognition artificial intelligence systems are being used in Iran to identify women without veils for subsequent persecution. Between these two symbolic pieces of news, there is an enormous and vast range of applications of artificial intelligence, and one cannot cheer for or against it by considering these two examples. We must find the line, the orientation that allows us to lean more towards one aspect of artificial intelligence than the other. It is up to us, to our societies, through governance, to master this process of orientation.
However, I want to emphasize that the fact that artificial intelligence should not be competitive with humans does not mean at all that countries should not compete on artificial intelligence. No country will step back from the competition, because we all know that sovereignty, technological progress, and development guarantee geopolitical dominance in markets. No country will step back, not Europe, not our country, from investing maximum effort in technological research in general and in artificial intelligence in particular, and the National Cybersecurity Agency is ready to play its part in all aspects of this complex scenario.