Technology and Security

From cognitive-industrial revolution to superintelligence. AI is testing modernity

Decode39 spoke with Professor Pasquale Annicchino to delve into the risks behind AI’s acceleration. From the “cognitive-industrial revolution” evoked by Pope Francis at the G7 to the recent “superintelligence manifesto,” artificial intelligence has become a defining test of modernity.

The mass layoffs announced by Amazon — a direct consequence of automation driven by AI — have reignited concerns about the risks linked to the technology’s rapid integration into society. From the resilience of capitalist and democratic systems to the survival of humanity itself, AI now poses a challenge that demands urgent reflection.

Why he matters: Pasquale Annicchino teaches Law and Religion, Ethics and Regulation of Artificial Intelligence, and Religious Data and Privacy at the University of Foggia. He is among Italy’s most active voices on the political and regulatory implications of artificial intelligence, with a focus on democratic resilience and digital literacy.

Q: Do you think there is widespread awareness of the social risks linked to AI’s development?

A: Partly yes. There is a growing body of literature and debate, but the real issue is methodological. The speed of regulation and social reflection does not match the speed of technological innovation. In many cases, we end up being subjected to technology rather than governing it — particularly when it comes to social risks and the lack of digital literacy.

Q: What do you mean by “digital literacy”?

A: I mean a widespread understanding of how these technologies impact society, which is precisely what we lack today. This gap creates a severe misalignment between our ability to understand and our ability to react. When Pope Francis spoke about AI at the Italian G7, he referred to a “cognitive-industrial revolution” and “epochal transformations.” These are profound shifts in how people and institutions interact. Yet there has been little in-depth public reflection. That, in my view, is the first significant risk.

Q: And from that risk, others follow?

A: Exactly. Such as those related to labour, surveillance, and civil rights. During periods of rapid technological acceleration, new winners and losers emerge. The crucial question is how to ensure social stability amid this paradigm shift.

Q: Are there best practices already being implemented, or are we starting from scratch?

A: It is difficult to identify best practices when the landscape is constantly changing. A clear trend, however, is the need for digital literacy and education. For instance, we should include modules on AI ethics and regulation in all training programs — for teachers, doctors, engineers, and academics. Every profession will be impacted, so everyone must reflect on the ethical and social consequences.

Q: Is Italy moving in that direction?

A: Unfortunately, not fast enough. The country struggles with education and training in general, as data show. Although the government’s national AI strategy acknowledges these needs, implementation remains weak.

Q: Beyond social concerns, AI also raises political and even existential risks. Let’s start with the political ones.

A: Some scholars call these “epistemic risks.” They relate to the way communication and democratic systems function — how people with differing views on facts can still deliberate and make collective choices. This is especially relevant in the context of “cognitive warfare,” as several studies, including from Italy’s Ministry of Defence, have shown. The danger lies in eroding the very notion of facts, further deepening social polarisation.

Q: And what about the existential risks? The “superintelligence manifesto” recently sparked debate.

A: The manifesto stands out for the diversity and prominence of its signatories. It marks a step beyond the 2023 “pause letter” from the Future of Life Institute, which called for a six-month moratorium on AI systems more powerful than GPT-4. Now, the focus shifts to the concept of “superintelligence” — AI systems with cognitive capacities exceeding human intelligence. That’s a significant leap, but it has drawn criticism.

Q: What kind of criticism?

A: Critics argue that focusing on distant future risks distracts from the urgent challenges AI already poses today. Some view it as a means to circumvent debates on pressing issues. The paradox is that the actors leading the AI race are also the least inclined to impose a pause, as doing so could cost them technological dominance. The key question remains whether global regulation is possible — but many obstacles still stand in the way.

The bottom line: For Annicchino, AI represents a test of human adaptability.

  • As governments struggle to keep pace, the gap between technological power and ethical reflection continues to widen.
  • Without a global framework — and without investing in digital literacy — societies risk not only disruption but also disorientation.
