Rome joins in on AI safety guidelines. Italy is among the 18 countries to sign the new international Guidelines for Secure AI System Development, as the National Cybersecurity Agency (ACN) – one of the 23 signatory agencies – announced on Monday. “The guidelines will help developers create responsible, ethical and safe AI,” reads an official note.
- This is the first international joint document on the secure development of AI, a matter that has stirred tumult in Silicon Valley and drawn government attention around the globe.
All about security. The objective of the Guidelines is to raise the security level of AI by ensuring that it is designed, developed, and deployed in a secure manner. Cybersecurity is an essential precondition for AI and serves to ensure resilience, privacy, fairness and reliability – in short, a safer cyberspace. It is “a challenge that the Agency does not want to and cannot shy away from, which is why we have joined this initiative with conviction,” stated Bruno Frattasi, the ACN’s Director-General.
Born in the UK, spanning the world. The Guidelines were outlined at the AI Safety Summit held in early November at Bletchley Park, where government and private-sector experts from 18 countries worked together to deliver them.
- Beyond Italy and the United Kingdom, the guidelines are supported by Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Japan, New Zealand, Nigeria, Norway, Poland, Singapore, South Korea, and the United States.
Look at the G-7. The challenge of AI can only be met by working together, stressed DG Frattasi. “We must bring to bear the best energies, both intellectual and instrumental, of our country and of all the other countries that are preparing to tackle […] this highly demanding undertaking for the whole of humanity, starting with the next Italian-led G7.”
- Italian Prime Minister Giorgia Meloni, who attended the Bletchley Park summit, vowed to make AI governance a core focus of Italy’s 2024 G-7 presidency (here’s how she intends to do it).