Governments Worldwide Will Become More Serious About Regulating AI in 2025; Expect Regulatory Activity in Other Areas, Too

December 2024 by Lila Kee, GlobalSign

The number of cyber-attacks worldwide is staggering: by most accounts, about 4,000 every day. From ransomware and DDoS attacks to phishing, smishing and vishing, there’s a lot to be concerned about. In parallel, we have two massive technology shifts - Artificial Intelligence (AI) and Post-Quantum Cryptography (PQC) - both of which are poised to transform business and the daily lives of consumers in ways we never expected. But they’ll also present significant risks.

Add it all up and the potential for cyber incidents is so vast that a good cybersecurity stack alone won’t be enough to stay secure. The answer: increased regulation.

Despite the concerns and recent warnings of prominent executives about AI regulation - including SAP’s CEO, Christian Klein - the number of regulations targeting AI will only continue to grow worldwide in response to its expanding capabilities and use. At a time when “the bad guys” are doing a lot of damage, stronger, more comprehensive laws may be one of our best defenses.

Which is why I believe it’s a good thing that governments will pass and implement more regulations in 2025. In fact, I believe the most crucial regulations will focus on controlling AI. That’s not to say we won’t see activity in other regulatory areas in the coming year, but first, let’s discuss what steps have been taken on the AI front.

Implementing AI is on everyone’s mind

Regulatory activity around AI will kick in early next year in Europe, specifically in the European Union (EU). The Artificial Intelligence Act (AI Act) came into force in August, but its prohibitions on “unacceptable risk” take effect on February 2, which should help limit, or even eliminate, certain high-risk AI systems. “Unacceptable risk” refers to AI systems considered a threat to people’s fundamental rights, such as those that manipulate people’s behavior, making them do things they wouldn’t normally do, or that exploit vulnerabilities such as age, social or economic status, or disability.

Later in the year, in August, the transparency requirements also take effect, and they should have some bite because they will require companies to inform individuals when they are interacting with AI systems or AI-generated content. With a degree of online unrest over AI’s increasingly convincing ability to generate content and media that mimics humans, this kind of requirement will go a long way toward building trust, both among users and in the idea that AI tools aren’t being used maliciously or deceptively.
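To make the transparency idea concrete, here is a minimal sketch of how a service might attach such a disclosure to its output. The ChatResponse structure and generate_reply() helper are hypothetical illustrations of one possible approach, not part of the AI Act’s text or any specific vendor’s API.

```python
# Hypothetical sketch: attaching an AI-use disclosure to every
# AI-generated reply, so users are informed they are interacting
# with an AI system. Names below are illustrative, not standardized.
from dataclasses import dataclass


@dataclass
class ChatResponse:
    text: str                # the AI-generated content itself
    ai_generated: bool       # machine-readable disclosure flag
    disclosure: str          # human-readable notice for the UI


def generate_reply(prompt: str) -> ChatResponse:
    # Placeholder for a real model call; only the disclosure
    # mechanism is the point of this sketch.
    reply_text = f"(model output for: {prompt})"
    return ChatResponse(
        text=reply_text,
        ai_generated=True,
        disclosure="You are interacting with an AI system; "
                   "this reply was AI-generated.",
    )


if __name__ == "__main__":
    response = generate_reply("What does the AI Act require?")
    print(response.disclosure)
    print(response.text)
```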

While the US recently signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, no federal AI regulation has been passed. But given all the activity in other parts of the world, we should expect to see much more movement in the United States next year at the federal level. I expect the most likely candidates to move ahead are the Algorithmic Accountability Act and the National AI Initiative Act. I also recommend keeping an eye on the US Bipartisan Senate AI Working Group. There will also be plenty of debate and regulatory discussion at the state level, especially in California and New York. Notably, even Elon Musk, a proponent of reduced regulation, has supported California’s SB 1047, which introduces safety regulations for large AI models to avoid risk to society.

FinTech’s Dive into DORA

As I mentioned earlier, AI is not the only space where we’ll see regulatory movement. Fintech is preparing for the January launch of the Digital Operational Resilience Act (DORA). This strict cybersecurity regulation, often compared to GDPR, is going to introduce significant change within the financial technology market. According to BMC Software, the new act establishes a framework spanning the entire European financial sector, and compliance with it will remove complexities arising from gaps and overlaps between different EU member states. The regulation is viewed as quite serious, and financial sanctions will be applied to companies that do not comply. As a result, many organizations will re-evaluate and update their existing cybersecurity frameworks. That can only be a good thing for the general security landscape, as we will see more robust protections in place and, hopefully, fewer breaches as a result.

Stay Educated

There’s no question that regulations are necessary; however, they should be crafted very carefully to avoid unnecessary “red tape” that could block innovation and positive technological advancement. Once these regulations are in place, no matter the topic or market, companies and organizations not in compliance will be in jeopardy and will pay the price, quite literally, in fines. Going forward, it will be important to be aware of existing regulations while staying up to date on new ones, to ensure your organization isn’t adversely impacted and is prepared for any regulatory changes.

