EU AI Act Explained: How Will It Affect Your Business?

Ready for the EU AI Act?
With the EU AI Act set to take effect in early 2025, companies worldwide are racing against time to understand and prepare for the most comprehensive AI regulation ever created. During a recent expert panel, industry leaders gathered to discuss the critical implications of this landmark legislation, which will affect any company using or developing AI systems for the EU market.

In this webinar, led by Richie Cotton at Datacamp, Dan Nechita, who helped shape the regulation as the European Parliament's lead technical negotiator, and Lily Li, an AI and data privacy legal expert, shared important insights on what organizations need to know to ensure compliance.

The stakes are high: with potential fines of up to 7% of global revenue for violations, companies cannot afford to take a wait-and-see approach. Whether you're a startup or a multinational corporation, understanding the EU AI Act's requirements and starting your compliance journey now is not just advisable. It's essential for future business operations in the European market.


Meet the panelists:

  • Dan Nechita, EU Director at Transatlantic Policy Network and former lead technical negotiator for the EU AI Act
  • Lily Li, founder of Metaverse Law, specializing in AI and data privacy law


The Imperative Need to Regulate AI

"The amount of data that AI needs to function is unprecedented in human history," explained Lily Li at the start of the discussion. This fundamental reality poses unique challenges in terms of privacy and security that we cannot ignore. "From a data privacy and cybersecurity point of view, we need to have some controls there." But the problem goes beyond data. As human beings, we are programmed to trust what we see and hear, making us particularly vulnerable to technologies capable of manipulating our perception of reality.

Dan Nechita deepened this perspective, noting that AI regulation isn't simply a matter of restriction but of fostering responsible innovation. "The way the AI act works is a little bit more indirect, but it still is meant to foster innovation. The AI act encourages companies to build safe AI. Once you have safe AI, then you have more trust in the technology. Once you have more trust in the technology, you have more adoption. At the societal level, once you have more adoption, then you have economic growth. That economic growth then creates opportunities for innovation."

People's Psychological Tendency to Over-Rely on Computer Outcomes

AI systems must be designed and deployed with human psychology in mind. People tend to over-trust AI, especially in high-stakes applications, so responsible use requires that a system's limitations be well understood.
The panelists highlighted several psychological tendencies that come into play when people use AI systems:

  • There is a tendency for people to over-rely on computer outcomes or AI system outputs, even if they may be biased or erroneous. This is an important consideration for AI literacy, as users need to understand the risks and limitations of AI systems.

  • The more "high-risk" the AI application (e.g., policing or justice administration), the more important it is for the people using the system to have a strong understanding of these psychological biases. They need to know that AI systems can be biased and that they shouldn't blindly trust the outputs.

  • This psychological tendency to over-rely on AI is one of the key reasons why the EU AI Act has literacy requirements: to ensure that users of high-risk AI systems are aware of these risks and can properly interpret and apply the system's outputs.

The EU AI Act: A Revolutionary Approach

"The EU AI Act is the world's most comprehensive AI regulation," explained Richie at the beginning of the discussion. "If your company does business in Europe and you either create or make use of AI in some capacity, then it's going to affect you."

Dan Nechita, who worked on the regulation for 5 years, elaborated on its importance: "At European level, one of the reasons why we have the AI act is to consolidate the single market, is to have one set of uniform rules all across the European Union for artificial intelligence. And I think that this is very important for the European project."

Prohibited Practices: The Red Line

Lily Li detailed the prohibited practices with precision: "Article 5 is the article to watch for prohibited AI practices. These are things that the EU has said have no value to EU society." She continued: "This includes things like using subliminal or deceptive techniques to encourage people to do things that are harmful to themselves. This includes classification of people on a mass scale based on their special categories of data like their race, age, disability... The EU does not want a mass social scoring system or mass social credit system."

High-Risk Systems: The Heart of the Regulation

Dan Nechita explained the concept of "intended purpose" in high-risk systems: "Assume, for example, that Microsoft is building a system to be used at police stations all across Europe with the purpose of making a prediction on crime. That is the logic that connects those who build it and those who deploy it: the intended purpose."

Lily Li added specific examples from the HR sector: "If you're using an automated system in order to decide whether or not someone's qualified for an employment position, or you're using emotion sentiment in the workplace in order to see how people are doing their job, or responding to work criticism, or acting in an interview, these are all high risk uses of AI in the employment context."

High-risk systems include, but are not limited to:

  • Critical infrastructure components
  • Applications in the justice system
  • Border control systems
  • Emergency service allocation
  • Educational placement systems
  • Human resources decision-making systems

The Price of Non-Compliance

"It's similar to GDPR in terms of fines and penalties," warned Lily. "You can be assessed a certain percentage of your overall global revenue for each undertaking. Obviously, if you're engaging in a prohibited AI practice, the penalty is much higher, it's up to 7% of worldwide revenue."

AI Literacy: A New Corporate Requirement

Dan explained the intentional flexibility in literacy requirements: "Article 4 basically says that those who use AI, especially in interacting with natural persons, need to have a certain level of AI literacy. That is not really fully specified in the AI act, and that is by design."

He elaborated on the tiered approach: "The more you move up the risk pyramid, in a sense, the more you want to make sure that the people who are there with their hands on the button or making decisions know what they're doing, know that AI systems can be biased, know that they need to have good data in them, know that there is a psychological tendency to over-rely on computer outcomes."

The required level of understanding varies by role and risk:

  • Users need sufficient knowledge to use the systems responsibly.
  • Creators require a deep understanding of biases, data governance and system limitations.
  • High-risk systems demand the highest level of understanding and oversight.

A Cross-Organizational Effort

Lily emphasized the importance of cross-departmental collaboration: "As outside counsel, sometimes it's just delegated straight to legal, or sometimes it's delegated straight to infosec or IT. But when you're working with an actual AI system or AI software company, a lot of the quality management and risk mitigation has to happen at the design stage, the product design stage and at the training stage."

She added a humorous comment about departmental collaboration: "I love the marketing teams that I work with, but I think legal and marketing, we have different love languages. It's very hard to talk to one another."

Active participation is required from:

  • Legal teams to assess scope and ensure compliance
  • Product teams for design and architecture decisions
  • Technical teams for implementation
  • Marketing teams for accurate communication
  • Management for strategy and oversight

The Conformity Assessment

During the panel discussion, Dan outlined key requirements, including risk management, data governance, logging capabilities, human oversight, and cybersecurity measures. He emphasized that while these won't make AI systems "perfectly safe," they make them "safe enough" for the European market, noting the challenge posed by AI's unpredictability and its capacity for continuous learning.

Lily Li built on this by recommending that organizations integrate conformity assessment into existing cybersecurity and privacy frameworks, specifically advising: "take a look at the NIST AI Risk Management Framework for kind of a simple guide to the steps an organization can take." She emphasized examining multiple risk dimensions: "you're looking at quality, like accuracy, error rate for the system... cybersecurity... continuity and resilience, like uptime... and will this be deployed in a manner that could be risky or unsafe."

These are the main considerations for conformity assessment, according to the panelists:

From Dan Nechita:

  • Risk management system implementation
  • Data governance and bias prevention measures
  • Logging capabilities for system monitoring
  • Human oversight integration
  • Cybersecurity measures
  • Documentation of compliance with all requirements
  • Pre-market validation procedures

From Lily Li:

  • Integration with existing cybersecurity frameworks
  • System quality and accuracy measurements
  • Error rate monitoring and documentation
  • Continuity and resilience testing
  • Deployment risk assessment
  • NIST AI Risk Management Framework alignment
  • Regular system audits and updates

Additional cross-panel recommendations:

  • Early compliance planning
  • Cross-team collaboration (legal, technical, product teams)
  • Ongoing monitoring procedures
  • Regular assessment updates
  • Documentation maintenance throughout system lifecycle
  • Risk mitigation strategies
  • Integration with existing privacy programs
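
As a rough illustration of the logging and human-oversight items above, the sketch below wraps a generic prediction call so that every output produces an audit record and low-confidence results are flagged for human review rather than acted on automatically. The model interface, review threshold, and record fields are assumptions for this sketch, not anything prescribed by the Act.

```python
# A minimal sketch, assuming a generic model object with a predict() method
# that returns (label, confidence). Field names and the review threshold are
# illustrative, not prescribed by the Act.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

REVIEW_THRESHOLD = 0.8  # assumed policy: low confidence triggers human review


def predict_with_oversight(model, features: dict) -> dict:
    """Run a prediction, emit an audit record, and flag low-confidence
    outputs for human review instead of automatic action."""
    label, confidence = model.predict(features)  # assumed interface
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,                 # assumed JSON-serializable
        "output": label,
        "confidence": confidence,
        "needs_human_review": confidence < REVIEW_THRESHOLD,
    }
    audit_log.info(json.dumps(record))      # persisted record supports audits
    return record
```
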

The Global Landscape: Beyond Europe

Dan Nechita shared an interesting international perspective: "We've had very good collaboration with the US while developing the AI Act. So while we say, okay, the Brussels effect, there was definitely a two-way discussion with the US in their preparation for the executive order, and it also influenced some of the thinking that we've done here, so as to make them as compatible as possible."

Lily Li detailed the situation in the United States: "I'm based in California, licensed in California, and here we're seeing a two-track system. The California Privacy Protection Agency is the privacy regulator in California, and they just started the rule-making process on automated decision making technologies."

Preparing for the Future

Dan offered particularly valuable advice for startups: "If you're a startup or a small and medium enterprise, the AI act has a lot of provisions that help you navigate it and that help you comply in a very simplified fashion. For startups and smaller companies, my advice would be try to be compliant from day one because it's really, really easy if you understand what you need to do."

Lily added a crucial practical tip: "Please label your data sets. Label, know where they're from. So if there are issues later, you know where you should be cutting off the training."
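
To show what that labeling advice might look like in practice, here is a minimal sketch that writes a provenance record (source, license, collection date, content hash) alongside each dataset file, so a problematic source can later be traced and cut out of training. The schema and helper names are illustrative assumptions, not requirements of the Act.

```python
# A minimal sketch of file-based dataset lineage: hash each dataset and
# store a provenance record next to it. The schema is an assumption for
# this article, not a requirement of the Act.

import hashlib
import json
from dataclasses import asdict, dataclass
from pathlib import Path


@dataclass
class DatasetRecord:
    name: str
    source: str        # where the data came from (vendor, URL, internal)
    license: str       # usage terms attached to the data
    collected_on: str  # ISO date of collection
    sha256: str        # content hash to detect silent changes


def register_dataset(path: Path, name: str, source: str,
                     license: str, collected_on: str) -> DatasetRecord:
    """Hash a dataset file and write its provenance record alongside it,
    e.g. data.csv -> data.lineage.json."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    record = DatasetRecord(name, source, license, collected_on, digest)
    path.with_suffix(".lineage.json").write_text(
        json.dumps(asdict(record), indent=2))
    return record
```
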

How can organizations prepare for this new regulatory landscape?

  • Integrating AI risk management into existing cybersecurity and privacy programs
  • Using established frameworks such as NIST and ISO standards
  • Starting compliance work early, especially for high-risk systems
  • Maintaining proper documentation of data lineage
  • Considering global requirements when developing AI systems

Global Reach

The EU AI Act's extraterritorial scope is significant. As Lily Li emphasized, "The EU AI act has a really, really broad scope. It's broader than the GDPR and it applies to any providers of AI systems that are high risk or prohibited that place the AI into the EU market."

This means organizations worldwide need to consider compliance if they:

  • Develop AI systems used in the EU market
  • Deploy high-risk AI systems within the EU
  • Provide services to EU clients using AI systems

Looking to the Future

For organizations worldwide, the message is clear: starting early with compliance efforts and building safe AI practices from the beginning won't just prevent future headaches; it could become a competitive advantage in an increasingly regulated world.

The AI revolution is underway, and with these regulations, we have the opportunity to ensure it develops in a way that benefits all of society. The challenge now is to implement these regulations effectively while maintaining the spirit of innovation that makes AI so promising.

The experts' bottom line: organizations should view compliance with the EU AI Act as setting a gold standard, one that will likely help them meet other emerging regulations worldwide.

You can watch the full video of the Datacamp webinar.


If you would like to access the EU regulations, please visit our Resources section and our additional blog articles on Regulations. Feel free to contact us with any questions you may have.