The New EU AI Act: What You Need to Know

Part 1: What is the new EU AI Act and what does it include?

The European Union has recently adopted a new regulation on artificial intelligence (AI), which aims to ensure that AI is trustworthy, human-centric, and aligned with the EU’s values and fundamental rights. The regulation covers both the development and the use of AI systems within the EU, as well as the import and export of AI systems to and from the EU.

The regulation defines four categories of AI systems based on their level of risk to human rights, safety, and democracy. These are:

·        Prohibited AI systems: These are AI systems that are considered to pose an unacceptable threat to fundamental rights, such as those that manipulate human behavior, exploit vulnerabilities, or enable social scoring by governments. These AI systems are banned from being developed, used, or placed on the market in the EU.

·        High-risk AI systems: These are AI systems that are used in critical sectors or contexts, such as health, education, justice, law enforcement, or public administration, and that have a significant impact on the rights and safety of individuals or groups. These AI systems are subject to strict obligations, such as ensuring transparency, accuracy, robustness, human oversight, and compliance with data protection rules. These AI systems also have to undergo a conformity assessment before being placed on the market or put into service in the EU.

·        Limited-risk AI systems: These are AI systems that pose a limited risk to fundamental rights but still require some transparency measures, such as chatbots, voice assistants, or AI features in video games. These AI systems have to inform users that they are interacting with an AI system and not a human, and allow them to opt out of the interaction.

·        Minimal-risk AI systems: These are AI systems that pose no or minimal risk to fundamental rights, such as those that are used for spam filtering, online shopping, or entertainment. These AI systems are not subject to any specific obligations, but are encouraged to follow voluntary codes of conduct and best practices.
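To make the taxonomy concrete, here is a minimal, hypothetical sketch of the four tiers as a small Python data structure. The tier names and obligation lists are condensed from the descriptions above, not drawn from the legal text itself.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., social scoring)
    HIGH = "high"              # strict obligations + conformity assessment
    LIMITED = "limited"        # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"        # voluntary codes of conduct


# Obligations per tier, paraphrased from the list above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED: ["may not be developed, used, or marketed in the EU"],
    RiskTier.HIGH: [
        "transparency", "accuracy", "robustness", "human oversight",
        "data protection compliance",
        "conformity assessment before market placement",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with an AI"],
    RiskTier.MINIMAL: ["voluntary codes of conduct and best practices"],
}

print(OBLIGATIONS[RiskTier.HIGH])
```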

There are penalties for violating the EU AI Act, which introduces a system of sanctions and remedies for non-compliance that vary with the type and severity of the infringement. The sanctions include administrative fines of up to 6% of annual worldwide turnover or €30 million, whichever is higher, for the most serious violations, such as using prohibited AI practices or manipulating data or algorithms. They also include injunctions, which can order the cessation or modification of an AI system, or its recall or withdrawal from the market or service. Moreover, the regulation provides for the possibility of imposing civil liability on the providers or users of AI systems that cause harm or damage to third parties, in accordance with national laws and international conventions.

Is this only for companies in the EU? No. Consider it a bellwether for future protections implemented in other countries, states, and industries. Your organization will need to treat this as a warning shot: take your projects seriously, and your customers even more seriously.

Part 2: What is the impact of the EU AI Act on everyday businesses creating AI capabilities?

If a business is creating a high-risk system, the EU AI Act will have to be a focus area, especially if the business operates internationally. These businesses will have to prepare to comply with the new requirements and obligations, such as ensuring data quality, documenting the AI system’s development and functioning, providing clear and meaningful information to users, implementing appropriate human oversight and intervention mechanisms, and establishing effective monitoring and reporting systems. These businesses may also have to undergo a conformity assessment by a notified body, an independent organization that verifies the compliance of the AI system with the regulation.
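To illustrate what documenting a high-risk system might look like in practice, here is a hypothetical sketch of a compliance record that mirrors the obligations just listed. The field names are illustrative assumptions, not terms of art from the regulation or any notified body’s checklist.

```python
from dataclasses import dataclass, field


@dataclass
class HighRiskComplianceRecord:
    """Illustrative record of the evidence a high-risk system might keep."""
    system_name: str
    intended_purpose: str              # documented use case
    training_data_sources: list[str]   # supports data-quality review
    user_facing_disclosure: str        # clear, meaningful information for users
    human_oversight_mechanism: str     # e.g., review/override workflow
    monitoring_plan: str               # post-market monitoring approach
    conformity_assessed: bool = False  # set after notified-body review
    open_issues: list[str] = field(default_factory=list)

    def ready_for_assessment(self) -> bool:
        """Rough readiness check before engaging a notified body."""
        return bool(
            self.training_data_sources
            and self.user_facing_disclosure
            and self.human_oversight_mechanism
            and self.monitoring_plan
            and not self.open_issues
        )
```

The point of a structure like this is less the code than the discipline: every obligation in the paragraph above maps to a field someone must actually fill in.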

The EU AI Act also introduces a new governance framework for AI: a European Artificial Intelligence Board, which will provide guidance and advice on the implementation of the regulation, and national competent authorities, which will monitor and enforce the compliance of AI systems with the regulation. The regulation also provides for a system of cooperation and information exchange between the EU and the member states, as well as with third countries and international organizations.

The EU AI Act aims to create a single market for AI in the EU, which will facilitate the free movement of AI systems and services across the member states, and foster innovation and competitiveness in the AI sector. The regulation also aims to increase the trust and acceptance of AI among consumers and citizens, by ensuring that AI is used in a lawful, ethical, and beneficial manner.

Part 3: How does the EU AI Act relate to responsible AI in the context of a business adopting AI tools?

The EU AI Act essentially adds teeth to the concept of responsible AI, which refers to the development and use of AI in a way that respects human dignity, autonomy, and rights, and that promotes social good, fairness, and sustainability. The regulation is based on the principle that AI should serve humans and the common good, and that AI systems should be subject to human control and oversight. The regulation also incorporates the key ethical principles of AI, such as transparency, accountability, non-discrimination, and privacy.

The EU AI Act provides a legal framework and a set of standards for responsible AI, which can help businesses that adopt AI tools ensure that they are complying with the EU’s values and rules, and that they are minimizing the risks and maximizing the benefits of AI for their stakeholders and society. The regulation is also an opportunity for businesses to demonstrate their commitment to responsible AI, and to enhance their reputation and trustworthiness among customers and partners. Finally, it encourages businesses to adopt a culture of continuous learning and improvement, and to engage in dialogue and collaboration with the relevant authorities and stakeholders.

So, read these things as activities businesses should be implementing anyway. The new rules make them not just a “should” but a “need”. Every business building a high-priority AI system should be thinking about the full lifecycle of AI, including MLOps, responsible AI practices, security, and user data.

Part 4: So, what does a business do about it?

It would be easy to say, “well… just wait for the dust to settle.” You’ll be waiting a long time. Instead, understand that the practices that create excellent and responsible AI systems (both ML and GenAI) need to exist in your projects. This includes:

1.      Start with the fundamental “why” for your AI initiatives. The EU AI Act further encourages companies to focus on real value, since the bar continues to be raised in the stewardship of AI initiatives.

2.      Build responsible AI into every AI project as the core of how the initiative takes shape. This is no longer a “nice to have” but a requirement for how companies think about the data, impact, and security associated with AI systems.

3.      Understand where AI is applied in the business and be able to trace the data used back to its source to provide consistent lineage. This will become increasingly difficult as AI systems become ubiquitous. Simply maintaining a registry won’t be good enough; tools that map data connections, secure data, and manage the security of outputs will be critical (see the sketch after this list).

4.      Know that not all AI systems are created equal. There is a real risk that businesses will see the EU AI Act and freeze in place. Freezing isn’t the answer; there is a necessary balance between enablement and governance. This is an excellent place to encourage AI, trace its outcomes, and build responsible characteristics in along the way. Know that this raises the bar on building systems that are both responsible and operationalized.

5.      Education will continue to be an element here. This includes the builders and the users of AI systems, both of whom need a better understanding of what they are interacting with. A builder needs to consider all the elements of creating an AI system, or have those elements injected via a low-code interface. The user of an AI system needs to know what they are interacting with: is it a bot, is it a human, is it a hybrid system? There needs to be clarity and expectations set on what a system will and will not do.

6.      Businesses need an internal AI Center of Excellence (CoE) or strategy organization that aids the adoption cycle of one of the most important technologies of our lifetimes. I’m seeing more companies standing up AI CoE structures that facilitate both adoption and governance.
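As a concrete illustration of item 3, here is a minimal, hypothetical sketch of a lineage registry that records which datasets feed which AI systems, so usage can be traced in both directions. The class and method names are assumptions for illustration; a real deployment would lean on dedicated data catalog and lineage tooling rather than hand-rolled code.

```python
from collections import defaultdict


class LineageRegistry:
    """Toy registry mapping AI systems to the datasets they consume."""

    def __init__(self) -> None:
        # system name -> set of dataset identifiers it consumes
        self._sources: dict[str, set[str]] = defaultdict(set)

    def register_use(self, system: str, dataset: str) -> None:
        """Record that an AI system consumes a dataset."""
        self._sources[system].add(dataset)

    def lineage_of(self, system: str) -> set[str]:
        """All datasets feeding a given system."""
        return set(self._sources.get(system, set()))

    def systems_using(self, dataset: str) -> set[str]:
        """Reverse lookup: which systems would a dataset issue affect?"""
        return {s for s, ds in self._sources.items() if dataset in ds}


# Hypothetical example: if a dataset turns out to contain data it
# shouldn't, the reverse lookup identifies every affected system.
registry = LineageRegistry()
registry.register_use("support-chatbot", "crm_exports_2024")
registry.register_use("churn-model", "crm_exports_2024")
print(registry.systems_using("crm_exports_2024"))  # both systems
```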

Where to go now? Really, it’s similar to yesterday, just with a bit more responsibility. We’ve been talking for a while about operationalized AI systems and how they need to be run with high quality, security, responsibility, and transparency. The EU AI Act makes it clear that this is necessary, whether through the tools you adopt or those you build. Get started, experiment, innovate, and build a responsible AI program. You can do this!

Nathan Lasnoski