
AI is here to stay, and so is its regulation

Publication date: 05.01.2026

The EU AI Act is changing how organisations can deploy AI, with obligations depending on the risk level and their role in the value chain. Our GRC expert Kevin Lavrijssen provides a clear overview of what’s coming, when it applies, and how to take the first steps toward compliance and stronger AI governance.

AI increases productivity 

Artificial Intelligence is no longer a futuristic concept; it is increasingly embedded in every part of everyday life. Tools like Copilot and ChatGPT have transformed how we work, learn, and communicate. From drafting emails to advanced analytics, AI has become a core driver of productivity and innovation. Businesses are leveraging the technology for customer service, fraud detection, and even strategic decision-making.

AI poses societal risk 

Unfortunately, this class of technologies also poses real societal risk. Models might be used to determine whether you are eligible for a mortgage or suitable for a job. It is easy to see, then, that the course of your life might be seriously altered by a statistical model. This becomes problematic if it turns out that the model was trained on biased data and learned spurious patterns. All of this without even considering more exotic use cases like emotion recognition or predictive policing.

Trade-off between innovation and risk 

This trade-off between opportunities for economic growth and societal risk has proven to be quite the headache for policymakers all over the world. On a spectrum ranging from unrestrained pursuit of economic growth to an outright ban on AI, the European Commission opted for a balanced, risk-based approach in the form of the AI Act.

The EU AI Act 

The EU AI Act starts by recognising that not all AI use cases pose the same level of societal risk. Next, it distinguishes several risk classes:  

  • Unacceptable Risk (Prohibited): e.g. Individual predictive policing 
  • High-Risk: e.g. Credit scoring models 
  • Transparency Risk: e.g. Customer service chatbots 
  • Minimal Risk: e.g. Spam filters 

When looking at a specific use case, the Commission also recognised that not all participants in the underlying system’s value chain carry the same responsibilities.

Hence, under the AI Act, the obligations an organisation must comply with depend on both the risk classification of the AI use case and the organisation’s role in the underlying system’s value chain.
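To make the interplay between risk class and value-chain role concrete, the toy Python sketch below maps (risk class, role) pairs to a few example obligations. The class names and obligation lists are illustrative assumptions for this article, not the Act’s legal text:

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high-risk"
    TRANSPARENCY = "transparency risk"
    MINIMAL = "minimal risk"

class Role(Enum):
    PROVIDER = "provider"   # develops and places the system on the market
    DEPLOYER = "deployer"   # uses the system in its own processes

# Hypothetical, heavily simplified mapping; not legal advice.
OBLIGATIONS = {
    (RiskClass.HIGH, Role.PROVIDER): [
        "risk management system", "technical documentation", "conformity assessment",
    ],
    (RiskClass.HIGH, Role.DEPLOYER): [
        "human oversight", "input data controls", "usage monitoring",
    ],
    (RiskClass.TRANSPARENCY, Role.PROVIDER): ["disclose AI interaction"],
    (RiskClass.TRANSPARENCY, Role.DEPLOYER): ["inform affected persons"],
}

def obligations_for(risk: RiskClass, role: Role) -> list[str]:
    """Look up example obligations for a (risk class, role) pair."""
    if risk is RiskClass.PROHIBITED:
        raise ValueError("use case is prohibited under the AI Act")
    return OBLIGATIONS.get((risk, role), [])
```

Note how the same high-risk system triggers different duties for its provider than for an organisation merely deploying it.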

Roll-out 

Building on the recognition that not all AI use cases pose the same level of risk, the Commission has opted for a staged, risk-based rollout of the regulation, giving businesses and public bodies ample time to prepare.

In 2025, the obligations regarding prohibited use cases and general-purpose models came into force. In August 2026, the remaining parts of the regulation will follow, excluding only one paragraph that extends the scope of high-risk systems. That final paragraph will apply from August 2027.

What it means for your organisation 

All of this raises the question: “What does the AI Act mean for my organisation?” In short, regardless of the size or nature of your organisation, the following three steps will allow you to assess your exposure and ensure compliance.

  1. Build an overview: The first step towards assessing compliance exposure is to list all AI use cases structurally integrated into your organisation’s processes. 
  2. Classify use cases: The second step is to classify each identified use case under the AI Act taxonomy, taking into account your organisation’s role in the underlying system’s value chain.
  3. Provide AI literacy: The AI Act requires all organisations that use AI to provide training and awareness to their personnel, so that they are able to make informed decisions based on AI outputs and understand the risks and limitations. 
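Steps 1 and 2 above amount to building and classifying an AI use-case register. Below is a minimal sketch of what such a register could look like in Python; all entries, field names, and classifications are hypothetical examples, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str          # what the AI does
    process: str       # business process it is embedded in
    risk_class: str    # classification under the AI Act taxonomy
    role: str          # organisation's role in the value chain

# Step 1: inventory of AI structurally embedded in your processes.
register = [
    UseCase("e-mail drafting assistant", "internal communication", "minimal", "deployer"),
    UseCase("credit scoring model", "lending decisions", "high-risk", "provider"),
    UseCase("customer service chatbot", "customer support", "transparency", "deployer"),
]

# Step 2: flag the use cases that carry the heaviest obligations.
high_risk = [u.name for u in register if u.risk_class == "high-risk"]
print(high_risk)
```

Even a simple register like this makes it immediately visible where compliance effort should be focused first.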

Conclusion 

To wrap up this brief introduction to the AI Act, it is fair to say that AI is here to stay, and so is its regulation. Organisations that succeed in this new era will be those that build solid AI governance practices from the start. By aligning with the AI Act today, you are building the foundation of tomorrow’s success. 

 

➡️ Want to go further? Join our colleague Kevin during the webinar organised with Yields on 20/01 at 11:00 CET and walk away with clear steps to start building AI governance in your organisation.  
