If you’re a Business Owner, Director, or Manager in the UK, the words “new AI regulation” might not fill you with joy.
We get it: AI already feels complicated enough without new legislation on top.
But here’s the thing: whether you use AI tools to speed up admin or you’re building AI-powered products for clients, the new EU AI Act could still apply to you. And it’s better to be aware now, rather than caught out later.
Let’s unpack what’s going on in plain English, and what you might need to do next.
What is the EU AI Act?
The EU AI Act is the first big attempt to make sure artificial intelligence is safe, fair, and used responsibly. Think of it like GDPR – but for AI.
It splits AI tools into four categories:
- Banned – Things that shouldn’t be used at all (e.g. social scoring).
- High-risk – Things that could cause harm if they go wrong (e.g. CV-screening tools or facial recognition).
- Limited risk – Tools, such as chatbots, where you just need to be transparent.
- Minimal risk – Everyday AI, such as spam filters, that needs no real oversight.
The more “risky” the AI system, the more rules you have to follow.
But we’re in the UK. Does it even apply to us?
Here’s the important bit: yes, it might.
Even though the UK isn’t in the EU anymore, these rules can still apply if you:
- Sell AI products or services to EU customers
- Use AI systems in the EU
- Work with EU-based partners or suppliers who rely on your AI tools
So if you’re a UK company using AI in finance, healthcare, recruitment, or anything involving customer data, it’s worth paying attention.
How is the UK approaching AI regulation?
The UK is taking a more flexible path. Instead of one big law like the EU’s, we’re letting existing regulators (e.g. the ICO or FCA) guide how AI is used in their sectors.
The UK government has set out five core principles for using AI:
- Make it safe and secure
- Be fair
- Be transparent
- Be accountable
- Allow people to challenge AI decisions
There’s no single “UK AI Act” yet, and the focus is on encouraging innovation rather than strict enforcement – especially for SMEs.
Quick Example
If you’re a Brighton-based software company using AI to score CVs and sell to clients in France, you’ll likely fall under the “high-risk” category in the EU. That means you’ll need to show how your system avoids bias, explain decisions, and document your processes.
If you only sell in the UK? You don’t have to do any of that yet – but it’s still smart to follow those good practices anyway.
What Should UK SMEs Do?
Here’s our no-nonsense list to help you stay ahead:

1. Know if the EU rules apply to you
If you trade or operate in the EU, even just online, there’s a good chance they do.
2. Understand how your AI is used
Is your AI helping people, making decisions, or analysing sensitive data? You may be in a higher risk category.
3. Keep good records
Write down what your AI does, how it works, and what data it uses. You might need this info later.
4. Be transparent with users
If a customer is interacting with AI, tell them. Simple labels go a long way.
5. Keep an eye on UK changes
The UK may not have a full AI law yet, but regulators are watching closely. Change is coming.
Regulation isn’t about stopping you from using AI; it’s about making sure it’s used fairly, safely and responsibly. At ERGOS, we believe AI can be a huge boost for small and mid-sized businesses. But as with any new tool, it pays to know the rules.
If you’re unsure where to start or whether these rules apply to your business, we’re happy to walk you through it.
Let’s Make It Simple
Need help understanding what this means for your business?
Email us to discuss further: Contact – ERGOS Technology Partners

