Key Takeaways
Summary: AI regulation is fragmented and fast-moving. Employers must start building their own robust AI compliance frameworks or risk legal exposure and reputational damage down the line.
- Without clear and consistent AI laws, companies are navigating a maze of conflicting state rules. This inconsistency means businesses risk breaking laws they didn’t even know existed, especially if they operate across multiple regions.
- High-risk AI systems like those influencing hiring, firing, or promotions are increasingly regulated with laws mandating transparency, bias audits, and employee rights.
- Employers must develop formal AI compliance frameworks with clear governance structures, including responsible principles, documentation, and regular monitoring.
- Ethics frameworks help guide how you design, deploy, and manage AI. Embedding fairness, transparency, and human oversight isn’t just morally sound; it also helps prevent lawsuits and regulatory penalties.
- Beyond compliance, ethical AI provides a competitive advantage that builds trust, strengthens culture, and mitigates long-term risk.
Why should you care? Because AI already shapes real decisions about people’s careers—and without proper oversight, it can encode bias and cause harm. In a world racing toward automation, ethical AI isn’t just a technical issue; it’s a moral and strategic one.
Introduction
AI has become common terminology in the corporate world today. It’s not something new, but it still feels like a big black box that no one fully understands yet everyone wants to use. In most organizations that I consult for, I see AI being used to filter out people’s CVs, evaluate employee performance, allocate workloads, and in some cases even manage employee wellbeing. But as AI becomes more powerful and more deeply ingrained in our work lives, so do the ethical and legal risks.
The problem, however, is that there are no clear and consistent legal frameworks, policies, or guidelines to regulate AI at work. The EU was the first large-scale political institution to draft and sign a comprehensive AI act into law. Most other countries, however, are doing “piece and patch” work. Nowhere is this more apparent than in the US, and those doing business in or for companies in the US should take note of the implications.
In 2025 alone, companies in the US are navigating a fragmented, fast-evolving landscape where AI regulation is inconsistent, high-stakes, and, in some places, still very much undefined. The federal government under President Trump has signaled a rollback of federal-level regulatory oversight in order to prioritize innovation over… well, regulation and restrictions. Yet this vacuum hasn’t left the corporate world free… in fact, it has made doing business in the US a lot more difficult!
A Patchwork of Compliance
With the Biden-era executive order on “Safe, Secure, and Trustworthy AI” revoked a few weeks ago and a new mandate, “Removing Barriers to American Leadership in Artificial Intelligence,” driving federal policy, there has been a clear shift towards deregulation and chaos. But that doesn’t mean companies in the US can relax. Nor does it mean those who do business in the US can turn a blind eye to it…
In fact, I see quite the opposite happening. Individual US states have started to step in to fill the void the federal government has created, resulting in a patchwork of state-level AI laws… each with its own interpretation of fairness, risk, and responsibility, and each contradicting those of other states.
In 2024 alone, lawmakers introduced almost 700 different AI-related bills across 45 states. And over 30 states have launched their own independent task forces to study the social, financial, and corporate impact of AI.
Taken together, the message is clear: regulation is coming, just not in one neat package, and you are going to have to tread fine lines to ensure you don’t break laws if you work in, or for companies in, different US states.
The New Frontlines of AI Accountability
Let’s look at what’s happening in Colorado, where my friend Bryan Dik lives. The Colorado Artificial Intelligence Act, set to take effect in early 2026, is aimed at regulating what it calls “consequential decisions” made with AI; in other words, decisions that affect hiring, promotions, or disciplinary action, for example. Under this new law, employers must take “reasonable care” to prevent algorithmic discrimination, conduct annual risk assessments, and notify employees when AI is involved in decision-making. But what does “reasonable care” mean if you are not involved in the development and training of these models and are merely a user?
Illinois is also leading the charge, in a more granular way. Under its new bill (HB 3773), effective from January 2026, companies are obligated to inform employees when AI is used in employment decisions and are banned from using geographic data as a proxy for protected attributes like race or socioeconomic status. New York City brought Local Law 144 into effect in 2023, requiring employers using automated hiring tools to conduct independent bias audits every year and publish the results publicly.
These aren’t fringe laws, though. They reflect a growing consensus: if AI is influencing people’s careers, lives, or paychecks, it must be transparent, accountable, and fair.
Understanding “High-Risk” AI
But not all AI tools, platforms, and models are created equal. Some are innocuous, like tools that help you schedule appointments or chatbots that answer customer-service questions. Others, like hiring algorithms or performance assessment systems, hold real power over people’s lives. These are labeled High-Risk AI Systems (HRAIS). They can affect whether someone gets a job, a raise, or a second chance, and when used carelessly, they can perpetuate bias or institutionalize discrimination.
It’s therefore no surprise that ten other states are considering bills similar to Colorado’s and New York’s. So if your organization uses AI to make employment decisions, or to manage people… you’re no longer just a tech adopter. You’re a regulated entity.
Developing an AI Compliance Framework
Let’s be blunt: if your company doesn’t already have an AI compliance framework in place, it’s behind, and you are probably already open to litigation!
A recent Deloitte report found that over half of global organizations don’t have a formal AI use policy or strategy in place. That’s not just a policy gap… it’s a legal liability and a reputational risk waiting to happen.
So it’s important to start putting an AI compliance framework in place, and to ensure that it not only meets your needs but covers all your bases. What would such a framework look like?

The frameworks we build for our clients tend to focus on five areas:
- Establish Clear Governance. Governance starts with assigning responsibility and accountability. Ask questions like: Who owns AI risk in your company? Is it HR? Is it Legal? Is it ICT? Someone needs to be accountable, and formal policies and procedures around this need to be in place.
- Define Responsible Use Principles. Next, define what responsible AI use means within your context. These principles need to cover fairness, transparency, and accountability, and above all else, ensure there is human oversight throughout the AI use value chain. Make sure these principles are more than just corporate values or posters on a wall; they need to be built into the entire organization’s workflows.
- Keep Thorough Documentation. Track where your AI models come from, what data they’re trained on, and how they’re used. Maintain “model cards” like you would employee files.
- Conduct Impact Assessments. Regularly audit your high-risk tools. Run stress tests before launching AI in real-world environments and make sure their outcomes are fair and free of bias.
- Ongoing Monitoring. AI is never “set it and forget it.” Monitor systems post-deployment. Collect feedback from employees and candidates and adjust your workflows and processes accordingly.
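To make the "impact assessment" step above concrete, here is a minimal sketch, in Python with purely hypothetical group names and numbers, of one check a bias audit might run: the EEOC's well-known "four-fifths rule," under which a group's selection rate should be at least 80% of the most-favoured group's rate. This is an illustration of the idea, not a substitute for a full independent audit.

```python
# Minimal adverse-impact check based on the EEOC "four-fifths rule".
# Group labels and counts below are hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()
            if r / best < threshold}

# Hypothetical screening results from an AI hiring tool:
# 48 of 100 candidates advanced in one group, 30 of 100 in another.
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(adverse_impact(results))  # group_b's rate is ~62% of group_a's -> flagged
```

Real audits go far beyond this single ratio (statistical significance, intersectional groups, proxy variables), but even a check this simple is more than many organizations currently run before deployment.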
Building an Ethical AI Framework
AI may drive efficiency and help you improve the quality of your work, but… and I say this again, BUT… it doesn’t replace human dignity. That’s why regulations increasingly emphasize worker rights, and your policies and procedures should reflect that. The five principles to consider are:

- User Privacy. All AI tools collect and process personal data in some shape or form. Are you protecting it?
- Informed Consent. Do employees know when AI is used and for what purpose? You have to get their explicit permission before you can process their data with AI.
- Bias Mitigation. Are your AI tools reinforcing internal or external biases? Are they built around outdated patterns and practices? Are you 100% sure they are bias-free?
- Ensure Accountability. Who is responsible for and will answer questions when things go wrong? Is this information publicly available?
- Appeal Process. Can employees challenge AI-driven decisions? Is there a process in place to appeal and review decisions made via or with AI?
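One way to make the five principles above auditable rather than aspirational is to keep a structured record of every AI-assisted decision. Here is a minimal sketch in Python; the field names are my own illustration, not drawn from any statute or standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for one AI-assisted employment decision.
# Field names are illustrative only, not taken from any specific law.
@dataclass
class AIDecisionRecord:
    subject_id: str          # pseudonymised employee/candidate ID (privacy)
    tool_name: str           # which AI system produced the recommendation
    purpose: str             # what the tool was used for (informed consent)
    consent_obtained: bool   # is explicit consent on record?
    human_reviewer: str      # accountable person who signed off
    outcome: str             # the decision as communicated
    appeal_open: bool = True # can the subject still challenge it?
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def audit_gaps(self):
        """Return the principles this record fails to satisfy."""
        gaps = []
        if not self.consent_obtained:
            gaps.append("informed consent missing")
        if not self.human_reviewer:
            gaps.append("no accountable human reviewer")
        return gaps

rec = AIDecisionRecord("cand-0042", "cv-screener", "CV screening",
                       consent_obtained=False, human_reviewer="",
                       outcome="rejected")
print(rec.audit_gaps())  # both consent and accountability gaps flagged
```

The exact fields will differ per organization; the point is that consent, accountability, and appealability become data you can query, not promises you hope were kept.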
If your company can’t answer these questions clearly, you are not ready to use AI responsibly in your organization, and you are at very high risk of landing in a lot of hot water.
Why Ethical AI Is a Competitive Edge
Just like with any corporate branding activity, the way you manage AI will eventually affect how people view your organization. In my circles, there is a growing belief (especially among younger workers) that how a company uses AI is a clear reflection of its values and how it treats its people.
Companies that adopt ethical, transparent AI frameworks aren’t just avoiding lawsuits; they are building a corporate brand that signifies trust, not just with their employees but also with the consumers of their products and services. Those who advocate for the fair and transparent use of AI strengthen their corporate culture and show leadership in a world where responsible innovation is the only kind that lasts.
If you need help designing your Ethical AI Compliance Framework, let’s talk. We’re already helping organizations build them from the ground up.