A practical guide to compliance with Ontario’s Employment Standards Act amendments for AI in hiring

Brought to you by Robb Miller

The landscape of AI governance in Canada is evolving rapidly, and Ontario is leading the charge with targeted legislation that will directly impact how businesses conduct hiring. As we prepare for our November 25th webinar on AI Governance in Canada, where we’ll explore the broader implications of AI regulation including copyright, IP, and privacy issues, it’s crucial to understand one of the most immediate compliance requirements facing Ontario businesses.

Ontario’s Bill 149, the Working for Workers Four Act, 2024, received Royal Assent on March 21, 2024 and addresses, among other things, AI use in employment decisions. Bill 149 created a new Part III.1 in the Employment Standards Act, 2000 (ESA), effective January 1, 2026. Part III.1 contains five new rules governing how employers advertise jobs and hire new employees.

This affects employers with 25 or more employees who use AI in their hiring processes. As we approach the January 1, 2026 effective date, businesses across Ontario (and with operations in Ontario) need to understand not just the letter of the law, but its broader implications for HR practices, bias prevention, and compliance strategies.

What Bill 149 Actually Requires

The Core Disclosure Requirement

The disclosure requirement is straightforward but comprehensive. If you use AI to screen, assess, or select applicants, you must include specific language in your job postings. The required disclosure is: “This employer uses artificial intelligence technology to assist in screening, assessing, or selecting applicants for this position.”

This isn’t optional language you can modify; it’s the exact wording required by the Employment Standards Act amendment. The law applies to employers operating in Ontario with 25 or more employees and requires disclosure whenever AI is used to screen, assess, or select applicants.
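For teams that manage job postings programmatically, a simple text check can flag postings that are missing the statement. The sketch below is illustrative only; the function name and workflow are assumptions, and the exact compliance wording should always be confirmed against the regulation itself:

```python
# Illustrative sketch: flag job postings missing the AI disclosure statement.
# The disclosure text mirrors the wording quoted in this article; confirm
# the exact required language against the regulation before relying on it.
REQUIRED_DISCLOSURE = (
    "This employer uses artificial intelligence technology to assist in "
    "screening, assessing, or selecting applicants for this position."
)

def posting_is_compliant(posting_text: str, uses_ai: bool) -> bool:
    """Return True if no change is needed: either AI is not used in hiring
    for this position, or the disclosure already appears in the posting."""
    if not uses_ai:
        return True
    return REQUIRED_DISCLOSURE in posting_text

# Example usage
draft = "We are hiring a data analyst. " + REQUIRED_DISCLOSURE
print(posting_is_compliant(draft, uses_ai=True))             # True
print(posting_is_compliant("We are hiring.", uses_ai=True))  # False
```

A check like this could run as part of a posting-approval workflow, so a non-compliant draft never reaches a public job board.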

Who’s Covered?

As of January 1, 2026, these requirements apply to employers operating in Ontario who employ twenty-five (25) or more employees on the day the publicly advertised job posting is posted, if AI is used to screen, assess, or select applicants.

This means:

  • Ontario-based employers (or employers operating in Ontario)
  • With 25+ employees total (company-wide, not just Ontario employees)
  • On the day the job posting is made

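The coverage test above can be reduced to three questions. The sketch below is a hypothetical illustration of that logic (the parameter names are mine, not the legislation’s), assuming headcount is the company-wide total on the posting date:

```python
# Illustrative sketch of the coverage test described above.
# Assumption: headcount is the company-wide total (not Ontario-only)
# measured on the day the publicly advertised posting goes up.
def disclosure_required(operates_in_ontario: bool,
                        employees_on_posting_day: int,
                        uses_ai_in_hiring: bool) -> bool:
    return (operates_in_ontario
            and employees_on_posting_day >= 25
            and uses_ai_in_hiring)

print(disclosure_required(True, 30, True))   # True
print(disclosure_required(True, 24, True))   # False: under the 25-employee threshold
print(disclosure_required(False, 500, True)) # False: no Ontario operations
```

Because headcount can fluctuate, the measurement date matters: an employer with 24 employees on one posting date and 26 on the next could be out of scope for the first posting and in scope for the second.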

The significance of Ontario’s AI employment legislation is reflected in extensive coverage from Canada’s leading law firms. Norton Rose Fulbright, Osler, Baker McKenzie, Aird Berlis, Miller Thomson, and Cassels have all published detailed analyses, with Baker McKenzie noting this has “created a minefield of increased legal liability for employers” and Osler warning that the broad AI definition “leaves room for inconsistent interpretation” and could lead to “inadvertent failure to disclose AI usage.”

Third-Party HR Platforms: You’re Likely Already Using AI

The Greenhouse, Ashby, and Workday Reality

Here’s where many businesses will be surprised: If you’re using platforms like Ashby, Greenhouse, Workday, Lever, or LinkedIn’s recruiting tools, and these platforms have AI features enabled, you likely need to include this disclosure. 

The legislation doesn’t distinguish between AI you develop in-house and AI embedded in third-party platforms such as Ashby, Greenhouse, Workday, Lever, or LinkedIn: if AI is being used in your hiring process, disclosure is required.

What Counts as AI in Hiring?

The scope includes any system that uses AI to:

  • Screen resumes automatically
  • Rank or score candidates
  • Parse applications for relevant information
  • Schedule interviews based on algorithmic matching
  • Assess candidate responses in any automated way

In HR, platforms like Ashby and Greenhouse have AI features that might be enabled by default. Many businesses don’t realize these features are active until they conduct an audit.

The Broader Implications: Beyond Simple Disclosure

Human Rights and Bias Investigations

While Bill 149 focuses on disclosure, algorithmic bias can trigger human rights issues, and Ontario’s human rights framework already prohibits discrimination in employment.

Ontario Human Rights Code (the “Code”): The 17 Protected Grounds

Discrimination based on 17 different personal attributes – called grounds – is against the law under the Code. The grounds are: citizenship, race, place of origin, ethnic origin, colour, ancestry, disability, age, creed, sex/pregnancy, family status, marital status, sexual orientation, gender identity, gender expression, receipt of public assistance (in housing) and record of offences (in employment).

Employment under the Code includes “job ads, application forms, job interviews, work assignments, work environment, training, promotions, discipline, terminations, volunteer duties, etc.”

When AI systems in hiring produce discriminatory outcomes against any of these protected grounds, employers face potential human rights complaints regardless of whether the discrimination was intentional. Importantly, intent is not required to establish a breach of the Code. Discriminatory outcomes resulting from neutral policies or practices, including AI, can constitute “adverse effect discrimination” if they disproportionately impact a protected group.

AI-Specific Human Rights Risks

In Ontario, employers can be held legally responsible for discriminatory hiring practices resulting from the use of AI, even if a third-party vendor developed the algorithm. Under the Code, an employer cannot delegate or outsource human rights obligations.

The disclosure requirement creates a paper trail that could become relevant in discrimination investigations. When candidates know AI was used in hiring decisions, they have a clearer basis for challenging those decisions if they believe bias occurred against any of the 17 protected grounds.

The Human Rights Tribunal of Ontario (HRTO) considers whether the employer took reasonable steps to prevent discriminatory outcomes. Ignorance of how an algorithm works, or blind reliance on a vendor’s claims of fairness, is unlikely to provide a successful defence.

The “Shadow AI” Problem

The reality is that your employees and contractors are using AI for routine tasks across every business function. This “shadow AI” adoption is happening organically, often without formal approval or oversight. 

Even if your official HR platform doesn’t use AI, individual recruiters might be using AI tools to:

  • Draft job descriptions
  • Screen resumes
  • Prepare interview questions
  • Analyze candidate responses

This unofficial use still creates legal risk and may require disclosure under Bill 149’s broad language.

Enforcement and Penalties: The Financial Reality

Employment Standards Act Violations

Non-compliance with Bill 149’s disclosure requirements can trigger Employment Standards Act enforcement action. The maximum fine for individuals convicted of violating the ESA or failing to comply with an order has been increased from $50,000 to $100,000. For corporations, penalties can be significantly higher, and enforcement action can include an order to pay, a compliance order, a ticket, a notice of contravention with a monetary penalty, an order to reinstate and/or compensate, or prosecution.

Quebec Law 25 Violations

Quebec Law 25 also carries substantial penalties. Violations can result in administrative monetary penalties of up to $10 million or 2% of worldwide turnover for the preceding fiscal year, whichever is higher.

Human Rights Code Violations

The financial exposure under human rights legislation can be far more severe. Human rights complaints can result in orders for financial compensation to affected individuals, mandatory policy changes, and ongoing monitoring requirements. Unlike Employment Standards Act violations, there are no caps on human rights damages, and awards can include compensation for injury to dignity, feelings, and self-respect, as well as lost income and benefits.


Practical Compliance Steps

Phase 1: Immediate Assessment (Now)

  1. Audit Your Current Tools – Start auditing your hiring processes now to understand where AI is being used, often in ways that aren’t immediately obvious to hiring managers.
  2. Contact Your Vendors – Reach out to Greenhouse, Ashby, Workday, or other HR platform providers to understand exactly which AI features are enabled in your account.
  3. Document Everything – Your AI governance framework needs to cover all AI systems used, developed, or procured by your company, including internal development projects, third-party AI services and software, AI-enabled business processes, and employee use of AI tools.
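One lightweight way to structure the output of the audit steps above is a per-tool inventory record. The sketch below is purely illustrative: the field names, the example entries, and the `needs_disclosure` rule are assumptions for demonstration, not anything defined by the legislation:

```python
# Illustrative inventory record for an AI-in-hiring audit.
# Field names and the example entry are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    tool: str                       # e.g. an ATS or screening platform
    vendor_confirmed: bool          # vendor has confirmed which AI features are on
    ai_features_enabled: list = field(default_factory=list)
    used_in_hiring: bool = False

    def needs_disclosure(self) -> bool:
        # Simplified rule: any enabled AI feature touching hiring
        # (screening, assessing, or selecting) triggers disclosure.
        return self.used_in_hiring and bool(self.ai_features_enabled)

record = AIToolRecord(tool="ExampleATS",
                      vendor_confirmed=True,
                      ai_features_enabled=["resume screening"],
                      used_in_hiring=True)
print(record.needs_disclosure())  # True
```

Keeping records like this per tool gives you the documentation trail that Phase 1 calls for, and makes it easy to see at a glance which postings need the disclosure statement.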

Phase 2: Policy Development

  1. Update Job Posting Templates – Prepare standardized disclosure language for all positions where AI might be used.
  2. Train Your HR Team – Establish annual AI ethics and compliance training, role-specific AI tool training, incident reporting procedures, and data protection obligations.
  3. Implement Bias Testing – Best practice is proactive bias testing, with traceability, to identify and address discriminatory outcomes before they become legal issues.

Phase 3: Implementation and Monitoring

  1. Deploy New Processes – Roll out updated job posting procedures and AI governance policies.
  2. Monitor Compliance – Conduct quarterly AI risk reviews and track regulatory updates. Escalation triggers include discriminatory outcomes, data breaches, and new regulations.

The Broader Context: Quebec Law 25 and Federal Developments

Ontario’s Bill 149 doesn’t exist in isolation. Quebec Law 25 has the broadest reach of any AI-specific regulation in Canada. If your business processes personal information of Quebec residents – and most businesses do – you’re subject to these requirements regardless of where your company is located or incorporated.

The automated decision-making provisions in Section 12.1 are particularly significant. When decisions are made “exclusively through automated processing,” you must disclose the personal data used, explain the decision factors, provide correction rights, and offer a human review channel.

Key Takeaways for Business Leaders

Start Now, Not Later

The January 1, 2026 effective date gives you time to prepare, but don’t wait. The complexity of modern HR technology means most businesses will need several months to fully understand their AI usage and implement compliant processes.

Think Beyond Compliance

AI governance isn’t a nice-to-have anymore – it’s business-critical. The regulatory landscape is evolving rapidly, and the business risks are real and immediate. 

Build for the Future

Remember that this is an ongoing process, not a one-time project. AI technology evolves rapidly, regulations change, and your business needs shift. Build review and update processes into your governance framework from the beginning. 

Join Us for a Deeper Dive

Mark your calendar for November 25th for our comprehensive webinar, “Navigating AI Governance in Canada.” We’ll explore the full spectrum of AI legal challenges facing Canadian businesses, including:

  • Copyright and IP protection strategies
  • Privacy compliance across multiple jurisdictions
  • Risk assessment frameworks
  • Vendor management best practices
  • Incident response planning

This session goes far beyond employment law to address the complete legal landscape of AI adoption in Canadian business.

Professional AI Risk Assessment for Ontario Companies

We have developed a systematic, effective approach to managing legal risk arising from AI: a $2,500 AI Risk Management Engagement specifically designed for Ontario businesses navigating Bill 149 and broader AI governance requirements.

Our comprehensive assessment includes:

  • Complete AI inventory and risk classification
  • Regulatory compliance gap analysis
  • Policy development and implementation roadmap
  • Vendor agreement review and recommendations
  • Employee training materials and protocols

Next Steps

Most importantly, don’t let perfect be the enemy of good. Start with basic protections and build sophistication over time. The biggest risk is doing nothing while waiting for the perfect solution.

Immediate Actions:

  1. Audit your current HR technology stack
  2. Contact your vendors about AI features
  3. Register for our November 25th webinar
  4. Consider our AI Risk Assessment package
  5. Begin developing AI governance policies

Our fractional general counsel model is designed to provide ongoing legal support without the overhead of a full-time hire. This is particularly valuable for AI governance, which requires continuous monitoring and strategic guidance.

For specific guidance on your AI governance challenges, whether you need help with policy development, vendor negotiations, compliance assessments, or our comprehensive AI Risk Assessment package, CEO Law is here to help you build a governance framework that works for your business.
