Employment Hero CEO Ben Thompson warns against “drowning AI in red tape”

Australian business leaders developing AI tools for the workplace have warned against regulatory overreach, after a major parliamentary report said “high-risk” AI-powered systems should face mandatory guardrails.

The House of Representatives Standing Committee on Employment, Education and Training tabled its Future of Work report on Tuesday, calling for new regulations in the fast-growing AI sector.

It found AI systems used by employers in key areas — recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer, or termination — could harm employees if those systems produce incorrect advice.

To minimise the chance of harm, the committee said businesses creating and deploying those “high-risk” systems should face new transparency and testing guardrails, spanning development to real-world implementation.

The committee also argued employers should be liable under the Fair Work Act for any harm caused to staff as a result of AI-driven workplace decisions.

Updates to the Privacy Act should also address the risks posed by AI to workers’ privacy, it added.

If enacted by the federal government, the guardrail recommendations would likely affect businesses like Employment Hero, which is deploying AI in its SmartMatch recruitment tool and employer-facing chatbots.

In a statement provided to SmartCompany, Employment Hero CEO Ben Thompson said labelling employment-related AI as risky “feels like regulatory overreach”.

“Yes, we need to ensure these systems are fair, neutral, and performance-based, but many AI tools are just improving efficiency—not creating existential risks,” said Thompson.

“Sensible” guardrails in the fast-moving industry make sense, he added, “but drowning AI in red tape will only stifle innovation”.

The real challenge, Thompson argued, is striking the right balance: enough oversight to prevent harm, but enough freedom to let AI drive progress and keep Australian businesses competitive.

While the report focuses on the risks of AI in the workplace, it also suggests AI tools can help small business employers stay on top of their regulatory requirements.

Dominic Woolrych, co-founder of legal-tech platform Lawpath, said AI tools designed for employers can increase, not decrease, compliance with workplace law.

“While we fully support appropriate oversight of AI systems, it’s crucial to recognise that properly implemented AI-powered legal tools can actually enhance protection for both employers and employees,” he said.

Lawpath offers AI tools that allow users to draft documents and employment contracts, one of the key areas of concern highlighted in the committee report.

But Woolrych argued that affordable AI tools can help small businesses that cannot retain a lawyer each time they draft a new employment contract.

“AI-assisted legal services, when combined with lawyer oversight like our hybrid model, can dramatically reduce these barriers while maintaining high standards of accuracy and compliance,” he said.

Combining those AI-powered tools with human oversight is paramount, Woolrych continued.

Beyond its support for new AI guardrails, the committee recognised that regulations alone are not sufficient to prevent harm in a rapidly evolving sector.

“Supporting measures will be needed such as raising awareness of employers’ obligations, as well as reviewing the resourcing requirements of the Fair Work Commission and Fair Work Ombudsman to address these concerns,” committee chair and Labor MP Lisa Chesters told the Lower House on Tuesday.
