The UK needs new laws to protect workers whose employers use artificial intelligence tools to hire, fire or manage them, the Trades Union Congress said on Thursday, warning that Britain was “losing the race” to regulate AI at work.
The TUC, the umbrella organisation for 48 trade unions with about 5.5mn members, said its “ready to go” draft bill offered a pragmatic way to “make existing UK employment law fit for the AI era”, even as the EU pressed ahead with more ambitious, overarching rules on the use of AI.
“AI is already making life-changing calls in the workplace . . . We urgently need to put new guardrails in place to protect workers,” said Kate Bell, the TUC’s assistant general secretary, adding: “The UK can’t afford to drag its feet and become an international outlier.”
The TUC’s proposals would require employers to consult workers ahead of introducing any AI systems designed to make “high-risk” decisions and explain how they would be used. There would be a process for the vetting of tools bought off the shelf from outside suppliers and for regular reviews.
They would also create a legal right for job seekers and workers to seek a human audit of decisions made by AI, and put in place protections against unfair dismissal by those systems.
A separate provision would ban the use of emotion recognition technology — described by the TUC as “pseudo-science”.
The proposals also include a new right for workers to “disconnect” from email outside their contracted hours — a provision that would mirror measures recently put into law in other countries, including France and Ireland.
The TUC’s intervention comes as UK regulators grow increasingly concerned about the potential for harm from the rapid development of the models underpinning AI products such as OpenAI’s ChatGPT.
The government has until now been reluctant to regulate the development and rollout of AI models, for fear of stifling innovation — instead asking regulators to use their existing powers. But ministers are now considering putting limits on the most powerful “general purpose” AI models.
The TUC’s proposals are more narrowly aimed at the deployment of AI systems in the workplace. Many large employers already rely heavily on automated tools to help them make hiring and firing decisions; allocate and monitor work; and assess and reward workers’ performance.
Industry groups argue that these tools cut the cost of recruitment and can also open up career pathways and help employees work more efficiently.
Neil Ross, associate director for policy at TechUK, said the trade association agreed with the TUC on the need to regulate the highest-risk applications of AI at work, but was concerned it had “not yet found the right balance for what is very commonplace technology now becoming augmented with AI”.
The TUC defined “high-risk” as any decisions taken or supported by AI that could have legal effects on workers or job seekers.
The examples it gave included a law firm where trainees might have to pass a test conducted and scored by an AI system to qualify; and a large bank using an AI system to generate performance rankings and set bonuses.
Systems used to track workers’ speed and safety in warehouses — and review performance and conduct accordingly — could also be covered.
But business groups worry that defining “high-risk” applications too broadly would mean almost any HR or management software could be covered, including much more basic technologies. This could leave small businesses in particular at risk of unintentionally breaking the law.
The government said in a statement that it was “working with businesses and regulators on the safe and responsible adoption of AI in the workplace.”
It added it had put in place a £10mn support package to help regulators “deliver the skills, expertise, and institutions we need to ensure any legislation when introduced will be at its most effective”.