New risks in the AI workplace
Published on 11/11/25
This article first appeared in Insurance Business

The rapid acceleration in the use of AI tools is already revolutionising almost every aspect of the modern workplace, but it’s not without its pitfalls. Matt Jefferies, Corporate New Business Senior Manager at ARAG, takes a look at the risks for employers and what businesses can do to avert them.
Most of us have experienced a ‘wow’ moment when using AI, in which we’ve been simultaneously stunned by the speed with which a complex task is completed and imagined the implications it has for the future of work and everyday life.
It isn’t even three years since OpenAI made the first, early demo version of ChatGPT publicly available, but the AI chatbot and its numerous competitors are already embedded in many everyday office tasks.
Using chatbots to summarise a long report, draft a letter or pull together a presentation is just the tip of the AI iceberg. Machine learning has quickly become so ingrained in business processes, from recruitment to workforce management, that it’s almost impossible to tell whether the software, platform or partner we’re using relies on AI.
While the technology is already bringing enormous benefits to workplaces around the world, it also carries risks for businesses that can be difficult to identify and assess.
Discrimination
Most employers work hard to avoid discrimination in recruitment, promotion, pay, or allocation of work. However, AI recruitment tools used for candidate sourcing, screening, engagement and even interviewing can introduce bias that could lay a business open to allegations of discriminatory behaviour or even legal action.
Machine learning relies on vast repositories of data to train itself to make ‘intelligent’ decisions. If there is bias in the data, then it can be replicated in the decisions made. In 2018, one of the world’s largest employers was widely reported to have scrapped a recruitment application screening tool that showed bias against female applicants.
Humans are equally fallible, though, and could inadvertently discriminate against older, disabled or digitally excluded candidates or employees, who may have had less exposure to such tools.
It’s not just in the workplace that businesses face AI discrimination risks. The Equality Act 2010 protects customers from discrimination when buying or using goods or services. The increasing use of AI-powered tools in sales and customer service, whether over the phone, via screens or in person, could easily disadvantage disabled, neurodiverse or digitally excluded customers.
Data Protection
Another AI risk that could apply to customers as much as employees is the potential for breaching data protection legislation.
Machine learning has a rapacious appetite for data that is typically stored and processed in vast, remote data centres that can be anywhere in the world and relocated at the flick of a switch. It’s easy to see how this could be at odds with legal requirements for personal data to be handled securely, accurately, transparently and for no longer than is necessary.
Any organisation using AI-powered tools with personal data should conduct regular data protection impact assessments to ensure they are staying the right side of the law.
Confidentiality, Copyright and Accuracy
Machine learning tools need our data every bit as much as we need their utility. How conscious are your staff that the report they’ve uploaded to generate that flashy AI-powered presentation doesn’t contain confidential or commercially sensitive data?
Equally, in the race to seize competitive advantage, some technology providers have been highly cavalier with others’ intellectual property. Copying the work of an author, artist, coder, musician or another company could prove embarrassing and potentially illegal.
Just as we have been stunned by the power of AI tools, most of us are still regularly surprised by the basic errors and omissions they seem to produce. Whether the product of unreliable data, misleading prompts or some unseen flaw in an algorithm, relying heavily on AI tools without sufficient human oversight could be costly or even catastrophic.
Where does responsibility lie if an employee’s use of AI tools, which a business has enabled and encouraged, results in damage to the company’s reputation or bottom line?
Frontier caution
Employers need to monitor the tools that are used across the business, whether they involve personal data or not, to ensure that AI is deployed in a controlled and legal fashion.
Staff using any AI-powered software or platforms should be consulted and properly trained, and thorough due diligence conducted on any partners or tools deployed. Data protection impact assessments are important, but privacy and data protection policies may need updating to ensure the use of AI is suitably transparent to employees and customers alike.
From the printing press to the blockchain, businesses have harnessed new technologies to create huge commercial advantage. Artificial intelligence can’t be put back in its box, but it needs to be kept on a tight leash.
Disclaimer - all information in this article was correct at time of publishing.