When machines decide, who is responsible?
As algorithms take on decision-making, accountability blurs, creating new risks for businesses and customers.
Artificial intelligence is transforming the way we work, but it may also be reshaping how we think about right and wrong.
Dr Hongmin (Jess) Yan from UNSW Canberra is concerned that AI is changing workplace ethics: by blurring who is accountable, it could make it easier for employees to avoid feeling responsible for unethical decisions.
“When an algorithm recommends a decision or automates a choice, employees can rationalise unethical behaviour in new ways by saying, ‘the AI suggested it,’” Dr Yan explained.
“That diffusion of responsibility, where it’s unclear whether the person, the algorithm, the organisation, or the developers are accountable, creates a psychological shield that makes it easier to engage in unethical behaviour without the same moral friction.
“The technology creates distance between people and the ethical implications of their choices. Unless we’re very intentional about maintaining human accountability and building ethical checks into AI systems from the start, we risk amplifying Unethical Pro-Organisational Behaviour in ways we’re only beginning to understand.”
Dr Yan’s research into Unethical Pro-Organisational Behaviour (UPB) predates the rise of AI. It examines how employees cross ethical lines at work when doing so helps the organisation succeed, as happened in a high-profile case that shocked the world.
“The Theranos scandal was a pivotal moment for me,” Dr Yan said.
“Employees at Theranos genuinely believed they were revolutionising healthcare, yet their actions ultimately put patients at risk. It raised a critical question: how do well-meaning people end up crossing ethical lines when they’re trying to help their organisation?
“What struck me was that this phenomenon extends far beyond high-profile corporate cases. I began noticing similar patterns in everyday service encounters, like a waiter who oversells a mediocre dish or a retail worker who exaggerates product benefits.
“These employees aren’t acting out of personal greed; they’re trying to help their organisation succeed.”
To understand this phenomenon more deeply, Dr Yan focused her research on frontline service workers, those who interact with customers face-to-face. Unlike in corporate settings, where unethical behaviour can feel abstract, customer-facing employees interact directly with the people their decisions affect.
“You’d think this immediate proximity would make unethical behaviour less likely, that the human element would activate stronger moral restraints. Yet our research shows UPB still happens with striking frequency in these settings,” she said.
“It reveals just how powerful organisational loyalty can be, strong enough to override the natural empathy and moral discomfort that comes from face-to-face interaction.”
Dr Yan warns that some organisations may unintentionally encourage UPB through aggressive performance targets.
“They promote values like integrity and customer service but often overlook how these ideals can clash with competitive pressures, creating impossible positions for staff,” Dr Yan stated.
The stakes are rising as AI enters the equation. Dr Yan recommends organisations cultivate genuine psychological connections between employees and stakeholders.
Her research shows that such connections act as a powerful ethical safeguard. But as algorithms increasingly mediate workplace decisions, maintaining human accountability becomes even more critical.
“Unless we’re intentional about building ethical checks into AI systems from the start, we risk amplifying UPB in ways we’re only beginning to understand,” Dr Yan said.