A new study has raised concerns that people who heavily rely on artificial intelligence (AI) in their work or studies are more inclined to lie and deceive others.
The research, conducted by the Max Planck Institute for Human Development in Berlin, found that users tend to release their "moral brakes" when delegating tasks to AI systems: rather than acting honestly themselves, they let the technology carry out duties on their behalf, even when doing so involves dishonest behaviour.
The German researchers said they were surprised by the "level of dishonesty" observed during the experiments, noting that "people were significantly more likely to cheat when they were allowed to use AI tools rather than performing the tasks themselves."
The study, carried out in collaboration with the University of Duisburg-Essen in Germany and the Toulouse School of Economics in France, concluded that AI systems often facilitate unethical behaviour. The findings showed that these systems "frequently comply" with dishonest instructions issued by their users, enabling deception at scale.
To illustrate, the researchers cited real-world examples, such as fuel stations using pricing algorithms that automatically adjust to competitors' rates, resulting in higher prices for consumers. In another case, a ride-hailing application encouraged drivers to change their locations, not to meet passenger demand, but to create an artificial vehicle shortage and drive up fares.
The study found that the AI models complied with fully unethical commands between 58% and 98% of the time, depending on the system tested, while human participants agreed to cheat in at most 40% of cases.
The Max Planck Institute warned that current safeguards in large language models are insufficient to deter unethical use. The researchers tested several protective strategies and concluded that anti-deception rules must be extremely precise to be effective.
Experts at the US-based AI company OpenAI have likewise noted that it may be impossible to entirely prevent AI systems from "hallucinating", or fabricating information. Other studies have similarly highlighted the difficulty of stopping AI from misleading users or presenting false output as fact.