Helena* says she can spend an extra hour or more each day completely rewriting a simple paragraph. "Em dashes are forbidden. Gerunds and figures of speech are also big red flags, but sometimes the problem is a comma or a word that I have to change," she says.
For over six months, she has been working as a copywriter at a content marketing company that submits all content produced by writers to ZeroGPT, which claims to be able to identify the use of artificial intelligence in texts. She notes that even when her writing is entirely original, the tool indicates that her work contains some AI.
During her first month at the company, she says she was embarrassed when her supervisors sent her a performance report indicating that her writing showed an AI presence of over 20%, above the company's acceptable threshold. "I thought that if my writing didn't start getting 0% on ZeroGPT, they would replace me," she says. "That's when I started lowering the quality of my writing."
This matters because:
- AI detection tools are not reliable;
- The lack of regulation leaves legal gray areas and room for abuse;
- Bill 2338/2023, which deals with the promotion and ethical, responsible use of AI, is under discussion in the Chamber of Deputies after being approved by the Senate.
The copywriter shared with Núcleo a handbook she received during company onboarding, which explicitly states that all texts are submitted to the detector.
Unlike Helena, who knew about the practice from the moment she was hired, Mirela* only discovered that her texts were being fed into an AI detector when she was confronted by her direct supervisor at the sports website where she had worked for about two years. The incident occurred almost a year after the ChatGPT boom.
"I had to scramble to rewrite everything," she recalls. "I had to double my production time for a simple text. When the initial surprise wore off, I was extremely offended to be accused of using AI."
According to Mirela, the supervisor did not specify which detection software was being used, but indicated that the acceptable threshold for AI presence was approximately 2.5%. "The texts we wrote were very standardized, with lots of statistics, and there wasn't room for creativity or different writing styles," she explained.
After this scare, Mirela says she rewrote website content multiple times for fear of being fired, especially because she was hired as a freelance contractor, without the job protections of a registered employee.
Today, she works in a different position at another company. Helena remains at the same workplace, and both women decided not to challenge the AI detection results, despite feeling wronged.
A story repeated across social media
Stories about the use of AI detectors, like those experienced by Helena and Mirela, are not hard to come by on social media.
Users described several methods for challenging these tools' effectiveness: submitting texts written by their own supervisors or professors (in the case of university students) to AI detectors, or testing published works that predate ChatGPT's emergence, which often erroneously register as partly AI-written.
The concern comes amid the widespread use of tools such as ZeroGPT that claim up to 98% accuracy in detecting artificial intelligence in texts.
"This creates a crisis of trust," says André Fernandes, executive director of the Recife Institute for Research in Law and Technology (IP.rec). "This is caused, on one hand, by the launch of a technology in the form of a product, which is ChatGPT, without due ethical reflection and without proper regulatory parameters, and, on the other hand, by a perspective of 'we will react with terrorism and punishment.'"
Research by the Brazilian Internet Steering Committee (CGI.br) and the Regional Center for Studies for the Development of the Information Society (Cetic.br) estimates that 22% of more than 65,000 Brazilian companies used some type of AI technology for text mining and analysis of written language in 2024.
Big Tech companies typically train their models in English and then adapt them to other languages through specific datasets, which can be limited and fail to capture the full linguistic diversity found in individual countries.
"These models tend, after training, to be very strict with syntax [that is, the role of each word in sentence construction], which makes the text of detectors and chatbots more constrained. It's a statistical issue," explains Fernandes.
Tiago Torrent, Coordinator of the Graduate Program in Linguistics at the Federal University of Juiz de Fora (UFJF), also points out that "prejudices that are common in a given language, in a given culture, end up being transposed to another language, to another culture, where they were not present."
Torrent points out that AI is changing the way we write and that its use shouldn't be "demonized," as these tools can help with editing, proofreading, and rewriting text.
"The journalist arrives with the text, presents it to their editor. The editor says, 'I'll run an AI detector.' And then they think it was produced by AI at some level and tell them to redo it. Well, that shouldn't be a form of evaluation, because the first evaluation is to read the text. Is the text good? Is it fact-checked? Has it been checked? Is it accurate? Honestly, the question should be: could the journalist have used an AI assistant to improve the text?" he argues.
However, Torrent emphasizes that responsibility cannot be transferred to the AI itself. While there is debate about how companies use existing content to train these systems, verifying information and signing off on the final content remain human responsibilities that cannot be outsourced to a tool.
Therefore, he advocates that, in addition to regulating the use of these tools, there should be monitoring and guidance on their risks, limitations, and benefits. "People aren't fully informed about this," he warns.
André Fernandes of IP.rec also notes that these tools create "sub-markets," both within the platforms that provide the detectors and in services designed to circumvent AI identification.
One recent example is the paid version of Grammarly, which made nine AI agents available as writing assistants for students, including an artificial intelligence detector.
Can I be required to use an AI detector at work?
Brazil does not yet have regulations on the use of generative artificial intelligence tools. Bill 2338/2023, which addresses the promotion and ethical and responsible use of AI, was approved by the Senate in December 2024 and is currently under review in the Chamber of Deputies, which has created a special committee to discuss the matter.
Núcleo contacted the Ministry of Labor and Employment (MTE) to ask whether the Brazilian Artificial Intelligence Plan (PBIA) would include provisions related to the labor field beyond job creation, but the ministry did not respond.
Due to this regulatory gap, corporate lawyer Stephanie Christine de Almeida explains that companies can adopt AI detection tools if they choose to do so.
However, she emphasizes the need for transparency. "The company must have an internal policy on AI, signed by all employees, so that everyone knows about the practice and about what happens when, for example, a text is flagged for AI, even though these tools state in their terms of use that they are not entirely reliable," she said.
There are also existing legal provisions that have labor implications regardless of AI use. "The boss can ask the employee to rewrite the text, but it all depends on how they ask for it to be redone," she warns. "They cannot be rude, treat the person poorly, humiliate the person, or cause embarrassment, whether individually or in front of other employees, as this could constitute moral harassment."
Almeida also notes that it is possible to file a complaint with the Public Labor Prosecutor's Office to determine whether the work environment is toxic, generating psychological pressure or causing illness.